Tuesday, 6 December 2011

Obtaining the size of a C++ array using templates and other techniques (from C++11) - Part 2

Welcome back!  In Part 1 we saw a nifty technique to obtain the size of an array via function template argument deduction.  However, this ended with the problem of how to use this value at compile time, e.g. as the size of another array.  This proves hard in C++98 but C++11 changes things.

The simplest way to enable compile-time use is the new constexpr keyword.  This tells the compiler that the function can be completely evaluated to a constant value at compile time.  As such, C++11 allows the function to be called in a constant expression, where the evaluated constant value is used instead.  Adding this to the definition of GetNumberOfElements as below

template<typename T, size_t sizeOfArray>
constexpr size_t GetNumberOfElements(T (&)[sizeOfArray])
{
  return sizeOfArray;
}


now allows the following to compile

char a[100];
char b[GetNumberOfElements(a)];


Is this the best that we can do?  Working with raw array types isn't ideal.  Even though the size of the array is known and can now be discovered at compile time, using it with std::for_each is somewhat burdensome as the length must be obtained in order to form the end iterator, i.e.

std::for_each(&a[0], &a[GetNumberOfElements(a)], SomeFunctor());

Because a raw array is not an object it has no end method, which means the code must rely on the programmer specifying the correct index for the end of the array or using the technique above.

This can be remedied by moving from raw array types to std::array (first made available in TR1), e.g.

std::array<char, 100> a;
std::for_each(std::begin(a), std::end(a), SomeFunctor());

(Note the use of the C++11 non-member begin and end functions)

The downside of using std::array is that even though it can be initialized when defined, e.g.

std::array<int, 4> a = { 0, 1, 2, 3 };

It effectively requires the length of the array to be specified twice: first as the template parameter and then again implicitly by the number of elements in the initializer list.

Given the new initializer list feature of C++11 it might be expected that this would allow initialization without the length parameter.  However, it seems this is not supported "out of the box", though there are some interesting techniques to allow it.

All is not lost though.  Taking a step back and returning to the raw array, the size issue can be circumvented by use of C++11's range-for statement.

char a[100];
SomeFunctor sf;
for (auto x : a)
  sf(x);

To backtrack once again, the non-member versions of begin and end also work on raw arrays, so the example above can be written as:

char a[100];
std::for_each(std::begin(a), std::end(a), SomeFunctor());

If the purpose of the loop is to invoke a functor then std::for_each is more succinct than range-for, as it doesn't require an explicit instance of the functor.

I suspect the same technique for finding the length of a raw array, along with constexpr, is used by the non-member version of end to obtain the length.  In fact one version of end is probably a function template overloaded to accept a raw array as its container parameter, e.g.

template<typename T, size_t sizeOfArray>
constexpr T* example_begin(T (&array)[sizeOfArray])
{
  return &array[0];
}


template<typename T, size_t sizeOfArray>
constexpr T* example_end(T (&array)[sizeOfArray])
{
  return array + sizeOfArray;
}


char a[100];
std::for_each(example_begin(a), example_end(a), SomeFunctor());

However, any looping mechanism that requires both the start and end to be specified, rather than just the container, allows an error to occur if the begin and end of different containers are specified, whereas range-for eliminates this possibility!

In summary, it's possible to find the size of a raw array using function template argument deduction, and using C++11 features that value can be used at compile time.  However, if the reason for obtaining the size is to obtain the end iterator then C++11 makes this redundant through range-for and the non-member versions of begin and, more pertinently, end.

Friday, 18 November 2011

Obtaining the size of a C++ array using templates and other techniques (from C++11) - Part 1

Recently I was helping somebody debug an issue around the use of swprintf_s.  The issue turned out to be an Obi-Wan (off by one) error.  I don't tend to use the likes of printf() very much, instead preferring a std::stringstream if I need to format into a string.

I'd assumed that Microsoft's secure versions of these functions, i.e. those with the _s suffix, took a buffer size, so when looking at the help for swprintf_s I was momentarily taken aback by the lack of a buffer-size parameter.  However, I then noticed that swprintf_s is not just a regular function but is in fact a function template:

template <size_t size>
int swprintf_s(
    wchar_t (&buffer)[size],
    const wchar_t *format [, argument] ...); // C++ only

One of the most useful properties of a function template is its ability to deduce its template arguments.  In this case the parameter is not a type but a non-type parameter (of type size_t) that specifies the size of the target string buffer (in characters, not bytes). When used as:

wchar_t buf[10];
swprintf_s(buf, L"%d", 10);

It deduces that the size of the buffer (buf) is 10.  This works because the template parameter is used to specify the size of the wchar_t array that swprintf_s expects.  It could have been specified explicitly as swprintf_s<10>(buf, L"%d", 10) but this is where the beauty lies, in that the compiler is able to deduce it.  This is what function templates do and how they're often used, so there's nothing novel here except the application of finding an array size.  It's a really neat trick and I don't know why I've missed it for so long!

An important point here is that the signature takes a reference to an array (note the & before buffer) as opposed to the plain array syntax wchar_t (buffer)[size]. If the latter were used the function template would be unable to deduce the parameter (size). This is because the syntax:

template<size_t size> void foo(wchar_t (buffer)[size])

decays to become:

template<size_t size> void foo(wchar_t* buffer);

when compiled, i.e. foo() can accept a pointer to a wchar_t array of any size; in fact any wchar_t* is fine. There is nothing special about this, it's just the standard array-to-pointer decay that C (and C++) has always supported.

Anyway, after that slight diversion into decay let's return to deducing the size of an array. So why is this useful? In order to iterate over an array's elements, e.g.

int a[] = { 0, 1, 2, 3, 4 };
for (int i = 0; i < sizeof(a)/sizeof(int); ++i)
  SomeFn(a[i]);

or slightly better:

int a[] = { 0, 1, 2, 3, 4 };
std::for_each(&a[0], &a[sizeof(a) / sizeof(int)], &SomeFn);

The number of elements is required in order to terminate the iteration.

The concept can be generalized to obtain the size of any type of array, i.e.

template<typename T, size_t sizeOfArray> size_t GetNumberOfElements(T (&)[sizeOfArray])
{
  return sizeOfArray;
}

Which can be used to rewrite the previous examples as:

int a[] = { 0, 1, 2, 3, 4 };
std::for_each(&a[0], &a[GetNumberOfElements(a)], &SomeFn);

Returning to the discussion about decay, it should be noted that this mechanism only works for actual arrays.  The signature used to prevent decay, i.e. the '&', means that a pointer cannot be passed, e.g.

char *pa = new char[100];
GetNumberOfElements<char, 100>(pa);

Won't compile with VC++ 2010 giving:

Error 3 error C2664: 'GetNumberOfElements' : cannot convert parameter 1 from 'char *' to 'char (&)[100]'

This makes perfect sense as it explicitly requires an array.  Even if for some reason it could accept the pointer, it wouldn't be able to deduce the size because this information isn't syntactically available (though it will most likely be embedded within the memory block that pa points to; quite possibly a few bytes further back, so that when delete [] is invoked the C++ runtime knows how much memory to free).  Looking at the MSDN help for swprintf_s() it is clear why additional definitions (the non-template overloads) are provided, as these deal with passing pointers.

Now that this cool feature can be used to easily obtain array sizes the next thing you tend to want to do is then define other arrays using this information, i.e.

char a[100];
char b[GetNumberOfElements(a)];

However, this won't compile as, despite the fact that GetNumberOfElements() performs (well, the compiler does) the size deduction at compile time, the result is only available at runtime.  To define an array the size must be known at compile time.

There is a clever hack to make this available at compile time but it requires the use of a macro, which is unpleasant.  However, at this point it's C++11 to the rescue, but that'll have to wait until part 2 which is available here.

Wednesday, 17 August 2011

Unit Testing C# Custom Attributes with NUnit Part 4

In Unit Testing C# Custom Attributes with NUnit Part 3 towards the end I showed the following code
Assert.That(MethodBase.GetCurrentMethod(), 
            Has.Attribute<FunkyAttribute>().Property("SettingOne").TypeOf<int>());
to test the type of a property on an attribute.  As it turns out this does not actually do that.  Instead it tests the type of the value returned from the property.  What's happening is that the object returned from Property() isn't some sort of meta-object representing a property but is the actual value of the property.

In the case above it amounts to the same thing.  As the property type is an int, even if it has not been set the value will be 0, whose type is int.  However, if it's a reference type (quite likely a string) or a nullable type the value can be null, e.g.

Assert.That(MethodBase.GetCurrentMethod(), 
     Has.Attribute<FunkyAttribute>().Property("FunkyName").TypeOf<string>());

 
Which causes the following when run:

AttrTestV3.FunkyTester.TestThatTheTypeOfFunkyNameWhenNotSetIs_string:
  Expected: attribute AttrTestDefs.FunkyAttribute property FunkyName <System.String>
  But was:  <AttrTestDefs.FunkyAttribute>

It seems that when there is no value, Property() (which tests for the presence of the specified property name and implicitly returns the value of the property if found) has nothing to return, so for some reason the attribute obtained from the call to Has.Attribute() is returned instead.

Currently, there is no way with NUnit to handle this case.  I started a thread on the NUnit discussion group which discusses this issue and it seems like the authors of NUnit have some plans.

As an interim solution if the property is optional for a custom attribute then in the fixture make sure a value is assigned.  In the case above:

[Test]
[Funky(FunkyName = "dummy")]
public void TestThatTheTypeOfFunkyNameWhenNotSetIs2_string()
{
    Assert.That(MethodBase.GetCurrentMethod(),
         Has.Attribute<FunkyAttribute>().Property("FunkyName").TypeOf<string>());
}

I think that's the end again:-)  Here are the links to the previous parts: One, two & three.

Sunday, 14 August 2011

Spectating with Twitter: Twittating or Spectwiting

Today I went to watch (spectate) a small part of the London-Surrey Cycle Classic. Being a 160KM cycle race it wasn't possible to see a lot of it. Even though it featured one of Britain's most prominent and successful cyclists, Mark Cavendish, the BBC, or I suppose any other TV company, chose not to broadcast it live; in fact there didn't seem to be any conventional mainstream live media coverage. Bummer!

However, Twitter to the rescue. There seemed to be several authoritative sources (@cyclingweekly, @roadcyclinguk, @antmccrossan and @CycleSurrey), presumably travelling with the race or being fed data from someone who was, Tweeting throughout the race. Given the dearth of information, following these people or the searches #CycleClassic and/or #testevent was the only way to find out anything.

I suspect it's how most people were informed of the result: Mark Cavendish won! I was wondering if this was the first time for a major sporting event (unless you're from the BBC) that the majority of spectators 'watched' via Twitter, i.e. Twittating, or should that be Spectwiting! Now this doesn't differ too much from receiving text message updates for a sporting event or just checking the score, except that this was largely unofficial and the only available channel.  In addition the community aspect of Twitter generated an almost stadium-like atmosphere, albeit a virtual one.

The lack of live coverage also meant that in addition to no information there were no pictures either. However a lot of the tweets had accompanying pictures and some videos. Whilst not live TV it was something. Even more interestingly, given the nature of the event, a cycle road race, it meant that the stream of Tweets was actually charting the progress of the race.

Thinking about this, the natural extension is the possibility of broadcasting an event purely by the crowd on Twitter. If there are enough people to cover a course, and whatever lag Twitter and mobile Internet access have doesn't interfere too much, then as the race passes, the current status along with images could be Tweeted.

This stream could be consumed in real time by a special Twitter client that pulls it together chronologically and, importantly, displays the pictures. These wouldn't be continuous motion (well, it could be if the intervals between each Tweeter were short enough) but would give a visual sense of the event.  This is essentially newspaper reporting but in near real time, as opposed to live TV which is real time.

One other thought, on the basis that the highlights won't be available for a week, is that it should be possible to write a program to consume the previously mentioned Twitter searches and pull them together, along with the attached pictures and videos, to create some sort of watchable programme.

Thursday, 4 August 2011

A couple of computing jokes

The problem with a UDP joke is that you have no idea if people get it. (from @fearthecowboy)

A SQL query walks into a bar, goes up to a couple of tables and says, "Can I join you?".

Tuesday, 26 July 2011

Unit Testing C# Custom Attributes with NUnit Part 3

During Unit Testing C# Custom Attributes with NUnit Part 2 where the Assertions were converted from the Classic to the Constraint model it was simply a process of replacing

Assert.<SomeAssertion>(<object>)

with

Assert.That(<object>, Is.<SomeAssertion>)

The key thing being the 'Is' object that houses all the original assertions used.  Whilst doing this I came across the 'Has' object.  This doesn't appear in the documentation until about halfway through, when collections are covered.  Not mentioned at all in the documentation is the method Has.Attribute<AttributeType>, which tests whether an attribute (custom or otherwise) is present on the object being tested.  For the current method this is simply:

Assert.That(MethodBase.GetCurrentMethod(), Has.Attribute<FunkyAttribute>());

Even better, having obtained the Attribute (if not present the assertion will fail) it too can be tested.  The important aspect here is checking that it contains a specific property, which can easily be tested in the same statement by appending '.Property(<PropertyName>)' in a Fluent style to give:

Assert.That(MethodBase.GetCurrentMethod(), Has.Attribute<FunkyAttribute>().Property("FunkyName"));

The next step is to test whether this property contains the correct value.  This can be done in a similar way by further appending '.EqualTo(<SomeValue>)' to give:

Assert.That(MethodBase.GetCurrentMethod(), Has.Attribute<FunkyAttribute>().Property("FunkyName").EqualTo("RipSnorter"));

In a single statement, tests for the presence of the Custom Attribute, for a property of it and finally for that property's value have been conducted.  This is considerably shorter than both the original and second examples, which required the reflection code to obtain the Custom Attribute followed by three separate assertions.

This does not yet meet all of the original testing requirements, which were:

  • A Custom Attribute of the correct type existed on the test method.
  • That the property to be tested of the Custom Attribute had the expected name.
  • That the property to be tested of the Custom Attribute had the correct type.
  • That the value of property to be tested of the Custom Attribute could be set.
  • That the value of property to be tested of the Custom Attribute could be obtained.
  • That the property to be tested of the Custom Attribute had the expected value.

Remaining are setting, getting (obtaining) and type checking.

It turns out that the tests for being able to set and get the property are not needed: if a property is created on a Custom Attribute then a getter and setter must be supplied.  The property can be private, but in that case it cannot be set as part of the Attribute syntax.  If the former condition is not met, or an attempt is made to set a private property, then a compilation error will occur.

Whilst successfully testing the property's value would suggest a type match, this is not strictly the case: if the expected value can be converted to the type of the property then the test will be successful, e.g.

Assert.That(MethodBase.GetCurrentMethod(), Has.Attribute<FunkyAttribute>().Property("SettingOne").EqualTo(77.0));

which causes the double to be converted to an int.  It wouldn't be possible to actually set 'SettingOne' to a double value when using the attribute, as this will fail to compile, e.g.

[Funky(SettingOne = 77.9)]

As this is testing an implementation of a Custom Attribute rather than its application, it is necessary to check that the implementation hasn't been accidentally changed to allow this, in which case the previous erroneous test would pass.  This means testing the type.

Unfortunately this is where the Fluent interface of NUnit's Constraint model falls down a little, as it's not possible to perform multiple tests on the same initial subject, which in this case is the MethodBase returned from the static MethodBase.GetCurrentMethod() call.  Therefore an additional assertion is required:

Assert.That(MethodBase.GetCurrentMethod(), Has.Attribute<FunkyAttribute>().Property("SettingOne").TypeOf<int>());

This means the original example can be reduced to the following:

[Test]
[Funky(FunkyName = "RipSnorter")]
public void TestThatNameIsRipSnorter()
{
 Assert.That(MethodBase.GetCurrentMethod(), Has.Attribute<FunkyAttribute>().Property("FunkyName").EqualTo("RipSnorter"));
 Assert.That(MethodBase.GetCurrentMethod(), Has.Attribute<FunkyAttribute>().Property("FunkyName").TypeOf<string>());
}

[Test]
[Funky(SettingOne = 77)]
public void TestThatSettingOneIs77()
{
 // Two asserts
 Assert.That(MethodBase.GetCurrentMethod(), 
             Has.Attribute<FunkyAttribute>().Property("SettingOne").EqualTo(77));
 Assert.That(MethodBase.GetCurrentMethod(), 
             Has.Attribute<FunkyAttribute>().Property("SettingOne").TypeOf<int>());

 // Use of '.And'.  Note the trailing '.'
 Assert.That(MethodBase.GetCurrentMethod(), 
             Has.Attribute<FunkyAttribute>().Property("SettingOne").TypeOf<int>().
             And.Attribute<FunkyAttribute>().Property("SettingOne").EqualTo(77));
// Use of overloaded '&'
 Assert.That(MethodBase.GetCurrentMethod(), 
             Has.Attribute<FunkyAttribute>().Property("SettingOne").TypeOf<int>()
           & Has.Attribute<FunkyAttribute>().Property("SettingOne").EqualTo(77));
}

with the Custom Attribute definition remaining as

public class FunkyAttribute : Attribute
{
 public int SettingOne { get; set; }
 public string FunkyName { get; set; }
}
In the TestThatSettingOneIs77 method the need for two separate assertions has been slightly improved upon by using the 'And' method, which is in the Fluent style.  The final assertion is exactly the same as the previous one but demonstrates the overloaded '&' syntax instead.

I don't think either of these styles is particularly better than the two-line equivalent, as they both require two calls to obtain the Custom Attribute; one for each test.  However, using NUnit's Constraint model coupled with its Fluent interface reduces the required code dramatically, so is worthwhile.  Additionally, I'm not sure the Classic model actually allows Attributes to be obtained.

A test per property on a Custom Attribute is probably also desirable, but there's nothing stopping you combining all the individual property tests into a single assertion by use of '.And.Attribute<T>().Property().EqualTo()'.  One of these would be required for each of the remaining properties, along with a similar one for the type check.  A test per property is more readable, though.


I think I'm done for a while on this subject now!

Unit Testing C# Custom Attributes with NUnit Part 2

I thought that my last post about Unit Testing C# Custom Attributes with NUnit was going to be the only one.  However, after reading more of the NUnit documentation I found that despite the QuickStart guide using the Classic model, the preferred model for working with NUnit is that of Constraints.  As such I thought I ought to change over to this, which is what the updated code below shows.

[Test]
[Funky(FunkyName = "RipSnorter")]
public void TestThatNameIsRipSnorter()
{
 TestAttrProperty<FunkyAttribute, string>(MethodBase.GetCurrentMethod(), "FunkyName", "RipSnorter");
}

[Test]
[Funky(SettingOne = 77)]
public void TestThatSettingOneIs77()
{
 TestAttrProperty<FunkyAttribute, int>(MethodBase.GetCurrentMethod(), "SettingOne", 77);
}

// Helpers
private void TestAttrProperty<TAttr, TProp>(MethodBase method, string argName, TProp expectedValue)
{
 object[] customAttributes = method.GetCustomAttributes(typeof(TAttr), false);

 Assert.AreEqual(1, customAttributes.Length);

 TAttr attr = (TAttr)customAttributes[0];

 PropertyInfo propertyInfo = attr.GetType().GetProperty(argName);

 Assert.That(propertyInfo, Is.Not.Null);
 Assert.That(propertyInfo.PropertyType, Is.EqualTo(typeof(TProp)));
 Assert.That(propertyInfo.CanRead, Is.True);
 Assert.That(propertyInfo.CanWrite, Is.True);
 Assert.That(propertyInfo.GetValue(attr, null), Is.EqualTo(expectedValue));
}

The major change is that rather than calling Assert.<Assertion>, the Constraint model starts with specifying the object to be tested followed by the test.  Additionally, the Constraint model encourages a Fluent-style interface, e.g.

Assert.IsNotNull(propertyInfo);

becomes

Assert.That(propertyInfo, Is.Not.Null);

I'm not going to explain the new model here (the above link has the details) but rather this is to just point out this style which can be contrasted to the sample in the previous post.

In addition I remembered that a far easier way to obtain the current method's metadata, i.e. its MethodBase, is simply to use reflection by calling MethodBase.GetCurrentMethod() from System.Reflection.

You might also have noticed that in this updated example the FunkyAttribute property Name has become FunkyName.

public class FunkyAttribute : Attribute
{
 public int SettingOne { get; set; }
 public string FunkyName { get; set; }
}

This was to differentiate from the Name property of the PropertyInfo class.

However, this isn't the main reason for part 2.  This post seems long enough already, so the good bit will come in part 3, which I'll write immediately so there's not too much waiting around.  A part 4 also emerged!

Wednesday, 20 July 2011

Unit Testing C# Custom Attributes with NUnit

I've been experimenting with TDD and as usual I've seemed to pick a non-standard problem to start with.  In this case I was creating a new C# Custom Attribute class, e.g.

public class FunkyAttribute : Attribute
{
 public int SettingOne { get; set; }
 public string Name { get; set; }
}

which would be used such as

[Funky(Name="SomeThingFunkierThanJust_f")]
public static int f() { return 7; }

Testing this is a little strange as rather than having a standard test method which invokes a method and asserts the result, e.g.

Monday, 18 July 2011

A Simple WPF ComboBox based Brush Selector Control

Following the last Code Project article I popped my stack and finished the original article that spawned it.  It's up on Code Project now and you can find it here, or in longhand: http://www.codeproject.com/KB/WPF/BrushSelectorArticle.aspx

As the title suggests, this one shows how to implement a simple control that allows a SolidColorBrush to be selected from a panel.  In fact it demonstrates how to do this using both a Style and a UserControl, and compares the two approaches.

Tuesday, 12 July 2011

Exploring the use of Dependency Properties in WPF User Controls

I recently wrote my first WPF User Control. This was mainly to customize a ComboBox as opposed to implement a completely new control. As such, I needed to access some of the Dependency Properties (DPs) on the ComboBox. Whilst it's possible to dip into the Content property of a UserControl this is somewhat unpleasant as it violates encapsulation. The result was that I spent a bit of time experimenting with different ways to access these DPs whilst retaining the encapsulation of the embedded ComboBox. This is all written up as a Code Project article.

http://www.codeproject.com/KB/WPF/DPsInUserControl.aspx

Tuesday, 21 June 2011

Am I living in Dilbert?

I was in a lift at work yesterday. Unusually I was in the Sales building. I entered the lift on the ground floor and pressed the button for the second floor. The lift stopped at the first floor and a salesman entered. He proceeded to ask me "are you going down?". I replied "no", which I expected to be the end of the conversation, well at least concerning my very immediate travel plans. However, this was not to be and you can probably guess what came next: "are you going up?". A little too close to Dilbert for comfort:-)

Sunday, 12 June 2011

Getting WPF SizeChanged Events at start-up when using MVVM and DataContext

Like lots of people working with WPF I've been writing my own MVVM framework.  I started using this in an application I was writing.  One of the things it needed to do was obtain the dimensions of a Canvas object.  As such a subscription to the SizeChanged event was used.  The connection was formed using DataBinding to my implementation of an event-to-command mapper.

The code below shows the classes from the MVVM framework plus a sample application that demonstrates the problem.  This is just a button within a Canvas that, when pressed, pops up a dialog displaying the Canvas's dimensions.

<Window x:Class="SizeChangedEventTest2.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525"
  xmlns:mvvm="clr-namespace:PABLib.MVVM;assembly=PABLib.MVVM"
  xmlns:local="clr-namespace:SizeChangedEventTest2">
 <Canvas mvvm:EventCommand.Name="SizeChanged" 
   mvvm:EventCommand.Command="{Binding SizeChanged}">
  <Button Content="Hello" Command="{Binding PressMe}"/>
 </Canvas>
</Window>

The code below shows my basic implementation of the event-to-command pattern.  I would have left it out, but seeing how it's used is crucial to the explanation of the problem and the solution. Please note that EventCommand is actually in the PABLib.MVVM namespace, as referred to in the XAML above, but I've left the namespace out of the C# to save space.

public class EventCommand
{
 public static DependencyProperty CommandProperty = DependencyProperty.RegisterAttached("Command",
    typeof(ICommand),
    typeof(EventCommand));

 public static void SetCommand(DependencyObject target, ICommand value)
 {
  target.SetValue(EventCommand.CommandProperty, value);
 }

 public static ICommand GetCommand(DependencyObject target)
 {
  return (ICommand)target.GetValue(CommandProperty);
 }

 public static DependencyProperty EventNameProperty = DependencyProperty.RegisterAttached("Name",
    typeof(string),
    typeof(EventCommand),
    new FrameworkPropertyMetadata(NameChanged));

 public static void SetName(DependencyObject target, string value)
 {
  target.SetValue(EventCommand.EventNameProperty, value);
 }

 public static string GetName(DependencyObject target)
 {
  return (string)target.GetValue(EventNameProperty);
 }

 private static void NameChanged(DependencyObject target, DependencyPropertyChangedEventArgs e)
 {
  UIElement element = target as UIElement;

  if (element != null)
  {
   // If we're putting in a new command and there wasn't one already hook the event
   if ((e.NewValue != null) && (e.OldValue == null))
   {
    EventInfo eventInfo = element.GetType().GetEvent((string)e.NewValue);

    Delegate d = Delegate.CreateDelegate(eventInfo.EventHandlerType, typeof(EventCommand).GetMethod("Handler", BindingFlags.NonPublic | BindingFlags.Static));

    eventInfo.AddEventHandler(element, d);
   }
   // If we're clearing the command and it wasn't already null unhook the event
   else if ((e.NewValue == null) && (e.OldValue != null))
   {
    EventInfo eventInfo = element.GetType().GetEvent((string)e.OldValue);

    Delegate d = Delegate.CreateDelegate(eventInfo.EventHandlerType, typeof(EventCommand).GetMethod("Handler", BindingFlags.NonPublic | BindingFlags.Static));

    eventInfo.RemoveEventHandler(element, d);
   }
  }
 }

 static void Handler(object sender, EventArgs e)
 {
  UIElement element = (UIElement)sender;
  ICommand command = (ICommand)element.GetValue(EventCommand.CommandProperty);

  var src = Tuple.Create(sender, e);

  if (command != null && command.CanExecute(src) == true)
   command.Execute(src);
 }
}

The bindings used in the XAML refer to properties in the Window's ViewModel. This is defined as follows:

class MainWindowViewModel
{
 public ICommand PressMe { get; private set; }
 public ICommand SizeChanged { get; private set; }

 private int m_width = 0;
 private int m_height = 0;

 public MainWindowViewModel()
 {
  SizeChanged = new PABLib.MVVM.RelayCommand<object>((x) =>
  {
   SizeChangedEventArgs args = (SizeChangedEventArgs)((Tuple<object, EventArgs>)x).Item2;
   m_width = (int)args.NewSize.Width;
   m_height = (int)args.NewSize.Height;
  });

  PressMe = new PABLib.MVVM.RelayCommand<object>((x) =>
  {
   MessageBox.Show(string.Format("Width:{0}, Height:{1}", m_width, m_height));
  });
 }
}

For the sake of completeness here is the implementation of RelayCommand. This is pretty much the basic version as originally created by Josh Smith.

public class RelayCommand<T> : ICommand
{
 Action<T> _Execute { get; set; }
 Predicate<T> _CanExecute { get; set; }

 public RelayCommand(Action<T> execute, Predicate<T> canExecute = null)
 {
  _Execute = execute;
  _CanExecute = canExecute;
 }

 public bool CanExecute(object parameter)
 {
  if (_CanExecute == null)
   return true;
  else
   return _CanExecute((T)parameter);
 }

 public void Execute(object parameter)
 {
  if (_Execute != null)
   _Execute((T)parameter);
 }

 public event EventHandler CanExecuteChanged
 {
  add { CommandManager.RequerySuggested += value; }
  remove { CommandManager.RequerySuggested -= value; }
 }
}

However, rather than just obtaining the dimensions when changed, these were also required when the Canvas was first shown.  The problem was that when using my MVVM framework it was only capturing events if the window was resized, but not the initial sizing event.  For the sample app this meant pressing the button the first time yielded results of 0 for both width and height.  I switched back to a conventional code-behind approach as a sanity check. This worked!

At this point I started debugging the code more and discovered that the initial SizeChanged event was being fired and handled by the EventCommand code.  However, when it came to invoking the ICommand associated with the EventCommand, this was null (in the Handler method of EventCommand).  The strange thing here was that the event name had been successfully passed to EventCommand but the command hadn't.  Both of these are stored as Attached Properties (as is normal for event-to-command implementations).

The difference between the event name and the command is that the event name was a hard-coded string in the XAML whereas the command was being obtained via data binding to the main window's ViewModel.  Therefore the culprit appeared to be that the binding hadn't executed.  There was no problem with the validity of the binding itself, as all the SizeChanged events bar the initial one were being received, and in debug mode VS was not reporting any issues with the binding.

The only thing I could think of was that the initial event was being fired before the binding had been processed.  This was confirmed by extending the Attached Property definition for the CommandProperty to include a CommandChanged callback, e.g.

public static DependencyProperty CommandProperty = DependencyProperty.RegisterAttached("Command",
   typeof(ICommand),
   typeof(EventCommand),
   new FrameworkPropertyMetadata(CommandChanged));

private static void CommandChanged(DependencyObject target, DependencyPropertyChangedEventArgs e)
{
}

A breakpoint set in CommandChanged showed it wasn't invoked until after the event had fired, confirming that the binding hadn't yet been applied.

The way the ViewModel was set as the Data Context for the main Window was by removing the StartupUri attribute from the Application element in App.xaml, e.g.

<Application x:Class="SizeChangedEventTest2.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Application.Resources>
         
    </Application.Resources>
</Application>

and modifying App.xaml.cs to be:

public partial class App : Application
{
 protected override void OnStartup(StartupEventArgs e)
 {
  base.OnStartup(e);

  MainWindowViewModel vm = new MainWindowViewModel();
  MainWindow win = new MainWindow();
  win.DataContext = vm;

  this.MainWindow = win;
  this.MainWindow.Show();
 }
}

After some searching I noticed other projects setting the DataContext of the main window (to the ViewModel) in different ways.  This got me thinking that perhaps the DataContext was being established too late.

To address this, App.xaml and App.xaml.cs were put back to their initial states and the ViewModel was instead created and attached in the constructor for MainWindow, e.g.

public partial class MainWindow : Window
{
 public MainWindow()
 {
  this.DataContext = new MainWindowViewModel();

  InitializeComponent();
 }
}

This fixed the problem!  As an experiment, InitializeComponent() was moved to the top of the constructor, and it stopped working. I didn't particularly like creating the ViewModel here, so this code was removed and the ViewModel was created in XAML instead, as follows:

<Window x:Class="SizeChangedEventTest2.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525"
  xmlns:mvvm="clr-namespace:PABLib.MVVM;assembly=PABLib.MVVM"
  xmlns:local="clr-namespace:SizeChangedEventTest2">
 <Window.DataContext>
  <local:MainWindowViewModel/>
 </Window.DataContext>
 <Canvas mvvm:EventCommand.Name="SizeChanged" mvvm:EventCommand.Command="{Binding SizeChanged}">
  <Button Content="Hello" Command="{Binding PressMe}"/>
 </Canvas>
</Window>

This too worked.  This is where I'm currently at.  From this I conclude that it is critically important to ensure that a View's DataContext is created and attached before the underlying Window is displayed, otherwise initial events will be missed.

Sunday, 29 May 2011

Some thoughts on the iPad 2 actual feel

After using the demo iPad 2s in various shops I decided to take the plunge and get one. I'd been reading so much on the Internet, especially RSS feeds on the iPhone, that I wanted a larger screen to be able to read longer articles in comfort.

This isn't a review but something I've noticed during my first week of ownership. Along with the iPad I also bought one of the new covers. When playing in the shops I'd only used the bare metal version, which was very pleasant to hold and manipulate.

However, I am now rather averse to using the iPad without the cover. I tend to hold the iPad in book mode and, with the cover folded underneath, the feel is of a natural fabric rather than metal. This has almost turned the iPad from a machine into something else: almost a book or another kind of natural object. It's an interesting sensory experience.

I occasionally remove the cover, usually to quickly reattach it, which shows the other neat aspect: how easily it self-aligns. The only thing it's missing, when folded underneath, is the ability to 'stick' to the back; as it is, it feels like a paperback folded back on itself with only a single page visible, fighting to flatten itself.

Monday, 23 May 2011

First ever CodeProject article

I've been playing around with the WPF TreeView control trying to get it to draw a connected line org. chart style view of a tree.  I was going to write it up here but it got a bit long and formatting for the blog is a little tricky so I turned it into a Code Project article, my first one.  You can find it here.

Tuesday, 17 May 2011

Fluent Functors

I've been learning about Boost Spirit, a C++ parser framework built from expression templates.  One of the examples is a Roman numeral parser.  This contained the following interesting code for pre-loading a symbol table.

struct ones_ : qi::symbols<char, unsigned>
{
    ones_()
    {
        add
            ("I"    , 1)
            ("II"   , 2)
            ("III"  , 3)
            ("IV"   , 4)
            ("V"    , 5)
            ("VI"   , 6)
            ("VII"  , 7)
            ("VIII" , 8)
            ("IX"   , 9)
        ;
    }

} ones;


So,
  • struct ones_ is a new class definition.
  • ones_() is the constructor
  • The call to add("I", 1) is a call to the member function add associating the string "I" with the value 1 by adding them to the symbol table.
There's nothing particularly strange there.  However, the continuation, i.e. add()()()()()..., looked a little odd and puzzled me for a minute or so.  Then I realized add() must be returning something that can itself be called again, e.g. *this, with the subsequent ()s invoking the parenthesis (function call) operator, i.e. in this case ones_& operator()(const char*, const int);

Following this little revelation, to confirm the theory, I constructed the following program, which sums the numbers 1 to 6, printing 21.

#include <iostream>

class Func
{
private:
    int m_sum;

public:
    Func(const int n) : m_sum(n) { /* Empty */ }

    const int Sum() const { return m_sum; }

    Func& operator()(const int n)
    {
        m_sum += n;

        return *this;
    }

    static Func add(int n) { return Func(n); }
};

int main()
{
    std::cout << Func::add(1)(2)(3)(4)(5)(6).Sum() << std::endl;
}

All that's needed is a seed function, in this case the static add(), which returns a Func instance, plus an operator()() that returns *this.  It would work fine without the named seed function, using just operator()(), but then it would lose a little meaning.

This isn't really that helpful and in most circumstances probably constitutes obfuscated code.  However, it's certainly cute, and where overloading is done thoroughly and with meaning, Spirit being a case in point, it becomes an effective and usable syntax.

As Fluent Programming seems to be on the rise, this is another demonstration of it within C++, just like iostreams.  The added syntactic sugar provided by C++'s functor mechanism makes for Fluent Functors.