Sunday, 5 January 2014

Keyboard configuration for Windows developers on OS X (& also IntelliJ)

Recently I've been doing some ActionScript programming. Rather than targeting a Flash Player app, I've been using ActionScript in combination with Adobe AIR in order to create an iOS app. This has meant I've been spending time in OS X and using IntelliJ with the ActionScript/Flex/AIR plugin as my IDE.

Most of my previous work has been done on UNIX (so command lines & vi) and Windows. In particular I depend on the various Windows & Visual Studio editor key combinations plus the Insert, Delete, Home & End keys. For starters this means I use a PC keyboard with the iMac rather than the Apple keyboard, which lacks these keys; I'm also based in the UK so I use a British PC keyboard.

In addition to these keys I wanted the following combos to be available across OS X and all apps:
  • Alt-Tab to cycle through apps.
  • Ctrl-F for find.
  • Ctrl-S for save current document (I habitually press this whilst editing).
  • Ctrl-C & Ctrl-V for copy & paste.
  • Ctrl-Z for undo.
  • Obtain the correct behaviour for the '\|' key and the '`¬' key. They were swapped initially.
  • '@' & '"' correctly mapped.
Additionally, I wanted these combos to be available in IntelliJ:
  • Ctrl-Left Arrow & Ctrl-Right Arrow to move to the previous/next word respectively
    • Plus their selected text equivalents.
  • Ctrl-Home/Ctrl-End to move to the top/bottom of the document being edited.
This post describes what I installed & configured to achieve this.

Configuring a British PC keyboard

The first step was to tell OS X I was using a PC keyboard, specifically a British one. This is achieved through System Preferences->Keyboard->Input Sources.



Here new input sources can be added by clicking the '+'; I added 'British - PC'. Adding an input source doesn't mean it will be used though. For this, also check the 'Show Input menu in menu bar' option. This adds a country flag and the name of the current input source to the menu bar; clicking on it allows the input source to be changed. If you swap between a PC keyboard and the iMac keyboard (which I do from time to time) this is an easy way to switch.



What all this gives you is the '"' and '@' keys in the right place. Otherwise they're transposed. Note: backslash and backquote remain transposed.

Windows key combos

The second step was obtaining the Windows key combos. This requires mapping the Windows combos to the corresponding OS X combos whilst preventing the Windows combos from being interpreted as something else. After some searching it seemed the preferred solution is a 3rd party program called KeyRemap4MacBook. According to various reviews it does the job well, but configuring it, especially creating your own mappings, is complicated: the former being down to the UI and the latter to the XML format. All these things are true, but once you've got used to it, like a lot of things, it's nowhere near as daunting as it first seems; and the documentation is very good too. Part of the motivation for this post is to record the configuration & steps for my benefit should I need to do it again.

KeyRemap4MacBook comes with a number of canned mappings. In addition to mapping across the board, they can be limited to include or exclude a specific set of apps. In particular I make use of a set of pre-defined mappings from the 'For PC Users' section which won't be applied in VMs (generally running Windows; especially useful when running Windows 8 in Parallels from the Boot Camp partition) and terminals.

As I still use the Apple keyboard from time to time when I want to do very Apple-y stuff, I have the 'Don't remap Apple's keyboards' option enabled.

What I use

The canned mappings I use from the 'For PC Users' section are:
  • Use PC Style Copy/Paste
  • Use PC Style Undo
  • Use PC Style Save
  • Use PC Style Find
These can easily be seen in KeyRemap4MacBook using the 'show enabled only' option (to filter the many definitions):



With very little work this meets the majority of my needs. In addition to the 'For PC Users' and 'General' sections you may also notice the three re-mappings at the start. These are custom mappings I had to create. I'm not going to explain the XML format as this is covered by the documentation. Instead, here are my custom mappings.

<?xml version="1.0"?>
<root>
 <appdef>
  <appname>INTELLIJ</appname>
  <equal>com.jetbrains.intellij</equal>
 </appdef>

 <replacementdef>
  <replacementname>MY_IGNORE_APPS</replacementname>
  <replacementvalue>VIRTUALMACHINE, TERMINAL, REMOTEDESKTOPCONNECTION, VNC, INTELLIJ</replacementvalue>
 </replacementdef>

 <replacementdef>
  <replacementname>MY_IGNORE_APPS_APPENIDX</replacementname>
  <replacementvalue>(Except in Virtual Machine, Terminal, RDC, VNC and IntelliJ)</replacementvalue>
 </replacementdef>


 <item>
  <name>Use PC style alt-TAB for application switching</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.swap_alt-tab_and_cmd-tab</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <autogen>__KeyToKey__ KeyCode::TAB, ModifierFlag::OPTION_L, KeyCode::TAB, ModifierFlag::COMMAND_L</autogen>
 </item>

 <item>
  <name>Swap backslash and backquote for British PC keyboard</name>
  <identifier>private.swap_backslash_and_quote_for_britishpc</identifier>
  <autogen>__KeyToKey__ KeyCode::DANISH_DOLLAR, KeyCode::BACKQUOTE</autogen>
  <autogen>__KeyToKey__ KeyCode::BACKQUOTE, KeyCode::DANISH_DOLLAR</autogen>
 </item>

 <item>
  <name>Use PC Ctrl-Home/End to move to top/bottom of document</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.use_PC_ctrl-home/end</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <autogen>__KeyToKey__ KeyCode::HOME, ModifierFlag::CONTROL_L, KeyCode::CURSOR_UP, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::HOME, ModifierFlag::CONTROL_R, KeyCode::CURSOR_UP, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::END, ModifierFlag::CONTROL_L, KeyCode::CURSOR_DOWN, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::END, ModifierFlag::CONTROL_R, KeyCode::CURSOR_DOWN, ModifierFlag::COMMAND_L</autogen>
 </item>

</root>

Apart from the backslash and backquote swap, I didn't want these mappings applied in various apps, i.e. VMs, VNC & RDC (where Windows is running anyway) and Terminal (where they would interfere with bash). To achieve this I used the <not> element, giving a list of excluded apps, along with the <appendix> element to state this in the description.

Rather than copy the list of apps and the description into each item I used KeyRemap4MacBook's replacement macro feature. There is a list of built-in apps that can be referred to, but I also looked at the XML file from the source that contains the 'For PC Users' mappings.

The _L & _R suffixes refer to modifier keys that appear twice: on the left & right side of the keyboard.

The format allows multiple mappings to be grouped. These don't have to be similar but that is the intention, i.e. all the Ctrl-Home/End mappings are together. Each <autogen> entry is a separate mapping but they are enabled/disabled collectively.

The format isn't too bad. The weird thing from an XML perspective is the <autogen> element. This is the source combo followed by the combo to generate instead, separated by a comma. I think it would be easier to understand if this element were broken down into child elements, with say <from> and <to> elements.

This private.xml is also available as a GIST.

IntelliJ

IntelliJ complicates things slightly as it provides its own key-mapping functionality similar to that of KeyRemap4MacBook but solely for itself. This means that there can be a conflict with KeyRemap4MacBook.

I'm writing this a while after I originally implemented it. In fact part of the reason I'm writing this post at all is so I have a record of what's required. Since getting this working it looks like I've changed my IntelliJ Keymap (from Preferences). Originally it was set to 'Mac OS X' but is now set to 'Default'.

When it was set to 'Mac OS X' the KeyRemap4MacBook mappings worked well except that Ctrl-Home/End wouldn't work. This is because that combination is mapped to something else. Additionally the 'Mac OS X' mappings don't provide support for Ctrl-Left/Right-Arrow for hopping back and forth over words. My initial solution to this was to modify (by taking a copy) the 'Mac OS X' keymap:
  • Change 'Move Caret to Next Word' from 'alt ->' to 'ctrl ->'.
  • Change 'Move Caret to Previous Word' from 'alt <-' to 'ctrl <-'.
  • Change 'Move Caret to Next Word with Selection' from 'alt shift ->' to 'ctrl shift ->'.
  • Change 'Move Caret to Previous Word with Selection' from 'alt shift <-' to 'ctrl shift <-'.
  • Change 'Move Caret to Text End' from 'cmd end' to 'ctrl end'.
  • Change 'Move Caret to Text Start' from 'cmd home' to 'ctrl home'.
However, it seems that the 'Default' key mappings are as per Windows, but when KeyRemap4MacBook is running they all conflict. In fact I may have missed this completely when initially figuring this out.

Therefore the far easier solution is to select the 'Default' IntelliJ keymap and, using KeyRemap4MacBook, make it aware of IntelliJ and exclude it from re-mapping as per the other applications. This is the purpose of the <appdef> section in private.xml. KeyRemap4MacBook doesn't need definitions for the other excluded apps as these are built in.

The mappings are not perfect. IntelliJ works well, but that is now down to IntelliJ's own keymap and having excluded it from KeyRemap4MacBook's re-mapping. I still miss Ctrl-Left/Right-Arrow and Ctrl-Home/End in other apps, though hopefully that is just a case of defining more mappings. The effectiveness of the Ctrl-Z (undo) mapping also seems to vary.



Monday, 30 December 2013

Using IntelliJ, Adobe ActionScript and AIR SDK to create & package iOS 7 apps.

Just a quick post. Lately I've been learning ActionScript. Having seen how easy it is to get an ActionScript project for Flash Player running on Android using Adobe AIR, I wanted to do the same for my iPhone. Getting stuff running on the AIR emulator and on the iOS simulator (under OS X) was pretty easy. In my case this was using IntelliJ as the IDE (rather than Flash Builder) coupled with the Flex 4.6 SDK. The real fun began when I started to package my application for submission to the App Store, in particular creating the App icons.

The version of the AIR SDK that comes with the Flex 4.6 SDK is 3.1. However this isn't aware of the new iOS 7 App icons. It would seem a simple matter of adding additional entries to the Application Descriptor file, i.e. to support the 152x152 icon just add

<image152x152>icon152.png</image152x152>

to the <icon> section. Unfortunately the 3.1 schema doesn't know about this element, so it's rejected as invalid and you end up with the following error:

error 103: application.icon.image152x152 is an unexpected element/attribute

To fix this, the first step is to download & install the latest version of the AIR SDK, which is 3.9 (4.0 beta aside). This does not mean downloading & installing the latest version of the Flex SDK, as this contains an older version of the AIR SDK. Also, as this needs installing on top of the one present in the existing Flex SDK installation, do not download the installer version; instead use the zip (Windows) or tbz2 (OS X). The following link takes you to both: http://www.adobe.com/devnet/air/air-sdk-download.html

Then extract the archive within the Flex SDK (you might want to take a copy of the SDK first, though if things go wrong you can always re-download it). The easiest way is to just copy/move the archive to the Flex SDK directory and extract the files there, which will overwrite the existing ones.

NOTE: Up to this point the same thing occurred on both Windows & OS X. The following steps only worked on OS X. In particular, updating the schema version in the Application Descriptor didn't work on Windows, and when I reverted back to 3.1 (& removed support for iOS 7 App Icons) packaging the app was a problem, as the AIR SDK seemed to be missing various binaries needed to create the ARM binaries. I haven't pursued this further as I was working on OS X at this point.

In theory everything should work now. However if you proceed to package the app it will still give the same 103 error. This is because the namespace version in the Application Descriptor needs updating. Most likely the line will be:

<application xmlns="http://ns.adobe.com/air/application/3.1">

Here the 3.1 needs changing to 3.9.

This may not fix the problem though. If you're using IntelliJ (sorry, I don't know about Flash Builder) and have selected the 'Generated' option for the Application Descriptor, then it appears that by default IntelliJ (AIR?) creates this with a version of 3.1. In this case you'll need to stop using this option. Instead choose 'Custom template' and either create your own or have IntelliJ (AIR?) generate one for you. If you choose the latter option then IntelliJ offers a drop-down to specify the version. However, it only lists 3.1 to 3.8, so this will need manually changing to 3.9.


At this point it should be possible to successfully package an iOS app with iOS 7 App Icon support.

Wednesday, 14 August 2013

Capturing lvalue references in C++11 lambdas

Recently the question "what is the type of an lvalue reference when captured by reference in a C++11 lambda?" was asked. It turns out that it's a reference to whatever the original reference was to. This is just like taking a reference to an existing reference, e.g.

int foo = 7;
int& rfoo = foo;
int& rfoo1 = rfoo;
int& rfoo2 = rfoo1;

All the references refer to foo, rather than chaining rfoo2->rfoo1->rfoo->foo, meaning the following code

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 << ", rfoo2:" << rfoo2 
          << '\n';
++foo;

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 << ", rfoo2:" << rfoo2 
          << '\n';

std::cout << "&foo:" << &foo << ", &rfoo:" << &rfoo 
          << ", &rfoo1:" << &rfoo1 << ", &rfoo2:" << &rfoo2 
          << '\n';

Which gives:

foo:7, rfoo:7, rfoo1:7, rfoo2:7
foo:8, rfoo:8, rfoo1:8, rfoo2:8
&foo:00D3FB0C, &rfoo:00D3FB0C, &rfoo1:00D3FB0C, &rfoo2:00D3FB0C

I.e. all the references are aliases for the original foo, hence the same value is displayed (including when the original is modified) and the address of each variable is the same: that of foo.

There is nothing surprising here, it's just basic C++, but it's a long time since I've thought about it, which is why with lambdas, lvalue, rvalue and universal references I sometimes do a double take on what was once obvious.

The same happens with lambda capture but it's a slightly more interesting story. Take the following example:

int foo = 99;
int& rfoo = foo;
int& rfoo1 = foo;

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 
          << '\n';

std::cout << "&foo:" << &foo << ", &rfoo:" << &rfoo 
          << ", &rfoo1:" << &rfoo1 
          << '\n';

auto l = [foo, rfoo, &rfoo1]()
{
    std::cout << "foo:" << foo << '\n';
    std::cout << "rfoo:" << rfoo << '\n';
    std::cout << "rfoo1:" << rfoo1 << '\n';

    std::cout << "&foo:" << &foo << ", &rfoo:" 
              << &rfoo << ", &rfoo1:" << &rfoo1 
              << '\n';
};

foo = 100;

l();

Which gives:

foo:99, rfoo:99, rfoo1:99
&foo:00D3FB0C, &rfoo:00D3FB0C, &rfoo1:00D3FB0C
foo:99
rfoo:99
rfoo1:100
&foo:00D3FAE0, &rfoo:00D3FAE4, &rfoo1:00D3FB0C

To begin with it behaves as per the first example, in that foo, rfoo and rfoo1 all give the same value, as rfoo and rfoo1 are effectively aliases for foo. This is shown when displaying their addresses: they're all the same.

However, when these same variables are captured it's a different story. The capture of foo is no surprise: as this is by-value it displays the captured value of 99 despite the original foo being changed to 100 prior to the lambda being invoked. Its address is that of a new variable; a member of the lambda.

It starts to get interesting with the capture of rfoo. When the lambda is invoked this too displays 99, the original captured value. Also, its address is not that of the original foo. It seems that the reference itself has not been captured but rather what it refers to, in this case an int with the value of 99. It appears to have been magically dereferenced as part of the capture.

This is the correct behaviour and when thought about becomes somewhat obvious. It's just like assigning a variable from a reference, e.g.

int foo = 7;
int& rfoo = foo;
int bar = rfoo;

bar doesn't become an int& and rfoo is effectively dereferenced, except in this scenario there is nothing magical at all; it's exactly as expected. If int were replaced with auto, e.g.

auto bar = rfoo;

then it would be expected that bar is an int, as auto strips off cv- and reference qualifiers.

Finally, there is rfoo1. This looks odd as it appears to be taking a reference to a reference. As seen in the first example this is perfectly fine: there can't be a reference to a reference and so on; all are aliases of the original variable.

This is pretty much what's happening here. It's irrelevant that the target of the capture is a reference. In the end the capture by reference is capture by reference of the underlying variable, i.e. what rfoo1 refers to, in this case foo, not rfoo1 itself. This is demonstrated twofold: by rfoo1 within the lambda displaying the updated value of foo, and by the address of rfoo1 within the lambda being that of foo outside it.

This is as per the standard section 5.1.2 Lambda expression sub-note 14:

An entity is captured by copy if it is implicitly captured and the capture-default is = or if it is explicitly
captured with a capture that does not include an &. For each entity captured by copy, an unnamed nonstatic
data member is declared in the closure type. The declaration order of these members is unspecified.
The type of such a data member is the type of the corresponding captured entity if the entity is not a
reference to an object, or the referenced type otherwise. [ Note: If the captured entity is a reference to a
function, the corresponding data member is also a reference to a function. —end note ]

The sentence in bold states that for a reference captured by value the type of the captured value is the type referred to, i.e. the reference aspect has been removed; the crucial part being "or the referenced type otherwise". (NOTE: I haven't experimented with references to functions.)

Finally, a vivid example showing that a reference captured by value involves a dereference.

class Bar
{
private:
    int mValue;

public:
    Bar(const Bar&) : mValue(9999)
    {
    }

public:
    Bar(const int value) : mValue(value) {}
    int GetValue() const { return mValue; }
    void SetValue(const int value) { mValue = value; }
};

Bar bar(1);
Bar& rbar = bar;
Bar& rbar1 = bar;

std::cout << "&bar:" << &bar << ", &rbar:" << &rbar << ", &rbar1:" << &rbar1 << '\n';

auto l2 = [bar, rbar, &rbar1]()
{
    std::cout << "bar:" << bar.GetValue() << '\n';
    std::cout << "rbar:" << rbar.GetValue() << '\n';
    std::cout << "rbar1:" << rbar1.GetValue() << '\n';

    std::cout << "&bar:" << &bar << ", &rbar:" << &rbar << ", &rbar1:" << &rbar1 << '\n';
};

bar.SetValue(2);

l2();

The class Bar provides a crude copy-constructor that sets the stored value to 9999. The following output is similar to that in the previous example in that the addresses of bar and rbar in the lambda differ from that of bar outside it, showing they're copies, whilst rbar1 is the same. Secondly, the value of mValue stored within Bar is shown as 9999 for the first two captured variables, meaning they were copy-constructed.

&bar:00D3FB0C, &rbar:00D3FB0C, &rbar1:00D3FB0C
bar:9999
rbar:9999
rbar1:2
&bar:00D3FAE0, &rbar:00D3FAE4, &rbar1:00D3FB0C

Making the copy-constructor private (by commenting out the seemingly unnecessary 'public:') prevents compilation.

1>------ Build started: Project: References, Configuration: Debug Win32 ------
1>  main.cpp
1>c:\users\pete\desktop\references\references\main.cpp(85): error C2248: 'Bar::Bar' : cannot access private member declared in class 'Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(59) : see declaration of 'Bar::Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(54) : see declaration of 'Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(59) : see declaration of 'Bar::Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(54) : see declaration of 'Bar'

Writing this post has clarified the situation for me; I hope it helps you as well.

The sample code is available here.

Wednesday, 21 November 2012

Windows 8 Pro on an early 2009 iMac 21.5 (Core 2 Duo)

A couple of weeks back I thought I'd have a go at writing a Windows Store App.  To do this requires Windows 8.  At the time I was running Windows 7 Home Premium on an early 2009 iMac 21.5 (Core 2 Duo).  This had been installed using Boot Camp, including installing the Boot Camp Assistant and the drivers supplied by Apple.

To upgrade to Windows 8 I wanted to avoid a re-installation of all my apps and data etc., so I went with an in-place upgrade.  This all seemed to work properly and soon I was running Windows 8 and could access the Windows Store App templates from Visual Studio.  However, soon after, Windows 8 kept crashing, well freezing.  It got to the point that after every reboot I'd be lucky to get 5 minutes of uptime before the next freeze.

Given that Apple hadn't provided Windows 8 drivers yet this wasn't exactly a surprise.  I decided to try and work around this by rebooting to OS X and using VMware Fusion to access the Boot Camp partition.  Whilst rebooting into OS X I managed to corrupt the Windows installation.  I use a non-Apple wireless keyboard (as I need Insert, Delete, Home & End plus the easily accessible cursor keys for VS development) so holding down Alt to select the OS to boot into didn't work.  When I realized it was going back into Windows I just turned the machine off.  After a couple of times the Windows installation was toast!  To get back to the point of trying Fusion I had to do a fresh Windows install.  In this case that meant a minimal Windows 7 installation: just enough to allow the download of Windows 8.  I then installed Windows 8 using the 'preserve nothing' option.

Having now gone through the steps I wanted to avoid, I decided to give the new installation a go via direct boot, i.e. no Fusion.  That was two weeks ago.  Since then I've re-installed all the apps and my personal data and (fingers crossed) haven't had a single crash.  As the freezes were usually happening during some graphical operation, e.g. a status bar updating, I assumed the fault probably lay with the video drivers, so I didn't install the Boot Camp Assistant and in particular the Windows 7 drivers from the OS X disc.  Well, I did install one.  After a while I noticed I wasn't getting any sound even though all the audio drivers and hardware claimed they were happy.  Eventually I installed just the Cirrus Logic driver, which made the speakers work.  I haven't gone anywhere near the NVIDIA drivers.

So, the whole point of this post: for those who run Windows via Boot Camp on early iMacs and want to run Windows 8, a fresh install (or maybe uninstalling the Boot Camp-supplied drivers prior to upgrade) is probably the way to go.

Friday, 3 August 2012

Specifying the directory to create SQL CE databases when using Entity Framework

In the last few posts I've been describing how to create instances of SQL CE in order to perform automated Integration Testing using NUnit, accessing the dB using Entity Framework.  I covered creating the dB using both Entity Framework and the SQL CE classes.  In particular I wanted control over the directory the dB was created in, but I didn't want to tie it to a specific location, rather let it use the current working directory.

Using the Entity Framework's DbContext constructor that takes the name of a connection string or database name, it's suddenly very easy to end up NOT creating the dB you expected where you expected it to be.  This post shows how to avoid this.  Generally speaking the use of the DbContext constructor that takes a Connection String should be avoided unless the name of a connection string from the .config file is being specified.

Example 1 - Using the SqlCeEngine class
const string DB_NAME = "test1.sdf";
const string DB_PATH = @".\" + DB_NAME; // Use ".\" for CWD or a specific path
const string CONNECTION_STRING = "data source=" + DB_PATH;

using (var eng = new SqlCeEngine(CONNECTION_STRING))
{
  eng.CreateDatabase();
}

using (var conn = new SqlCeConnection(CONNECTION_STRING))
{
  conn.Open(); // do stuff with db...
}

The important thing to note is that the constructor for SqlCeEngine that takes an argument requires a Connection String, i.e. a string containing "data source=...".  Just specifying the dB path is not sufficient.  To specify a specific directory include the absolute or relative path.  To specify the current working directory, e.g. bin\debug, just use ".\".

Example 2 - Using DbContext (doesn't work)
using (var ctx = new DbContext("test2.sdf"))
{
  ctx.Database.Create();
}

This code appears to work but doesn't create an instance of an SQL CE dB as desired.  Instead it creates a localDB instance in the user's home directory.  In my case: C:\Users\Pete\._test.sdf.mdf (& corresponding log file).  This is not really surprising as Entity Framework had no way of knowing that a SQL CE dB should be created.

Example 3 - Using DbContext (does work)
Database.DefaultConnectionFactory =
  new SqlCeConnectionFactory(
    "System.Data.SqlServerCe.4.0",
    @".\", "");

using (var ctx = new DbContext("test2.sdf"))
{
  ctx.Database.Create();
  // do stuff with ctx...
}

The difference between the last example and this one is changing the default type of dB that EF should create.  As shown, this is done by installing a different connection factory.

The 3rd parameter to SqlCeConnectionFactory is the directory that the dB should be created in.  Just like the first example, specifying ".\" means the current working directory, and specifying an absolute path to a directory will lead to the dB being created there.

NOTE: As per the post Integration Testing with NUnit and Entity Framework, be aware that creating a dB using the Entity Framework results in the additional table '__MigrationHistory' being created, which EF uses to keep the model and dB synchronized.

NOTE1: Whereas SqlCeEngine is a SQL CE class from the System.Data.SqlServerCe assembly, SqlCeConnectionFactory appears to be part of the System.Data.Entity assembly which is part of the Entity Framework.


In the above example the string passed to DbContext can be a name (of a connection string from the .config file) or a connection string.  In this case passing the name of the dB, i.e. test2.sdf, is equivalent to passing "data source=test2.sdf", well, more or less.  If the '.sdf' suffix is omitted with "data source" then the resultant dB is called test2, but if just test2 is passed then the resulting dB will be called test2.sdf.
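
The following is a minimal sketch of that equivalence (not from the original post), assuming the SqlCeConnectionFactory from Example 3 has been installed as the default connection factory:

// Either of these ends up creating .\test2.sdf (more or less equivalently):
//   new DbContext("test2.sdf")             - dB name including the suffix
//   new DbContext("data source=test2.sdf") - full connection string
// whereas "data source=test2" (suffix omitted) creates a dB named just 'test2'.
using (var ctx = new DbContext("test2.sdf"))
{
  ctx.Database.Create();
}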

Example 4 - Using DbContext and the .config file
using (var ctx = new DbContext("test5"))
{
  ctx.Database.Create();
}

App or Web .config
<connectionStrings>
  <add name="test5"
    providerName="System.Data.SqlServerCe.4.0"
    connectionString="Data Source=test5.sdf"/>
</connectionStrings>

This time no factory is specified, but the argument to DbContext is the name of a Connection String in the .config file.  As can be seen this contains similar information to that passed to the factory, enabling EF to create a dB of the correct type.

To use instances of these databases, rather than calling the create method on the context, just use the context directly, or more likely in the case of EF a derived context, which brings us to one last example.

Example 5 - Using a derived context and .config file
public class TestCtx : DbContext
{
}

using (var ctx = new TestCtx())
{
  ctx.Database.Create();
}

App or Web .config
<connectionStrings>
  <add name="TestCtx"
    providerName="System.Data.SqlServerCe.4.0"
    connectionString="Data Source=test6.sdf"/>
</connectionStrings>

If a derived context is used, which will almost certainly be the case, then when an instance of it is created and a dB created, EF will look for a Connection String in the .config file that has the same name as the context and take the information from there.

Thursday, 2 August 2012

Integration Testing with NUnit and Entity Framework

This post gives a quick introduction to creating SQL CE dBs for performing Integration Tests using NUnit.

In the previous post Using NUnit and Entity Framework DbContext to programmatically create SQL Server CE databases and specify the database directory a basic way was shown of how to create a new dB (using Entity Framework's DbContext) programmatically.  This was used to generate a new dB for a test hosted by NUnit.

The subsequent post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to generate a SQL CE dB schema from an existing SQL Server database.

This post ties the previous ones together.  As mentioned in the first post, the reason for this is an attempt at what amounts to Integration Testing using NUnit.  I'm currently building a Repository and Unit Of Work abstraction on top of Entity Framework which will allow the isolation of the dB code (in fact it will isolate and abstract away most forms of data storage).  This means any business logic can be tested with a test-double that implements the Repository and UnitOfWork interfaces, which is straightforward Unit Testing.  The Integration Testing is to verify that the Repository and Unit Of Work implementations work correctly.
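
Those interfaces aren't shown in this post; the following is only a rough sketch of the kind of abstraction meant, with illustrative names and members rather than the actual implementation:

// Illustrative only: one possible shape for the Repository and Unit Of Work
// abstractions described above; the real interfaces may well differ.
public interface IRepository<T> where T : class
{
  void Add(T entity);
  void Remove(T entity);
  IQueryable<T> Query();
}

public interface IUnitOfWork : IDisposable
{
  IRepository<T> GetRepository<T>() where T : class;
  void Commit(); // e.g. implemented via DbContext.SaveChanges()
}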

The rest of the post isn't focused on these two patterns, though it may mention them.  Instead it documents my further experience of using NUnit to write tests that interact with a dB via Entity Framework.  The premise for this is that a dB already exists.

As such the approach to using Entity Framework is a hybrid of Database First and Code First, in that the dB schema exists and needs to be maintained outside of EF and also that EF should not generate model classes, i.e. allowing the use of Code First POCOs.  This is possible as the POCOs can be defined, a connection made to the dB and then the two are conflated via an EF DbContext.  It then seems that EF creates the model on the fly (internally compiles it) and as long as the POCO types map to the dB types then it all works as if by magic!
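
As a rough sketch of what that conflation looks like (the Road POCO and TestCtx context used later in this post follow this pattern, though TestCtx's definition isn't shown there, so the constructor below is an assumption):

// A POCO mapping to an existing table; [Key] comes from
// System.ComponentModel.DataAnnotations.
public class Road
{
  [Key]
  public string Name { get; set; }
}

// A derived context handed an existing connection; 'false' means the context
// does not take ownership of (i.e. dispose) the connection.
public class TestCtx : DbContext
{
  public TestCtx(DbConnection conn) : base(conn, false) { }

  // EF builds the model on the fly from properties exposing DbSet<T>.
  public DbSet<Road> Roads { get; set; }
}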

The advantage of doing it this way is that the existing dB is SQL Express based, but for the Integration Testing a new dB can be created when needed, potentially one per test.  In order to keep the test dBs isolated from the real dB, SQL Server Compact Edition (SQL Server CE V4) was used.  Therefore the requirement was for the EF code to be able to work with both SQL Express and SQL CE, with the primary definition of the schema taken from SQL Express.  It's not possible to use exactly the same schema as SQL CE only has a subset of the data-types provided by SQL Server.  However, the process described in the post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to create semantically equivalent SQL.


From this point onwards it's assumed that an SQL file to create the dB has been generated.  Now create a new C# class library project and using NuGet add Entity Framework, NUnit and SQL CE 4.0.  All my work has been with EF 4.3.1.  Following this, drag the Model1.edmx.sqlce file from the project used to generate it into the new project.  You may wish to rename it, e.g. to test.sqlce.


Creating the database

The post Using NUnit and Entity Framework DbContext to programmatically create SQL Server CE databases and specify the database directory showed how to create a new CE dB per-test using the EF DbContext to do the hard work.  A different approach is now taken, as the problem with creating a dB using DbContext is that in addition to creating any specified tables and indices etc. it also creates an additional table called '__MigrationHistory' which contains a description of the EF model used to create the dB.  The description of the problem caused by this is delayed until the "Why DbContext is no longer used to create the database" section.  Suffice to say, for the present, using the new mechanism avoids the creation of this table.

The code below is the beginnings of a test class.  It is assumed all the tests need a fresh copy of the dB, hence the creation is performed in the Setup method.  All this code does is create a SQL CE dB and then apply the schema.

[TestFixture]
public class SimpleTests
{
  const string DB_NAME = "test.sdf";
  const string DB_PATH = @".\" + DB_NAME;
  const string CONNECTION_STRING = "data source=" + DB_PATH;

  [SetUp]
  public void Setup()
  {
    DeleteDb();

    using (var eng = new SqlCeEngine(CONNECTION_STRING))
      eng.CreateDatabase();

    using (var conn = new SqlCeConnection(CONNECTION_STRING))
    {
      conn.Open();

      string sql = ReadSQLFromFile(@"C:\Users\Pete\work\Jub\EFTests\Test.sqlce");
      string[] sqlCmds = sql.Split(new string[] { "GO" }, int.MaxValue, StringSplitOptions.RemoveEmptyEntries);

      foreach (string sqlCmd in sqlCmds)
      {
        try
        {
          var cmd = conn.CreateCommand();
          cmd.CommandText = sqlCmd;
          cmd.ExecuteNonQuery();
        }
        catch (Exception e)
        {
          Console.Error.WriteLine("{0}:{1}", e.Message, sqlCmd);
          throw;
        }
      }
    }
  }

  public void DeleteDb()
  {
    if (File.Exists(DB_PATH))
      File.Delete(DB_PATH);
  }

  private string ReadSQLFromFile(string sqlFilePath)
  {
    using (TextReader r = new StreamReader(sqlFilePath))
    {
      return r.ReadToEnd();
    }
  }
}
The dB file (test.sdf) will be created in the current working directory.  As the test assembly is located in <project>\bin\debug, which is where the NUnit test runner picks up the DLL from, this is where the dB is created.  If a specific directory is required then the '.\' can be replaced with the required path.

The Setup method is marked with NUnit's [SetUp] attribute, meaning it will be invoked on a per-test basis, creating a new dB instance for each test.  The DeleteDb method could be marked with the [TearDown] attribute, but at the moment any previous dB is deleted before creating a new one.  It would be fine to do both as a belt and braces approach.  The reason I didn't make it the TearDown method is so that I could inspect the dB following a test if needed.

SQL CE does not support batch execution of SQL scripts, which is where it gets interesting as the SQL generated previously is in batch form.  The code reads the entire file into a string and determines each individual statement by splitting the string on the 'GO' command that separates each SQL command.

To help understand the SQL the following is the diagram of the dB I'm working with.  All fields are strings except for the Ids which are numeric.
Each of these commands is then executed.  The previously generated SQL (the SQL for the dB I'm working with is below) will not work completely out of the box.  The ALTER and DROP statements at the beginning don't apply as the schema is being applied to an empty dB, so these should be removed.  Interestingly the schema generation step for my dB seems to miss out a 'GO' between the penultimate and ultimate statements; I had to add one by hand.  Finally, the comments at the end prove to be a problem as there is no terminating 'GO'; removing these fixes the problem.  In the code above the exception handler re-throws the exception after writing out the details.  For everything to proceed the SQL needs modifying to execute perfectly.  If the re-throw is removed then the code will tolerate individual command failures, which in this context really just amount to warnings.

NOTE: Text highlighted in red has been removed and text in blue added.

-- --------------------------------------------------
-- Entity Designer DDL Script for SQL Server Compact Edition
-- --------------------------------------------------
-- Date Created: 07/29/2012 12:28:35
-- Generated from EDMX file: C:\Users\Pete\work\Jub\DummyWebApplicationToGenerateSQLServerCE4Script\Model1.edmx
-- --------------------------------------------------


-- --------------------------------------------------
-- Dropping existing FOREIGN KEY constraints
-- NOTE: if the constraint does not exist, an ignorable error will be reported.
-- --------------------------------------------------

    ALTER TABLE [RepComments] DROP CONSTRAINT [FK_RepComments_Reps];
GO

-- --------------------------------------------------
-- Dropping existing tables
-- NOTE: if the table does not exist, an ignorable error will be reported.
-- --------------------------------------------------

    DROP TABLE [RepComments];
GO
    DROP TABLE [Reps];
GO
    DROP TABLE [Roads];
GO

-- --------------------------------------------------
-- Creating all tables
-- --------------------------------------------------

-- Creating table 'RepComments'
CREATE TABLE [RepComments] (
    [CommentId] int IDENTITY(1,1) NOT NULL,
    [RepId] int  NOT NULL,
    [Comment] ntext  NOT NULL
);
GO

-- Creating table 'Reps'
CREATE TABLE [Reps] (
    [RepId] int IDENTITY(1,1) NOT NULL,
    [RepName] nvarchar(50)  NOT NULL,
    [RoadName] nvarchar(256)  NOT NULL,
    [HouseNumberOrName] nvarchar(50)  NOT NULL,
    [ContactTelNumber] nvarchar(20)  NOT NULL,
    [Email] nvarchar(50)  NULL
);
GO

-- Creating table 'Roads'
CREATE TABLE [Roads] (
    [Name] nvarchar(256)  NOT NULL
);
GO

-- --------------------------------------------------
-- Creating all PRIMARY KEY constraints
-- --------------------------------------------------

-- Creating primary key on [CommentId] in table 'RepComments'
ALTER TABLE [RepComments]
ADD CONSTRAINT [PK_RepComments]
    PRIMARY KEY ([CommentId] );
GO

-- Creating primary key on [RepId] in table 'Reps'
ALTER TABLE [Reps]
ADD CONSTRAINT [PK_Reps]
    PRIMARY KEY ([RepId] );
GO

-- Creating primary key on [Name] in table 'Roads'
ALTER TABLE [Roads]
ADD CONSTRAINT [PK_Roads]
    PRIMARY KEY ([Name] );
GO

-- --------------------------------------------------
-- Creating all FOREIGN KEY constraints
-- --------------------------------------------------

-- Creating foreign key on [RepId] in table 'RepComments'
ALTER TABLE [RepComments]
ADD CONSTRAINT [FK_RepComments_Reps]
    FOREIGN KEY ([RepId])
    REFERENCES [Reps]
        ([RepId])
    ON DELETE NO ACTION ON UPDATE NO ACTION;
GO
-- Creating non-clustered index for FOREIGN KEY 'FK_RepComments_Reps'
CREATE INDEX [IX_FK_RepComments_Reps]
ON [RepComments]
    ([RepId]);
GO

-- --------------------------------------------------
-- Script has ended
-- --------------------------------------------------

Getting the SQL into a state where it will run flawlessly is a little bit of a hassle, but given the number of times it will be used subsequently it's not a big job, well for a small dB anyway.  To verify that your dB has been created as needed, a quick and easy way to test is to comment out the call to DeleteDb() and, after a test has run, open the dB using Server Explorer within VS, i.e.



Using the dB in a test

Now that a fresh dB will be created for each test it's time to look at a simple test:

[Test]
public void TestOne()
{
  using (var conn = new SqlCeConnection(CONNECTION_STRING))
  using (var ctx = new TestCtx(conn))
  {
    ctx.Roads.Add(new Road() { Name = "Test" });
    ctx.SaveChanges();
    Assert.That(ctx.Roads.Count(), Is.EqualTo(1));
  }
}
Road in this case is defined as:

class Road
{
  [Key]
  public string Name { get; set; }
}

The first thing to note is that EF is not used to form the connection to the dB; instead one is made using the SQL CE specific classes.  Attempting to get EF to connect to a specific dB instance when not referring to a named connection string in the .config file is a bit of an art (I may write another entry about this).  However, EF is quite happy to work with an existing connection.  This makes for a good separation of responsibilities in the code, where EF manages the interactions with the dB but the control of the connection is elsewhere.

NOTE: It is likely that each test will require a connection and a context, hence it might make more sense to move the creation of the SqlCeConnection and the context (TestCtx in this case) to a SetUp method and, as these resources need disposing of, add a TearDown method to do that (a sketch of this follows).  TestCtx could also be modified to pass true to the DbContext constructor to give ownership of the connection to the context so that it will dispose of it when the context is disposed of.
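
The following is only a sketch of that suggested refactoring, not code from the original post (the fixture and test names are illustrative); it assumes the dB creation and schema application shown in the earlier Setup method still happen first:

// Hypothetical per-test connection/context management.
[TestFixture]
public class RoadTests
{
  const string CONNECTION_STRING = "data source=test.sdf";

  private SqlCeConnection _conn;
  private TestCtx _ctx;

  [SetUp]
  public void Setup()
  {
    // dB creation/schema application omitted; see the earlier Setup method.
    _conn = new SqlCeConnection(CONNECTION_STRING);
    _ctx = new TestCtx(_conn);
  }

  [TearDown]
  public void TearDown()
  {
    // Dispose in reverse order of creation. If TestCtx passed 'true' for
    // contextOwnsConnection, disposing the context alone would suffice.
    _ctx.Dispose();
    _conn.Dispose();
  }

  [Test]
  public void CanAddARoad()
  {
    _ctx.Roads.Add(new Road() { Name = "Test" });
    _ctx.SaveChanges();
    Assert.That(_ctx.Roads.Count(), Is.EqualTo(1));
  }
}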

I would have preferred to avoid having to define a specific derived context and instead use DbContext directly, e.g.
[Test]
public void TestTwo()
{
  using (var conn = new SqlCeConnection(CONNECTION_STRING))
  using (var ctx = new DbContext(conn, false))
  {
    ctx.Set<Road>().Add(new Road() { Name = "Test" });
    ctx.SaveChanges();
    Assert.That(ctx.Set<Road>().Count(), Is.EqualTo(1));
  }
}

However when SaveChanges() is called the following exception is thrown:

System.InvalidOperationException : The entity type Road is not part of the model for the current context.

This is because EF knows nothing about the Road type.  When a derived context is created for the first time I think EF performs reflection on any properties that expose DbSet<T>; these are the types that form the Model.  Another option is to create the model, optionally compile it and then pass it to an instance of DbContext.  This way involves a lot less code.
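
The following is a sketch (not from the original post) of that alternative: building and compiling the model explicitly with DbModelBuilder and handing it to a plain DbContext. It assumes EF 4.3-style APIs (DbModelBuilder lives in System.Data.Entity, DbCompiledModel in System.Data.Entity.Infrastructure):

// Hypothetical sketch: register Road with a DbModelBuilder so a plain
// DbContext can be used without a derived context class.
var builder = new DbModelBuilder();
builder.Entity<Road>();

using (var conn = new SqlCeConnection(CONNECTION_STRING))
{
  // Build() uses the connection to pick the provider (SQL CE here); the
  // compiled model could be cached and reused across tests.
  DbCompiledModel compiledModel = builder.Build(conn).Compile();

  using (var ctx = new DbContext(conn, compiledModel, false)) // false: context doesn't own the connection
  {
    ctx.Set<Road>().Add(new Road() { Name = "Test" });
    ctx.SaveChanges();
  }
}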

That's it.  The final section is just a footnote about the move away from using EF to create the dB.

Why DbContext is no longer used to create the database

As mentioned creating the dB using:
using (var ctx = new DbContext("bar.sdf"))
{
  ctx.Database.Create();
  // create schema etc.
}
causes the '__MigrationHistory' table to be created.  Assuming this method was used, later on when TestCtx was used to open the dB and perform an operation the following exception would be thrown:

System.InvalidOperationException : The model backing the 'DbContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269).
This is because the context used to create the dB was a raw DbContext (as per the previous post) whereas the dB was then accessed via TestCtx. If the context used to create the dB is also changed to TestCtx then this problem goes away.
However, given that the original dB is not intended to be created nor maintained (via Code First Migrations) by EF, using the non-EF approach to dB creation removes EF from the picture completely.

Wednesday, 27 June 2012

Generating a SQL Server CE database schema from a SQL Server database using Entity Framework

In a previous entry I described how to programmatically create (& destroy) a SQL CE dB for integration testing using NUnit.  Since getting that working I ran into a couple of other problems which I've more or less solved, so I thought I'd write those up.  To begin with, though, this is a prequel post describing how to obtain the SQL script to create the SQL CE dB.

If you happen to be working exclusively with CE then you'll already have your schema file.  In my case I'm using SQL Express and, as this is experimental work, I created my dB by hand.  However, using EF it's pretty easy to obtain the schema and have the EF wizard generate the CE schema.  This is important as there are differences in the dialect of SQL used by SQL Express and SQL CE, and it's easier to have a tool handle those, though it doesn't handle all of them.

The basic flow is to generate an EF model (EDMX) file from the existing SQL Express database and then use the 'Generate database from model' functionality.  It is at this point that the target SQL dB can be chosen, i.e. SQL Server, SQL Server CE or some others.

Creating a model requires adding a 'New Item' of type 'ADO.Net Entity Data Model' to a VS project, so first a new dummy project needs creating.  This is where it gets a little complicated as not any type of project will do.  I'm working with CE 4 and require a schema for that version of the dB (though creating one for 3.5 works, I like to keep things as close to ideal as possible).  Due to this constraint it is necessary to choose a Web type project, as for some reason the VS2010 integration provided by EF only supports the generation of CE 4 dBs for Web projects.  If a simple C# Windows Console project is selected then you're limited to CE 3.5.  Thus the simplest project type is the 'ASP.Net Empty Web Application' as shown below.


Having done this, next add a new item of type ADO.Net Entity Data Model as below. NOTE: The project will have to reference the Entity Framework assemblies.  The easiest way to do this (& the one most people are probably using) is to use the NuGet package.


Then follow the wizard.


Selecting "Generate from database".


Choose your SQL Express (or SQL Server) dB but uncheck the "Save entity connection settings in Web.Config as:" option, as we're converting to SQL CE and so want to minimize anything related to other types of SQL Server.


Finally select the SQL elements you require.  In this example only the existing tables were selected.  As this is generating the EF model from an existing database, no SQL file is generated, just the model, for which the diagram is shown, i.e.


The next phase is to generate the SQL from the model (which was generated from the hand-crafted dB) while making sure the SQL that's generated is compliant with SQL CE.

To generate the schema, right-click and select "Generate Database from Model..."


This brings up the "Generate database" wizard which is very similar to the previously used "Entity Data Model" wizard used to create the model.  From here choose the "New Connection" option which pops up another set of dialogs.  On the first choose the type of data source as "Microsoft SQL Server Compact 4.0".

Clicking on continue then leads to the next dialog where you need to create a dB.



Ok-ing this leads back to the "Generate database wizard".


This time check the "Save entity connection settings in Web.Config" checkbox.  This information will be useful later (to be covered in a different post).  Clicking "Next", the SQL is generated and presented in the wizard.


This can be copied & pasted directly from here or pressing "Finish" will save the SQL to the file indicated at the top of the dialog box.  This file is added to the project.  The following prompt will appear when "Finish" is pressed.
 

This doesn't really matter as this is a throw-away project, but having the updated schemas may be useful so go with "Yes".

The SQL can now be used to configure an empty SQL CE 4.0 database.  The easiest way is to open the SQL file, right-click and select the "Execute SQL" menu item.


This brings up the SQL Server dialog from which, if "New Database" is selected, a CE 4 one can be specified.


Having specified a location and pressed "Ok" the SQL script is executed.  As can be seen below this is not without errors.  However, this isn't anything to worry about as the errors are to do with dropping tables and indices that don't currently exist, it being a newly created dB.  Performing the same steps again, but skipping the creation of the dB file as it already exists, sees the SQL script execute flawlessly.



The final picture shows the newly created database in VS2010's Server Explorer demonstrating that the tables were indeed created.


The basis for this post is my experimentation with using NUnit to programmatically test some dB-based functionality.  If a single instance of a database suffices for all your tests then you can execute the SQL by hand as above and follow these steps.  In my case I want a fresh database per test, so I need to automate the running of the SQL script combined with the creation and destruction of the underlying database.  The creation and deletion aspects were covered in a previous post but the next step will have to wait until a later one.