Find It! App – Success at SUN ‘n FUN

The Find It! mobile app for SUN ‘n FUN 2016 (#SNF2016) was a big success, with over 3,000 downloads and great ratings and reviews. Find It! received 4.7 stars in the Google Play Store and a perfect 5.0 stars in the Apple App Store…

Log Parser

Log parser is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows® operating system such as the Event Log, the Registry, the file system, and Active Directory®.

We primarily use it for the analysis of IIS logs, but it could be used for anything, including our log4net logs.


Log Parser is a command-line utility.

To make it easier to work with, Microsoft developed a GUI for Log Parser called Log Parser Studio.

Basic Usage

Log Parser Studio needs access to the log files.  Since log files are produced and stored on the web server, you either need to install Log Parser Studio on the web server or download logs to your workstation.

  1. Open Log Parser Studio
  2. You will see a library of recipe queries.  Scroll down to see the IIS queries.
  3. Double click on the name of a query.  e.g. “IIS: HTTP Status Codes by Count”
  4. Click the folder open button (mustard color) in the toolbar
  5. Choose the folder where IIS logs are located
  6. Select files/folders
  7. Click the execute button (red exclamation) to run the query.
  8. Results will be displayed and can be exported

Advanced Usage

As you can see from the queries in the library, the query language is SQL-based.  Experiment with writing your own queries to get the information that you need, just as you would in SQL.  And just as you would in SQL, start with an exploratory query.
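For example, a simple exploratory query against IIS logs might look like this (cs-uri-stem is a standard W3C log field, and ‘[LOGFILEPATH]’ is the token Log Parser Studio substitutes with the files you selected; adjust field names to your log format):

```sql
/* Top 10 requested URLs by hit count */
SELECT TOP 10 cs-uri-stem AS Url, COUNT(*) AS Hits
FROM '[LOGFILEPATH]'
GROUP BY cs-uri-stem
ORDER BY Hits DESC
```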


Then start experimenting.

Google for other examples.  Chances are somebody on the Internet has already written the query that you need.

Client Code Standards

Everyone has their own style of programming, and I am not attempting to change that here.  That being said, some practices lead to unreadable and unmaintainable code.  Below is a list of coding practices that I often see employed and that are generally a bad idea.  For each practice I will describe what is being done, why it is wrong, and give an example of the correct way of doing things.  Bear in mind that I am guilty of some of these myself, so I think we all have room to grow in this regard.

Nested Ternary Operators

The ternary operator can be a quick way to conditionally use different values in an expression.  The basic cases are useful and easy to understand, but there is significant potential for abuse.  The easiest abuse case to point out is a ternary operation nested within another ternary operation.  When this happens, it is very difficult to understand what the original programmer intended (even if you are the original programmer), which makes troubleshooting complicated.

The wrong way:

var result = conditional1 ? conditional2 ? value1 : value2 : value3;

I have seen much worse than this but even this is difficult to understand. A better way, still using the ternary operator:

var interimResult = conditional2 ? value1 : value2;
var result = conditional1 ? interimResult : value3;

There are other, more efficient ways of doing this, but you get the point.
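As a self-contained sketch (the conditionals and values here are placeholders, not from a real codebase), here is the split-ternary version next to an equivalent plain if/else, which reads even more clearly:

```javascript
// Placeholder conditions and values for illustration only.
var conditional1 = true;
var conditional2 = false;
var value1 = "a", value2 = "b", value3 = "c";

// The nested ternary, split into a named intermediate step:
var interimResult = conditional2 ? value1 : value2;
var result = conditional1 ? interimResult : value3;

// The same logic with plain if/else:
var result2;
if (!conditional1) {
    result2 = value3;
} else if (conditional2) {
    result2 = value1;
} else {
    result2 = value2;
}
// With these placeholder values, both result and result2 are "b".
```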

Nested Anonymous Functions

Anonymous functions are a shortcut in JavaScript that can be useful when a function is used in only one place (such as a callback). Without careful attention to formatting, it can be difficult to tell where one function ends and the next begins. The problem multiplies when anonymous functions are nested, or when you pass multiple functions as parameters to another function.

The wrong way:

var result = getResult(
	function(param1, param2) {
		var innerResult = getInnerResult(
			function(innerParam1) {
				// ...do stuff, return something
			});
		return innerResult;
	});

This is an over-simplified example, but you can see how things can become confusing. This example is formatted far better than is generally the case, and it would still be unclear where the responsibility of each function begins and ends. I understand that nesting functions is the only way to encapsulate a private scope (closure) in JavaScript, but I find that nested functions are often used as a convenience rather than to create a closure.

A better way:

function function1(innerParam1) {
	// ...do stuff, return something
}

function function2(param1, param2) {
	var innerResult = getInnerResult(function1);
	return innerResult;
}

var result = getResult(function2);

Both are the same amount of code, just separated differently and the second way is far easier to understand and maintain.

Overly Abbreviated Naming

Abbreviating variable names is an old tradition dating back to the early days of programming, when every character mattered. Nowadays it doesn’t, so when you abbreviate a variable or function name you are trading code readability for the ability to type fewer characters. While this sounds like a good idea up front, it often comes back to bite you later. Additionally, most IDEs will now find the variable you are looking for after you type the first few characters, so speed is no longer a good excuse.

The wrong way:

//Uh, what does dyh stand for again?
var dyh = result;

If you were lucky enough to be the original coder of the line above then you have a chance of figuring out what it means. Otherwise you don’t without either guessing or asking the original programmer. This problem is worse in JavaScript because you don’t have strict typing to help you out.

The right way:

var driverYearHeat = result;

Now it is obvious what we are talking about. Bear in mind, I am not saying that all abbreviation is bad. There are certain common abbreviations that are acceptable, and abbreviations where it is still obvious what you are referring to are fine as well. Below is an incomplete list of common abbreviations:

  • i – Commonly used as the index of a for loop iteration
  • obj – Object
  • num – Number (as in number of, e.g. numDrivers)
  • tpl – Template
  • impl – Implementation
  • j – The inner index of a for loop iteration

If you are unsure, just spell it out.  It’s not that big of a deal and will probably save you time later.

Using jQuery instead of Angular Directives

jQuery is great, but it should not be invoked directly on a DOM element when in Angular.  There are a few reasons for this, but the main ones are ease of unit testing and reusability.  Instead, directives should be used.

Now, this is not to say that jQuery functions should not be used, just that you should not target a DOM element directly by ID, class, or tag.  The $element parameter of a directive contains the DOM element that the directive encapsulates.  This is a jQuery-wrapped element, and as such all jQuery functions can be invoked from it.

The wrong way:

<div id="myDiv" />

$("#myDiv").hide();


The right way:

<div id="myDiv" hide="true" />

	angular.module('myApp').directive('hide', [function () {
		return {
			restrict: 'A',
			controller: ['$scope', '$element', '$attrs',
				function ($scope, $element, $attrs) {
					// jQuery functions are available on $element
					$element.hide();
				}]
		};
	}]);
Again, an oversimplified example. Note that the jQuery example is a lot less code, but you would need to invoke it explicitly everywhere you needed this functionality. With the Angular version, you just use the directive and get the functionality without additional lines of code.


I’m sure as we mature as a company and AngularJS developers our coding standards will mature as well.  As such, I expect that this list will grow and change.

Check Security Authorization

Because each system we build has custom security requirements, there is no way to implement one security model and apply it to all systems.  Instead the framework supports extensibility points as part of the command and query architectures.

But before you go there, you’ll need to flesh out the requirements for security and build the necessary database tables.

Some important questions to answer regarding security requirements:

  • What are the roles?
  • Do users need to be granted/denied access to specific types of data? (i.e. can all users access all entities?)
  • Do users need to be granted/denied access to specific data? (e.g. have access to one division, but not another)
  • What are the different rights that can be assigned to each role? (very convenient if you can associate them one-to-one with business commands)
  • Do different roles need to be disallowed from different client apps?
  • Who will manage security roles?  How will they do this?
  • Who will manage rights?  How will they do this?

After understanding that, the next step would be to create the necessary database tables to support those requirements, populate them with data, and build some queries or stored procedures to allow you to ask the necessary questions of the security system.  After all that, then we can wire up to the security system in the command and query architecture.

There are two different ways to apply security.

  1. Apply security on a per command basis
  2. Apply security on a per entity basis

Apply security on a per command basis

For Add, Edit, and Delete operations you will apply security on a per command basis.  In this case you want to check security ONCE per command execution.

For each of your commands that needs security, override the following method:

public override bool CheckSecurity()

Fill in the method guts with a query that checks whether the user has permission to execute this command, OR has permission to execute this command on this entity.  For example, you might have a system that allows deleting users.  In one scenario, only admins can delete users; in that case, simply check whether the currently logged-in user is an admin in the CheckSecurity method.  In another scenario, only the user can delete themselves; in that case, check whether the user about to be deleted is the currently logged-in user.  In both scenarios we are checking security ONCE per command execution.
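A minimal sketch of the admin-only scenario (SecurityQueries.IsAdmin and CurrentUser are hypothetical names standing in for the queries and user context you built against your security tables, not part of the framework):

```csharp
public override bool CheckSecurity()
{
    // Only admins may execute this command (e.g. deleting a user).
    // SecurityQueries.IsAdmin is a hypothetical query against the
    // security tables described above.
    return SecurityQueries.IsAdmin(CurrentUser.UserId);
}
```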

In theory you could apply this type of security checking on Get methods too since they operate on exactly one entity, however, see below for a better option.

Apply security on a per entity basis

For Get and GetAll, we are performing read operations on one or more records in the database.  It is neither efficient nor practical to do that checking in the command layer.  Instead, checking is done in the query layer.

You will need to extend the code-generated queries using partial classes.  These partial classes should be placed in the /Queries/Extended folder in the Model project.  You will typically override the GetAll method to include filters that apply security correctly.  Note that the Get method internally calls the GetAll method, so you can limit your security settings to a single location and read security will be applied in a uniform way everywhere.
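A sketch of such an extension (the Employee entity, the DivisionId filter, and CurrentUser are hypothetical names; your entities and security columns will differ):

```csharp
// /Queries/Extended/EmployeeQueries.cs
public partial class EmployeeQueries
{
    public override IQueryable<Employee> GetAll()
    {
        // Filter the base query so read security is applied in one place.
        // Get() calls GetAll() internally, so it is covered as well.
        return base.GetAll()
                   .Where(e => e.DivisionId == CurrentUser.DivisionId);
    }
}
```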



Deploying files using xcopy

Each job that needs to deploy files to a web server does this through a batch file.  Inside that batch file is an xcopy command.

Here is an example

xcopy /exclude:..\Scripts\excludes.txt /E /F /Y /M .\Derby\Derby.Website\*.* \\SBD-WEB-1\inetpub\Dev1.derby.reurgency.net\Derby

NOTE: the current working directory is always the folder called “Working” in our standard folder structure.

You will notice that the xcopy command uses a variety of switches that require some explanation and some WARNINGS.  Look at documentation here:

Here are my notes regarding these switches:

/exclude allows you to exclude certain files. Those file names are listed in the excludes.txt file. At a minimum this should contain one entry for web.config.
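For example, a minimal excludes.txt might contain just:

```
web.config
```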

/E copies any subfolder, even if empty.  This ensures that the proper structure exists on the web server.

/F displays the full source and destination file names while copying.  This is helpful when looking at logs.

/Y overwrites existing files without prompting.  Important not to prompt since this runs automatically.

/M copies files with the archive attribute set.  This switch turns off the archive attribute.

That last switch, /M, works great as long as you have only one deployment script for this website using it.  For example, if you have both a dev1 project and a test1 project, you will have a problem with that switch.

Here’s the way it works with dev1:

  1. gets latest files from SVN. Any new or modified files get their archive bit set.  This is nothing special; it is a basic feature of the OS file system.
  2. builds DLLs.  All DLLs built have their archive bit set.  Again, just basic file system behavior.
  3. runs the deploy batch file that contains the xcopy.  The xcopy uses the /M switch, which tells it to copy only the changed files (archive bit set).  It also resets the archive bit so the file is not seen as changed the next time the job runs.

This process makes deployments much faster.  However, it only works if there is one entity clearing that archive bit.  As soon as you have another environment (e.g. test1), you cannot use that trick for the other environments.  The issue is that two jobs stomp on each other’s record of what has changed.  After a few runs, they will BOTH have out-of-sync files.

Therefore, here is best practice:

  • Dev1 will always use the /M switch because those deploys happen more frequently and we benefit the most from only copying changed files.
  • All other environments (Test1, Demo1, etc) will NOT use the /M switch because those jobs are run less frequently and it is OK if all files get copied each time.



I was doing some refactoring of my randomization method (which I’ll cover in a subsequent post). My new method makes two separate updates to the database. This was causing an “OptimisticConcurrencyException” to be thrown on the second update. From some searching on the Internet, it appears this error occurs when the database context believes it has “dirty” data due to a previous update.

I was able to resolve this issue with one line of code in my business-layer class.
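Based on the description that follows, the fix was presumably a SaveChanges call along these lines (the objectContext variable name is illustrative):

```csharp
// Accept all changes after saving so the context does not treat the
// first update's data as "dirty" when the second update runs.
objectContext.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);
```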

The IntelliSense states that “AcceptAllChangesAfterSave” resets change tracking on the ObjectStateManager. I’m not sure what all of that means, but it resolved my issue.

Code First Migrations

Working within the .NET Code First environment has many advantages. Code First gives you strongly typed classes that mirror the database environment, because the actual database environment was built from those same strongly typed classes. For me, the biggest advantage of Code First is that it writes and executes all of the database-definition SQL for you. From creating the tables, to setting the primary keys, to establishing relationships between tables through foreign keys, Code First handles it all (or at least most of it). This is a HUGE time saver. Consider that if you did it the “old fashioned” way, creating the tables, keys, etc. first using SQL, you would still end up creating strongly typed classes within your project. Code First saves you that hassle.

Joe already went over the model creation portion of Code First. This blog post will cover the database migration portion of Code First.

After you’ve created your model, you need to create a database migration. This can be done by utilizing the Package Manager Console within Visual Studio 2012. The Package Manager Console can be accessed by going to “TOOLS > Library Package Manager > Package Manager Console”. After this console has opened, you can pin it to your work environment for easier access later. From the Package Manager Console (the “PM>” prompt), type “add-migration MigrationName”, where MigrationName is the name you want to give to this particular migration. If this is the very first migration, something like “InitialDatabaseCreation” might be appropriate. This command will create a C# class within the “Migrations” folder of your Model project. The class will have the name of your migration and will contain two methods: “up” and “down”. The “up” method upgrades the database with your latest model changes. The “down” method can be used to roll back the migration’s changes after they have been committed.

The “up” method will contain code to create the various tables and keys based upon your entity model.

As an example, a migration might create a table called “DriverYears” with a primary key on the “DriverYearId” field, four foreign keys to other tables, and (added automatically by the “add-migration” command) indices on those foreign-key fields. The generated code is a combination of C# and lambda-based helper calls.

The “down” method contains code to undo any changes created by the “up” method. Most likely it will contain code to drop tables from the database.

In short, the “down” method will remove any database objects created by the migration’s “up” method.
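As a rough sketch, an “up”/“down” pair for the DriverYears table described above might look like this (column and foreign-key names beyond DriverYearId are illustrative; the actual generated code will differ):

```csharp
public partial class InitialDatabaseCreation : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "dbo.DriverYears",
            c => new
                {
                    DriverYearId = c.Int(nullable: false, identity: true),
                    DriverId = c.Int(nullable: false),  // illustrative FK column
                    YearId = c.Int(nullable: false),    // illustrative FK column
                })
            .PrimaryKey(t => t.DriverYearId)
            .ForeignKey("dbo.Drivers", t => t.DriverId)
            .ForeignKey("dbo.Years", t => t.YearId)
            .Index(t => t.DriverId)
            .Index(t => t.YearId);
    }

    public override void Down()
    {
        DropIndex("dbo.DriverYears", new[] { "YearId" });
        DropIndex("dbo.DriverYears", new[] { "DriverId" });
        DropForeignKey("dbo.DriverYears", "YearId", "dbo.Years");
        DropForeignKey("dbo.DriverYears", "DriverId", "dbo.Drivers");
        DropTable("dbo.DriverYears");
    }
}
```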

If you were to connect to your database server at this point, you would not see a database pertaining to your project. The “add-migration” command only creates the migration; you still need to execute it using the “update-database” command. If you look at the C# migration code, you will not see any actual code to create the database. This is handled behind the scenes by the “update-database” command. The “update-database” command will look to see if the database used in your connection string exists on the database server. If there is no database with the name specified in the connection string, the “update-database” will create the database with all of the default values (database/log file name/location, language, settings, etc.). If you need to change any of these default values, you will either have to create the database manually or change these settings after the “update-database” command has been executed.

We are using the convention “project_(Local)” for the name of our database, where project is the name of your Visual Studio project. Using this convention makes it unnecessary for different developers to constantly change the connection string when getting the latest version of the project from SVN.
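Following that convention, the Derby project’s connection string might look something like this (the server name and security settings are illustrative):

```xml
<connectionStrings>
  <add name="DerbyContext"
       connectionString="Data Source=(local);Initial Catalog=Derby_(Local);Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```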

From the prompt within the Package Manager Console, type “update-database” and hit “enter” (you can add the “-verbose” switch to see exactly what SQL scripts the “update-database” command is executing). This command will create the database (if necessary) and perform everything that is within any “up” methods for migrations that have yet to be executed. If you connect to your database server now, you will see your database with a table for every entity you created within your model.

Any changes to your model after the “update-database” command has been executed will require the creation of another Code First database migration. Because you may have multiple developers working on the same project and making changes to the entity model, you may run into a case where your database is out of sync with the model. In fact, it may be several migrations behind the current model. You can see what state your database is in by looking at the system table “__MigrationHistory”. This table shows which Code First migrations have been executed against the database.
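For example, you can inspect that table with a simple query:

```sql
SELECT MigrationId, ProductVersion
FROM dbo.__MigrationHistory
ORDER BY MigrationId;
```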

With multiple developers working on the same project, there can be instances where the Code First model believes that it is out of sync with the database. The Code First model will throw an error stating that a migration needs to be executed to bring the database in line with the model (need to get actual error from desktop). This can occur even if no changes to the Code First model have been made. This error can be frustrating because it prevents you from executing your project. You can try to alleviate this error by executing the “update-database” command. Sometimes, however, this will not solve the error. Another option is to completely delete the database and run “update-database”. This will usually fix the problem, but it is not ideal because you will lose any data you had in the database and have to add it again. The best solution is to prevent the error by telling the Code First model not to check whether it is in sync with the database, via a setting in your “Configuration.cs” file.

An even better solution is to add a Boolean property to the configuration file and only apply that setting when the property is set.
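One way to express this (DerbyContext and the skipModelCheck flag are illustrative names; the exact statement may differ in your project):

```csharp
// In Migrations/Configuration.cs (illustrative):
// setting the initializer to null tells EF not to check
// whether the model is in sync with the database.
bool skipModelCheck = true;  // flip to false to restore the check

if (skipModelCheck)
{
    Database.SetInitializer<DerbyContext>(null);
}
```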

As was stated earlier, any changes to the Code First model after an “update-database” has been executed will require a new migration. It is a best practice to create new migrations consisting of the smallest logical change to the model. Name the new migration after the most logical thing that was changed and run the “update-database” command. Then make any additional model changes, create a new migration, and run the “update-database” command again. This will make it easier to see which migrations added what specific elements to the model and database.

There are certain database features that might be easier to implement directly against the database instead of through the Code First entity model. A few features where this may apply are indices, constraints, functions, and stored procedures. As mentioned in the post about Code First model creation, views should be created directly on the database with a corresponding entity created within the Code First model. However, within the migration, the C# code that would create a table for the view “entity” needs to be removed or commented out; otherwise the migration will try to create a table that corresponds to the view “entity” you’ve created in C#.

Overall Code First is an excellent way to create a data model and corresponding database with database objects. It has proven to be a big time saver on the tedious task of database creation and modification.

How to use Reurgency.Scaffold

To generate code using Reurgency.Scaffold, simply use the “scaffold” cmdlet in the Package Manager Console.


scaffold <TemplateName> <EntityName>

This will execute the desired T4 code generation templates for the given entity. It will do this in the project that is currently set as the “Default Project” in the package manager console.

The following TemplateNames are currently available:

  • reQuery
  • reCommands
  • reApiController

MAKE SURE YOU FIRST CHANGE THE DEFAULT PROJECT IN PACKAGE MANAGER CONSOLE BEFORE YOU RUN THE SCAFFOLD COMMAND. Otherwise, code generation will occur in the wrong project.  If you do generate in the wrong project, just delete the files that were created and try again in the correct project.

Quick Reference

Query Layer

scaffold reQuery MyApp.Model.Entities.Employee

Business Layer

scaffold reCommands MyApp.Model.Entities.Employee

Services Layer

scaffold reApiController MyApp.Model.Entities.Employee

Complete Reference

Query Layer



Package Manager Console Command

scaffold reQuery <EntityName>

Example Usage

scaffold reQuery MyApp.Model.Entities.Employee

Output Path

  • MyApp.Model\Queries\Generated\<EntityName>Queries.cs


  • MyApp.Model\Queries\Generated\EmployeeQueries.cs

Business Layer



Package Manager Console Command

scaffold reCommands <EntityName>

Example Usage

scaffold reCommands MyApp.Model.Entities.Employee

Output Paths

  • MyApp.Business\Commands\<PluralEntityName>\Add.cs
  • MyApp.Business\Commands\<PluralEntityName>\Count.cs
  • MyApp.Business\Commands\<PluralEntityName>\Delete.cs
  • MyApp.Business\Commands\<PluralEntityName>\Edit.cs
  • MyApp.Business\Commands\<PluralEntityName>\Get.cs
  • MyApp.Business\Commands\<PluralEntityName>\GetAll.cs


  • MyApp.Business\Commands\Employees\Add.cs
  • MyApp.Business\Commands\Employees\Count.cs
  • MyApp.Business\Commands\Employees\Delete.cs
  • MyApp.Business\Commands\Employees\Edit.cs
  • MyApp.Business\Commands\Employees\Get.cs
  • MyApp.Business\Commands\Employees\GetAll.cs

Services Layer



Package Manager Console Command

scaffold reApiController <EntityName>

Example Usage

scaffold reApiController MyApp.Model.Entities.Employee

Output Path

  • MyApp.Services\WebApi\Controllers\<PluralEntityName>Controller.cs


  • MyApp.Services\WebApi\Controllers\EmployeesController.cs

Creating and Publishing NuGet Packages to reUrgency NuGet repository

reUrgency hosts a private NuGet repository that we can use to publish and consume NuGet packages that we create.  NuGet packages are a great way to share components across projects.

Advantages of NuGet over other ways of sharing

  • Easy to install using the NuGet Package Manager
  • Versioned – each NuGet package has a version associated with it and therefore it’s easy to see which version of a package you are using.
  • Upgrade when you want – unlike sharing via SVN Externals, each project gets to decide when/if that project upgrades to a version of a package.
  • Install and configure all kinds of things – when a NuGet package is installed a powershell script runs that can perform any setup action necessary.  This provides a lot of power and automation.
  • Enforces decoupling – because NuGet packages are completely separate projects, it is very difficult to accidentally introduce tight coupling.  Therefore the code is more reusable.
  • Includes dependencies – NuGet packages include a set of dependent DLLs with associated versions.  This ensures that all dependent NuGet packages are installed first and that they are the proper version.  You can also use this feature to install lots of NuGet packages all at once.  i.e. you could create a package that simply has dependencies in a bundle for easily setting up projects.

reUrgency Repository

We have created our own private repo. You must be on the reUrgency VPN to access it.

NOTE: ignore the instructions on that page on how to publish. We have a much simpler method documented below.

In VS2012, in the package manager settings, add the following URL to the list of Package Sources:

Our repo is already set up, but in case we ever need to set up another one in the future, here is documentation on how to set up a private repo.

Anatomy of a NuGet Package

The most useful reading you can do is to read about the anatomy of a NuGet package.  Once you understand that, you can pretty easily extrapolate what you need to put where and how to diagnose problems.

There’s a lot of information in that document.  Make sure you read it and understand it, but see below for simple steps.

Setup a NuGet Package

Step 1

Install Visual Studio Template for NuGet projects

Step 2

Follow instructions here for creating a new package

Step 3

Create a deploy.ps1 file in the root and write the xcopy deployment code. Example:

xcopy /y C:\dev\RE\src\Reurgency2\trunk\Reurgency.Common.Packager\*.nupkg \\re-source-1.reurgency.net\packages

See next section for explanation of this non-standard (but much simpler) publishing process.


Simplified Publishing Process

Normally NuGet packages are published via an HTTP file POST.  This was a little tricky to configure, so I decided to use a simpler method that is less fragile and allows anybody to manually upload new NuGet package versions.

NuGet packages are just files in a special folder hosted on a website.  There are no database or registry settings.  As soon as the .nupkg file is dropped in the proper folder on the web server, it shows up in the NuGet repo.

So taking advantage of this simple model, I opted for a super simple xcopy deployment script.

xcopy /y C:\dev\RE\src\Reurgency2\trunk\Reurgency.Common.Packager\*.nupkg \\re-source-1.reurgency.net\packages

As you can see from that example, the UNC path to the repo folder is \\re-source-1.reurgency.net\packages.


NOTE: you may have to change your VPN setting to allow file share access.

Publish a new version of a NuGet Package

As you make changes to your package and its dependencies, you will want to periodically publish a new version to the reUrgency repo so others can use it.

Step 1

Increment the version number in Package.nuspec

Step 2

Compile your package

Step 3

Run deploy.ps1

NOTE: By default, Visual Studio will open a PowerShell script for editing (not run it).  To run it, right-click on the file and choose “Open With…”.  Click “Add…” to add a new action.  PowerShell.exe is located at “C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe”.

Example Packages

If you would like to see an example of a working NuGet package, look here in SVN:


here on disk