Check Security Authorization

Because each system we build has custom security requirements, there is no way to implement one security model and apply it to all systems.  Instead, the framework provides extensibility points as part of the command and query architectures.

But before you go there, you’ll need to flesh out the requirements for security and build the necessary database tables.

Some important questions to answer regarding security requirements:

  • What are the roles?
  • Do users need to be granted/denied access to specific types of data? (i.e. can all users access all entities?)
  • Do users need to be granted/denied access to specific data? (e.g. a user has access to one division, but not another)
  • What are the different rights that can be assigned to each role? (it is very convenient if you can associate them one-to-one with biz commands)
  • Do different roles need to be disallowed from different client apps?
  • Who will manage security roles?  How will they do this?
  • Who will manage rights?  How will they do this?

After understanding that, the next step would be to create the necessary database tables to support those requirements, populate them with data, and build some queries or stored procedures that allow you to ask the necessary questions of the security system.  After all that, we can wire up the security system in the command and query architecture.

There are two different ways to apply security.

  1. Apply security on a per command basis
  2. Apply security on a per entity basis

Apply security on a per command basis

For Add, Edit, and Delete operations you will apply security on a per command basis.  In this case you want to check security ONCE per command execution.

For each of your commands that need security do the following:

public override bool CheckSecurity()
{
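    // Fill in the guts with a security check (see the example below).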
}

Fill in the method guts with a query that checks whether the user has permission to execute this command OR has permission to execute this command on this entity.  For example, you might have a system that allows deleting users.  In one scenario, only admins can delete users.  In that case, simply check whether the currently logged in user is an admin in the CheckSecurity method.  In another scenario, only the user can delete themselves.  In that case, check whether the user that is about to be deleted is the currently logged in user.  In both scenarios we are checking security ONCE per command execution.
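
As a sketch of the first scenario, assuming hypothetical helper names (a SecurityQueries class and a CurrentUserId property, neither of which is part of the framework), the filled-in method might look like this:

public override bool CheckSecurity()
{
    // Only admins may delete users; IsUserInRole is a hypothetical
    // query against the security tables described above.
    return SecurityQueries.IsUserInRole(this.CurrentUserId, "Admin");
}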

In theory you could apply this type of security checking to Get methods too, since they operate on exactly one entity; however, see below for a better option.

Apply security on a per entity basis

For Get and GetAll, we are performing read operations on one or more records in the database.  It is neither efficient nor practical to do that checking in the command layer.  Instead, checking is done in the query layer.

You will need to extend the code generated queries using partial classes.  These partial classes should be placed in the /Queries/Extended folder in the Model project.  You will typically override the GetAll method to include filters that apply security correctly.  Note that the Get method internally calls the GetAll method, so you can limit your security filtering to a single location and read security will be applied uniformly everywhere.
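
A minimal sketch of such an extended partial class, assuming the framework's query base class exposes a virtual GetAll and a hypothetical CurrentUserDivisionId property:

public partial class EmployeeQueries
{
    public override IQueryable<Employee> GetAll()
    {
        // Filter reads once here; Get calls GetAll internally, so
        // single-record reads pick up the same security filter.
        return base.GetAll()
                   .Where(e => e.DivisionId == this.CurrentUserDivisionId);
    }
}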


Deploying files with CruiseControl.net using xcopy

Each CruiseControl.net job that needs to deploy files to a web server does this through a batch file.  Inside that batch file is an xcopy command.

Here is an example:

xcopy /exclude:..\Scripts\excludes.txt /E /F /Y /M .\Derby\Derby.Website\*.* \\SBD-WEB-1\inetpub\Dev1.derby.reurgency.net\Derby

NOTE: the current working directory is always the folder called “Working” in our standard folder structure.

You will notice that the xcopy command uses a variety of switches that require some explanation and some WARNINGS.  See the documentation here: http://support.microsoft.com/kb/289483

Here are my notes regarding these switches:

/exclude allows you to exclude certain files.  The file names to skip are listed in the excludes.txt file (see the example below).  At a minimum this should contain one entry for web.config.
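
For example, a minimal excludes.txt with the one entry mentioned above:

web.config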

/E copies any subfolder, even if empty.  This ensures that the proper structure exists on the web server.

/F displays the full source and destination file names while copying.  This is helpful when looking at CC.net logs.

/Y overwrites existing files without prompting.  It is important not to prompt since this runs unattended.

/M copies files with the archive attribute set.  This switch turns off the archive attribute.

That last switch, /M, works great as long as only one deployment script for the website uses it.  I.e., if you have both a dev1 CC.net project and a test1 CC.net project, you will have a problem with that switch.

Here’s the way it works with dev1:

  1. CC.net gets the latest files from SVN.  Any new or modified files get their archive bit set.  This is not a CC.net thing; it is a basic feature of every OS file system.
  2. CC.net builds DLLs.  All DLLs built have their archive bit set.  Again, just basic file system behavior.
  3. CC.net runs the deploy batch file that contains the xcopy.  The xcopy uses the /M switch, which tells it to only copy the changed files (archive bit set).  It also resets the archive bit so the file is not seen as changed the next time the job runs.

This process makes deployments much faster.  However, it only works if there is exactly one entity clearing that archive bit.  As soon as you have another environment (e.g. test1), you cannot use that trick for the other environments.  The issue is that two jobs stomp on each other’s record of what has changed.  After a few runs, they will BOTH have out-of-sync files.

Therefore, here is the best practice:

  • Dev1 will always use the /M switch because those deploys happen more frequently and we benefit the most from only copying changed files.
  • All other environments (Test1, Demo1, etc) will NOT use the /M switch because those jobs are run less frequently and it is OK if all files get copied each time.


OptimisticConcurrencyException

I was doing some refactoring of my randomization method (which I’ll cover in a subsequent post).  My new method makes two separate updates to the database.  This was causing an “OptimisticConcurrencyException” to be thrown on the second update.  From some searching on the internet, it appears this error is raised when the database context believes it has “dirty” data due to a previous update.

I was able to resolve this issue by adding this line of code within my business layer class.
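
A sketch of such a call, assuming an ObjectContext-derived context named db (the variable name is an assumption):

// SaveOptions.AcceptAllChangesAfterSave resets change tracking on the
// ObjectStateManager once the save completes, so the second update no
// longer sees stale "dirty" entries.
db.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);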

The Intellisense states that “AcceptAllChangesAfterSave” resets change tracking on the ObjectStateManager. I’m not sure what all that means but it resolved my issue.

Code First Migrations

Working within the .NET Code First environment has many advantages. Code First gives you strongly typed classes that mirror the database environment because the actual database environment was built from those same strongly typed classes. For me the biggest advantage of Code First is that it writes and executes all of the database definition SQL for you. From creating the tables, to setting the primary keys, to establishing relationships between tables through foreign keys, Code First handles it all (or at least most of it). This is a HUGE time saver. If you did it the “old fashioned” way, creating the tables, keys, etc. first using SQL, you would still end up creating strongly typed classes within your project. Code First saves you that hassle.

Joe already went over the model creation portion of Code First. This blog post will cover the database migration portion of Code First.

After you’ve created your model, you need to create a database migration. This can be done by utilizing the Package Manager Console within Visual Studio 2012. The Package Manager Console can be accessed by going to “TOOLS > Library Package Manager > Package Manager Console”. After this console has opened, you can pin it to your work environment for easier access later. From the Package Manager Console (which is “PM>”), type “add-migration migration name” where migration name is the name you want to give to this particular migration. If this is the very first migration, something like “InitialDatabaseCreation” might be appropriate for the migration name. This command will create a C# class within the “Migrations” folder of your Model project. This class will have the name of your migration and will contain two methods: “up” and “down”. The “up” method upgrades the database with your latest model changes. The “down” method can be used to roll back the migration’s changes after they have been committed.
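
For example, to create that first migration from the Package Manager Console:

PM> add-migration InitialDatabaseCreation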

The “up” method will contain code to create the various tables and keys based upon your entity model. A sample of the “up” method’s code is below.
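
A hedged reconstruction of such an “up” method, for the DriverYears table described next (the column and related table names are assumptions):

public override void Up()
{
    CreateTable(
        "dbo.DriverYears",
        c => new
            {
                DriverYearId = c.Int(nullable: false, identity: true),
                DriverId = c.Int(nullable: false),
                YearId = c.Int(nullable: false),
                DivisionId = c.Int(nullable: false),
                CarId = c.Int(nullable: false),
            })
        .PrimaryKey(t => t.DriverYearId)
        .ForeignKey("dbo.Drivers", t => t.DriverId)
        .ForeignKey("dbo.Years", t => t.YearId)
        .ForeignKey("dbo.Divisions", t => t.DivisionId)
        .ForeignKey("dbo.Cars", t => t.CarId)
        .Index(t => t.DriverId)
        .Index(t => t.YearId)
        .Index(t => t.DivisionId)
        .Index(t => t.CarId);
}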

The above example is creating a table called “DriverYears” which has a primary key on the “DriverYearId” field. The table also has four foreign keys to other tables, and the “add-migration” command also added indices on those foreign key fields. The code above is C#, using lambda expressions and the migrations fluent API.

The “down” method contains code to undo any changes created by the “up” method. Most likely it will contain code to drop tables from the database. Below is a sample of the “down” method that was created using the “add-migration” command above.
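
A matching hedged sketch of the “down” method:

public override void Down()
{
    // Undo the Up method in reverse order.
    DropIndex("dbo.DriverYears", new[] { "CarId" });
    DropIndex("dbo.DriverYears", new[] { "DivisionId" });
    DropIndex("dbo.DriverYears", new[] { "YearId" });
    DropIndex("dbo.DriverYears", new[] { "DriverId" });
    DropForeignKey("dbo.DriverYears", "CarId", "dbo.Cars");
    DropForeignKey("dbo.DriverYears", "DivisionId", "dbo.Divisions");
    DropForeignKey("dbo.DriverYears", "YearId", "dbo.Years");
    DropForeignKey("dbo.DriverYears", "DriverId", "dbo.Drivers");
    DropTable("dbo.DriverYears");
}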

You can see from the sample above that the “down” method will remove any database objects created by the migration’s “up” method.

If you were to connect to your database server at this point, you would not see a database pertaining to your project. The “add-migration” command only creates the migration; you still need to execute it using the “update-database” command. If you look at the C# migration code, you will not see any actual code to create the database. This is handled behind the scenes by the “update-database” command. The “update-database” command will look to see if the database used in your connection string exists on the database server. If there is no database with the name specified in the connection string, the “update-database” command will create the database with all of the default values (database/log file name/location, language, settings, etc.). If you need to change any of these default values, you will either have to create the database manually or change these settings after the “update-database” command has been executed.

We are using the convention “project_(Local)” for the name of our database, where project is the name of your Visual Studio project. An example of the Derby project’s connection string is shown below. Using this convention means developers do not have to constantly change the connection string when getting the latest version of the project from SVN.
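
A hedged example of what that connection string might look like in the config file (the server and options are assumptions):

<connectionStrings>
  <add name="Derby"
       connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=Derby_(Local);Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>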

From the prompt within the Package Manager Console, type “update-database” and hit “enter” (you can add the “-verbose” switch to see exactly what SQL scripts the “update-database” command is executing). This command will create the database (if necessary) and perform everything that is within any “up” methods for migrations that have yet to be executed. If you connect to your database server now, you will see your database with a table for every entity you created within your model.

Any changes to your model after the “update-database” command has been executed will require the creation of another Code First database migration. Because you may have multiple developers working on the same project and making changes to the entity model, you may run into a case where your database is out of sync with the model. In fact, it may be several migrations behind the current model. You can see what state your database is in by looking at the system table “__MigrationHistory”. This table shows which Code First migrations have been executed against the database.

With multiple developers working on the same project, there can be instances where the Code First model believes that it is out of sync with the database. The Code First model will throw an error stating that a migration needs to be executed to bring the database in line with the model (need to get actual error from desktop). This can occur even if no changes to the Code First model have been made. This error can be frustrating because it prevents you from executing your project. You can try to alleviate this error by executing the “update-database” command. Sometimes, however, this will not solve the error. Another solution is to completely delete the database and run “update-database”. This will usually fix the problem, but it is not an ideal solution because you will lose any data you had in the database and have to add it again. The best solution to prevent this error is to tell the Code First model not to check whether it is in sync with the database. This can be accomplished by adding the following line to your “Configuration.cs” file.
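
One common way to disable that check is to null out the database initializer (a hedged sketch; the context type name is an assumption):

// Passing null disables the initializer, so Code First no longer
// verifies that the model matches the database.
Database.SetInitializer<DerbyEntities>(null);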

An even better solution is to add a Boolean property to the configuration file and wrap the above statement within a check of this property.
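
For instance, with a hypothetical appSetting named “checkModelCompatibility”:

// Hedged sketch: skip the initializer only when the flag is off.
bool checkModel = Convert.ToBoolean(
    ConfigurationManager.AppSettings["checkModelCompatibility"]);
if (!checkModel)
{
    Database.SetInitializer<DerbyEntities>(null);
}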

As was stated earlier, any changes to the Code First model after an “update-database” has been executed will require a new migration. It is a best practice to create new migrations consisting of the smallest logical change to the model. Name the new migration after the most logical thing that was changed and run the “update-database” command. Then make any additional model changes, create a new migration, and run the “update-database” command again. This will make it easier to see which migrations added what specific elements to the model and database.

There are certain database features that might be easier to implement directly against the database instead of through the Code First entity model. A few database features where this may apply are indices, constraints, functions, and stored procedures. As was mentioned in the post about Code First model creation, views should be created directly on the database with a corresponding entity created within the Code First model. However, within the migration, the C# code that would create a table based upon the view “entity” needs to be removed or commented out; otherwise the migration will try to create a table that corresponds to the view “entity” you’ve created in C#.

Overall Code First is an excellent way to create a data model and corresponding database with database objects. It has proven to be a big time saver on the tedious task of database creation and modification.

AngularJS – SharedDataServices & Refresh Buttons

The use of a Shared Data Service in AngularJS allows us to quickly save off information received from the server so it can be re-used later, thus limiting the number of hits we need to make.  This is especially useful in mobile applications, where we have to be conscious of our bandwidth usage.

One of the problems these services have is that they are very tightly coupled to the state of the application, and thus an enemy of the refresh button.  In most mobile applications the refresh button isn’t available, so we can avoid this issue.  However, in hybrid applications we need to manage our state a little more carefully in case the user is visiting from a desktop or mobile browser.

Let’s assume the following example is from a Customer Detail page.  This page assumes that the customer info is stored in our Shared Data Service, but needs to be smart enough to react if it isn’t present.  To do this we can prepare for failure by passing the ID of the customer that is in sharedData in the hash.
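
A hedged sketch of that pattern (the sharedData service, CustomerService resource, and route parameter names are assumptions):

angular.module('app').controller('CustomerDetailController',
    ['$scope', '$routeParams', 'sharedData', 'CustomerService',
    function ($scope, $routeParams, sharedData, CustomerService) {
        var customerId = parseInt($routeParams.customerId, 10);

        if (sharedData.customer && sharedData.customer.id === customerId) {
            // Normal navigation: the customer is already cached.
            $scope.customer = sharedData.customer;
        } else {
            // Refresh or deep link: the cache is empty or stale,
            // so re-fetch from the server and re-populate it.
            $scope.customer = CustomerService.get({ id: customerId },
                function (customer) {
                    sharedData.customer = customer;
                });
        }
    }]);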

AngularJS – Polling for data

Polling for data in AngularJS can be accomplished quickly using the $timeout service.  You pass two parameters to $timeout: the first is the function to call, the second is the delay (in milliseconds).

$timeout($scope.getSomeData, 200);

Combining this with a counter enables you to quickly wire up a repeated call for data over a specified timeframe.

$scope.keepPolling = function () {
    if ($scope.isPolling) {
        if ($scope.pollCount > 0) {
            $scope.pollCount--;
            $timeout($scope.getSomeData, 200);
        } else {
            $scope.stopPolling();
        }
    }
};
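
A hedged sketch of the surrounding pieces (getSomeData and the endpoint URL are assumptions): getSomeData re-arms the poll after each response, so setting the flags and calling keepPolling starts the loop.

$scope.getSomeData = function () {
    $http.get('/api/someData').success(function (data) {
        $scope.someData = data;
        $scope.keepPolling();   // re-arm the next poll
    });
};

$scope.stopPolling = function () {
    $scope.isPolling = false;
};

// Start polling: at most 10 calls, 200 ms apart.
$scope.isPolling = true;
$scope.pollCount = 10;
$scope.keepPolling();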

A full example can be seen in the following Gist

How to use Reurgency.Scaffold

To code generate using Reurgency.Scaffold, simply use the “scaffold” cmdlet in the Package Manager Console.

Syntax

scaffold <TemplateName> <EntityName>

This will execute the desired T4 code generation templates for the given entity. It will do this in the project that is currently set as the “Default Project” in the package manager console.

The following TemplateNames are currently available:

  • reQuery
  • reCommands
  • reApiController

MAKE SURE YOU FIRST CHANGE THE DEFAULT PROJECT IN PACKAGE MANAGER CONSOLE BEFORE YOU RUN THE SCAFFOLD COMMAND. Otherwise, code generation will occur in the wrong project.  If you do generate in the wrong project, just delete the files that were created and try again in the correct project.

Quick Reference

Query Layer

scaffold reQuery MyApp.Model.Entities.Employee

Business Layer

scaffold reCommands MyApp.Model.Entities.Employee

Services Layer

scaffold reApiController MyApp.Model.Entities.Employee

Complete Reference

Query Layer

Project

MyApp.Model

Package Manager Console Command

scaffold reQuery <EntityName>

Example Usage

scaffold reQuery MyApp.Model.Entities.Employee

Output Path

  • MyApp.Model\Queries\Generated\<EntityName>Queries.cs

Example

  • MyApp.Model\Queries\Generated\EmployeeQueries.cs

Business Layer

Project

MyApp.Business

Package Manager Console Command

scaffold reCommands <EntityName>

Example Usage

scaffold reCommands MyApp.Model.Entities.Employee

Output Paths

  • MyApp.Business\Commands\<PluralEntityName>Add.cs
  • MyApp.Business\Commands\<PluralEntityName>Count.cs
  • MyApp.Business\Commands\<PluralEntityName>Delete.cs
  • MyApp.Business\Commands\<PluralEntityName>Edit.cs
  • MyApp.Business\Commands\<PluralEntityName>Get.cs
  • MyApp.Business\Commands\<PluralEntityName>GetAll.cs

Examples

  • MyApp.Business\Commands\EmployeesAdd.cs
  • MyApp.Business\Commands\EmployeesCount.cs
  • MyApp.Business\Commands\EmployeesDelete.cs
  • MyApp.Business\Commands\EmployeesEdit.cs
  • MyApp.Business\Commands\EmployeesGet.cs
  • MyApp.Business\Commands\EmployeesGetAll.cs

Services Layer

Project

MyApp.Services

Package Manager Console Command

scaffold reApiController <EntityName>

Example Usage

scaffold reApiController MyApp.Model.Entities.Employee

Output Path

  • MyApp.Services\WebApi\Controllers\<PluralEntityName>Controller.cs

Example

  • MyApp.Services\WebApi\Controllers\EmployeesController.cs

Creating and Publishing NuGet Packages to reUrgency NuGet repository

reUrgency hosts a private NuGet repository that we can use to publish and consume NuGet packages that we create.  NuGet packages are a great way to share components across projects.

Advantages of NuGet over other ways of sharing

  • Easy to install using the NuGet Package Manager
  • Versioned – each NuGet package has a version associated with it and therefore it’s easy to see which version of a package you are using.
  • Upgrade when you want – unlike sharing via SVN Externals, each project gets to decide when/if that project upgrades to a version of a package.
  • Install and configure all kinds of things – when a NuGet package is installed, a PowerShell script runs that can perform any setup action necessary.  This provides a lot of power and automation.
  • Enforces decoupling – because NuGet packages are completely separate projects, it is very difficult to accidentally introduce tight coupling.  Therefore the code is more reusable.
  • Includes dependencies – NuGet packages include a set of dependent DLLs with associated versions.  This ensures that all dependent NuGet packages are installed first and that they are the proper version.  You can also use this feature to install lots of NuGet packages all at once.  i.e. you could create a package that simply has dependencies in a bundle for easily setting up projects.

reUrgency Repository

We have created our own private repo. You must be on the reUrgency VPN to access it.

http://nuget.reurgency.net/

NOTE: ignore the instructions on that page on how to publish. We have a much simpler method documented below.

In VS2012, in the package manager settings, add the following URL to the list of Package Sources:

http://nuget.reurgency.net/nuget

Our repo is already set up, but in case we ever need to set up another one in the future, here is documentation on how to set up a private repo.

http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds

Anatomy of a NuGet Package

The most useful reading you can do is to read about the anatomy of a NuGet package.  Once you understand that, you can pretty easily extrapolate what you need to put where and how to diagnose problems.

http://docs.nuget.org/docs/creating-packages/creating-and-publishing-a-package

There’s a lot of information in that document.  Make sure you read it and understand it, but see below for simple steps.

Setup a NuGet Package

Step 1

Install Visual Studio Template for NuGet projects

http://visualstudiogallery.msdn.microsoft.com/daf5c6db-386b-4994-bdd7-b6cd52f11b72

Step 2

Follow instructions here for creating a new package

http://www.eyecatch.no/projects/nuget-package-template/

Step 3

Create a deploy.ps1 file in the root and write the xcopy deployment code. Example:

xcopy /y C:\dev\RE\src\Reurgency2\trunk\Reurgency.Common.Packager\*.nupkg \\re-source-1.reurgency.net\packages

See next section for explanation of this non-standard (but much simpler) publishing process.


Simplified Publishing Process

Normally NuGet packages are published via an HTTP file post.  This was a little tricky to configure, so I decided to use a simpler method that is less fragile and allows anybody to manually upload new NuGet package versions.

NuGet packages are just files in a special folder that is hosted on a website.  There is no database and there are no registry settings.  As soon as the .nupkg file is dropped in the proper folder on the web server, it shows up in the NuGet repo.

So taking advantage of this simple model, I opted for a super simple xcopy deployment script.

xcopy /y C:\dev\RE\src\Reurgency2\trunk\Reurgency.Common.Packager\*.nupkg \\re-source-1.reurgency.net\packages

As you can see from that example, the UNC path to the repo folder is

\\re-source-1.reurgency.net\packages

NOTE: you may have to change your VPN setting to allow file share access.

Publish a new version of a NuGet Package

As you make changes to your package and its dependencies, you will want to periodically publish a new version to the reUrgency repo so others can use it.

Step 1

Increment the version number in Package.nuspec
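
For example, the relevant fragment of a hedged Package.nuspec (the id and version are placeholders):

<package>
  <metadata>
    <id>Reurgency.Common</id>
    <version>1.0.1</version>
    <authors>reUrgency</authors>
    <description>Shared reUrgency components.</description>
  </metadata>
</package>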

Step 2

Compile your package

Step 3

Run deploy.ps1

NOTE: By default Visual Studio will open a PowerShell script for editing (not run it).  To run it, right click on the file and choose “Open With…”.  Click “Add…” to add a new action.  PowerShell.exe is located at “C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe”

Example Packages

If you would like to see an example of working NuGet package, look here in SVN:

https://source.reurgency.net/svn/Reurgency2/trunk/Reurgency.Common.Packager

or

here on disk

C:\dev\RE\src\Reurgency2\trunk\Reurgency.Common.Packager


AngularJS: Communicating between Scopes

One of the many features of the AngularJS JavaScript framework is scope.  Scope is a way to encapsulate data relevant to a particular piece of functionality and keep it separate from functionality not directly related to it.  While this separation is desirable, from it arises the need to communicate and pass data between scopes.

From what I can tell, there are six ways to communicate between scopes in AngularJS:

  1. Global Variables
  2. Parent/Child direct communication
  3. Cookies
  4. Shared Properties Services
  5. Binding
  6. Events

Global Variables

Using global variables is a very simple way to pass data between scopes.  Use of global variables is generally perceived to be a bad idea, however, because these variables can easily, and accidentally, be overwritten.  Additionally, this practice leads to spaghetti code.

Parent/Child or Sibling Communication

Direct communication between parent and child scopes or sibling scopes is also a possibility but this is usually a bad idea as well.  For one, communication in this manner causes tight coupling.  Additionally, the mechanism for communication between scopes in this manner is clunky.  Child scopes are implemented as linked lists.  This means that each parent has a link to the first and last child scope, and those child scopes have a link to their next sibling and previous sibling.  While there could be certain situations where this kind of communication is desirable, generally there is a better way.

Cookies

Use of cookies works well, but cookies should generally be used only for their intended purpose, which is to persist data locally.  If cookies are abused, you run into the same problem as with global variables.  Also, cookies only store string data, so communicating any non-string data carries the overhead of serialization.

Shared Properties Services

Shared properties services are a good way to share and cache data between scopes.  These are objects, injected into controllers and directives using mechanisms provided by AngularJS, that are intended to contain data common to each.  While communication can be achieved in this manner using watchers on a specified shared property, generally this is most useful for caching purposes.

To define a shared properties service:


angular.module('Derby.services').service('sharedProperties', [function () {
    return {
        selectedDivisionId: 0,
        //...Define other default shared properties here
    };
}]);

To use a shared properties service, inject it into a controller or directive and use it as an object:


angular.module('RaceTrackerApp.heats', []).controller('HeatsController', ['$scope', 'sharedProperties',
    function ($scope, sharedProperties) {
        //You must add sharedProperties to $scope in order to watch it
        $scope.sharedProperties = sharedProperties;

        //Watch the selectedDivisionId for changes
        $scope.$watch('sharedProperties.selectedDivisionId',
            //This function will be executed when selectedDivisionId changes
            //from anywhere
            function (newValue, oldValue) {
                //Do something
            }
        );
    }
]);

Binding

Binding in AngularJS is a built-in way to easily pass data between parent and child scopes through HTML.  To facilitate communication, you can add a watcher on a bound variable.  When this variable changes, the watcher will fire, allowing you to take some additional action.  The downside to binding is that it is difficult to achieve outside of HTML; therefore, if the two scopes you want to communicate between are not connected via HTML, binding should probably not be used.

To use binding, define a variable in your parent scope as such:


angular.module('RaceTrackerApp.heats', []).controller('HeatsController', ['$scope',
    function ($scope) {
        $scope.heats = [];
    }
]);

And use the variable in HTML as such:


<div ng-repeat="heat in heats">

Events

The final method of communicating between scopes is using events.  There are two kinds of events in AngularJS:

  • Broadcast – events bubble downward to all descendant scopes
  • Emit – events bubble upward to all ancestor scopes

Besides the fact that each event type bubbles in a different direction, the two work the same.  Each can pass additional payload data, and each can be listened for using $on.

To dispatch an event:


//heatCardClicked is the event name and heatDriverYear is the payload
$scope.$emit('heatCardClicked', heatDriverYear);

$broadcast works in the same way but bubbles in a different direction. To watch for an event:


//Watches for the heatCardClicked event.  Expects a heatDriverYear payload
$scope.$on('heatCardClicked', 
    function(event, heatDriverYear){
        //Do stuff after the heat card is clicked
    }
);

HTTP Posting of data in AngularJS

Posting HTTP data within the Angular framework requires the use of the $http Angular service.  The reUrgency 2.0 programming practices have us interacting with server-side data through RESTful services.  This allows us at reUrgency to utilize the $resource service within the Angular framework.  The $resource service is a wrapper around the $http service which makes it easier and more efficient to consume RESTful services.

The $resource service requires the file angular-resource.js.  This file must first be loaded within your application, and your module must declare ngResource as a dependency.
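
A hedged sketch of that setup (the script paths are assumptions).  First the script includes:

<script src="scripts/angular.js"></script>
<script src="scripts/angular-resource.js"></script>

Then the module declaration:

angular.module('Derby.services', ['ngResource']);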

Once the $resource service has been loaded, it has the following usage pattern.
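
From the Angular documentation, that pattern is:

$resource(url, [paramDefaults], [actions]);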

The Angular module can then be set up with multiple factories that return $resource services. Below is an example of one such factory within the Derby app.
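
A hedged reconstruction of that factory (the URL and parameter mapping are assumptions; the “update” action matches the description that follows):

angular.module('Derby.services').factory('DivisionService', ['$resource',
    function ($resource) {
        // "update" is a custom action mapped to the PUT verb.
        return $resource('/api/divisions/:id', { id: '@divisionId' }, {
            update: { method: 'PUT' }
        });
    }
]);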

By giving the service a name, in this case DivisionService, the service can be referenced and used anywhere within your Angular application, or even across Angular applications as is being done within the Derby project.

The above example creates a $resource service. The new service, as previously mentioned, is called DivisionService, and it also has a new action called “update” which is mapped to the “PUT” HTTP verb.

This blog entry is about the posting of data. Below is a code snippet of another one of our Derby $resource services. This one is called AssignLaneService.
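
A hedged reconstruction (the URL is an assumption based on the service’s purpose):

angular.module('Derby.services').factory('AssignLaneService', ['$resource',
    function ($resource) {
        return $resource('/api/heatDriverYears/assignLane');
    }
]);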

The Angular $resource service supports the main HTTP verbs/methods of GET, POST, and DELETE. Why the PUT verb was neglected is anybody’s guess. As you can see in the PUT example above, the $resource action of “update” was created and mapped to the HTTP verb “PUT”. Angular gives you the default actions listed below.
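
From the Angular documentation, those default actions are:

{ 'get':    { method: 'GET' },
  'save':   { method: 'POST' },
  'query':  { method: 'GET', isArray: true },
  'remove': { method: 'DELETE' },
  'delete': { method: 'DELETE' } }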

So if a developer wanted to post data using the AssignLaneService above, they would call the “save” action on that service.

Below is an example of one more $resource service within the Derby application and it is called RandomizeHeatService. The RandomizeHeatService service and the AssignLaneService will be the focus of this blog entry going forward.
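
A hedged reconstruction (again, the URL is an assumption):

angular.module('Derby.services').factory('RandomizeHeatService', ['$resource',
    function ($resource) {
        return $resource('/api/heats/randomizeHeat');
    }
]);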

These two services, RandomizeHeatService and AssignLaneService, randomly assign drivers to a heat and assign a driver to a lane within a heat. They both take some input, make an insert into the database (along with some other functionality), and return the appropriate object. When I originally created these two services, I was using HTTP GETs to perform this transferring of data. HTTP GET, however, should only be utilized when requesting data, not when modifying data. Using HTTP GET to modify data does not follow the REST standard. To modify data within the REST environment, POST or PUT should be used. It is debatable whether in my usage I should be using a PUT or a POST. I settled on using POSTs for the two services above.

When POSTing data within Angular using the $resource service, the $resource service will post your data as an object. I ran into this problem with the RandomizeHeatService. The server-side WebApi controller only requires the ID (which is an integer) of the selected heat to do its magic. However, I could not just pass that integer ID because Angular would take that integer and wrap it inside an object. So on the server-side I had to create a custom class with only one property. This custom class is shown below.
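
A hedged reconstruction of that one-property class (the class and property names are assumptions):

public class RandomizeHeatInput
{
    // The ID of the heat whose drivers should be randomized.
    public int HeatId { get; set; }
}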

By the way, this custom class was created within the Model layer as an entity (but this “entity” is not allowed to be added to the database as a table). It seems like overkill to create a class just so we can pass a single integer, but it appears that is what is required by Angular.

The AssignLaneService has several parameters that need to be passed to the server. It made much more sense to create a class in this instance. The custom class used to pass data to the server within the AssignLaneService is shown below.
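
A hedged reconstruction (the exact properties are assumptions based on the surrounding description):

public class AssignLaneInput
{
    public int HeatId { get; set; }
    public int LaneId { get; set; }
    // Entered by the end user; does not live on any single entity.
    public int DriverNumber { get; set; }
}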

You might ask why I didn’t use an existing entity as my input parameter and that would be a valid question. However, the nature of our database structure and the data actually inputted by the end user (namely the driver number) do not exist within a single entity. For this reason, the class above was created within the Model layer to handle the inputted data. The server-side code handles all of the relationships between the various entities.

With the two input classes created for our WebApi controllers, the actual function signatures for the server-side C# code are shown below.
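
A hedged sketch of those signatures (the method names are assumptions; bodies omitted):

[HttpPost]
public HttpResponseMessage RandomizeHeat(RandomizeHeatInput input) { ... }

[HttpPost]
public HttpResponseMessage AssignLane(AssignLaneInput input) { ... }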

Note the “HttpPost” decoration on both methods. That will tell the WebApi controller that both of these methods are POST methods. Normally, you can just use the default methods of “Put” and “Post” within a WebApi controller, but we created custom post methods for each WebApi controller because neither method strictly posted the entire entity defined by its WebApi controller.

This causes the URL for the REST service to include the respective method name as shown at the beginning of this blog post for the RandomizeHeatService and AssignLaneService services.

Originally, I was not passing an HttpResponseMessage back to the client. That did not follow the practices of REST. When data is posted to the server, the client expects back an HttpResponseMessage. Below is the server-side C# code that creates the HttpResponseMessage return object. Please note that, according to the REST standard, the location of the newly created resource should be included in the return header.
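
A hedged sketch of that return code (the route name and ID property are assumptions):

// Build a 201 Created response carrying the new object, with a Location
// header pointing at the newly created resource.
HttpResponseMessage response = Request.CreateResponse(HttpStatusCode.Created, heatDriverYear);
response.Headers.Location = new Uri(Url.Link("DefaultApi", new { id = heatDriverYear.HeatDriverYearId }));
return response;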

In order to utilize either REST service provided by the server, the client can take advantage of the Angular services RandomizeHeatService and AssignLaneService created from before. An example using the AssignLaneService is shown below.
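
A hedged sketch of that call (the property names match the AssignLaneInput sketch above; the scope variables are assumptions):

var input = {
    heatId: $scope.selectedHeatId,
    laneId: lane.laneId,
    driverNumber: $scope.driverNumber
};

AssignLaneService.save(input,
    function (heatDriverYear) {
        // Success handler: a POST returns a 201 Created.
        $scope.heatDriverYear = heatDriverYear;
    },
    function (response) {
        // Error handler for non-success status codes.
        console.log('Error assigning lane: ' + response.status);
    }
);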

As you can see from the above JavaScript, an object was created with properties that match our custom server-side entity. These properties are then assigned values and passed to the server via the “save” action of the AssignLaneService (the “save” action by default being a POST).

Lastly, the “save” action expects two function handlers to be provided: one for a successfully returned status code (in the case of a POST, a 201 code) and another to handle error status codes. These functions can be either inline anonymous functions or references to functions. The example above uses inline anonymous functions.

Hopefully, reading this blog post will give you a better understanding of how to POST data using Angular.