C# Model Coding Practices

What is a model?  A model is a programmatic representation of a concept or physical object that exists in real life.  We represent models in code as Entities in C# and as Tables in the Database.  Since these entities and tables represent the same underlying model, they need to remain synchronized.  In the past, generally the database was created first and then the entities were generated from it.  Recently, this paradigm has changed…

Now, we are using a code-first model creation paradigm in which the programmers write the C# entities and the database is generated from them.  When working in this manner, there are best practices that will make the process simpler.

First, when creating the entities, adhere to the Database Design Standards set by Ty.  Even though this document was written for designing databases, it still applies.  Just substitute “Entity” for “Table” and “Field” for “Column” and you should have no problems.  In addition to this, make sure to make all fields virtual.

public virtual int UserRoleId { get; set; }

Next, for any foreign key field, add another field named and typed the same as the foreign key entity.  For example:

public virtual int UserRoleId { get; set; }
public virtual UserRole UserRole { get; set; }

This gets us most of the way to where we want to be, but sometimes we want certain behavior in table or column creation, in how Entity Framework treats the entity, or in data serialization.  Below is a list of common attributes and what they do:

Class level attributes:

  • DataContract – Tells the data serializer that we are using an opt-in policy.  This means that only fields which are decorated with the DataMember attribute will be serialized.  This attribute is highly recommended as it gives you complete control over which fields are transmitted.
  • Table – Specifies to the code-first model generator the table name that you wish to map this entity to.  This is useful if you want the table to be named something different than the entity and is not required in most cases.

Field level attributes:

  • DataMember – Works with the DataContract attribute.  Used to specify that the field it decorates is meant to be serialized.
  • Key – Identifies this field as the primary key for the database table it represents.
  • DatabaseGenerated – Indicates to the code-first model generator and Entity Framework that this field will be updated by the database and not the user.  This can be applied to fields like primary keys and database-generated dates (last update date), but use discretion as to when you apply this attribute.
  • Required – Tells the Entity Framework validator that this field is required and allows you to specify an error message upon failed validation.
  • StringLength – Tells the entity validator that this field has a max and min range for string length and allows you to specify these ranges and an error message.
  • Display – Specifies an intended display name for the field.
  • ForeignKey – Indicates that the field is a foreign key and names the corresponding foreign key field.

Below is an example of a class which uses most of these attributes:

    [DataContract]
    public class User
    {
        /// The identifier for the user.  Also the primary key.
        [DataMember]
        [Key]
        public virtual Guid UserId { get; set; }

        /// Foreign key to the UserRole table/object.
        [DataMember]
        public virtual int UserRoleId { get; set; }

        /// UserRole object associated with this User.
        [ForeignKey( "UserRoleId" )]
        public virtual UserRole UserRole { get; set; }

        /// The login user ID for the user.
        [DataMember]
        [Required( ErrorMessage = "Login is required." )]
        [StringLength(100, ErrorMessage = "Login cannot be longer than 100 characters.")]
        public virtual string Login { get; set; }

        /// The hashed password for the user.
        /// Not a DataMember, so the opt-in serialization policy keeps it from ever being transmitted.
        [Required( ErrorMessage = "Password is required." )]
        [StringLength(40, ErrorMessage = "Password cannot be longer than 40 characters.")]
        public virtual string Password { get; set; }

        /// The user's first name.
        [DataMember]
        [StringLength( 100, ErrorMessage = "First Name cannot be longer than 100 characters." )]
        [Display(Name = "First Name")]
        public virtual string FirstName { get; set; }

        /// The user's last name.
        [DataMember]
        [Required( ErrorMessage = "Last Name is required." )]
        [StringLength( 100, ErrorMessage = "Last Name cannot be longer than 100 characters." )]
        [Display(Name = "Last Name")]
        public virtual string LastName { get; set; }

        [DataMember]
        public virtual Guid BaseUserId { get; set; }
    }


The next step is to create a database context class.  This is how Entity Framework communicates with the database.  You need to add a DbSet property which corresponds to each table you want to create in the database from code.  Each DbSet in the database context is a line of communication between Entity Framework and the database for the specified table/entity.  After creating your database context, the first thing you should do is remove the convention of cascading deletes.  While cascading deletes can save time in certain situations, they also allow for the opportunity to delete large amounts of data by accident.  A basic database context class looks like this:

public class DatabaseContext : DbContext
{
    public DatabaseContext()
        : base("name=DefaultConnection")
    {
    }

    // tables
    public DbSet<User> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Remove the cascading delete conventions so a delete cannot
        // silently remove large amounts of related data.
        modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
        modelBuilder.Conventions.Remove<ManyToManyCascadeDeleteConvention>();
    }
}


This can optionally be generated by MVC scaffolding.

The last topic I have to discuss is, “How do I work with database views using Entity Framework and code-first?”  Currently, code first cannot create the view for you.  You need to create the view in the database itself, then create an entity that represents it.  You could probably add the creation script for the view to the migration created by code first, but that would be a topic for a different presentation.  To connect a view to an entity, the procedure is pretty much the same as with a table with the following exceptions:

  • Create the view in the database.  Treat the primary key of the view’s base table as the view’s “primary key”.  For example, the primary key for a UsersView should be UserId.  While views technically do not have a primary key, this makes Entity Framework happy and the generated code works better.
  • Add the “Table” attribute to the entity’s class definition and specify the view name.  Otherwise the code-first model generator will automatically attempt to pluralize your view and it will end up pointing to the wrong object in the database.  Pluralization works well for table creation (Users) but not for views (UsersViews?)
  • Use the add-migration command to create the migration script and then comment out the table creation code.  As stated above, you can optionally add code for view creation here at this time.

Below is an example of the class definition and primary key setup for a view:

    // The Table attribute makes the database generator link to "HeatDriverYearsView".
    // Otherwise it would attempt to pluralize the class name and create a new table.
    [Table("HeatDriverYearsView")]
    public class HeatDriverYearsView
    {
        /// The identifier for the heat.  Also the primary key.
        // Even though views have no primary keys, this property will act like one.
        // There will only be one HeatDriverYearsView record per HeatId.
        [Key]
        public virtual int HeatId { get; set; }
    }
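A view like the one above handles the multi-table query in the database.  The same data could instead be assembled with a manual LINQ join over the base entities.  The sketch below uses simplified stand-in entities and in-memory lists for illustration; against the real DatabaseContext the same query shape (over context.Users and context.UserRoles) would be translated to SQL by Entity Framework.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in entities, for illustration only.
public class UserRole { public int UserRoleId { get; set; } public string Name { get; set; } }
public class User { public int UserRoleId { get; set; } public string Login { get; set; } }

public static class JoinExample
{
    // Joins users to their roles manually, the alternative to creating a view.
    public static List<string> LoginsWithRoles(IEnumerable<User> users, IEnumerable<UserRole> roles)
    {
        return (from u in users
                join r in roles on u.UserRoleId equals r.UserRoleId
                select u.Login + " (" + r.Name + ")").ToList();
    }
}
```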

It is worthwhile to note that using views is one of a few ways to handle retrieving data from multiple tables.  Another way is to add commands to existing entities and use LINQ to perform any joins manually.  This has its own set of pros and cons, so use your judgement as to which methodology to use.

CruiseControl.NET Basics and Setting Up a New Project

The basics of using CruiseControl.NET start with using the GUI to manage the different builds for each environment.  Within the web app there are a few tips that can help in diagnosing problems when the build is broken.  We will cover those items, along with the basic items that need to be set up on a new project, in this blog.

Web App


I will forego much of the obvious on the CC.NET home page.  A couple of items are worth noting.  This page can become stale very quickly, especially if you keep it open in your browser.  Each time you return to the home page, click the Refresh button located in the upper right-hand corner of the page to make sure that you have the most up-to-date information about the builds.  It’s also a good idea, if you are waiting for a build to complete or have just forced a build, to click the Refresh button so you see the latest changes as quickly as possible.  Another item to note is that when the build breaks, all the people who commit after the break will be listed as breakers.


In order to see who is responsible for breaking the build (and bringing donuts to the next meeting), click on the project name to dig into the details for the project.  If the build is broken, then under the Build Overview chart, clicking on the first build that broke (where the color changed from green to orange) will show the details of what was committed and by whom.

Once you drill into a build, there are some important items that can help in troubleshooting what is breaking it.  On the left navigation, click Latest Build, then click View Build Log.  This shows the scripts that are run by MSBuild to build the DLLs for the project, along with all other scripts set up to run, such as deployment scripts that create backups and deploy the DLLs to the web server.  You can look at what is failing in each script that is run, but usually it’s best to scroll to the bottom of the page to quickly find what is causing the build to break or causing other scripts to not work properly.

Setting up a New CruiseControl.NET Project

Step 1:

Create a new build project in the CC.NET config and configure it

To configure a CC.NET project, first navigate to the config file located on the VM at C:\Program Files (x86)\CruiseControl.NET\server\ccnet.config.

An example of the configuration for a project in the ccnet.config:

<project name="Youngevity" queue="Q1" queuePriority="1">
   <!-- workingDirectory, artifactDirectory, trunkUrl, projectFile, etc.
        are configured per project (see the list below) -->
   <labeller type="assemblyVersionLabeller" />

   <triggers>
      <intervalTrigger seconds="60" name="continuous" />
   </triggers>

   <sourcecontrol type="svn">
      <executable>C:\Program Files (x86)\CollabNet\Subversion Client\svn.exe</executable>
      <timeout units="minutes">30</timeout>
   </sourcecontrol>

   <tasks>
      <msbuild>
         <logger>C:\Program Files (x86)\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>
      </msbuild>
   </tasks>

   <publishers>
      <xmllogger />
      <statistics />
      <modificationHistory onlyLogWhenChangesFound="true" />
   </publishers>
</project>

Start by copying a project already set up in the config and renaming it. There are quite a few things that can be configured but for many projects much of the configuration will stay the same. The items that must be configured are:
1) Paths to the Working and Artifact folders
2) The interval trigger in the triggers section can be commented out if you want the build to only be updated when it is forced manually. This can be uncommented for a Dev1 environment that should be continually updated whenever any new code has been committed.
3) The trunkUrl under the SourceControl section should be set to the path of the correct SVN repository
4) Set the projectFile under the MSBuild task to the correct solution file
5) Change the path for batch files for all the executables for the new project

Step 2:

Grant permissions on the project folder on the web server so CruiseControl has access to deploy new files for the project

1) Navigate to the folder through Windows Explorer (this folder is usually under the default inetpub IIS folder)
2) Right-click on the folder, click Properties, and select the Sharing tab. Under Network File and Folder Sharing, click the Share button. In the drop-down, type CruiseControl and hit Enter to select it from the list. Make sure to change CruiseControl’s permission level to Read/Write, then click the Share button. Copy the network path, as this is the path that you will need to use in the deployment batch file

Step 3:

Set up folders and scripts  for the new project

1) Create a new project folder under E:\CI on the build server
2) Create folders for Artifacts, Builds, Scripts, and Working files under the new project folder
3) Copy over batch files from another project into the Scripts folder and configure them for the needs of the new project – at a minimum there need to be batch files for copying and zipping up the latest DLLs into the Builds folder and for deploying DLLs and files to the web server.

Step 4:

Allow User permission for CruiseControl to read from the SVN repository through VisualSVNServer

1) On the build server open up VisualSVNServer Manager
2) On the left section of the VisualSVNServer Manager expand the node of repositories and find the repository for the new project, right click on the repository, and click on Properties. Under the security tab add CruiseControl as a user with Read access.

If everything is configured correctly the project will be set up and can be managed from the CruiseControl.NET web app.


Applying security to WebAPI controllers

When using the Reurgency.Common framework, every REST service method requires security permissions to be applied via an attribute on either the method or the class.  This blog post describes how to do that.

First, add some using statements.

using Reurgency.Foundation.WebApi;
using Reurgency.Foundation.WebApi.Filters;

Then add the attribute HandleSecurityTokenRequest to either the class or individual methods

Here is an example of using the attribute on a class. ALL methods in the class will inherit this setting.

[HandleSecurityTokenRequest(AllowAnonymous = false)]
public class EmployeesController : WebApiController<Employee>

That’s it.  You are done.

Read on to understand a little more how this works.

Here is an example of using the attribute on a method.  This is useful when you want to set different security on each method within a class.

[HandleSecurityTokenRequest(AllowAnonymous = true)]
public HttpResponseMessage Login(Credential credential)
{
    BusinessCommandRepository biz = new BusinessCommandRepository(this.securityTokenId, "Login");
    // ...
}

As you can see, the attribute takes one parameter called AllowAnonymous.  When set to false, the system expects that a valid security token is passed in via headers or cookies.  If a valid security token is not passed in, an HTTP status code of 401 is returned to the client.  When AllowAnonymous is set to true, the system will first look for a valid security token in the header or cookies and use that; otherwise it will use the constant defined in Reurgency.Common.Model.Entities.SecurityToken.ANONYMOUSSECURITYTOKEN.

Each WebAPI method is responsible for instantiating the BusinessCommandRepository.  The constructor for that class requires a security token.  Simply use this.securityTokenId.

BusinessCommandRepository biz = new BusinessCommandRepository(this.securityTokenId, "Login");

The securityTokenId property is part of Reurgency.Foundation.WebApi.WebApiController.  Every one of your WebAPI controllers should inherit from this class.

It is the responsibility of the HandleSecurityTokenRequest filter attribute to populate this.securityTokenId from either header, cookies, or ANONYMOUSSECURITYTOKEN.  Because the filters are processed PRIOR to the WebAPI method’s execution, you are guaranteed that this.securityTokenId will be populated or a 401 error will be thrown.

WebAPI Routing and Custom Get Methods

The default structure of the routing we are using for our Web API service layer is set up to take an integer or a guid as the id for GET requests. This is done because it is a best practice when creating a database to have the primary key as an integer or guid. In our routing this is implemented with this default route:

config.Routes.MapHttpRoute("DefaultApiWithId", "Api/{controller}/{id}", new { id = RouteParameter.Optional }, new { id = @"^\{?[\dA-Fa-f]{8}-[\dA-Fa-f]{4}-[\dA-Fa-f]{4}-[\dA-Fa-f]{4}-[\dA-Fa-f]{12}\}?$|\d+" });

If an id is specified then the route uses a regular expression to allow only an integer or guid as the id parameter. This is important not only because it is a best practice but also because it allows us to create custom methods by using this route:

config.Routes.MapHttpRoute("DefaultApiWithAction", "Api/{controller}/{action}/{id}", new { id = RouteParameter.Optional });

This Web API route takes an action in the URI via the {action} placeholder. The action placeholder is a string that will match the method name that is specified in the Web API controller. This is possible because we are not allowing strings in the “DefaultApiWithId” route that we mentioned previously.

A problem that was encountered on the Youngevity project was that the ids used by a third-party API included both numbers and letters.  So if an id with both digits and letters was passed in the URI, like api/customers/RA1234, then the route would match the DefaultApiWithAction route and would attempt to find a Web API controller method called RA1234.  This problem will most likely be encountered again on future projects.
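The constraint behavior is easy to check directly.  ASP.NET routing anchors the constraint regex around the whole parameter value, so an id must be entirely an integer or a GUID to use the DefaultApiWithId route; anything else falls through.  A small sketch (the helper name is ours, not part of the framework):

```csharp
using System;
using System.Text.RegularExpressions;

public static class IdConstraintDemo
{
    // Same pattern as the DefaultApiWithId constraint.
    const string Pattern =
        @"^\{?[\dA-Fa-f]{8}-[\dA-Fa-f]{4}-[\dA-Fa-f]{4}-[\dA-Fa-f]{4}-[\dA-Fa-f]{12}\}?$|\d+";

    public static bool MatchesIdConstraint(string id)
    {
        // Routing effectively wraps the constraint as ^(pattern)$ before matching.
        return Regex.IsMatch(id, "^(" + Pattern + ")$");
    }
}
```

With this check, "12345" and a GUID pass the constraint, while a mixed id like "RA1234" does not, which is why it falls through to the action route.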

So what are the possible options for routing to allow a string as an id?

1) Pass the id in the querystring to bypass routing altogether and let the controller parse the id out of the querystring
2) Reconfigure the routing that we already have in place and specify a route that accepts an id as a string
3) Create a custom GET method that takes in a string as an id parameter

The method we have chosen is the third.  When I first looked at the problem I thought, “I don’t want to create a custom method to do a normal GET,” but after thinking about the problem and talking to Ty, I realized that ids should be created as integers or guids and that a string as an id is actually an abnormal situation that should be handled via a custom method.

The simple solution:
1) Create a custom method like GetCustomerByString and decorate with the HttpGet attribute
2) Call the method via a URI like: api/customers/GetCustomerByString/RA1234
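A minimal sketch of those two steps (the controller, the lookup, and the customer shape are hypothetical; in this codebase the controller would inherit from Reurgency’s WebApiController, plain ApiController is used here only to keep the sketch self-contained):

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class CustomersController : ApiController
{
    // Matched by the DefaultApiWithAction route:
    // Api/customers/GetCustomerByString/RA1234
    [HttpGet]
    public HttpResponseMessage GetCustomerByString(string id)
    {
        // Hypothetical lookup; real code would query the repository/context.
        var customer = FindCustomer(id);
        if (customer == null)
        {
            return new HttpResponseMessage(HttpStatusCode.NotFound);
        }
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(customer)
        };
    }

    private string FindCustomer(string id)
    {
        // Stub so the sketch stands alone.
        return id == "RA1234" ? "Customer RA1234" : null;
    }
}
```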

Story Points

Don’t use hours to estimate a requirement or feature.  Your estimate will be wrong, but you’ll be held to it.  Instead let’s use story points.  Story points are values that only have meaning within a given team.  That meaning is developed, learned, and refined over time.

 To understand story points, let’s look at something similar that we all understand.

Restaurants are rated on a scale of 1 to 5.  That star rating conveys a lot of information.  With that star rating you know the following:

  • Quality of food
  • Attire
  • Price
  • Wait time for food
  • Will there be valet parking?
  • Will there be a waiter or do I order at the counter?
  • How customizable will the meal be?
  • How knowledgeable will the staff be?

If I tell you we are going to a 5 star restaurant, I bet you could answer all of those questions with a high degree of confidence.  You would not know precisely how much it will cost, but you’ll have a ballpark feel for it.  If I tell you we are going to a 2 star restaurant, again, I bet you have a pretty clear expectation.

If instead of the star rating, I just told you that the food was really good, would you know how to dress?  If I just told you that it was very reasonably priced, would you know if you were ordering at the counter or talking to a waiter?  If I told you that meals were $50 a plate, you might guess that it was a 4 or 5 star restaurant.  But in reality it could be a 3 star tourist trap on an island that has only 1 restaurant and has to fly all of its food in from the mainland.  The point is that knowing only one dimension can sometimes get you close to the right answer, but it can also be quite misleading and dangerous.

Story points allow you to embed information from lots of dimensions into the numeric value.  We might embed the following dimensions into our story points:

  • Coding effort
  • Complexity
  • Risk
  • Knowledge of technology
  • Quality of the domain expert
  • Platform
  • Usability requirements

So maybe we can set up story points like the following:


  • 1 point = easy, straightforward feature
  • 2 points
  • 3 points = moderate, somewhat complex feature or contains some moderate risk
  • 4 points
  • 5 points = hard, complex feature, that may contain high risk

These are just guidelines to get started.  Story points develop meaning over time.  Just as with restaurants, it took experience with restaurants before you understood the rating system.

In that one number you can embed any additional information or padding that you may need.  If you estimate everything in hours you will be challenged by people who don’t know better.  If you assign story points to things, you can use your intuition or any other knowledge to pad your estimate without getting into a debate with those who don’t understand.

Each developer will bring a different intuition to assign story points.  Some will be junior developers who spend most of their time focused on technology challenges.  Others will be senior architects who are assessing risks and future complexities far outside the experience of the junior developer.  Story points serve to normalize those large differences in experience and perspectives across the team.

Story points are a tool.  When used incorrectly you will get bad results.  Story points only work when the project is being managed with Scrum (or a similar methodology).  They work with time boxed sprints and user stories.  If you are not doing time boxed sprints and user stories, then you may not get the desired results.

User Stories

What is a user story?

A user story is one or more sentences describing what an end user needs.  It is expressed in everyday language or the business language of the user.  A user story is NOT intended to capture ALL knowledge required to implement a given feature.  User stories facilitate conversations between customer and developer.  The details required to implement the story are acquired over time via these conversations.

Why user stories?

  • Verbal communication – yields greater knowledge and understanding
  • Comprehensible (non technical)
  • Right size for planning
  • Work well with iterative development
  • Encourage deferring detail
  • Opportunistic development
  • Participatory design (between developers and customer)
  • Build up tacit knowledge

Structure of a User Story

As a {user role} I want to {do something} so that {reason} 

  • User Role = Who
  • Do Something = What
  • Reason = Why

 NOTE: we are not talking about HOW to implement it.

The structure of the sentence forces us to talk about requirements and NOT design.  This prevents us from short-circuiting the analysis and design phases.

Attributes of a good user story

There is a very good blog explaining the important attributes of a user story.

Just think of INVEST to remind you of the attributes of good user story.

  • Independent – loose coupling between user stories.  Makes planning much easier.
  • Negotiable – don’t design a solution or specify so many details that you paint yourself into a corner.  We should always be free to change our mind on HOW to implement the given requirement at a later time.
  • Valuable – must be a value to the end user
  • Estimable – we must be able to attach an estimate to it
  • Small – less than a week
  • Testable – so we know when we are done.

A useful tool for ensuring that user stories adhere to INVEST is to break a large user story apart into smaller user stories.  Occasionally, the opposite might be true (combine many smaller user stories), especially in support of Independent.

Drawbacks to User Stories

  • Difficult to understand relationship between stories
  • May require augmenting them with additional documentation if traceability is a mandate
  • May not scale well to large teams (too much conversation and too many pathways)

Guidelines for Good Stories

  • Start with Goal Stories – why is user using the system?
  • Slice the Cake – slice up large stories in a way that the user can actually accomplish something (not part of something).  e.g. 1) collect data 2) write to database.  That is bad.  Good would be to 1) collect basic information, 2) collect detailed information
  • Write closed stories – don’t write a story that will never complete.  e.g. A user can manage his projects.  instead, breakup into smaller chunks that can be completed.  e.g. A user can add and remove team members from a project.
  • Put constraints on cards
  • Size the story to the horizon – stories you are going to tackle sooner need more precision.  Stories far in the future don’t need same level of precision.
  • Keep the UI out as long as possible – that’s a design task, not a requirements task
  • Some things aren’t stories – if it does not fit in a user story, use a different method.  Use this as a last resort, not as a cop-out to learning how to write good user stories.
  • Include user roles in stories
  • Write for one user
  • Write in active voice
  • Customer writes – ideally, but requires training and discipline on the part of the customer
  • Don’t number story cards – tempting, but pointless overhead.  Short title is better.
  • Don’t forget the purpose – to remind you to discuss the feature

Final Thoughts

User stories are a TOOL.  They are a great tool that every software developer should master.  But like any tool, it can be misused and thus not yield the desired results.  Also, like any tool, it is not appropriate for every job.  Before you try to use this tool, I highly recommend reading some books, reading blogs of those who have had success, and start small.



Collecting Requirements

Any stakeholder can specify requirements.  Obviously the requests by users and project sponsors carry the most weight; however, stakeholders such as developers, system administrators, and quality assurance personnel need to have a say, since they usually have the best ability to manage the quality, cost, and time dimensions.

A requirement is usually a textual specification of the form “The system shall…”  But beyond just documenting WHAT the system should do, you should also document WHY the system must conform to the requirement.  This is helpful when developers interpret the requirements.  It is also helpful when trade-offs need to occur between conflicting requirements.

Where possible, it is helpful to express requirements in numerical terms, for example: “The system shall return results to the users within 3 seconds, given a maximum of 100 records per result set and a maximum of 40 simultaneous users.”  A requirement at this level of detail is helpful to developers, project sponsors, quality assurance personnel, database administrators, and system administrators.  In that example, the technical stakeholders have some pretty concrete metrics to work with and the project sponsor has a reasonable way to verify the fulfillment of the requirement.

Warning.  Stakeholders don’t always ask for what they really need.  They make requests.  It is your job to figure out what the real need is.  You should ask yourself, “What is driving this request?”  Requests provide context for needs.  Is there a marketing reason for this request?  Is there business process reason?  Is there a compliance or legal reason?  Is there a political reason?

Warning. Stakeholders quite often express their requirement in terms of features they have previous experience with.  For example, “I would like a system that can handle email attachments”.  In reality, what they want is a way to transfer files from one user to another, but if the only way they have ever done this is via email, they will explain their requirement in the only language they know.  This represents a different form of thinking for the stakeholder.  Instead of expressing WHAT they need, they are expressing HOW it should perform.  Depending on the sophistication of the stakeholder, this may or may not be misleading.  It is your job to translate “feature speak” back into needs, then recommend features that fulfill those needs.


Non-Functional Requirements

When we talk about requirements, we are almost always talking about functional requirements.  i.e. the features.  Features deal with WHAT the application is supposed to do.  However, there is another classification of requirements called non-functional requirements.  These fall into the following categories:

  • Usability – the human factors such as aesthetics, ease of learning, ease of use, and consistency of the user interface.
  • Reliability – frequency and severity of failure, recoverability, predictability, and accuracy.
  • Performance – transaction rate, speed, availability, accuracy, response time, recovery time, or memory usage.
  • Supportability – testability and maintainability.  This is important to Quality assurance personnel and system administrators.

NOTE: The actual categorization is not important when documenting non-functional requirements.  For example, there is not much use in debating whether a requirement is a reliability or performance requirement.  The list simply exists as a mental check list of perspectives to keep in mind when looking for requirements.

Unfortunately, these non-functional requirements are rarely considered in estimates.  Exploring non-functional requirements is just as important as exploring functional requirements and can sometimes affect the cost and effort required by orders of magnitude.  For example, if a server needs to have 99.9% uptime the cost is substantially less than a server that needs to be up 99.999% of the time.  Instead of thousands of dollars per year, you may need to be spending hundreds of thousands of dollars per year.

Here are some examples of non-functional requirements

  • System must support up to 100,000 total users
  • System must support up to 1,000 simultaneous users
  • System must support up to 1,000 tasks per user (100 million total tasks)
  • System must support up to 100 projects per user
  • System must support up to 1,000 tasks per project
  • System must be accessible on desktop and mobile devices
  • System must be understandable and usable by users without any documentation or training
  • System must respond to user requests in 2 seconds or less
  • System must be hosted on Rackspace (production)
  • System must run on exactly one VM (to reduce operations costs)
  • System must work on 90% of all browsers used in the U.S.
  • System must be up 99.9% of the time (normal operation and maintenance)
  • System must support disaster recovery of no more than 1 day of downtime
  • System must be easily supported.  No more than 1 FTE support personnel
  • System must be easily maintainable.  Respond to most feature requests in less than 1 week and bug fixes in less than 2 days.

Validating Matching Text using an Angular Directive

This is a nice Angular directive I found which can be used with AngularJS validation to make sure the text in two inputs matches.  The most common scenario for this would be a form where a user must create a new password.

The directive:

.directive('compareText', [function () {
    return {
        require: 'ngModel',
        link: function (scope, elem, attrs, ctrl) {
            var firstTextBox = '#' + attrs.compareText;
            elem.add(firstTextBox).on('keyup', function () {
                scope.$apply(function () {
                    var v = elem.val() === $(firstTextBox).val();
                    ctrl.$setValidity('textMatch', v);
                });
            });
        }
    };
}]);

The markup:

<form name="myForm">
   <input id="pw1" type="password" placeholder="* Password" name="password1" ng-model="model.Password1">
   <input id="pw2" type="password" placeholder="* Repeat Password" name="password2" compare-text="pw1" ng-model="model.Password2">
   <div ng-show="myForm.password2.$error.textMatch">Must match Password</div>
</form>

This is a very simple and reusable directive that makes sure the text in the inputs matches, using some jQuery.

Best practice for file upload and import

A common feature in applications is the ability for an end user to import data from a file.  Generally, the best user experience is to allow the user to select a file off their hard drive and upload it to the server for parsing and import into the database.

However, there can sometimes be technical complexity associated with file uploads and that complexity often distracts from the business logic.  We should first work through the business logic of the import before tackling the technical complexities of uploading, storing, parsing, and cleaning up files.

FIRST, create a really dumb import screen that just has a text box in a browser.  We can then manually copy and paste into that text box and submit it to the server for parsing and import.  There are many benefits to doing this first.

  1. Quick to build
  2. Simple to support
  3. Avoid file loading complexity and focus on business process first
  4. Easy to test
  5. Unit test friendly
  6. Great fallback should the more complex solution of file loading ever fail
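The server side of that dumb screen is just the business logic: parse the pasted text into records, with no file handling at all.  A sketch, where the record shape (Login, Email), the delimiters, and the class name are assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;

public static class TextImporter
{
    // Parses pasted tab- or comma-separated lines into (Login, Email) pairs.
    // Blank lines are skipped; malformed lines are collected as errors so the
    // user can fix them and re-paste.  Pure logic, so it is unit-test friendly.
    public static List<Tuple<string, string>> Parse(string raw, List<string> errors)
    {
        var records = new List<Tuple<string, string>>();
        foreach (var line in raw.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var parts = line.Split(new[] { '\t', ',' });
            if (parts.Length != 2)
            {
                errors.Add("Bad line: " + line);
                continue;
            }
            records.Add(Tuple.Create(parts[0].Trim(), parts[1].Trim()));
        }
        return records;
    }
}
```

When the fancier file-upload version is built later, it can feed the uploaded file's contents through this same parser, so the business logic is written and tested once.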

I am NOT saying don’t build a better user experience.  I am saying build a simple, easily testable feature FIRST that you always keep in your back pocket for a rainy day.