
Aurelia, one of the leading JavaScript client frameworks for single-page applications (SPAs), has been around for a while now, and there are a number of resources for getting started with it. Rather than make yet another demo, I thought it might be fun to create a site repository that could be used as a template, with a few more bells and whistles already set up. It is my hope that this will be useful not just to me, but to anyone else who wants to start a project. We’re going to use the command-line tools to do this, but you can also use Visual Studio 2017 if you prefer.

Setting up the ASP.NET Core Project

Prerequisites

  • The .NET Core SDK (for the dotnet commands below)
  • Node.js and npm (for installing the client-side packages)

Steps

  1. Create a folder for your project.
  2. Open a command prompt (cmd) in that folder.
  3. Install the SPA templates with the following command: 
    dotnet new --install Microsoft.AspNetCore.SpaTemplates::*

  4. Create the Aurelia project like this: 
    dotnet new aurelia

  5. Prepare the environment with the following commands, ignoring any npm warnings as they are expected. Note that setx is Windows-specific; on macOS or Linux, set the ASPNETCORE_ENVIRONMENT variable with export instead.
    dotnet restore
    npm install
    setx ASPNETCORE_ENVIRONMENT "Development"

  6. Restart your command prompt to ensure that the environment change takes effect.
  7. Run your new app from the command line:
    dotnet run


 

These steps should give you a bare-bones ASP.NET Core site with a basic Aurelia setup.

 

Working Demo

 

Publishing to a Host

Now that you have a working site, you’ll want to publish it to an actual server somewhere. You can do that by running this command:

dotnet publish -c Release

This command compiles the server side C# code and runs a webpack production build on the TypeScript and client assets. You can then find the files to upload to your host in the bin\Release\netcoreapp1.1\publish folder.

 

Exploring the Template

This template takes care of a lot of things that you would have to setup manually and makes a good starting point for adding more functionality, such as logging with Application Insights or another logging provider, fleshing out an administration interface for user management and authentication/authorization using Identity Server, or any number of other useful additions that are widely available.

 

To start understanding what we’ve got here, take a look at the files in your new project. You’ll notice that in Startup.cs we have set up a fallback route that sends anything not handled by a static file or an MVC route to the Home Index view. This is the secret sauce that lets all your client-side links work even though they aren’t configured individually on the server.

routes.MapSpaFallbackRoute(
    name: "spa-fallback",
    defaults: new { controller = "Home", action = "Index" });

 

To be continued…

 


I’ve been tracking the ASP.NET betas and release candidates over the past year or more, and it’s coming along nicely. I like the separation of client side and server side in the new folder structure and the unification of the MVC and Web API controllers. For the past few years I’ve used JsViews, Kendo UI, Angular, Durandal, Knockout, and Aurelia for front-end JavaScript development.

A while ago I started using TypeScript with these to help with IntelliSense and typing issues across the libraries and sites I worked on. I often find myself introducing whatever team I’m on to these technologies in one way or another. Lately I’ve been working with the combination of ASP.NET Core, formerly ASP.NET 5, and Aurelia with TypeScript and Web Components.

I start with the Yeoman generator for ASP.NET and add the pieces as I go. I use ASP.NET MVC for the API and plain old HTML and TypeScript for the UI. The client tool chain consists of the usual npm, jspm, and gulp. When the new ASP.NET bits are done I’ll make a template for all this and post it to GitHub. In the meantime I’m considering doing a short video or slide deck walking through the setup if anyone is interested.


I've been poking around at a lot of JavaScript over the last year or two and have been refining this layered architecture for setting up applications. The main idea behind it is to cover all the old bases in a way that also reduces the number of requests and performs very well. The layers I am using are set up like so:

  • Web front end (static HTML)
    • UI views, scripts, and related image files
    • Localized strings as JSON files
  • Middle tier (WebAPI/SignalR)
  • Back end (database/web services)

So, to explain a little about this layout… the front end is entirely static. The files may be generated by a compiler or preprocessor, but once published they are static and require no server processing. This lets you put them into a CDN, whether Azure or Amazon, for massive scale and economy. Keeping the localization files as static JSON allows them to be packaged up with the front end and served alongside it. Depending on the configuration and build process, these files could even be combined and minified further.
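As a sketch of how the client might consume one of those static localization files — the file name, keys, and helper below are made up for illustration:

```javascript
// Illustrative only: a bundle like this would be fetched from the CDN,
// e.g. from a path like /locales/en.json, with the other static assets.
var strings = { "greeting": "Hello", "farewell": "Goodbye" };

// Look a key up in the loaded bundle, falling back to the key itself
// when a translation is missing so the UI degrades gracefully.
function localize(bundle, key) {
    return Object.prototype.hasOwnProperty.call(bundle, key) ? bundle[key] : key;
}
```

Because the bundle is plain JSON, the build can concatenate or minify it along with the rest of the static assets.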

The middle tier is your standard WebAPI and/or SignalR server side code layer hosted on a standard ASP.NET server, usually Azure Web Sites in my case. This tier is basically an API that provides the site with any dynamic actions and information it needs.

Finally, the back end consists of a database used by the middle tier, usually a SQL server of some kind, and any external web services needed by the middle tier. You could lump the external web services into the middle tier but I prefer to think of them as something separate for the sake of organization.

There are some interesting issues you run into when implementing this pattern, including how to handle authorization and security trimming. I generally move the navigation page list into a JSON object generated by the API. That way you avoid advertising all your pages to the client, even though the templates for those pages may exist in your CDN as public files. All of the security should be enforced on the middle tier so that users cannot perform actions they are not authorized to perform.
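A minimal sketch of that idea, with made-up page names and roles rather than anything from a real project — the API computes the trimmed list server side, so only the allowed entries ever reach the client:

```javascript
// Each entry lists the roles allowed to see it; an empty list means public.
var allPages = [
    { title: "Home",    route: "home",    roles: [] },
    { title: "Reports", route: "reports", roles: ["user"] },
    { title: "Admin",   route: "admin",   roles: ["admin"] }
];

// Return only the pages the current user's roles permit.
function trimNavigation(pages, userRoles) {
    return pages.filter(function (page) {
        if (page.roles.length === 0) return true;
        return page.roles.some(function (role) {
            return userRoles.indexOf(role) !== -1;
        });
    });
}
```

The same check still has to guard the API actions themselves; trimming the menu is a convenience, not the security boundary.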

I've found that this model is fast and performs well under load when done properly. The trick is keeping it simple while utilizing all the tooling available to generate the published result. If anyone is interested I could go into more detail about that.


Just a quick note on jQuery Deferred objects…

var goGetSomeHtml = function() {
    var deferred = $.Deferred();
    $.ajax({
        url: "some.htm"
    }).done(function(data) {
        deferred.resolve(data);
    }).fail(function(jqXHR, textStatus) {
        deferred.reject({ xhr: jqXHR, textStatus: textStatus });
    });
    return deferred.promise();
};

They’re basically a callback wrapper. This example is a bit redundant since jQuery’s $.ajax already returns a Deferred-compatible object, but you get the idea. You can use this pattern for anything that requires a callback: it standardizes how the callback is pushed back into user code. The resolve and reject functions on the deferred pass their arguments through to the callbacks registered with done and fail.

This lets you pass the deferred’s promise around (a promise is just a deferred with resolve and reject removed from its interface). It’s a cleaner, more functional approach that removes the need to worry about call/apply syntax and context.

The above can then be used like this:

$.when(
    goGetSomeHtml()
).done(function(data) {
    // do success callback stuff here...
}).fail(function(state) {
    // use state.xhr or state.textStatus to handle the error somehow.
});
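For illustration, here is roughly what a deferred does under the hood. This is a simplified sketch, not jQuery’s actual implementation — it ignores progress notifications and the chaining niceties:

```javascript
// A deferred stores callbacks and invokes them when resolve() or reject()
// is called; callbacks attached after settlement fire immediately.
function makeDeferred() {
    var doneCallbacks = [];
    var failCallbacks = [];
    var state = "pending"; // "pending", "resolved", or "rejected"
    var value;

    return {
        resolve: function (v) {
            if (state !== "pending") return;
            state = "resolved";
            value = v;
            doneCallbacks.forEach(function (cb) { cb(value); });
        },
        reject: function (v) {
            if (state !== "pending") return;
            state = "rejected";
            value = v;
            failCallbacks.forEach(function (cb) { cb(value); });
        },
        // The promise view exposes only the consumer half of the interface.
        promise: function () {
            return {
                done: function (cb) {
                    if (state === "resolved") cb(value);
                    else if (state === "pending") doneCallbacks.push(cb);
                    return this;
                },
                fail: function (cb) {
                    if (state === "rejected") cb(value);
                    else if (state === "pending") failCallbacks.push(cb);
                    return this;
                }
            };
        }
    };
}
```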

Enjoy. :)