.NET Core Archives - Stackify

ASP.NET Razor Pages vs MVC: How Do Razor Pages Fit in Your Toolbox? https://stackify.com/asp-net-razor-pages-vs-mvc/ Sat, 26 Aug 2023 18:16:00 +0000

As part of the release of .NET Core 2.0, there are also some updates to ASP.NET. Among these is the addition of a new web framework for creating a “page” without the full complexity of ASP.NET MVC. New Razor Pages are a slimmer version of the MVC framework and, in some ways, an evolution of the old “.aspx” WebForms.

In this article, we are going to delve into some of the finer points of using ASP.NET Razor Pages versus MVC.

  • The basics of Razor Pages
  • ASP.NET MVVM vs MVC
  • Pros and cons of Razor Pages and MVC
  • Using Multiple GET or POST Actions via Handlers
  • Why you should use Razor Pages for everything
  • Code comparison of ASP.NET Razor Page vs. MVC

The Basics: What are ASP.NET Razor Pages?

A Razor Page is very similar to the view component that ASP.NET MVC developers use. It has all the same syntax and functionality.

The key difference is that the model and controller code are also included within the Razor Page. It is more of an MVVM (Model-View-ViewModel) framework. It enables two-way data binding and a more straightforward development experience with isolated concerns.

Here is a basic example of a Razor Page using inline code within a @functions block. It is actually recommended to put the PageModel code in a separate file. This is akin to the code-behind files we used with ASP.NET WebForms.

@page
@model IndexModel
@using Microsoft.AspNetCore.Mvc.RazorPages

@functions {
    public class IndexModel : PageModel
    {
        public string Message { get; private set; } = "In page model: ";

        public void OnGet()
        {
            Message += $"Server seconds: {DateTime.Now.Second}";
        }
    }
}

<h2>In page sample</h2>
<p>
    @Model.Message
</p>

We Have Two Choices Now: ASP.NET MVVM or MVC

You could say that we now choose between an MVC and an MVVM framework. I’m not going to go into all the details of MVC vs MVVM here. MVVM frameworks are most noted for two-way data binding of the data model.

MVC works well with apps with many dynamic server views, single-page apps, REST APIs, and AJAX calls. Razor Pages are perfect for simple pages that are read-only or do basic data input.

MVC has been all the rage recently for web applications across most programming languages. It definitely has its pros and cons. ASP.NET WebForms was designed as an MVVM solution. You could argue that Razor Pages are an evolution of the old WebForms.

Pros and Cons of Razor Pages

As someone who has been doing ASP.NET development for about 15 years, I am pretty conversant with all the ASP.NET frameworks. Based on my experimentation with the new Razor Pages, here are my views on the pros and cons and how I envisage using them.

Pro: More organized and less magical

I don’t know about you, but the first time I ever used ASP.NET MVC, I spent a lot of time figuring out how it worked. The naming of things and the dynamically created routes caused a lot of magic that I wasn’t used to. The fact that /Home/ goes to HomeController.Index() that loads a view file from “Views\Home\Index.cshtml” is a lot of magic to get comfortable with when starting.

Razor Pages don’t have any of that “magic”, and the files are more organized. You have a Razor view and a code-behind file, just like WebForms did, versus MVC having separate files in different directories for the controller, view, and model.

Compare simple MVC and Razor Page projects. (I will show more code differences later in this article.)

Compare MVC vs Razor Page Files

Pro: Single Responsibility

If you have ever used an MVC framework before, you have likely seen some huge controller classes that are filled with many different actions. They are like a virus that grows over time as things get added.

With Razor Pages, each page is self-contained, with its view and code organized together. This follows the Single Responsibility Principle.

Con: Requires New Learning 

Since Razor Pages represent a new way of doing things, developers used to the MVC framework might have a learning curve to overcome.

Con: Limitations for Complex Scenarios

While Razor Pages are great for simple pages, there might be better choices for complex scenarios that require intricate routing, multiple views, or complex state management.

Pros and Cons of MVC

Pro: Flexibility

MVC is incredibly flexible and can accommodate a variety of scenarios. It works well for applications with many dynamic server views, single-page apps, REST APIs, and AJAX calls.

Pro: Familiarity

Since MVC has been the mainstay for web applications across most programming languages, many developers are already familiar with its structure and functioning.

Con: Complexity

Due to its flexibility, MVC can become quite complex, especially for beginners who struggle to understand the interactions between the model, view, and controller.

Con: Risk of Bloated Controllers

With MVC, there’s a risk of having substantial controller classes filled with many different actions, making the code harder to maintain and reason about.

Using Multiple GET or POST Actions via Handlers

In a default setup, a Razor Page is designed to have a single OnGetAsync and OnPostAsync method. If you want different actions within a single page, you need to use a feature called handlers. You will need this if your page has AJAX callbacks, multiple possible form submissions, or other similar scenarios.

So, for example, if you were using a Kendo grid and wanted the grid to load via an AJAX call, you would need a handler to serve that AJAX callback. A single-page application would either use a lot of handlers or point all of those AJAX calls at an MVC controller.

I made an additional method called OnGetHelloWorldAsync() on my page. How do I invoke it?

From my research, there seem to be three different ways to use handlers:

  1. Querystring – Example: "/managepage/2177/?handler=helloworld"
  2. Define as a route in your view: @page "{handler?}" and then use /helloworld in the URL
  3. Define on your input submit button in your view. Example: <input type="submit" asp-page-handler="JoinList" value="Join" />

You can learn more about multiple-page handlers here.
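As a sketch of what this could look like (the class name here is hypothetical, and the handler mirrors the OnGetHelloWorldAsync example above), a named handler is just a method whose name maps to the "helloworld" segment once the OnGet prefix and Async suffix are stripped:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class DemoPageModel : PageModel
{
    // Default GET handler for the page
    public void OnGet()
    {
    }

    // Named handler: reachable via "?handler=helloworld" or a "{handler?}" route segment.
    // The "OnGet" prefix and "Async" suffix are stripped when matching the handler name.
    public async Task<JsonResult> OnGetHelloWorldAsync()
    {
        await Task.Yield(); // placeholder for real async work
        return new JsonResult(new { message = "Hello World" });
    }
}
```

A POST handler follows the same convention with an OnPost prefix (e.g., OnPostJoinListAsync for the asp-page-handler="JoinList" button above).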

Special thanks to those who left comments about this! Article updated!


Why you should use Razor Pages for everything! (maybe?)

One could argue that Razor Pages are the ideal solution for anything that is essentially a web page within your app. It would draw a clear line in the sand that any HTML “pages” in your app are actual pages. Currently, an MVC action could return an HTML view, JSON, a file, or anything. Using Pages would force a separation between how you load the page and which services handle the AJAX callbacks.

Think about it. This solves a lot of problems with this forced separation.

  • Razor Pages – HTML views
  • MVC/Web API – REST API calls, SOA

This would prevent MVC controllers that contain tons of actions that are a mix of not only different “pages” in your app but also a mixture of AJAX callbacks and other functions.

Of course, I haven’t actually implemented this strategy yet. It could be terrible or brilliant. Only time will tell how the community ends up using Razor Pages.

Code Comparison of ASP.NET Razor Page vs MVC

While experimenting with Razor Pages, I built a straightforward form in both MVC and as a Razor Page. Let’s delve into a comparison of how the code looks in each case. It is just a text box with a submit button.

Here is my MVC view:

@model RazorPageTest.Models.PageClass

<form asp-action="ManagePage">
    <div class="form-horizontal">
        <h4>Client</h4>
        <hr />
        <div asp-validation-summary="ModelOnly" class="text-danger"></div>
        <input type="hidden" asp-for="PageDataID" />
        <div class="form-group">
            <label asp-for="Title" class="col-md-2 control-label"></label>
            <div class="col-md-10">
                <input asp-for="Title" class="form-control" />
                <span asp-validation-for="Title" class="text-danger"></span>
            </div>
        </div>
      
        <div class="form-group">
            <div class="col-md-offset-2 col-md-10">
                <input type="submit" value="Save" class="btn btn-default" />
            </div>
        </div>
    </div>
</form>

Here is my MVC controller. (My model is PageClass which just has two properties and is really simple.)

   public class HomeController : Controller
    {
        public IConfiguration Configuration;

        public HomeController(IConfiguration config)
        {
            Configuration = config;
        }

        public async Task<IActionResult> ManagePage(int id)
        {
            PageClass page;

            using (var conn = new SqlConnection(Configuration.GetConnectionString("contentdb")))
            {
                await conn.OpenAsync();

                var pages = await conn.QueryAsync<PageClass>("select * FROM PageData Where PageDataID = @p1", new { p1 = id });

                page = pages.FirstOrDefault();
            }

            return View(page);
        }

        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> ManagePage(int id, PageClass page)
        {

            if (ModelState.IsValid)
            {
                try
                {
                    //Save to the database
                    using (var conn = new SqlConnection(Configuration.GetConnectionString("contentdb")))
                    {
                        await conn.OpenAsync();
                        await conn.ExecuteAsync("UPDATE PageData SET Title = @Title WHERE PageDataID = @PageDataID", new { page.PageDataID, page.Title});
                    }
                }
                catch (Exception)
                {
                   //log it
                }
                return RedirectToAction("Index", "Home");
            }
            return View(page);
        }
    }

Now let’s compare that to my Razor Page.

My Razor Page:

@page "{id:int}"
@model RazorPageTest2.Pages.ManagePageModel

<form asp-action="ManagePage">
    <div class="form-horizontal">
        <h4>Manage Page</h4>
        <hr />
        <div asp-validation-summary="ModelOnly" class="text-danger"></div>
        <input type="hidden" asp-for="PageDataID" />
        <div class="form-group">
            <label asp-for="Title" class="col-md-2 control-label"></label>
            <div class="col-md-10">
                <input asp-for="Title" class="form-control" />
                <span asp-validation-for="Title" class="text-danger"></span>
            </div>
        </div>

        <div class="form-group">
            <div class="col-md-offset-2 col-md-10">
                <input type="submit" value="Save" class="btn btn-default" />
            </div>
        </div>
    </div>
</form>

Here is my Razor PageModel, aka code behind:

   public class ManagePageModel : PageModel
    {
        public IConfiguration Configuration;

        public ManagePageModel(IConfiguration config)
        {
            Configuration = config;
        }

        [BindProperty]
        public int PageDataID { get; set; }
        [BindProperty]
        public string Title { get; set; } 

        public async Task<IActionResult> OnGetAsync(int id)
        {
            using (var conn = new SqlConnection(Configuration.GetConnectionString("contentdb")))
            {
                await conn.OpenAsync();
                var pages = await conn.QueryAsync("select * FROM PageData Where PageDataID = @p1", new { p1 = id });

                var page = pages.FirstOrDefault();

                this.Title = page.Title;
                this.PageDataID = page.PageDataID;
            }

            return Page();
        }

        public async Task<IActionResult> OnPostAsync(int id)
        {

            if (ModelState.IsValid)
            {
                try
                {
                    //Save to the database
                    using (var conn = new SqlConnection(Configuration.GetConnectionString("contentdb")))
                    {
                        await conn.OpenAsync();
                        await conn.ExecuteAsync("UPDATE PageData SET Title = @Title WHERE PageDataID = @PageDataID", new { PageDataID, Title });
                    }
                }
                catch (Exception)
                {
                   //log it
                }
                return RedirectToPage("/Index");
            }
            return Page();
        }
    }

Deciphering the comparison

The code between the two is nearly identical. Here are the key differences:

  • The MVC view part of the code is exactly the same except for the inclusion of “@page” in the Razor Page.
  • ManagePageModel has OnGetAsync and OnPostAsync, which replaced the two MVC controller “ManagePage” actions.
  • ManagePageModel includes my two properties that were in the separate PageClass before.

In MVC for an HTTP POST, you pass in your object to the MVC action (i.e., “ManagePage(int id, PageClass page)”). With a Razor Page, you are instead using two-way data binding. I annotated my two properties (PageDataID, Title) with [BindProperty] to get Razor Pages to work correctly with two-way data binding. My OnPostAsync method only has a single id input since the other properties are automatically bound.

Will it Prefix?

Do Razor Pages work with Prefix? Yes! Our free ASP.NET Profiler, Prefix, supports ASP.NET Razor Pages. Our Retrace and Prefix products have full support for ASP.NET Core.

Summary

I really like Razor Pages and can definitely see using them in an ASP.NET Core project I am working on. I like the idea of Razor Pages being the true pages in my app and implementing all AJAX/REST API functions with MVC. I’m sure there are also other use cases that Razor Pages don’t work for. The good news is MVC is super flexible, but that is what also makes it more complex. The true beauty of Razor Pages is their simplicity.

.NET Core Dependency Injection https://stackify.com/net-core-dependency-injection/ Thu, 27 Jul 2023 18:48:00 +0000

What is Dependency Injection?

Dependency Injection (DI) is a pattern that can help developers decouple the different pieces of their applications. It provides a mechanism for the construction of dependency graphs independent of the class definitions. Throughout this article, I will be focusing on constructor injection where dependencies are provided to consumers through their constructors.

It’s also important to mention two other injection methods:

  • property injection, which the built-in DI framework for .NET Core doesn’t support
  • and action method injection

Consider the following classes:

class Bar : IBar { 
  // ...
}

class Foo {
  private readonly IBar _bar;
  public Foo(IBar bar) {
    _bar = bar;
  }
}

In this example, Foo depends on IBar and somewhere we’ll have to construct an instance of Foo and specify that it depends on the implementation Bar like so:

var bar = new Bar();
var foo = new Foo(bar);

The problem with this is two-fold. Firstly, it violates the Dependency Inversion Principle because the consuming class implicitly depends on the concrete types Bar and Foo. Secondly, it results in a scattered definition of the dependency graph and can make unit testing very difficult.

Why Do We Need Dependency Injection In C#?

The Composition Root pattern states that the entire dependency graph should be composed in a single location “as close as possible to the application’s entry point”. This could get pretty messy without the assistance of a framework. DI frameworks provide a mechanism, often referred to as an Inversion of Control (IoC) Container, for offloading the instantiation, injection, and lifetime management of dependencies to the framework. You invert the control of component instantiation from the consumers to the container, hence “Inversion of Control”.

To do this, you simply register services with a container, and then you can load the top level service. The framework will inject all child services for you. A simple example, based on the class definitions above, might look like:

container.Register<Bar>().As<IBar>();
container.Register<Foo>();
// per the Composition Root pattern, this _should_ be the only lookup on the container
var foo = container.Get<Foo>();

What Is .NET Core Dependency Injection?

Prior to .NET Core, the only way to get DI in your applications was through the use of a framework such as Autofac, Ninject, StructureMap and many others. However, DI is treated as a first-class citizen in ASP.NET Core. You can configure your container in your Startup.ConfigureServices method:

public class Startup {
  public void ConfigureServices(IServiceCollection services) {
    services.AddTransient<IArticleService, ArticleService>();
  }
  // ...
}

When a request gets routed to your controller, it will be resolved from the container along with all its dependencies:

public class ArticlesController : Controller {
  private readonly IArticleService _articleService;
  public ArticlesController(IArticleService articleService) {
    _articleService = articleService;
  }
 
  [HttpGet("{id}")]
  public async Task<IActionResult> GetAsync(int id) {
    var article = await _articleService.GetAsync(id);
    if(article == null)
      return NotFound();
    return Ok(article);
  }
}

What Is Singleton vs. Transient vs Scoped?

In the context of .NET Core DI, you’ll often hear or read the terms singleton, transient, and scoped. These are dependency lifetimes.

At registration time, dependencies require a lifetime definition. The service lifetime defines the conditions under which a new service instance will be created. Below are the lifetimes defined by the ASP.NET Core DI framework. The terminology may be different if you choose to use a different framework.

  • Transient – Created every time they are requested
  • Scoped – Created once per scope. Most of the time, scope refers to a web request. But this can also be used for any unit of work, such as the execution of an Azure Function.
  • Singleton – Created once, on the first request, and reused for the lifetime of the container. If a particular instance is specified at registration time, this instance will be provided to all consumers of the registration type.
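To make the distinction concrete, here is a sketch of registering services under each lifetime; the interface and implementation names are placeholders invented for illustration:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Transient: a new instance is constructed every time one is requested
    services.AddTransient<IEmailFormatter, EmailFormatter>();

    // Scoped: one instance per scope (typically one per web request)
    services.AddScoped<IUnitOfWork, UnitOfWork>();

    // Singleton: one instance for the lifetime of the container
    services.AddSingleton<IAppCache, AppCache>();

    // Singleton from a pre-built instance supplied at registration time;
    // this exact instance is handed to every consumer of IClock
    services.AddSingleton<IClock>(new SystemClock());
}
```

Injecting a scoped service into a singleton is a common pitfall: the singleton captures the scoped instance and keeps it alive past its intended scope.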

Using Different Providers

If you would like to use a more mature DI framework, you can do so as long as they provide an IServiceProvider implementation. If they don’t provide one, it is a very simple interface that you should be able to implement yourself. You would just return an instance of the container in your ConfigureServices method. Here is an example using Autofac:

public class Startup { 
  public IServiceProvider ConfigureServices(IServiceCollection services) {
    // setup the Autofac container
    var builder = new ContainerBuilder();
    builder.Populate(services);
    builder.RegisterType<ArticleService>().As<IArticleService>();
    var container = builder.Build();
    // return the IServiceProvider implementation
    return new AutofacServiceProvider(container);
  }
  // ... 
}

Generics

Dependency injection can get really interesting when you start working with generics. Most DI providers allow you to register open generic types that will have their generic arguments set based on the requested generic type arguments. A great example of this is Microsoft’s logging framework (Microsoft.Extensions.Logging). If you look under the hood, you can see how they inject the open generic ILogger<>:

services.TryAdd(ServiceDescriptor.Singleton(typeof(ILogger<>), typeof(Logger<>)));

This allows you to depend on the generic ILogger<> like so:

public class Foo {
  public Foo(ILogger<Foo> logger) {
    logger.LogInformation("Constructed!!!");
  }
}

Another common use case is the Generic Repository Pattern. Some consider this an anti-pattern when used with an ORM like Entity Framework because it already implements the Repository Pattern. But, if you’re unfamiliar with DI and generics, I think it provides an easy entry point.

Open generic injection also provides a great mechanism for libraries (such as JsonApiDotNetCore) to offer default behaviors with easy extensibility for applications. Suppose a framework provides an out-of-the-box implementation of the generic repository pattern. It may have an interface that looks like this, implemented by a GenericRepository:

public interface IRepository<T> where T : IIdentifiable {
   T Get(int id);
}

The library would provide some IServiceCollection extension method like:

public static void AddDefaultRepositories(this IServiceCollection services) {
  services.TryAdd(ServiceDescriptor.Scoped(typeof(IRepository<>), typeof(GenericRepository<>)));
}

And the default behavior could be supplemented by the application on a per resource basis by injecting a more specific type:

services.AddScoped<IRepository<Foo>, FooRepository>();

And of course FooRepository can inherit from GenericRepository<Foo>.

class FooRepository : GenericRepository<Foo> {
  public override Foo Get(int id) { // assumes GenericRepository.Get is virtual
    var foo = base.Get(id);
    // ...authorization of resources or any other application concerns can go here
    return foo;
  }
}

Beyond the Web

The ASP.NET team has separated their DI framework from the ASP.NET packages into Microsoft.Extensions.DependencyInjection. This means you are not limited to web apps and can leverage these libraries in event-driven apps (such as Azure Functions and AWS Lambda) or in thread loop apps. All you need to do is:

  1. Install the framework NuGet package:
    Install-Package Microsoft.Extensions.DependencyInjection
    or
    dotnet add package Microsoft.Extensions.DependencyInjection
  2. Register your dependencies on a static container:
    var serviceCollection = new ServiceCollection();
    serviceCollection.AddScoped<IEmailSender, AuthMessageSender>();
    serviceCollection.AddScoped<IEventProcessor, AzureFunctionEventProcessor>();
    Container = serviceCollection.BuildServiceProvider();
  3. Define the lifetime scope (if applicable) and resolve your top level dependency:
    var serviceScopeFactory = Container.GetRequiredService<IServiceScopeFactory>();
    using (var scope = serviceScopeFactory.CreateScope())
    {
      var processor = scope.ServiceProvider.GetService<IEventProcessor>();
      processor.Handle(theEvent);
    }

Under the hood, the call to .BuildServiceProvider() also registers an IServiceScopeFactory. You can load this service and define a scope so you can use properly scoped services.

Disposable Services

If a registered service implements IDisposable it will be disposed of when the containing scope is disposed. You can see how this is done here. For this reason, it is important to always resolve services from a scope and not the root container, as described above. If you resolve IDisposables from the root container, you may create a memory leak since these services will not be disposed of until the container gets disposed. 
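For example (the service name here is hypothetical), resolving a disposable service from a scope guarantees its Dispose method runs when the scope is disposed:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

class TempFileService : IDisposable
{
    public void Dispose() => Console.WriteLine("Cleaning up temp files");
}

class Program
{
    static void Main()
    {
        var services = new ServiceCollection();
        services.AddScoped<TempFileService>();

        using var provider = services.BuildServiceProvider();
        using (var scope = provider.CreateScope())
        {
            var temp = scope.ServiceProvider.GetRequiredService<TempFileService>();
            // ...use the service...
        } // the scope is disposed here, which disposes TempFileService

        // By contrast, provider.GetRequiredService<TempFileService>() from the root
        // container would not be disposed until the provider itself is disposed.
    }
}
```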

Dynamic Service Resolution

Some DI providers provide resolution time hooks that allow you to make runtime decisions about dependency injection. For example, Autofac provides an AttachToComponentRegistration method that can be used to make runtime decisions. At Stackify, we used this with Azure Functions to wrap the TraceWriter (before they supported the ILogger interface) behind a facade. This facade passed the logging method calls to the scoped TraceWriter instance as well as our log4net logger. To do this, we register the instance of the TraceWriter when we begin the lifetime scope:

using (var scope = ServiceProvider.BeginLifetimeScope(b => b.RegisterInstance(traceWriter)))
{
  // ...
}

I’ve created a gist here that you can reference if you’d like to see the rest of the implementation.

When Not To Use IoC Containers

In general, IoC containers are an application concern. This means library and framework authors should think carefully about whether it is really necessary to create an IoC container within the package itself. One example that does this is the ASP.NET Core MVC framework packages. However, that framework is intended to manage the lifetime of the application itself, which is very different from, say, a logging framework.

Conclusion

Dependency Injection describes the pattern of passing dependencies to consuming services at instantiation. DI frameworks provide IoC containers that allow developers to offload control of this process to the framework. This lets us decouple our modules from their concrete dependencies, improving testability and extensibility of our applications.

Hope this article was helpful. I often reference what we do here at Stackify and wanted to share that we also use our own tools in house to continually improve our applications. Both our free dynamic code profiler, Stackify Prefix, and our full lifecycle APM, Stackify Retrace, help us make sure we are providing our clients with the best value.

Note: All of the source code links used in this article are permalinks to the code on the default repository branches. These links should be used as a reference and not as the current state of the underlying implementations or APIs, since these are subject to change at any time.

IIS Error Logs and Other Ways to Find ASP.NET Failed Requests https://stackify.com/beyond-iis-logs-find-failed-iis-asp-net-requests/ Thu, 27 Jul 2023 10:38:15 +0000

As exciting as it can be to write new features in your ASP.NET Core application, our users inevitably encounter failed requests. Do you know how to troubleshoot IIS or ASP.NET errors on your servers? It can be tempting to bang on your desk and proclaim your annoyance.

However, Windows and ASP.NET Core provide several different logs where failed requests are logged. This goes beyond simple IIS logs and can give you the information you need to combat failed requests.

Get to Know the 4 Different IIS Logs

If you have been dealing with ASP.NET Core applications for a while, you may be familiar with normal IIS logs. Such logs are only the beginning of your troubleshooting toolbox.

There are some other places to look if you are looking for more detailed error messages or can’t find anything in your IIS log file.

1. Standard IIS Logs

Standard IIS logs will include every single web request that flows through your IIS site.

Via IIS Manager, you can see a “Logging” feature. Click on this, and you can verify that your IIS logs are enabled and observe where they are being written to.

iis logs settings

You should find your logs in folders that are named by your W3SVC site ID numbers.

Need help finding your logs? Check out: Where are IIS Log Files Located?

By default, each logged request in your IIS log will include several key fields including the URL, querystring, and error codes via the status, substatus and win32 status.

These status codes can help identify the actual error in more detail.

#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2019-09-13 21:45:10 ::1 GET /webapp2 - 80 - ::1 Mozilla/5.0 - 500 0 0 5502
2019-09-13 21:45:10 ::1 GET /favicon.ico - 80 - ::1 Mozilla/5.0 http://localhost/webapp2 404 0 2 4

The “sc-status” and “sc-substatus” fields are the standard HTTP status code of 200 for OK, 404, 500 for errors, etc.

The “sc-win32-status” can provide more details that you won’t know unless you look up the code. They are basic Win32 error codes.

You can also see the endpoint the log message is for under “cs-uri-stem”. For example, “/webapp2.” This can instantly direct you to problem spots in your application.

Another key piece of info to look at is “time-taken.” This gives you the roundtrip time in milliseconds of the request and its response.

By the way, if you are using Retrace, you can also use it to query across all of your IIS logs as part of its built-in log management functionality.

2. Can’t Find Your Request in the IIS Log? HTTPERR is Your IIS Error Log.

Every single web request should show in your IIS log. If it doesn’t, it is possible that the request never made it to IIS, or IIS wasn’t running.

It is also possible that IIS logging is disabled. If IIS is running but you still are not seeing the log events, they may be going to HTTPERR.

Incoming requests to your server first route through HTTP.SYS before being handed to IIS. These types of errors get logged in HTTPERR.

Common errors are 400 Bad Request, timeouts, 503 Service Unavailable and similar types of issues. The built-in error messages and error codes from HTTP.SYS are usually very detailed.

Where are the HTTPERR error logs?

C:\Windows\System32\LogFiles\HTTPERR

3. Look for ASP.NET Core Exceptions in Windows Event Viewer

By default, ASP.NET will log unhandled 500-level exceptions to the Windows Application EventLog. This is handled by the ASP.NET Health Monitoring feature. You can control its settings via the system.web/healthMonitoring section in your web.config file.

Very few people realize that the number of errors written to the Application EventLog is rate limited. So you may not find your error!

By default, it will only log the same type of error once a minute. You can also disable writing any errors to the Application EventLog.

iis error logs in eventlog

Can’t find your exception?

You may not be able to find your exception in the EventLog. Depending on whether you are using WebForms, MVC, Core, WCF or other frameworks, ASP.NET may not write any errors to the EventLog at all due to compatibility issues with the health monitoring feature.

By the way, if you install Retrace on your server, it can catch every single exception that is ever thrown in your code. It knows how to instrument into IIS features.

4. Failed Request Tracing for Advanced IIS Error Logs

Failed request tracing (FRT) is probably one of the least used features in IIS. It is, however, incredibly powerful. 

It provides robust IIS logging and works as a great IIS error log. FRT is enabled in IIS Manager and can be configured via rules for all requests, slow requests, or just certain response status codes.

You can configure it via the “Actions” section for a website:

The only problem with FRT is it is incredibly detailed. Consider it the stenographer of your application. It tracks every detail and every step of the IIS pipeline. You can spend a lot of time trying to decipher a single request.

5. Make ASP.NET Core Show the Full Exception…Temporarily

If other avenues fail you and you can reproduce the problem, you could temporarily modify your application configuration to make the full exceptions visible.

Typically, server-side exceptions are disabled from being visible within your application for important security reasons. Instead, you will see a yellow screen of death (YSOD) or your own custom error page.

You can modify your application config files to make exceptions visible.

asp net error ysod

ASP.NET

You could use remote desktop to access the server and set customErrors to “RemoteOnly” in your web.config so you can see the full exception via “localhost” on the server. This would ensure that no users would see the full exceptions but you would be able to.

If you are OK with the fact that your users may now see a full exception page, you could set customErrors to “Off.”
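As a sketch, here is what those two customErrors options look like in a classic ASP.NET web.config (the defaultRedirect path is just a hypothetical example; pick one mode, not both):

```xml
<configuration>
  <system.web>
    <!-- RemoteOnly: full errors only when browsing via localhost on the server itself -->
    <customErrors mode="RemoteOnly" defaultRedirect="~/Error.aspx" />
    <!-- Or temporarily show full errors to everyone (remember to revert this!) -->
    <!-- <customErrors mode="Off" /> -->
  </system.web>
</configuration>
```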

.NET Core

Compared to previous versions of ASP.NET, .NET Core has completely changed how error handling works. You now need to use the DeveloperExceptionPage in your middleware.  
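A minimal sketch of that wiring in a Startup.Configure method; the environment check keeps the detailed page out of production, and the "/Error" path is just an example:

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        // Shows the full exception, stack trace and request details in the browser
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // Users get a friendly error page instead of the raw exception
        app.UseExceptionHandler("/Error");
    }
}
```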

.NET Core gives you unmatched flexibility in how you want to see and manage your errors. It also makes it easy to wire in instrumentation like Retrace.

6. Using a .NET Profiler to Find ASP.NET Core Exceptions

.NET Profilers like Prefix (which is free!) can collect every single exception that .NET throws in your code, even if they are hidden in your code. 

Prefix is a free ASP.NET Core profiler designed to run on your workstation to help you optimize your code as you write it. Prefix can also show you your SQL queries, HTTP calls, and much, much more.

profiled asp.net iis error log

Get Proactive About Tracking Application Errors!

Trying to reproduce an error in production or chasing down IIS logs/IIS error logs is not fun. Odds are, there are probably many more errors going on that you aren’t even aware of. When a customer contacts you and says your site is throwing errors, you better have an easy way to see them!

Tracking application errors is one of the most important things every development team should do. If you need help, be sure to try Retrace which can collect every single exception across all of your apps and servers.

Also, check out our detailed guide on C# Exception Handling Best Practices.

If you are using Azure App Services, also check out this article: Where to Find Azure App Service Logs.

How to Use LoggerFactory and Microsoft.Extensions.Logging for .NET Core Logging With C# https://stackify.com/net-core-loggerfactory-use-correctly/ Fri, 21 Jul 2023 07:40:00 +0000 https://stackify.com/?p=7114 Do you use .NET (formerly .NET Core)? If so, you’re probably familiar with the built-in .NET Core LoggerFactory which is in Microsoft.Extensions.Logging. Back when it was introduced, it created a lot of confusion around logging with ASP.NET Core. Several years later, the dust has settled, and .NET logging has become somewhat “boring”, which means predictable and consistent.

In this post, we’ll offer you a guide on .NET logging. These are the topics we’ll cover:

  • Basics of the .NET Core Logging With LoggerFactory
  • Where is the LoggerFactory Created?
  • Accessing the LoggerFactory Object via Dependency Injection and Services
  • Accessing the Logging API Outside of a MVC Controller
  • How to Use the Logging API from Everywhere
  • Extend the Microsoft.Extensions.Logging API Functionality by Using NLog or Serilog Providers

Let’s get started.

Basics of the .NET Core Logging With LoggerFactory

Microsoft.Extensions.Logging is designed as a logging API that developers can use to capture built-in ASP.NET logging as well as their own custom logging. The logging API supports multiple output providers and is extensible enough to potentially send your application logging anywhere.

Other logging frameworks like NLog and Serilog have even written providers for it, so you can use the ILoggerFactory as a facade above an actual logging library, much like Common.Logging works. Used this way, it also lets you leverage all of the power of a library like NLog to overcome any limitations the built-in Microsoft.Extensions.Logging API may have.

Where is the LoggerFactory Created?

Configuring the .NET logging facilities used to be way harder than it is today. In recent versions of .NET (6 and newer), the configuration of web apps has been greatly simplified. For starters, the Startup class is gone. You can have it back if you really want it but by default, it’s no longer there.

Instead, all configuration now lives in the Program.cs file.

Currently, if you start a new ASP.NET web API (making sure you don’t choose the minimal API format), your Program.cs file should look like the following:

var builder = WebApplication.CreateBuilder(args);


// Add services to the container.
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.UseAuthorization();

app.MapControllers();

app.Run();

Right after the first line, we’ll add two more lines and that will be the start of our logging configuration:

builder.Logging.ClearProviders();
builder.Logging.AddConsole();
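To confirm the configuration took effect, you can resolve a logger straight from the built app and write a line; using ILogger&lt;Program&gt; here is just a convenient category choice, not a requirement:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Logging.ClearProviders();
builder.Logging.AddConsole();

var app = builder.Build();

// Resolve a logger from the DI container and log a startup message
var logger = app.Services.GetRequiredService<ILogger<Program>>();
logger.LogInformation("Console logging is configured");

app.Run();
```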

Accessing the LoggerFactory Object via Dependency Injection and Services

The example code below shows two different ways to access the logging API from your MVC controller. Dependency injection can give you either the factory or a logger. Note that a bare ILogger is not registered with the container by default; you inject the generic ILogger&lt;T&gt; instead.

public class ValuesController : Controller
{
    private readonly ILoggerFactory _factory;
    private readonly ILogger _logger;

    // Both set by dependency injection; ILogger<ValuesController> is the
    // injectable form of ILogger
    public ValuesController(ILoggerFactory factory, ILogger<ValuesController> logger)
    {
        _factory = factory;
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        var loggerFromFactory = _factory.CreateLogger("Values");
        _logger.LogDebug("From direct dependency injection");
        loggerFromFactory.LogDebug("From dependency injection factory");
        return new[] { "value1", "value2" };
    }
}

Accessing the Logging API Outside of a MVC Controller

OK, so this is where the new logging API quickly becomes a nightmare. Dependency injection works great for accessing it in your MVC controller. But…how do you do logging in a class library that is consumed by this MVC project?

1. You could pass your existing LoggerFactory into every object/method you call (which is a terrible idea).

2. You could create your own LoggerFactory within your class library

This is an option as long as you don’t use any providers, like a file provider, that can’t have multiple instances writing at the same time. And if you are using a lot of different class libraries, you would have a lot of LoggerFactory objects running around.

3. Create a centrally located static class or project to hold and wrap around the main LoggerFactory reference

I see this as the best solution here, unless none of your providers have concurrency issues (in which case option 2 works fine).

How to Use the Logging API from Everywhere

My suggestion is to create a little static helper class that becomes the owner of the LoggerFactory. It can look something like the example below. You can then use this ApplicationLogging class from any code that needs logging, without recreating LoggerFactory objects over and over. After all, logging needs to be fast!

public static class ApplicationLogging
{
    private static ILoggerFactory _factory = null;

    public static void ConfigureLogger(ILoggerFactory factory)
    {
        // AddStackify comes from the StackifyLib package;
        // AddFile comes from Serilog's file provider extension
        factory.AddDebug(LogLevel.Debug).AddStackify();
        factory.AddFile("logFileFromHelper.log");
    }

    public static ILoggerFactory LoggerFactory
    {
        get
        {
            if (_factory == null)
            {
                _factory = new LoggerFactory();
                ConfigureLogger(_factory);
            }
            return _factory;
        }
        set { _factory = value; }
    }

    public static ILogger CreateLogger<T>() => LoggerFactory.CreateLogger<T>();
}
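With the helper in place, using it from any class library looks something like the sketch below; the OrderProcessor class and its message are hypothetical, only ApplicationLogging comes from the code above:

```csharp
public class OrderProcessor
{
    // Cache the logger in a static field so it is created only once per type
    private static readonly ILogger _logger =
        ApplicationLogging.LoggerFactory.CreateLogger<OrderProcessor>();

    public void Process(int orderId)
    {
        _logger.LogInformation("Processing order {OrderId}", orderId);
        // ... business logic ...
    }
}
```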

Extend the Microsoft.Extensions.Logging API Functionality by Using NLog or Serilog Providers

NLog and Serilog both have a provider that you can use to extend the functionality of the built-in logging API. These providers essentially redirect all of the logs written to the logging API into their libraries. This gives you all the power of those libraries for the output of your logs, while your code is not tied to any particular library. This is similar to the benefit of using Common.Logging.
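As a sketch, wiring NLog into a modern Program.cs looks roughly like this; it assumes the NLog.Web.AspNetCore package is installed (Serilog has an equivalent UseSerilog extension in Serilog.AspNetCore):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Remove the default providers and route everything written to
// Microsoft.Extensions.Logging through NLog instead
builder.Logging.ClearProviders();
builder.Host.UseNLog();

var app = builder.Build();
app.Run();
```

Your application code keeps logging through ILogger&lt;T&gt; as usual; only the output side changes.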

Conclusion

Logging is an essential part of most non-trivial applications. As such, it shouldn’t be hard to integrate it into your application. The idea behind .NET’s built-in logging capabilities represents exactly that: by making logging a first-class citizen of the framework, friction is greatly reduced.

Back in the day, a great option for .NET logging was to simply use NLog or Serilog and not worry about the new logging API at all. Even though the built-in logging capabilities are now easier than ever to use, the advice still stands. If you want to capture the built-in ASP.NET logging, you can plug in the NLog/Serilog provider and it will map those messages over. This way, you can use a different logging library directly and never have to think about LoggerFactory existing.

How to Deploy ASP.NET Core to IIS & How ASP.NET Core Hosting Works https://stackify.com/how-to-deploy-asp-net-core-to-iis/ Thu, 27 Apr 2023 07:00:00 +0000 https://stackify.com/?p=10613 Previously, we discussed the differences between Kestrel vs IIS. In this article, we will review how to deploy an ASP.NET Core application to IIS.

Deploying an ASP.NET Core app to IIS isn’t complicated. However, ASP.NET Core hosting is different compared to hosting with ASP.NET, because ASP.NET Core uses different configurations. You may read more about ASP.NET Core in this entry.

On the other hand, IIS is a web server that runs on Windows. The purpose of IIS, in this context, is to host applications built on ASP.NET Core. There’s more information on IIS and ASP.NET in our previous blog, “What is IIS?”

In this entry, we’ll explore how to make both ASP.NET Core and IIS work together. Without further ado, let’s explore the steps on how we can deploy ASP.NET Core to IIS.

How to Configure Your ASP.NET Core App For IIS

The first thing you will notice when creating a new ASP.NET Core project is that it’s a console application. Your project now contains a Program.cs file, just like a console app, plus the following code:

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

What is the WebHostBuilder?

All ASP.NET Core applications require a WebHost object that essentially serves as the application and web server. In this case, a WebHostBuilder is used to configure and create the WebHost. You will normally see UseKestrel() and UseIISIntegration() in the WebHostBuilder setup code.

What do these do?

  • UseKestrel() – Registers the IServer interface for Kestrel as the server that will be used to host your application. In the future, there could be other options, including WebListener which will be Windows only.
  • UseIISIntegration() – Tells ASP.NET that IIS will be working as a reverse proxy in front of Kestrel and specifies settings such as which port Kestrel should listen on, header forwarding and other details.

If you are planning to deploy your application to IIS, UseIISIntegration() is required.

What is AspNetCoreModule?

You may have noticed that ASP.NET Core projects create a web.config file. This is only used when deploying your application to IIS and registers the AspNetCoreModule as an HTTP handler.

Default web.config for ASP.NET Core:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
    </handlers>
    <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false"/>
  </system.webServer>
</configuration>

AspNetCoreModule handles all incoming traffic to IIS, then acts as the reverse proxy that knows how to hand traffic off to your ASP.NET Core application. You can view the source code of it on GitHub. AspNetCoreModule also ensures that your web application is running and is responsible for starting your process up.

Install .NET Core Windows Server Hosting Bundle

Before you deploy your application, you need to install the .NET Core hosting bundle for IIS – .NET Core runtime, libraries and the ASP.NET Core module for IIS.

After installation, you may need to do a “net stop was /y” and “net start w3svc” to ensure all the changes are picked up for IIS.

Download: .NET Core Windows Server Hosting <- Make sure you pick “Windows Server Hosting”

Steps to Deploy ASP.NET Core to IIS

Before you deploy, you need to make sure that WebHostBuilder is configured properly for Kestrel and IIS. Your web.config file should also exist and look similar to our example above.

Step 1: Publish to a File Folder

Step 2: Copy Files to Preferred IIS Location

Now you need to copy your publish output to where you want the files to live. If you are deploying to a remote server, you may want to zip up the files and move to the server. If you are deploying to a local dev box, you can copy them locally.

For our example, I am copying the files to C:\inetpub\wwwroot\AspNetCore46

You will notice that with ASP.NET Core, there is no bin folder and it potentially copies over a ton of different .NET DLLs. Your application may also be an EXE file if you are targeting the full .NET Framework. This little sample project had over 100 DLLs in the output.

Step 3: Create Application in IIS

While creating your application in IIS is listed as a single “Step,” you will take multiple actions. First, create a new IIS Application Pool under the .NET CLR version of “No Managed Code”. Since IIS only works as a reverse proxy, it isn’t actually executing any .NET code.

Second, you can create your application under an existing or a new IIS Site. Either way, you will want to pick your new IIS Application Pool and point it to the folder you copied your ASP.NET publish output files to.

Step 4: Load Your App!

At this point, your application should load just fine. If it does not, check the output logging from it. Within your web.config file, you define how IIS starts up your ASP.NET Core process. Enable output logging by setting stdoutLogEnabled=true. You may also want to change the log output location as configured in stdoutLogFile. Check out the example web.config above to see where they are set.
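For reference, here is the aspNetCore element from the earlier web.config with stdout logging switched on; the log path is just an example and the folder must already exist:

```xml
<aspNetCore processPath="%LAUNCHER_PATH%"
            arguments="%LAUNCHER_ARGS%"
            stdoutLogEnabled="true"
            stdoutLogFile=".\logs\stdout"
            forwardWindowsAuthToken="false" />
```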

Advantages of Using IIS with ASP.NET Core Hosting

Microsoft recommends using IIS with any public facing site for ASP.NET Core hosting. IIS provides additional levels of configurability, management, security and logging, among many other things.

Check out our blog post about Kestrel vs IIS to see a whole matrix of feature differences. The post goes into more depth about what Kestrel is and why you need both Kestrel and IIS.

One of the big advantages to using IIS is the process management. IIS will automatically start your app and potentially restart it if a crash were to occur. If you were running your ASP.NET Core app as a Windows Service or console app, you would not have that safety net there to start and monitor the process for you.

Speaking of safety nets, your application performance should be a top priority, which is why you need an application performance monitoring tool that helps you deploy robust applications.

Try Retrace for APM! Retrace is an application performance monitoring tool compatible with multiple development platforms. You can easily track deployments and improvements through the insight-based dashboards. The tool provides you key metrics so you can easily see which areas need attention.

Get your Free 14-Day Trial today!

What is Blazor? Your Guide to Getting Started https://stackify.com/blazor-introduction/ Wed, 12 Dec 2018 14:20:10 +0000 https://stackify.com/?p=23177 For years now, if you wanted to write code to run in a browser, your choices were JavaScript or JavaScript. For a couple of brief periods on certain browsers, there were other languages you could use, but they weren’t significant: VBScript on IE and Dart on a special build of Chrome.

There are also languages that compile down to JavaScript (TypeScript, CoffeeScript, …), but they were still really JavaScript under the covers. The JavaScript monoculture’s days are numbered with the advent of WebAssembly (Wasm). For .NET developers, Wasm is arriving in the form of Blazor.

What is WebAssembly?

WebAssembly is a specification for a virtual machine in the browser that runs Wasm bytecode. This binary format can be parsed and executed faster than typical JavaScript, so performance can be better than with pure JavaScript. A number of compilers can output this format, including LLVM. So it is now possible to write code in C++, compile it to Wasm bytecode, send it to a browser, and run it directly.

The browser support for Wasm is quite broad. Even on older browsers, support is available via asm.js, albeit at a lower performance threshold.

Shows browser support for WebAssembly - Edge, Firefox, Chrome, Safari, Opera, Android Browser and Chrome for Android. Taken from caniuse.com
Browser support for WebAssembly taken from caniuse.com

How .NET supports Wasm

A number of languages have started projects to bring their languages to WebAssembly by outputting Wasm assembly. The approach that Microsoft has taken is a little bit different from most of the other platforms. Typically, you would compile your output binaries to Wasm and then load them directly in the browser. However, .NET binaries are already in a generic language designed to run on the .NET Framework: Intermediate Language. So instead of compiling your code to Wasm, Microsoft compiled the framework itself to WebAssembly, and this Wasm version of the framework can interpret the same IL binaries you already have.

There are, of course, a few different flavors of the .NET Framework unified by .NET Standard. The version that runs as WebAssembly is actually Mono. There has been some concern voiced about using Mono over .NET Core, which I can kind of understand. It is slightly annoying to use several .NET frameworks, but the standardization efforts between the various frameworks are very good.

What is Blazor?

I like to tell people that Blazor and compiling the entire .NET Framework to Wasm is totally bonkers. The thing is that Blazor is a project of Steve Sanderson’s, who has created some great bonkers projects in the past: xVal, Knockout, JavaScriptServices. So as bonkers projects go, we’re in excellent hands.

Blazor is a highly experimental project to bring an ASP.NET feel to Wasm. You write your code in C# using all the tools you recognize and remember. You can still unit test using the same tools you have in the past. You can also still use the same logging tools, like Retrace. In effect, you can take all of your C# knowledge and just write web applications.

Server-side vs. client-side

There are currently two models for Blazor: a client-side and server-side model. The client-side version actually runs in the browser through Wasm and updates to the DOM are done there, while the server-side model retains a model of the DOM on the server and transmits diffs back and forth between the browser and the server using a SignalR pipeline. Server-side is more likely to become a real product and has, in fact, been promised for ASP.NET Core 3, which sounds like it is about a year away.

Can this really be performant?

There are a number of factors which make it seem like the performance of an application deployed in this fashion would be terrible. The first is the large size of the .NET Framework; bundling it into every page load means a hefty download. However, there are already technologies that make this more manageable. Browsers can cache the framework and even reuse it from site to site if it is delivered from a CDN. A slightly more exotic approach is to employ tree shaking to remove the huge swaths of the framework that aren’t being used by an application.

The largest asset in the project is the mono.wasm file, which is 869KB. The various DLLs that are used in the project add up to almost a megabyte.

Name Size
Microsoft.AspNetCore.Blazor.Browser.dll 14.7KB
Microsoft.AspNetCore.Blazor.dll 44.0KB
Microsoft.AspNetCore.Blazor.TagHelperWorkaround.dll 2.5KB
Microsoft.Extensions.DependencyInjection.Abstractions.dll 11.9KB
Microsoft.Extensions.DependencyInjection.dll 20.4KB
Microsoft.JSInterop.dll 20.8KB
Mono.WebAssembly.Interop.dll 3.2KB
mscorlib.dll 670KB
System.dll 42.2KB
System.Core.dll 142KB
System.Net.Http.dll 31.7KB
Total 1003.4KB

Together with mono.wasm, this is a lot of framework code to download, almost 2MB before any actual application functionality, so that is indeed a concern.

Next, can .NET code run in a performant fashion when delivered out to the browser and compiled down to a weird assembly language? Currently, C projects compiled to Wasm seem to have a slowdown on the order of 50%. Obviously, there is an overhead to running the framework in the browser, but we’re not really sure where this is going to land yet. Work is being done to improve and optimize the speed, and by the time it hits production performance will likely be reasonable. A lot of this performance simply comes from the fact that Wasm is, in general, more efficient than JavaScript, although such benchmarks are notoriously difficult.

The end result is that Wasm seems to have a larger startup cost than a typical JavaScript web application, but once you’re up and running, computationally complex operations are faster.

Getting started with a Blazor project

The first step is to ensure that everything you have in terms of Visual Studio and .NET Core is up to date. At the time of writing, I’m on .NET Core tools version 2.1.500. Next, there are some tools that will make working with Blazor more pleasant. The first is the collection of templates that can be installed by running

dotnet new -i Microsoft.AspNetCore.Blazor.Templates

from the command line. We’ll make use of full Visual Studio in this article, but you can use VS Code or vi or whatever you want. In Visual Studio, install the Blazor extension; the official Blazor docs suggest running at least VS 15.9, but I’ve run on 15.8 without issue.

With all of that in place, we can start a brand new Blazor project. In the new project dialog, select a new ASP.NET Core project. In the template selection dialog, there are three Blazor-based templates:

The ASP.NET Core web application template selection dialog showing 3 Blazor templates
Blazor templates

  • Blazor – full client-side Blazor application without any server-side components. This is suitable for deployment to static hosting like S3 or Azure Blob Storage.
  • Blazor (ASP.NET Core hosted) – a client-side application with a server side that serves out the Blazor, and also provides a place to put in server-side APIs.
  • Blazor (Server-side in ASP.NET Core) – a server-side application that updates the DOM from the server via SignalR

For our purposes, we’re going to make use of the ASP.NET Core hosted variant. Remember that this version, which ships DLLs to the browser, isn’t slated for official support in ASP.NET Core 3, at least not at this juncture.

Exploring the code

The solution, which is created from this template, has 3 projects.

  • Client contains the code run on the browser.
  • Server is the server-side code. The default project has a simple controller that returns some data about the weather.
  • Shared contains code that is used on both the client side and the server side. Models are a great fit here. Validation metadata would also be a nice addition, because it would let you apply the same validation logic on both the client and the server.

The most interesting code is in the Client project, as the Server project is mostly just a standard ASP.NET Core application. The first thing in the client structure you’ll notice is that it is pretty similar to the structure of an ASP.NET Core project.

Shows the directory tree of a Blazor application in Visual Studio
The directory structure of a Blazor application

There is a Program.cs that serves as the bootstrapper, just like you’d see in an ASP.NET Core application. The Startup.cs contains code that sets up services and configures the application. This similarity allows you to carry over even more of your ASP.NET Core knowledge to the Blazor world. The HTML files all also have the same .cshtml extension you know and love. Opening one of these files reveals that they use the same Razor syntax you know from MVC way back to 2008.

The most interesting of the cshtml files is the FetchData.cshtml.

What we can learn from FetchData.cshtml

@using Blazor2.Shared
@page "/fetchdata"
@inject HttpClient Http

<h1>Weather forecast</h1>

<p>This component demonstrates fetching data from the server.</p>

@if (forecasts == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <table class="table">
        <thead>
            <tr>
                <th>Date</th>
                <th>Temp. (C)</th>
                <th>Temp. (F)</th>
                <th>Summary</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var forecast in forecasts)
            {
                <tr>
                    <td>@forecast.Date.ToShortDateString()</td>
                    <td>@forecast.TemperatureC</td>
                    <td>@forecast.TemperatureF</td>
                    <td>@forecast.Summary</td>
                </tr>
            }
        </tbody>
    </table>
}

@functions {
    WeatherForecast[] forecasts;

    protected override async Task OnInitAsync()
    {
        forecasts = await Http.GetJsonAsync<WeatherForecast[]>("api/SampleData/WeatherForecasts");
    }
}

In this file, you’ll notice a few interesting pieces. The @page directive provides a location the router can use to direct people to the page. The @inject directive allows injecting services. The HttpClient is registered by default and doesn’t require an explicit entry in Startup.cs. What’s interesting is that this HttpClient isn’t the same implementation you use on the server side, because access to raw sockets isn’t permitted in the Wasm sandbox, just as it isn’t permitted in JavaScript. Finally, you’ll notice that there is a branch based on whether forecasts is null. This property is actually monitored by Blazor, and when it changes, the page rendering is rerun. Change detection is built right in.

Extending the project

In order to extend the project into a fully-fledged application, you can simply add services and cshtml pages. As long as you pay attention to the structure and match how you would extend an ASP.NET Core application, you should be in good standing. You can add references to external libraries via NuGet, most of which will just work. However, keep in mind that every package you add increases the size of the payload that needs to make it to the client. Since we’re already near 2MB for the base package, every extra download hurts that much more.

Limitations

Obviously, there are some limitations to what you can do inside a browser. Everything that is run in Wasm is run inside of the JavaScript sandbox, which means that you can’t directly access things like a disk or the network. So a lot of functionality like using SQLClient to talk directly to a database won’t work (also that’s a terrible idea anyway). Libraries may contain functionality that tries to write temp files too, which won’t work. Keep these limitations in mind when you’re planning how you test the application.

Hitting native JavaScript code

One of the really nice parts about Blazor, and Wasm in general, is that it is possible to interact with JavaScript APIs. So if you want to do geolocation through the geolocation API, you can install a package like Blazor.Geolocation (freshly updated for Blazor 0.7, BTW) and it will just work. The complex marshaling needed to map JavaScript and .NET data types back and forth is all done under the hood by Blazor. For the most part, you can easily call .NET methods from JavaScript and JavaScript methods from .NET code. Mind blown!

In the example of Blazor.Geolocation to get the location information from the browser’s JavaScript context, you need only inject a location service into your cshtml and run

location = await locationService.GetLocationAsync();

The package takes care of dealing with the asynchronous nature of getting the location information from the browser. Other JavaScript APIs can be similarly wrapped.

You can read about how to interop with JavaScript in great detail in the Blazor documentation.

Debugging Blazor when something goes wrong

The debugging story isn’t fantastic at the moment. With server-side Blazor, everything is F5-debuggable in Visual Studio, but once you come over to client-side, the debugging story isn’t fully fleshed out. The sources tab in Chrome contains a baffling array of files and Wasm functions, which cannot be easily debugged at this juncture. Eventually, there may be source mapping support for all this and the experience should be much better. However, for the time being, the compiler is not able to output maps.

Network traffic can be debugged just as you would any other network traffic, for instance in Prefix. Equally, the server side remains a standard ASP.NET Core application, so it can benefit from instrumentation with Retrace or Prefix. In that regard, at least the debugging and analytics story is quite good.

Where is WebAssembly going?

Only fools make statements about the future, so here I go: I think that WebAssembly has a bright future. Unlocking web development for a myriad of languages and programming paradigms will result in some quite interesting new frameworks. The speed of WebAssembly is also promising for all manner of code from games to AI.

Writing Multitenant ASP.NET Core Applications https://stackify.com/writing-multitenant-asp-net-core-applications/ Fri, 27 Jul 2018 13:19:59 +0000 https://stackify.com/?p=20067 A multitenant web application is one that responds differently depending on how it is addressed – the tenant. This kind of architecture has become very popular, because a single code base and deployment can serve many different tenants. In this post, I will present some of the concepts and challenges behind multitenant ASP.NET Core apps. For the sake of simplicity, let’s consider two imaginary tenants, ABC and XYZ. We won’t go into everything involved in writing a multitenant app, but we will get a glimpse of the relevant pieces.

What is a Tenant?

A tenant has a specific identity, and an application that responds to a particular tenant behaves differently from another tenant. Specifically, one or more of these may change:

  • User Interface (UI)
  • Data (including configuration parameters)
  • Behavior

By UI I mean a tenant may have different CSS files, different logo images, and so on. Data should be easy to understand – we don’t want tenant ABC to display data for XYZ, and vice-versa. Changes in behavior or functionality are also possible, when a particular tenant has a different feature set than others. For the sake of simplicity, let’s say that a tenant is identified by a string, like ABC or XYZ; this will be its code name.

We will define an interface, ITenantService, that will serve as the entry point for the multi-tenant functionality:

public interface ITenantService
{
    string GetCurrentTenant();
}

And an implementation of it:

public sealed class TenantService : ITenantService
{
    private readonly HttpContext _httpContext;
    private readonly ITenantIdentificationService _service;

    public TenantService(IHttpContextAccessor accessor, ITenantIdentificationService service)
    {
        this._httpContext = accessor.HttpContext;
        this._service = service;
    }

    public string GetCurrentTenant()
    {
        return this._service.GetCurrentTenant(this._httpContext);
    }
}

As you can see, this is very simple – essentially it consists of a method GetCurrentTenant. The actual complexity goes in the implementation strategies. For most of your application-specific code, this is likely the only reference you will need.

Concepts

First, let’s agree on some basic concepts:

  • We use a tenant identification (or resolution) strategy to find out which tenant we are talking to
  • A tenant DbContext access strategy will figure out the way to retrieve (and store) each tenant's data

Tenant Identification Strategies

How can we make the application know how it should behave, that is, which tenant it should be serving? For that we need to consider a tenant identification (or resolution) strategy. One can think of several, but I'm going to present just three:

  • Host header: the tenant will be inferred from the host header sent by the browser when accessing the application (see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Host), for example, for http://abc.com it is abc.com, and for https://xyz.net it is xyz.net; this is the most robust strategy and probably the most widely used
  • Query string: a query string parameter will be used to distinguish between the different tenants, e.g., “?Tenant=abc“; probably useful only for development or testing purposes
  • Source IP: you may want that requests originating from the same IPs get the same tenant all the time

We will need to have a default tenant, that is, one that will be inferred if no information is passed by the browser to distinguish it.

We already referenced an ITenantIdentificationService in the previous snippet; it is the interface that resolves the current tenant from the current HttpContext. Its implementations rely on a settings class that holds the tenant mappings:

public class TenantMapping
{
    public string Default { get; set; }
    public Dictionary<string, string> Tenants { get; } = new Dictionary<string, string>(StringComparer.InvariantCultureIgnoreCase);
}

As you can see, it basically consists of a dictionary of keys and values where each value represents a tenant code, and each key is either a host header or a query string parameter value. We can load an instance of this settings class from the configuration object:

public static class ConfigurationExtensions
{
    public static TenantMapping GetTenantMapping(this IConfiguration configuration)
    {
        return configuration.GetSection("Tenants").Get<TenantMapping>();
    }
}

The Get<T> extension method comes from the Microsoft.Extensions.Configuration.Binder NuGet package, and it is used to turn configuration into strongly-typed Plain Old CLR Object (POCO) objects. The actual configuration values will depend on the implementations of the identification/resolution service.
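A sketch of the ITenantIdentificationService interface itself, reconstructed from the way the implementations below use it (the exact member set is inferred, so treat this as an approximation):

```csharp
public interface ITenantIdentificationService
{
    // every known tenant code
    IEnumerable<string> GetAllTenants();

    // the tenant code for the current request
    string GetCurrentTenant(HttpContext context);
}
```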

As we said, we will have three implementations of ITenantIdentificationService, one using the host header:

public sealed class HostTenantIdentificationService : ITenantIdentificationService
{
    private readonly TenantMapping _tenants;

    public HostTenantIdentificationService(IConfiguration configuration)
    {
        this._tenants = configuration.GetTenantMapping();
    }

    public HostTenantIdentificationService(TenantMapping tenants)
    {
        this._tenants = tenants;
    }

    public IEnumerable<string> GetAllTenants()
    {
        return this._tenants.Tenants.Values;
    }

    public string GetCurrentTenant(HttpContext context)
    {
        if (!this._tenants.Tenants.TryGetValue(context.Request.Host.Host, out var tenant))
        {
            tenant = this._tenants.Default;
        }

        return tenant;
    }
}

The key here is the domain name passed as the host header (e.g., abc.com) and the value is the tenant code (abc). This allows having many domains pointing to the same tenant, if we want that.

A sample configuration:

{
  "Tenants": {
      "default": "abc",
      "tenants": {
          "abc.com": "abc",
          "xyz.net": "xyz",
          "127.0.0.1": "xyz"
       }
    }
}
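Stripped of the ASP.NET types, this host-based resolution boils down to a case-insensitive dictionary probe with a default fallback. A standalone sketch mirroring the sample configuration above (the helper name ResolveTenant is just for illustration):

```csharp
using System;
using System.Collections.Generic;

// the same mappings as in the sample configuration
var tenants = new Dictionary<string, string>(StringComparer.InvariantCultureIgnoreCase)
{
    ["abc.com"] = "abc",
    ["xyz.net"] = "xyz",
    ["127.0.0.1"] = "xyz"
};
const string defaultTenant = "abc";

string ResolveTenant(string host) =>
    tenants.TryGetValue(host, out var tenant) ? tenant : defaultTenant;

Console.WriteLine(ResolveTenant("XYZ.NET"));      // xyz — case-insensitive match
Console.WriteLine(ResolveTenant("unknown.org"));  // abc — falls back to the default
```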

The other strategy that uses the query string is similar:

public sealed class QueryStringTenantIdentificationService : ITenantIdentificationService
{
    private readonly TenantMapping _tenants;

    public QueryStringTenantIdentificationService(IConfiguration configuration)
    {
        this._tenants = configuration.GetTenantMapping();
    }

    public string GetCurrentTenant(HttpContext context)
    {
        var tenant = context.Request.Query["Tenant"].ToString();

        if (string.IsNullOrWhiteSpace(tenant) || !this._tenants.Tenants.Values.Contains(tenant, 
            StringComparer.InvariantCultureIgnoreCase))
        {
            return this._tenants.Default;
        }

        if (this._tenants.Tenants.TryGetValue(tenant, out var mappedTenant))
        {
            return mappedTenant;
        }

        return tenant;
    }
}

The configuration in this case could be:

{
    "Tenants": {
        "default": "abc",
        "tenants": {
            "abc": "abc",
            "xyz": "xyz"
        }
    }
}

Here, the key in the tenants collection will be the value passed in the query string, for the Tenant parameter (e.g., Tenant=abc).

Finally, using the source IP for the request:

{
    "Tenants": {
        "default": "abc",
        "tenants": {
            "192.168.1": "abc",
            "127": "xyz"
        }
    }
}

So, all requests coming from IPs 192.168.1.* will get tenant abc and all coming from the localhost will get xyz.
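The article shows only the configuration for this strategy, so here is a sketch of the matching logic it implies – a longest-prefix match over the configured keys. In the real service the address would come from HttpContext.Connection.RemoteIpAddress; ResolveTenantByIp is a hypothetical helper:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// the same mappings as in the sample configuration
var tenants = new Dictionary<string, string>
{
    ["192.168.1"] = "abc",
    ["127"] = "xyz"
};
const string defaultTenant = "abc";

string ResolveTenantByIp(string remoteIp)
{
    // prefer the most specific (longest) configured prefix
    var match = tenants.Keys
        .Where(prefix => remoteIp == prefix || remoteIp.StartsWith(prefix + "."))
        .OrderByDescending(prefix => prefix.Length)
        .FirstOrDefault();

    return match != null ? tenants[match] : defaultTenant;
}

Console.WriteLine(ResolveTenantByIp("192.168.1.55")); // abc
Console.WriteLine(ResolveTenantByIp("127.0.0.1"));    // xyz
```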

These implementations are stateless and cause no side effects: they just return whatever they think the current tenant is, from the current HttpContext. They are infrastructure classes, meaning you should never have to know or reference them. But you do have to register the right one on the dependency injection (DI) framework of ASP.NET Core; it's as easy as:

services.AddSingleton<ITenantIdentificationService, HostTenantIdentificationService>();

And we will also need to configure the mappings between host names (or query string values) and the default tenant, in the appsettings.json file, with values appropriate to the resolution strategy in use.


User Interface Strategies

When it comes to the user interface, we may want to do different things:

  • Show some content conditionally, for a specific tenant or set of tenants
  • Show a totally different view for a specific tenant

Showing content conditionally

For the first case, we will make use of a tag helper. Tag helpers were introduced in ASP.NET Core 1.0 and they are a way to declare components on a Razor view. The TenantTagHelper will show its contents or not depending on whether the current tenant matches the name we give it as a parameter. It will look like this:

[HtmlTargetElement("tenant")]
public sealed class TenantTagHelper : TagHelper
{
    private readonly ITenantService _service;

    public TenantTagHelper(ITenantService service)
    {
        this._service = service;
    }

    [HtmlAttributeName("name")]
    public string Name { get; set; }

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        var tenant = this.Name ?? string.Empty;

        if (tenant != this._service.GetCurrentTenant())
        {
            output.SuppressOutput();
        }

        return base.ProcessAsync(context, output);
    }
}

The tag helper gets, through constructor dependency injection, the ITenantService instance and uses it to get the current tenant. If it doesn’t match the tenant passed as a parameter, then the output is suppressed.

Like all tag helpers, it needs to be registered before it can be used in a view. The usual location for this is the Views/_ViewImports.cshtml file (MyWebApp below stands for the name of the assembly containing the tag helper):

@addTagHelper *, MyWebApp

After that, we can wrap content in the tenant element in any view:

<tenant name="abc">
    This is content specific to tenant ABC!
</tenant>

And the content will only show for tenant ABC. Keep in mind that this approach requires you to explicitly hardcode the tenant's name, which may or may not be ideal for you.

Serving different files

If we want to serve different files, it’s a whole different thing. As you may know, ASP.NET Core relies on some conventions for looking up where to find markup files (.cshtml). These are generally located under Views<controller>, but we can override this by providing our own view location expander. A view location expander implements IViewLocationExpander (who could tell?) and needs to be registered for the Razor view engine options, upon startup:

public sealed class TenantViewLocationExpander : IViewLocationExpander
{
    private ITenantService _service;
    private string _tenant;

    public IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context, IEnumerable<string> viewLocations)
    {
        foreach (var location in viewLocations)
        {
            yield return location.Replace("{0}", this._tenant + "/{0}");
            yield return location;
        }
    }

    public void PopulateValues(ViewLocationExpanderContext context)
    {
        this._service = context.ActionContext.HttpContext.RequestServices.GetService<ITenantService>();
        this._tenant = this._service.GetCurrentTenant();
    }
}

The view locations passed to ExpandViewLocations are:

  • /Views/{1}/{0}.cshtml
  • /Shared/{0}.cshtml
  • /Pages/{0}.cshtml

Where {0} is the view and {1} the controller name. What we are doing here is first returning a version with the tenant prepended to it, e.g.:

  • /Views/{1}/<tenant>/{0}.cshtml
  • /Shared/<tenant>/{0}.cshtml
  • /Pages/<tenant>/{0}.cshtml

Before all the others, this makes sure that if a folder exists with the tenant's name, any view files will be loaded from there. The code retrieves the notorious ITenantService from the request services in PopulateValues and stores the current tenant, which ExpandViewLocations then uses. That is the method responsible for returning the physical locations where the view files (.cshtml) are to be found: for each registered location, we return a tenant-specific variant followed by the original one. This way, we ensure that, if a tenant-specific file exists, it is used first. To be more precise, it allows us to have this:

(Image: the Views/Home folder containing a per-tenant abc subfolder with its own view files.)

So, for controller Home, the folder abc will be searched when the application tries to locate views for the abc tenant. When your HomeController's Index action method returns a call to View, the Index.cshtml file will be retrieved from the Views/Home/abc folder.
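The expansion itself is plain string manipulation; this standalone sketch reproduces what ExpandViewLocations yields for the abc tenant, outside of ASP.NET:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

IEnumerable<string> Expand(IEnumerable<string> locations, string tenant)
{
    foreach (var location in locations)
    {
        // tenant-specific candidate first, then the original location
        yield return location.Replace("{0}", tenant + "/{0}");
        yield return location;
    }
}

var expanded = Expand(new[] { "/Views/{1}/{0}.cshtml" }, "abc").ToList();
Console.WriteLine(string.Join(Environment.NewLine, expanded));
// /Views/{1}/abc/{0}.cshtml
// /Views/{1}/{0}.cshtml
```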

Configuration Strategies

When it comes to configuration, it is somewhat tricky to set different values per tenant, especially because at application startup we do not know the current tenant, because it only exists on the scope of a request. But there are a couple of things we can do.

Named configuration

We can have different named configuration values associated with the same POCO class. In the Startup class’ ConfigureServices method, add code like this:

services.Configure<PerTenantSettings>("abc", options =>
{
    options.NumberOption = 1;
    options.StringOption = "abc";
});

services.Configure<PerTenantSettings>("xyz", options =>
{
    options.NumberOption = 2;
    options.StringOption = "xyz";
});

Again, I am hardcoding values for each tenant (abc and xyz); do keep this in mind. PerTenantSettings is just some POCO class used to pass arbitrary parameters to the different tenants – we won't cover it here. Now, if we inject the configuration into any component, such as a controller, we can do it like this:

public HomeController(IOptionsSnapshot<PerTenantSettings> settings, ITenantService service)
{
    var tenant = service.GetCurrentTenant();
    var tenantSettings = settings.Get(tenant);
}

This relies on IOptionsSnapshot<T>'s ability to retrieve named configuration entries. This interface and the associated capability come from the Microsoft.Extensions.Options NuGet package. The name you pass it must be one that was also set when calling Configure<T>, as we saw earlier.

Service provider

What if we want to have different registrations per tenant on the service provider? This is a bit more complex, but let’s see a way by which we can accomplish it. First, we declare an interface that represents this capacity:

public interface ITenantConfiguration
{
    string Tenant { get; }
    void Configure(IConfiguration configuration);
    void ConfigureServices(IServiceCollection services);
}

and a particular implementation:

public sealed class abcTenantConfiguration : ITenantConfiguration
{
    public void Configure(IConfiguration configuration)
    {
        configuration["StringOption"] = "abc";
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddScoped<IMyService, XptoService>();
    }

    public string Tenant => "abc";
}

As you can see, we can both override configuration settings for the current tenant and provide alternative implementations for registered services. Now we need a way to load this, and we need to do it when configuring the registered services, usually in the ConfigureServices method of the Startup class. Note that we cannot use logic for the different tenants directly in ConfigureServices, because by the time it is called there is no request yet, and therefore we do not know what the current tenant is. Instead, we shall create a couple of clever extension methods just for this purpose:

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddTenantConfiguration(this IServiceCollection services, Assembly assembly)
    {
        var types = assembly
            .GetExportedTypes()
            .Where(type => typeof(ITenantConfiguration).IsAssignableFrom(type))
            .Where(type => (type.IsAbstract == false) && (type.IsInterface == false));

        services.AddScoped(typeof(ITenantConfiguration), sp =>
        {
            var svc = sp.GetRequiredService<ITenantService>();
            var configuration = sp.GetRequiredService<IConfiguration>();
            var tenant = svc.GetCurrentTenant();
            var instance = types
                .Select(type => ActivatorUtilities.CreateInstance(sp, type))
                .OfType<ITenantConfiguration>()
                .SingleOrDefault(x => x.Tenant == tenant);

            if (instance != null)
            {
                instance.Configure(configuration);
                instance.ConfigureServices(services);

                sp.GetRequiredService<IHttpContextAccessor>().HttpContext.RequestServices = services.BuildServiceProvider();
                return instance;
            }
            else
            {
                return DummyTenantServiceProviderConfiguration.Instance;
            }
        });

        return services;
    }

    public static IServiceCollection AddTenantConfiguration<T>(this IServiceCollection services)
    {
        var assembly = typeof(T).Assembly;
        return services.AddTenantConfiguration(assembly);
    }
}

public sealed class DynamicTenantIdentificationService : ITenantIdentificationService
{
    private readonly Func<HttpContext, string> _currentTenant;
    private readonly Func<IEnumerable<string>> _allTenants;

    public DynamicTenantIdentificationService(Func<HttpContext, string> currentTenant, Func<IEnumerable<string>> allTenants)
    {
        if (currentTenant == null)
        {
            throw new ArgumentNullException(nameof(currentTenant));
        }

        if (allTenants == null)
        {
            throw new ArgumentNullException(nameof(allTenants));
        }

        this._currentTenant = currentTenant;
        this._allTenants = allTenants;
    }

    public IEnumerable<string> GetAllTenants()
    {
        return this._allTenants();
    }

    public string GetCurrentTenant(HttpContext context)
    {
        return this._currentTenant(context);
    }
}
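The DynamicTenantIdentificationService above is a delegate-based variant: instead of reading configuration, it resolves tenants through whatever functions you hand it. A hypothetical registration, resolving the tenant from a query string parameter:

```csharp
// hypothetical wiring — both lambdas are placeholders for your own logic
services.AddSingleton<ITenantIdentificationService>(
    new DynamicTenantIdentificationService(
        currentTenant: ctx => ctx.Request.Query["Tenant"].ToString(),
        allTenants: () => new[] { "abc", "xyz" }));
```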

So, we now have a way for a specific tenant to override the registered services or some configuration values at will! All we have to do is provide a class implementing ITenantConfiguration, pretty much like abcTenantConfiguration shown above. To use this, just call one of the extension methods in the Startup class, passing the assembly (or any type from the assembly) that contains your tenant configuration classes:

services.AddTenantConfiguration<Startup>();

Another option, which I leave as an exercise to you, would be to use the Managed Extensibility Framework (MEF) or any other similar framework to dynamically load the tenant configuration classes.

Per-tenant configuration

What if you need to access configuration values from views? Let’s consider we will have a configuration section for each tenant in the configuration file (appsettings.json), something like this:

{
    "Tenants": {
        "abc": {
            "StringOption": "abc",
            "NumberOption": 1
        },
        "xyz": {
            "StringOption": "xyz",
            "NumberOption": 2
        }
    }
}

We need to retrieve the configuration information relative to the current tenant; this extension method does just that:

public static class RazorPageExtensions
{
    public static T GetValueForTenant<T>(this IRazorPage page, string setting, T defaultValue = default(T))
    {
        var service = page.ViewContext.HttpContext.RequestServices.GetService<ITenantService>();
        var tenant = service.GetCurrentTenant();
        var configuration = page.ViewContext.HttpContext.RequestServices.GetService<IConfiguration>();
        var section = configuration.GetSection("Tenants").GetSection(tenant);

        if (section.Exists())
        {
            return section.GetValue<T>(setting, defaultValue);
        }
        else
        {
            return configuration.GetValue<T>(setting, defaultValue);
        }
    }
}

If the section or the named configuration setting does not exist, the default value will be returned instead.

From a Razor view, we can now do:

String Option: @this.GetValueForTenant<string>("StringOption")

Database Access Strategies

When it comes to retrieving different values from a relational database, we have essentially three options:

  • Different Schemas: we use the same database for all the data, but different schemas (and tables) for each tenant
  • Different Databases: we use a different database for each tenant
  • Filter Columns: we use the same database and tables for all tenants, but different records for each tenant, filtered by some column

All of these have their pros and cons, for example:

  • Using different schemas allows us to share the same database instance, but essentially we are duplicating all (or at least some) tables
  • Using different databases requires extra maintenance (backups, managing security, etc.), but provides better encapsulation
  • Using a filtering column we only have one table for all tenants, but it may be possible, by sending custom SQL, to bypass the tenant restriction

For the purpose of this article, we will stick to Entity Framework Core, and therefore we will be using a DbContext to retrieve the data – for that, you will need the Microsoft.EntityFrameworkCore NuGet package and also the one that contains the SQL Server implementation (Microsoft.EntityFrameworkCore.SqlServer). Again, we define an interface, ITenantDbContext, that represents this functionality – setting database parameters depending on the tenant – plus an abstract base context that wires it in:
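A sketch of the ITenantDbContext interface, inferred from how the base context below calls into it (member names taken from those calls):

```csharp
public interface ITenantDbContext
{
    // apply the multitenant mapping strategy to the model
    void OnModelCreating(ModelBuilder modelBuilder, DbContext context);

    // adjust entities about to be persisted, if the strategy requires it
    void SaveChanges(DbContext context);
}
```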

public abstract class TenantContext : DbContext
{
    protected TenantContext(DbContextOptions options) : base(options)
    {
    }

    private TenantContext() { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        var svc = this.GetService<ITenantDbContext>();
        svc.OnModelCreating(modelBuilder, this);
    }

    public override int SaveChanges()
    {
        var svc = this.GetService<ITenantDbContext>();
        svc.SaveChanges(this);
        return base.SaveChanges();
    }

   // rest goes here
}

This class is meant to serve as a basis for any application-specific DbContext as it contains the basic blocks to make it work in a multitenant way:

  • Setting up the multitenant database access strategy dynamically (OnModelCreating)
  • Setting the appropriate values on entities to be persisted, if there is need (SaveChanges)

You see that OnModelCreating gets the registered ITenantDbContext (whatever it is) from the service provider and calls its OnModelCreating method. Then, when the context is saving entities, it calls SaveChanges on the service reference. This context class needs to be registered with the service provider too:

services.AddDbContext<MyContext>(options =>
{
    options.UseSqlServer(this.Configuration.GetConnectionString("DefaultConnection"));
});

Here, MyContext stands for your application-specific context derived from TenantContext. Without this, the ITenantDbContext would not be injectable into the context. Let's see the possible implementations of this interface, for each of the three discussed strategies.

Different schemas

The different schemas strategy relies on a marker interface that flags the entities to be made tenant-aware:

public interface ITenantEntity { }

Just a marker interface, as you can see – no members at all. The strategy implementation then sets, for each of these entities, the schema to the current tenant, as returned by the injected ITenantService. SaveChanges does nothing, as there is no need to modify the entities when they are saved. Not too complex, I'd say.
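Based on that description – the schema is set to the current tenant for every ITenantEntity, and SaveChanges does nothing – an implementation sketch might look like the following (the class name and the use of the CLR type name as the table name are assumptions):

```csharp
public sealed class DifferentSchemaTenantDbContext : ITenantDbContext
{
    private readonly ITenantService _service;

    public DifferentSchemaTenantDbContext(ITenantService service)
    {
        this._service = service;
    }

    public void OnModelCreating(ModelBuilder modelBuilder, DbContext context)
    {
        var tenant = this._service.GetCurrentTenant();

        // move each multitenant-aware entity into a schema named after the tenant
        foreach (var entity in modelBuilder.Model.GetEntityTypes()
            .Where(x => typeof(ITenantEntity).IsAssignableFrom(x.ClrType)))
        {
            modelBuilder.Entity(entity.ClrType).ToTable(entity.ClrType.Name, schema: tenant);
        }
    }

    public void SaveChanges(DbContext context)
    {
        // nothing to do — the schema alone isolates each tenant's data
    }
}
```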

Different databases

For the different databases strategy, we will create a DifferentConnectionTenantDbContext class that picks, for the current tenant, a connection string with the same name as the tenant code. These connection strings go in the configuration file:

{
    "ConnectionStrings": {
        "abc": "",
        "xyz": ""
    }
}

Filter column

The last one is somewhat trickier: we need to leverage a couple of features that are available in the latest versions of Entity Framework Core, such as shadow properties and global query filters. Without further ado, here is the implementation:

public sealed class FilterTenantDbContext : ITenantDbContext
{
    private readonly ITenantService _service;

    private static readonly MethodInfo _propertyMethod = typeof(EF).GetMethod(nameof(EF.Property), BindingFlags.Static | 
        BindingFlags.Public).MakeGenericMethod(typeof(string));

    private LambdaExpression IsTenantRestriction(Type type, string tenant)
    {
        var parm = Expression.Parameter(type, "it");
        var prop = Expression.Call(_propertyMethod, parm, Expression.Constant("Tenant"));
        var condition = Expression.MakeBinary(ExpressionType.Equal, prop, Expression.Constant(tenant));
        var lambda = Expression.Lambda(condition, parm);

        return lambda;
    }

    public FilterTenantDbContext(ITenantService service)
    {
        this._service = service;
    }

    public void OnModelCreating(ModelBuilder modelBuilder, DbContext context)
    {
        var tenant = this._service.GetCurrentTenant();

        foreach (var entity in modelBuilder.Model.GetEntityTypes().Where(x => 
            typeof(ITenantEntity).IsAssignableFrom(x.ClrType)))
        {
            entity.AddProperty("Tenant", typeof(string));
            modelBuilder
                .Entity(entity.ClrType)
                .HasQueryFilter(this.IsTenantRestriction(entity.ClrType, tenant));
        }
    }

    public void SaveChanges(DbContext context)
    {
        var tenant = this._service.GetCurrentTenant();

        foreach (var entity in context.ChangeTracker.Entries().Where(e => e.State ==
            EntityState.Added))
        {
            entity.Property("Tenant").CurrentValue = tenant;
        }
    }
}

A shadow property is one that does not exist in the POCO model but does exist in the database. This is handy here precisely because application code cannot easily query it – it doesn't even show up as a property in our class. A global query filter is a restriction that is automatically applied to all queries over a given type; this is most useful for implementing soft deletes and, you got it, multitenant apps!

In OnModelCreating we first list all entities in the model that implement ITenantEntity – the marker interface used to flag those entities that need to be made multitenant-aware – and then add a shadow property Tenant of type string to them; this will be used to filter by the current tenant. Lastly, we add a global filter in the form of a LINQ expression that automatically filters all accesses to multitenant entities by the current tenant code, as returned by ITenantService.

Having an entity implement ITenantEntity is as easy as adding it to the class declaration – no need to add any members:

public class Product : ITenantEntity { /* the usual properties */ }

(Product is just an illustrative entity name.) Whatever the chosen strategy, its ITenantDbContext implementation needs to be registered with the service provider:

services.AddSingleton<ITenantDbContext, FilterTenantDbContext>();

Do not forget about this, or you will get an exception in the context's OnModelCreating method. If you want, you can always have a dummy (Null Object Pattern) implementation:

public sealed class DummyTenantDbContext : ITenantDbContext
{
    public void OnModelCreating(ModelBuilder modelBuilder, DbContext context) { }
    public void SaveChanges(DbContext context) { }
}
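The restriction assembled in IsTenantRestriction is just a hand-built lambda. The same technique can be exercised with plain System.Linq.Expressions, here using KeyValuePair's Key as a stand-in for the Tenant shadow property (EF's version calls EF.Property<string>(it, "Tenant") instead of a regular property accessor):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// build: it => it.Key == "abc"
var parm = Expression.Parameter(typeof(KeyValuePair<string, int>), "it");
var condition = Expression.Equal(
    Expression.Property(parm, "Key"),
    Expression.Constant("abc"));
var lambda = Expression.Lambda<Func<KeyValuePair<string, int>, bool>>(condition, parm);

var data = new[]
{
    KeyValuePair.Create("abc", 1),
    KeyValuePair.Create("xyz", 2),
    KeyValuePair.Create("abc", 3)
};

// apply the hand-built restriction, as a query provider would
var filtered = data.AsQueryable().Where(lambda).ToList();
Console.WriteLine(filtered.Count); // 2
```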


Behavior Strategies

When it comes to having different behavior, you should have service classes that return values depending on the current tenant, which can then be used to make decisions. We saw how we can change the configuration, the data access strategy, or even service implementations based on the current tenant; you can leverage these techniques to suit what you want to accomplish. For example, say you have an IDecisionService service injected into your controller:

[HttpPost]
public IActionResult Post(string option)
{
    var view = this._decisionService.SelectView(option);
    return this.View(view);
}

The actual IDecisionService implementation can probably receive a multitenant-aware DbContext or an ITenantService instance, but not likely one of the other infrastructure classes. It can then use these to make informed decisions of what to do. The possibilities are endless, but make sure you design your application with extensibility in mind, that is, not hardcoding it to specific tenants or “magic” values.

What’s More to It?

Other topics might include:

  • Caching
  • Logging
  • Profiling
  • Automatic discovery and configuration of tenants
  • Deployment to the cloud

However, I won’t go through these right now, as they can get quite complex. Maybe something for another article!

Putting it all Together

We’ve seen the basic building blocks for a multitenant architecture and also some reference implementations.


A lot more can be said, but I believe this will get you up to speed with this kind of architecture. For your convenience, I am listing here the configuration steps that must be followed, again, all should go in the ConfigureServices method of the Startup class.

// for having different Razor .cshtml files per tenant
services.Configure<RazorViewEngineOptions>(options =>
{
    options.ViewLocationExpanders.Insert(0, new TenantViewLocationExpander());
});

// the tenants configuration
services.Configure<TenantMapping>(this.Configuration.GetSection("Tenants"));

// configuration specific to tenant abc
services.Configure<PerTenantSettings>("abc", options =>
{
    options.NumberOption = 1;
    options.StringOption = "abc";
});

// configuration specific to tenant xyz
services.Configure<PerTenantSettings>("xyz", options =>
{
    options.NumberOption = 2;
    options.StringOption = "xyz";
});

// a multitenant-aware DbContext (MyContext derives from TenantContext)
services.AddDbContext<MyContext>(options =>
{
    // the default connection; if using the different-connection-per-tenant strategy, this will be overridden
    options.UseSqlServer(this.Configuration.GetConnectionString("DefaultConnection"));
});

// the tenant service, required by all the others, entry point for ITenantIdentificationService
services.AddScoped<ITenantService, TenantService>();

// the tenant identification/resolution service
services.AddScoped<ITenantIdentificationService, HostTenantIdentificationService>();

// the service for applying multitenancy to a multitenant-aware DbContext
services.AddSingleton<ITenantDbContext, FilterTenantDbContext>();

// adding tenant-specific configuration classes from the assembly containing Startup
services.AddTenantConfiguration<Startup>();

What’s New in .NET Core 2.1 https://stackify.com/new-in-net-core-2-1/ Tue, 10 Jul 2018 13:30:26 +0000 https://stackify.com/?p=19576 .NET Core 2.1 was officially released on May 30. I will summarize what’s new for all its parts – .NET Core itself, Entity Framework Core, and ASP.NET Core. You can also check out our article on the .NET Ecosystem to fully understand your options before you start your next project.

.NET Core 2.1

First, you will need either Visual Studio 2017 15.7 (or higher), Visual Studio Code, or Visual Studio for Mac in order to fully leverage this version. Docker images have been published at the Microsoft Docker repo. It is important that you upgrade since .NET Core 2.0 will reach end of life for Microsoft support in October 2018. There are no breaking changes, only a couple of useful additions.

Global tools

One of these additions is global tools. Before, you could create extensions for the dotnet command – such as Entity Framework scaffolding, for example – but these could only run in the context of a folder where their binaries were installed. This is no longer the case: global tools can be published as NuGet packages and installed globally on the machine, as easily as this:

dotnet tool install -g SomeTool

You can specify an installation path, but that will likely be rarely needed. There is no base class or anything of the sort: a global tool is just a program with a Main method. Do remember, though, that global tools are usually added to your path and run in full trust. Please do not install .NET Core global tools unless you trust the author!

Finally, the watch, dev-certs, user-secrets, sql-cache and ef tools have been converted to global tools, so you will no longer need the DotNetCliToolReference entries in your .csproj file. I bet this will be handy for you, Entity Framework Core migrations users!

.NET Core 2.1 Performance

One of the biggest highlights was performance, both build-time and execution performance. There is an ongoing Microsoft initiative that aims to squeeze every bit of lag from the code. The following chart shows build-time improvements of 2.1 in relation to 2.0, for two typical web applications, a small and a large one:

.NET Core 2.1 Incremental Build-time performance improvements

Image taken from https://blogs.msdn.microsoft.com/dotnet/2018/05/30/announcing-net-core-2-1, please refer to this page to learn about the actual details.

Runtime performance improvements occur in many different areas and are hard to capture in a single number, but these have benefited a great deal:

  • devirtualization, where the JIT compiler is able to statically determine the target for virtual method invocations; this affects collections a lot
  • optimization of boxing, avoiding, in some cases, allocating objects in the heap at all;
  • reducing lock contention in the case of low-level threading primitives, and also the number of allocations
  • reducing allocations by introducing new APIs that don’t need them such as Span<T> and Memory<T>, and changing existing APIs to support these; the String class, for example, yields much better performance in typical scenarios
  • networking was also optimized, both low-level (IPAddress) and high-level (HttpClient); some of the improvements also had to do with reducing allocations
  • file system enumeration
  • operating system-specific operations, such as Guid generation

In general, the Just In Time (JIT) compiler is much smarter now and can optimize common scenarios, and a new set of APIs provides much more efficient resource usage than before, mostly by reducing allocations.

HttpClient and friends

HttpClient got a whole-new implementation based on .NET sockets and Span<T>. It also got a new factory class that assists in creating pre-configured instances that plays nicely with dependency injection:

public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient("MyAPI", client =>
    {
        client.BaseAddress = new Uri("https://my.api.com/");
        client.DefaultRequestHeaders.Add("Accept", "application/json");
    });
    services.AddMvc();
}

public class MyController : Controller
{
    private readonly HttpClient _client;

    public MyController(IHttpClientFactory factory)
    {
        _client = factory.CreateClient("MyAPI");
    }
}
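As a sketch of how the configured client might then be used, here is a hypothetical action on the controller above; the "products" endpoint path is an assumption for illustration, not part of the original sample:

```csharp
// Hypothetical action using the pre-configured client; the BaseAddress
// and Accept header were set up once in ConfigureServices.
public async Task<IActionResult> Products()
{
    // Resolved against https://my.api.com/, so this calls https://my.api.com/products
    var json = await _client.GetStringAsync("products");
    return Content(json, "application/json");
}
```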

Span<T> and Memory<T>

These types represent contiguous memory chunks without copying them. By contiguous memory, I mean arrays, pointers to unmanaged memory, or stack-allocated memory. Classes such as String, Stream and others now offer methods that work with these new types more efficiently, without making copies, and allow slicing them. The difference between Span<T> and Memory<T> is that the former is a ref struct and can only live on the stack, while Memory<T> can also be stored on the heap; this restriction applies to the span itself, not to the contents it points to (of course!).

var array = new byte[10];
Span<byte> bytes = array;
Span<byte> slicedBytes = bytes.Slice(5, 2);
slicedBytes[0] = 0;

or:

string str = "hello, world";
ReadOnlySpan<char> slicedString = str.AsSpan().Slice(0, 5);

Neither of these calls (creation of Span<T> or ReadOnlySpan<T>) allocates any memory on the heap. Both these types have cast operators to and from arrays of generic types, so they can be directly assigned and converted.
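To make the allocation-free slicing concrete, here is a small self-contained sketch that sums comma-separated integers by slicing a ReadOnlySpan<char> over the original string; the SumCsv helper is mine, not a framework API, and it relies on the int.Parse overload that accepts spans, available as of .NET Core 2.1:

```csharp
using System;

static class SpanDemo
{
    // Sums comma-separated integers without allocating substrings:
    // each token is just a slice over the original string's memory.
    public static int SumCsv(ReadOnlySpan<char> input)
    {
        int total = 0;
        while (!input.IsEmpty)
        {
            int comma = input.IndexOf(',');
            ReadOnlySpan<char> token = comma < 0 ? input : input.Slice(0, comma);
            total += int.Parse(token);
            input = comma < 0 ? ReadOnlySpan<char>.Empty : input.Slice(comma + 1);
        }
        return total;
    }

    public static void Main() => Console.WriteLine(SumCsv("10,20,12".AsSpan())); // prints 42
}
```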

Entity Framework Core 2.1

EF Core finally got some of its most demanded features:

CosmosDB provider

The first NoSQL provider for EF Core that is made available by Microsoft. Still in preview, but you can already use it in most cases, to access your CosmosDB (formerly Azure DocumentDB) databases.

Lazy loading

Not a big fan myself, but, hey, it’s here, and in an extensible way, which means developers can provide more efficient implementations. For those unaware, it means that properties that point to related entities (one-to-one, one-to-many, many-to-one) can be loaded only if they are actually needed, meaning, if any code accesses them:

public class Document
{
    public int Id { get; set; }
    public string Title { get; set; }
    public virtual Author Author { get; set; } // not loaded by default
}

Document doc = ...;
var author = doc.Author; //loaded here

Alternatively, it can be eagerly loaded with the rest of the entities:

var docs = ctx.Documents.Include(x => x.Author).ToList();

Server-side grouping

LINQ’s GroupBy now can run on the database. I stress can because actually not all scenarios work, but at least we get warnings about that if we look at the logger output. When it works, it is a big saver, because in the past data had to be brought to the client and processed there. It goes like this:

var jacketsBySize = ctx.Jackets.GroupBy(x => x.Size).Select(x => new { Size = x.Key, Count = x.Count() }).ToList();

The problem is, it’s not fully implemented, namely, we can’t group (yet) by a property of a reference property, like this:

var jacketsByBrand = ctx.Jackets.GroupBy(x => x.Brand.BrandId).Select(x => new { BrandId = x.Key, Count = x.Count() }).ToList();

Mind you, this will work, but on the client-side, meaning, EF will bring all the data into the client and then perform the grouping in memory – not something you would generally want!

Constructor injection

Entities instantiated by EF Core can now take parameters in their constructors, that is, no need for public parameterless constructors in entities. These parameters can either be property values or services from the dependency injection tool, although I would recommend you keep your entities unaware of these services, in general. This works now:

public class Product
{
    public int ProductId { get; }
    public string Name { get; }

    public Product(int productId, string name)
    {
        this.ProductId = productId;
        this.Name = name;
    }
}

This is useful if you wish to have read-only properties, for example.

Value conversions

Another popular one is value conversions. With this feature you can, for example, specify how an enumeration should be stored (as its string representation or as its underlying type, typically int), or that a Binary Large Object (BLOB) in the database is actually an Image, or even store encrypted values in the database. For example, to store a hypothetical Order entity’s State enumeration as strings, use this:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder
        .Entity<Order>()
        .Property(e => e.State)
        .HasConversion(v => v.ToString(), v => (State) Enum.Parse(typeof(State), v));
}

Notice how we need to supply both direction conversions – the ToString() call in the HasConversion is the value to store in the database, and the Enum.Parse is to convert the value from the database.
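EF Core 2.1 also ships built-in converters, so the same enum-to-string mapping can be expressed more tersely; a sketch, assuming a hypothetical Order entity with a State enumeration property:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder
        .Entity<Order>()
        .Property(e => e.State)
        .HasConversion<string>(); // uses the built-in enum <-> string converter
}
```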

Data seeding

This one is back from the pre-Core days, although in a slightly different format. Initial data is configured when the model is defined, either in OnModelCreating or when registering a DbContext in the dependency injector:

modelBuilder.Entity<Product>().HasData(new Product { ProductId = 1, Name = "Jacket" });

You can pass any number of parameters in the HasData call.

Query types

Query types mean that you can instantiate your queries into any classes of your liking automatically, not just known entity classes. Instances generated this way have no primary key properties and are not change-tracked or persistable:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder
        .Query<JacketSize>()
        .ToView("vJacketSize");
}

Here, the vJacketSize view is populated using a SQL GROUP BY query that returns the count of jackets per size. Another option is to use a LINQ query:

modelBuilder
    .Query<JacketSize>()
    .ToQuery(() => this.Jackets.GroupBy(x => x.Size).Select(x => new JacketSize { Size = x.Key, Count = x.Count() }));

To query, one must use the Query method instead of Set:

var jacketsAndSizes = ctx.Query<JacketSize>().ToList();

Ambient transactions support

Very popular too, EF Core can now automatically participate in ambient transactions such as those created by TransactionScope, of course, for providers that support it, like SQL Server:

using (var scope = new TransactionScope())
{
    //do modifications
    ctx.SaveChanges();

    //do more modifications
    ctx.SaveChanges();

    scope.Complete(); //commit; without this, everything rolls back
}

Owned entity attribute

Owned entities are not new – they were introduced in EF Core 2.0 – but back then they had to be manually configured (shown here for a hypothetical Order entity owning a Product value):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Order>().OwnsOne(x => x.Product);
}

Think of an owned entity as a value type in Domain Driven Design terminology. It is similar to an entity but does not have an id and you can declare properties of owned entities, thereby making reuse easier.

Now we have the [Owned] attribute that is used for just that. Just add it to your owned class:

public class User
{
    public Address HomeAddress { get; set; }
    public Address WorkAddress { get; set; }
}

[Owned]
public class Address
{
    public string City { get; set; }
    public string PostCode { get; set; }
    public string Street { get; set; }
}

Include for derived types

You may be aware that EF Core 2.1 (as before) supports Single Table Inheritance / Table Per Class Hierarchy mapping. This means that a single table can be used to persist values for a whole class hierarchy, with a discriminator column telling the specific class of each record. Sometimes a one-to-one or many-to-one relation may point to an abstract base class, even though what is actually stored is a concrete class instance. If we wanted to eager load it before 2.1, we had no luck; fortunately, we have it now:

var admins = ctx.User.Include(u => ((Admin) u).Privileges).ToList();

The syntax is a bit awkward, but it works!

ASP.NET Core 2.1

Big changes coming with this version:

SignalR

SignalR is finally released for ASP.NET Core. In case you don’t know about it, it’s a real-time library that permits communication from the server to the client and it just works in almost any browser out there, including mobile ones. Try it to believe it!

Razor Class Libraries

It is now possible to deploy .cshtml files in assemblies, which means it can also be deployed to NuGet. Very handy, and is the basis for the next feature.

Razor pages improvements

Razor Pages, introduced in version 2.0, now support areas and shared folders.

New partial tag helper

Essentially it is a new syntax to render partial views.

Identity UI library and scaffolding

The ASP.NET Core Identity library for authentication brings along its own user interface, which, starting with ASP.NET Core 2.1, is deployed on the same NuGet package as included .cshtml files. What this means is that you can select the parts you want of it, and provide your own UI for the others. Visual Studio 2017 now knows about this and will guide you through the process, when you add support for Identity.

Virtual authentication schemes

You can now mix different authentication providers, like bearer tokens and cookies, in the same app, very easily. Virtual here means that you can specify a moniker name, and then deal with it in the way you exactly want.

HTTPS by default

What’s there to say? The project templates now enable HTTPS by default, alongside HTTP, which you can now disable. This is actually pretty good, as it forces you to use HTTPS from the start, avoiding the typical pitfalls that arise when deploying to production.

GDPR-related template changes

For sites generated using the built-in template, a new GDPR-compliant cookie-consent message is displayed when one accesses the site for the first time. This message is configurable, of course. There’s also support for specifying cookies that are needed by the infrastructure and those that it can live without (this is an API change).

MVC functional test improvements

There’s a NuGet package called Microsoft.AspNetCore.Mvc.Testing that contains the infrastructure classes to perform functional tests of your MVC apps. These tests are different from unit tests because you test your classes pretty much as if they were running in a real web app, including filters, model binding, and all that. The package was already available previously, but now you no longer need to write boilerplate code to let your tests locate the view files: it relies on convention (and some magic) to find them automatically, making your tests much simpler to write.

API conventions

Swagger, now called OpenAPI, is a standard for describing REST API endpoints: using it, you can describe your web methods, their acceptable HTTP verbs, content types, and return structures. This is useful if we wish to use UI tools for generating test requests, or for generating client proxies for our web APIs. The new [ApiController] attribute, when applied to your controller classes, causes a couple of things to occur:

  • Automatic model validation, using the registered validator (by default, using Data Annotations)
  • [FromBody] will be the default for complex-type parameters on non-GET requests; [FromRoute] and [FromQuery] will be tried in sequence for any other parameters, and [FromForm] for parameters of type IFormFile
  • ApiExplorer will know very early about your controllers, which means you can also apply conventions or otherwise make changes to them

All of these can be controlled by configuration:

services.Configure<ApiBehaviorOptions>(options =>
{
    options.SuppressModelStateInvalidFilter = true;
    options.SuppressConsumesConstraintForFormFileParameters = true;
});

Additionally, it is common for developers to just declare action methods as returning IActionResult, but this is quite opaque to OpenAPI, as it doesn’t say anything about what result to expect. Now we have the ActionResult<T> type, which makes the [Produces] attribute unnecessary. This class has a bit of magic to it; in particular, it does not implement IActionResult, but IConvertToActionResult instead. Don’t worry too much about it, it just works!
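A sketch of what ActionResult<T> looks like in practice; the Product type and the repository lookup are hypothetical:

```csharp
[HttpGet("{id}")]
public ActionResult<Product> GetById(int id)
{
    var product = _repository.Find(id); // hypothetical lookup
    if (product == null)
    {
        return NotFound(); // an ActionResult, converted implicitly
    }
    return product; // a plain Product, also converted implicitly
}
```

OpenAPI tooling can now infer that this action yields either a Product or a 404, without any extra attributes.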

Generic host builder

A host is what runs your app. This version introduces a new HostBuilder class that lets you configure the many aspects of your app (dependency injection, logging, configuration providers) from the very start, which will make your Startup class much leaner. It also allows non-HTTP scenarios, because it is not tied to web/HTTP in any way. Before this we had WebHostBuilder, and for now we still do, but HostBuilder will eventually supersede it.
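A minimal sketch of the generic host; MyWorker stands in for a hypothetical IHostedService implementation, and the settings file name is an assumption:

```csharp
var host = new HostBuilder()
    .ConfigureAppConfiguration(config =>
        config.AddJsonFile("appsettings.json", optional: true))
    .ConfigureServices(services =>
        services.AddHostedService<MyWorker>()) // background service, no HTTP involved
    .Build();

await host.RunAsync();
```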

Updated SPA templates

New Angular, React and Redux templates are now available that are GDPR-friendly.

ASP.NET Core module

The ASP.NET Core module is what Internet Information Services (IIS) uses to process requests for .NET Core applications in Windows. Its performance has been improved roughly 6x as it runs .NET Core in-process, avoiding proxying. Its usage is transparent to the developer, and, of course, does not apply if you’re working with non-Windows operating systems.

Migrating to .NET Core 2.1

It’s as simple as updating the TargetFramework property of the .csproj files to contain netcoreapp2.1 instead of netcoreapp2.0 and replacing any references to Microsoft.AspNetCore.All with Microsoft.AspNetCore.App. It is also safe to remove any references to DotNetCliToolReference, as it was replaced by global tools. Of course, when Visual Studio asks you to update the NuGet packages of your solution, you should do it to use the latest features.
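Concretely, the relevant .csproj lines change like this (the version number shown is an example; in 2.1 the Microsoft.AspNetCore.App reference is intentionally versionless):

```xml
<!-- before -->
<TargetFramework>netcoreapp2.0</TargetFramework>
<PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.9" />

<!-- after -->
<TargetFramework>netcoreapp2.1</TargetFramework>
<PackageReference Include="Microsoft.AspNetCore.App" />
```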

Conclusion

This new version brings lots of interesting features to the .NET Core world. Entity Framework Core seems to be moving fast and will hopefully reach the maturity level of the pre-Core versions soon. ASP.NET Core continues to add more and more features at a steady pace, too. It is good to see Microsoft addressing performance in a serious manner, and the results seem most impressive.

Stackify’s Application Performance Management tool, Retrace, keeps .NET Core applications running smoothly with APM, server health metrics, and error log integration. Download your free two-week trial today!

Entity Framework vs NHibernate: Understand the Similarities and Differences

https://stackify.com/entity-framework-core-nhibernate/ – Wed, 21 Mar 2018

A long time before Entity Framework (EF) Core was around – or any other Entity Framework, for that matter – we already had NHibernate. In this article, I’m going to review Entity Framework and NHibernate: what they have in common and what sets them apart.

History of NHibernate and Entity Framework

NHibernate is a port of Hibernate from Java, one of the oldest and most respected Object-Relational Mappers (ORMs). It has existed since 2003 and recently has been developed entirely by the community, without any sponsor or umbrella company. It has always been open source under GNU Lesser General Public License.

Entity Framework Core is the .NET Core version of Microsoft’s flagship product that was first released in 2008. In the beginning, it was part of .NET 3.5 SP1 and as such it was licensed to be used for free, but no source code was available. As part of the .NET Core initiative, it was totally rebuilt now targeting .NET Core, and as part of .NET Core, it was made available under the MIT License. Since the beginning, it was closely linked to .NET and Visual Studio versions.

NHibernate has had a couple of versions:

  • Version 1 was the original one and offered basic features; initially it didn’t support generics, but those were introduced in a later minor version; it ran on .NET 1.1.
  • Version 2 dropped support for .NET 1.x.
  • Version 3 introduced LINQ (.NET Language-Integrated Query) and QueryOver, and also lazy loading of properties.
  • Version 3.2 and 3.3 introduced fluent (loquacious) configuration and conventions, mapping of views, HQL (Hibernate Query Language) improvements, and integrated bytecode generator.
  • Version 4 started targeting .NET 4 and BCL (Base Class Library) sets, instead of Iesi.Collections. Now using Roslyn (.NET Compiler Platform).
  • Version 5 added asynchronous programming, fixed TransactionScope support, and a lot of cleaning, including removing obsolete code.
  • Version 5.1 brought .NET Core support.

As for Entity Framework (EF):

  • Version 1 had basic functionality with model-first and database-first workflows, and was released with .NET 3.5 Service Pack 1.
  • Version 4 came with .NET 4 and supported lazy loading, self-tracking entities, POCOs (Plain Old CLR Objects), and generator templates (T4 – Text Template Transformation Toolkit).
  • Versions 4.1 and 4.3 (“Code First”) introduced the code-first development model and a much cleaner API; it was the first to be distributed out of bound through NuGet. 4.3 introduced migrations.
  • Version 5 brought enumerated types, spatial types, and table-valued function support.
  • Version 6 added interceptors, logging, asynchronous operations, custom conventions, and support of stored procedures for CRUD operations.
  • .NET Core 1 was a total rewrite, and as such it didn’t offer all features as the previous versions. Only basic functionality, plus the architecture to support non-relational databases. Still added interesting features such as shadow properties.
  • .NET Core 1.1 brought mapping to fields, explicit loading, connection resiliency, and some previously available APIs.
  • .NET Core 2 introduced owned entities, global filters, the LIKE operator, DbContext pooling, explicitly compiled queries, scalar function mappings, and self-contained entity configuration classes.

You can see that Microsoft hasn’t been exactly idle, and a lot has happened since the release of EF 1. In fact, the EF Core family is almost totally unrelated to the original one, and even Code-First was totally new, even though the original version was still underneath it.

Platforms

NHibernate now supports .NET 4 and higher, running on Windows or on platforms where Mono is available, and as of version 5.1 (released a couple of days ago) it also supports .NET Core. There are no plans to support data sources other than relational databases.

EF Core runs on .NET Core, and therefore on any platform that supports it. It is based on a provider model that theoretically can work with any kind of data source, not just relational ones. Work is in progress for Azure Cosmos DB, and others are likely to follow.

Architecture

NHibernate is distributed through NuGet as a single DLL, with no other dependencies unless we need ordered sets, where we also need Iesi.Collections. Everything is built in. It needs the full .NET Framework, or .NET Core.

Entity Framework Core, on the other hand, consists of multiple DLLs, coming in many NuGet packages. The good thing is, NuGet dependencies are brought along as needed, so, for SQL Server, you only need to grab Microsoft.EntityFrameworkCore.SqlServer. Targets .NET Core, of course.

Internally, they are both quite extensible, with lots of moving parts, some of which can be replaced.

Entity Framework Core builds upon .NET Core facilities like logging and dependency injection, and it leverages those for modifying internal components. Pretty much anything can be switched.

NHibernate does not use dependency injection and the way to replace each service is quite different from service to service. Overall, it’s not as extensible as EF Core.

In NHibernate, we need to instantiate a Configuration object and from it produce a Session Factory. There should be only one of these at a time in an application, as they are relatively “heavy”. From a Session Factory, we produce Sessions, which are lightweight abstractions encapsulating an ADO.NET connection. Everything stems from it.

With EF, we only need to worry about a DB Context; it includes all the mapping and configuration information, and exposes all APIs for interacting with the datasource. It’s a much simpler model, but one can argue that the separation of concerns in NHibernate is more efficient.
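As a sketch of the two bootstrap styles side by side; the entity name, context class, and configuration sources are assumptions for illustration:

```csharp
// NHibernate: Configuration -> SessionFactory (one per app) -> Session
var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml
var sessionFactory = cfg.BuildSessionFactory();

using (var session = sessionFactory.OpenSession())
{
    var product = session.Get<Product>(1);
}

// EF Core: everything hangs off the DbContext
using (var ctx = new MyContext())
{
    var product = ctx.Products.Find(1);
}
```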

Supported databases

Out of the box, NHibernate supports more than 10 different databases and versions, including:

  • SQL Server (including Azure)
  • SQL Server CE
  • Oracle Database (several editions)
  • Ingres
  • PostgreSQL
  • MySQL
  • DB2
  • Sybase
  • Informix
  • SQLite
  • Firebird
  • Any using ODBC (Open Database Connectivity), or OLE (Object Linking and Embedding) DB

Microsoft only makes available providers for:

  • SQL Server (including Azure)
  • SQLite
  • In-memory

The in-memory provider is particularly useful for unit testing scenarios. Work is in progress for Cosmos DB (Document DB) and Oracle Database. Other vendors already make available providers to other databases, for free or commercially.

It is clear that NHibernate has advantage here, as it’s much easier to start working on any supported database.

Configuration and mappings

Entity Framework Core uses fluent (code-based) configuration and fluent or attribute-based mappings. Built-in conventions cannot be replaced or added to, at this moment.

NHibernate has both XML and fluent configuration and mappings. It also offers attribute mappings through the NHibernate Attributes companion project. Custom conventions are very powerful in NHibernate, EF Core still doesn’t have them, and this is something that NHibernate is better at, right now.

Both of them need mappings, in any form. Both can map non-public members, properties or fields. Also both have the notion of value types (owned entities in EF, components in NHibernate). NHibernate can map a property or field to a custom SQL command.

Also, both support shadow properties, that is, properties that exist in the database schema but are not mapped in the class model. With EF Core, we can even query them.

Table inheritance

NHibernate fully supports the three “canonical” table inheritance patterns:

  • Table Per Class Hierarchy (Single Table Inheritance)
  • Table Per Class (Joined Subclass)
  • Table Per Concrete Class (Concrete Table Inheritance)

EF Core can only live with Table Per Class Hierarchy / Single Table Inheritance. There’s an open ticket for fixing it, but it’s not on the roadmap.

Primary key generation

Where primary keys are concerned, Entity Framework Core supports:

  • IDENTITY auto-generated keys on SQL Server and SQLite
  • Sequences on SQL Server 2012+
  • Manually assigned values

NHibernate, on the other hand, offers a much richer set, including:

  • Identities in SQL Server/Sybase, or auto-increment values in other databases
  • Sequences in any database that supports them
  • The database-independent High-Low algorithm
  • High-Low based on a sequence
  • A generator based on the current time
  • Several flavors of GUIDs (Globally Unique Identifiers)
  • Manually assigned

Querying and modifications

EF Core only has LINQ and SQL to query with. No strongly-typed inserts, deletes, or updates. Still no support for grouping on the database side; this will be fixed in the next version. No projections to non-entity types.

NHibernate, on the other hand, offers:

  • LINQ
  • Criteria API (a Query Object pattern implementation)
  • QueryOver (Criteria with LINQ)
  • HQL (object-oriented SQL)
  • SQL

All of these support pagination, as EF Core LINQ does.

It is also possible to do updates, deletes or inserts on top of LINQ queries, which is quite nice. Also, HQL is a great language for querying the database in an agnostic way; it is similar to SQL and can even update, insert or delete entries. NHibernate can project to any class, not just anonymous types.
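Sketches of both modification styles; the Product entity and its properties are assumed for illustration, and the strongly-typed form relies on the LINQ DML extensions introduced in NHibernate 5:

```csharp
// A strongly-typed delete on top of a LINQ query
session.Query<Product>()
    .Where(p => p.Discontinued)
    .Delete(); // executes a single DELETE statement on the database

// The HQL equivalent of a bulk update, database-agnostic
session.CreateQuery("update Product p set p.Price = p.Price * :factor")
    .SetParameter("factor", 1.1m)
    .ExecuteUpdate();
```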

NHibernate can use stored procedures or custom SQL for any kind of operation. EF Core still lacks this, which was present in pre-Core versions.

NHibernate has futures, which means that multiple queries can be queued, executed on the database, and retrieved at the same time, for databases that support it (SQL Server, Oracle). Also, depending on the primary key generation strategy, one can also batch insert multiple records at the same time.
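A sketch of futures in action, assuming the Jacket entity from earlier examples:

```csharp
// Both queries are queued, not executed yet
var jackets = session.Query<Jacket>().ToFuture();
var count = session.Query<Jacket>().ToFutureValue(q => q.Count());

// Consuming either result sends both queries in a single round-trip
Console.WriteLine(count.Value);
foreach (var jacket in jackets) { /* already loaded by the same batch */ }
```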

Both support asynchronous queries and modifications.

Dealing with concurrency

Besides transactions, which are supported by all relational databases and major ORMs, some of these offer optimistic concurrency capabilities.

EF Core can use a ROWVERSION/TIMESTAMP column on SQL Server, or any of a list of columns, when updating a record.
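In EF Core, the ROWVERSION approach is typically wired up with the [Timestamp] attribute; a sketch, reusing a hypothetical Product entity:

```csharp
public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }

    [Timestamp] // maps to ROWVERSION on SQL Server; a stale value on
    public byte[] RowVersion { get; set; } // update throws DbUpdateConcurrencyException
}
```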

NHibernate offers richer capabilities, besides SQL Server ROWVERSION/TIMESTAMP, it can also use database native mechanisms such as Oracle’s ORA_ROWSCN, but also timestamp or version columns. It can also compare all modified columns.

EF Core still does not support ambient transactions (TransactionScope), but the next version will fix that. NHibernate has historically had problems with TransactionScope, but they appear to be fixed now.

EF Core cannot explicitly lock records, but NHibernate does offer an API for it.

Collections

Collections in EF Core are simple: just lists, which means, one-to-many. There is no support for many-to-many yet.

In NHibernate, we have:

  • Bags of entities: unordered collections with possible duplication
  • Sets of entities: no duplication, maybe ordered
  • Lists: collections indexed by a number
  • Maps: collections indexed by any column or possibly other entity
  • Arrays of primitive types
  • Collections of primitive or value types (components in NHibernate)

So, as you can see, there’s a lot more going on in NHibernate, it’s not just saying that we have a collection. Both one-to-many and many-to-many associations are supported.

Lazy and explicit loading

Current version of EF Core (2.0) does not have lazy loading, only explicit (using the API) or eager loading (when doing a LINQ query) of associated entities and collections. The next version (2.1) will introduce this.

NHibernate has lazy loading of associated entities, collections, and even single properties (for large amounts of data, think BLOBs or CLOBs). There’s also explicit and eager loading, these behave the same way as in EF Core.

Type conversions

NHibernate can convert and query any type; EF Core still lacks this capacity, which should be introduced in the next version. With NHibernate, you can already query spatial types, for example.

Filters

Both of them have global filters at entity level. NHibernate also offers filtering at collection level, and it’s parameterizable. Easy to do soft deletes or multi-tenancy.
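In EF Core, such a global filter is declared once on the model; for example, a soft-delete filter (the IsDeleted property is hypothetical):

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Every query against Product now implicitly filters out deleted rows;
    // it can be bypassed per-query with IgnoreQueryFilters().
    modelBuilder.Entity<Product>().HasQueryFilter(p => !p.IsDeleted);
}
```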

Interception

EF Core, as of now, only has basic SQL interception capabilities using the DiagnosticSource functionality. It’s easy to modify the context to do basic tasks before saving, updating, or deleting records. Future versions will offer LINQ expression interception and lifetime events.

NHibernate has a very rich event and interception model. Multiple interceptors can be added, at multiple levels (before/after transaction flush, deleting, inserting, or modifying, and after instantiating, etc).

Caching

Both offer first-level caching of entities. NHibernate goes one step further, and has second-level caching using a distributed cache provider, such as these:

  • Prevalence
  • ASP.NET cache
  • SysCache
  • Memcached
  • Redis
  • NCache
  • AppFabric Caching

Validation

NHibernate can do validation using the companion project NHibernate Validator, or by implementing an interface (this one has become somewhat legacy).

EF Core uses Data Annotations validation, which is the de facto standard for doing validations in .NET.

Migrations / database generation

NHibernate can do basic database generation and schema update – automatic or upon request.

EF Core has the migrations API, which is quite interesting. This is one aspect where EF is quite ahead of NHibernate, even though there have been attempts to create a similar framework for it. You can list all the versions of a schema, apply one explicitly or go back to a previous version.
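The typical workflow with the EF Core command-line tools looks like this (the migration name is an example):

```shell
dotnet ef migrations add InitialCreate   # scaffold a migration from model changes
dotnet ef database update                # apply all pending migrations
dotnet ef database update InitialCreate  # revert the database to a specific migration
dotnet ef migrations list                # list all known migrations
```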

Tooling

As EF Core is quite new, there is not much tooling in Visual Studio for it; that should come with time, of course. There are some third-party products, such as Erik Ejlskov Jensen‘s excellent EF Core Power Tools, and maybe a few more. EF can generate entities from the database, and you can even modify the default behavior by registering your own versions of the infrastructure services.

With NHibernate, there are a few commercial tools, such as LLBLGen Pro or Entity Developer, and also some open-source projects. NHibernate itself doesn’t include any functionality for generating entities or mappings.

Usability

NHibernate has so many features that it is easy to forget about one. It is said that its performance is lower than EF’s, and that may be true, but from my perspective, performance should not be the driving reason to choose an ORM. With NHibernate, one can map pretty much any database, so there is always an option.

EF Core is very easy to use and generally well documented, which is something where NHibernate is far behind. As of now, EF Core can only be used in relatively simple scenarios – just consider the lack of support for table inheritance strategies. Having a .NET Core version is essential to me. The possibility of supporting non-relational databases using the same familiar APIs and concepts is very appealing, but I have yet to see it working.

Roadmap

EF Core does have a roadmap, which is a good thing. Considering all the effort Microsoft is putting into it – plus the added contribution of the developer community – we can expect the best out of it. Initial versions were discarded by most developers because they lacked many important features and SQL generation was far from optimal, but we’re getting there. New releases seem to be coming at a steady pace, and we can take part in the decision process, even though Microsoft has the final word.

For NHibernate, things are a bit worse. Currently the project is maintained by only a handful of developers, at most. Things do seem to be moving, after some years of stagnation, and a good example is the port to .NET Core. However, without professional developers working full time on this, it is reasonable to assume that progress will not be as fast as with EF – but NHibernate has the lead, in my opinion. The decision process is not so transparent, from where I sit.

With APM, server health metrics, and error log integration, improve your application performance with Stackify Retrace. Try your free two-week trial today!

References

Entity Framework Core Roadmap

NHibernate Milestones

Prefix: Supported .NET Technologies and Profiling Abilities

.NET Core 2.1 Release: What To Expect in 2018

How to Build Cross-Platform .NET Core Apps

https://stackify.com/cross-platform-net-core-apps/ – Mon, 15 Jan 2018

One of the main reasons for using .NET Core is that you can run it on multiple platforms and architectures. You can build an app that will run on Windows, but also on Linux and macOS, and on different architectures like x86 and ARM. This is perfect for lots of scenarios, including desktop applications.

You can learn about other reasons for using .NET Core in this article about the .NET ecosystem on Stackify’s blog and in my Pluralsight course: The .NET Ecosystem: The Big Picture.

In this article, I’m going to show you how to create a simple .NET Core console application that can run on multiple operating systems.

To work through the demos in this article, make sure that you have the following installed on a computer running Windows 10:

Additionally, in order to run the app on a Mac, you will need access to a Mac that is running macOS 10.12 “Sierra” or a later version. If you don’t have one, you can get access to a Mac environment using https://www.macincloud.com

.NET Core application deployment options

You can deploy .NET Core applications as framework-dependent applications and as self-contained applications.


There are a couple of differences between the two deployment models:

  • Framework-dependent applications require the .NET Core framework to be installed on the machine that the app will run on.
    • Self-contained applications don't, because they contain everything the app needs to run.
  • Framework-dependent applications can run on any operating system that you install .NET Core on, without modification.
    • In contrast, for every OS that you want a self-contained app to run on, you need to publish an OS-specific version.
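In dotnet CLI terms, the two deployment models roughly correspond to the following publish commands (the runtime identifier values shown are examples; these commands assume the .NET Core SDK is installed):

```shell
# Framework-dependent publish (the default): the output needs the
# .NET Core runtime installed on the target machine
dotnet publish -c Release

# Self-contained publish: bundles the runtime for one specific platform,
# identified by a runtime identifier (RID)
dotnet publish -c Release -r win10-x64
dotnet publish -c Release -r osx.10.12-x64
```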

How to create an app for multiple operating systems

Now that we know there are two different deployment options for .NET Core, let’s explore how to use those to run a .NET Core app in multiple operating systems.

To explore this, we’ll create two simple .NET Core console applications and run those on Windows and on macOS. I’ve already created this sample in a GitHub repository that you can find here.

We’ll create the applications using Visual Studio 2017. This is technically not needed, as you can create .NET Core applications with many different IDEs and even with the command line. Choose the IDE or tool that you are most comfortable with.

Framework-dependent app

First, we’ll create a framework-dependent app. We do this in Visual Studio 2017 with all the latest updates (at the moment, I’m on 15.5.2).

  1. In Visual Studio, click File > New Project and select .NET Core
  2. Now select the Console App (.NET Core) project type and create it
  3. Navigate to Program.cs. Out of the box, it will write Hello World! to the console
  4. Add Console.ReadLine(); below the Hello World line, to keep the console window open when the app runs

That’s it! This app will run on every operating system that .NET Core supports.
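As mentioned above, Visual Studio is not required; a sketch of the equivalent command-line steps, assuming the .NET Core SDK is on your PATH:

```shell
# Create a new console app from the template and run it
dotnet new console -o FrameworkDependentApp
cd FrameworkDependentApp
dotnet run   # the template writes Hello World! to the console
```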

Self-contained app

Next, we’ll create a self-contained app. This is very similar to creating a framework-dependent app, but contains one extra step:

  1. In Visual Studio, click File > New Project and select .NET Core
  2. Now select the Console App (.NET Core) project type and create it
  3. Navigate to Program.cs. Out of the box, it will write Hello World! to the console
  4. Add Console.ReadLine(); below the Hello World line, to keep the console window open when the app runs
  5. Now for the extra step to make this into a self-contained application: right-click the project file and click Edit Project


  6. Now, in the project file, add a runtime identifier line to the main property group


This tells .NET Core which runtimes the app can be built and published for, and makes it a self-contained app. Currently, the app can only be published for Windows 10.
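The original screenshot isn't reproduced here; the line in question is a RuntimeIdentifiers property in the project file. Based on the text, it would look something like this (the TargetFramework value is an assumption for a .NET Core 2.0-era project):

```xml
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp2.0</TargetFramework>
  <!-- Tells .NET Core to build and publish a self-contained app for this runtime -->
  <RuntimeIdentifiers>win10-x64</RuntimeIdentifiers>
</PropertyGroup>
```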

Running the app on Windows and macOS

So now, we have two .NET Core console applications; one framework-dependent and one self-contained. Let’s run them on Windows and on macOS.

Publishing the applications to run on Windows

Before we run the apps, we need to publish them, as you would do in production, to get a release build. For Windows, we can follow the same steps for both applications:

  1. In Visual Studio, right-click the project file and click Publish


  2. Then, pick a folder and click Publish. That's it!
  3. Do the same for the self-contained app

Publishing for running on macOS

If we want the framework-dependent app to run on macOS, we don't have to do anything special; we can just use the publish results of the previous steps.

However, if we want the self-contained app to run on anything other than Windows, we need to take additional steps:

  1. In Visual Studio, right-click the project file of the self-contained app and click Edit Project to edit the project file
  2. Now add a new Runtime Identifier to the file and save it, just as highlighted here:

Publishing apps to run on macOS
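In case the image doesn't render here, the updated property would look something like this (TargetFramework again assumed; the RIDs are separated with a semicolon):

```xml
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp2.0</TargetFramework>
  <!-- One runtime identifier per platform you want to publish a self-contained app for -->
  <RuntimeIdentifiers>win10-x64;osx.10.12-x64</RuntimeIdentifiers>
</PropertyGroup>
```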

  3. Now right-click the project file again and click Publish
  4. In the Publish overview, click the Settings link
  5. This opens the Profile Settings popup. In here, make sure that you've selected the Target Runtime osx.10.12-x64
  6. Change the target location to another folder, so that we can distinguish between the Publish results for Windows and the results for macOS
  7. Click Save
  8. Now click Publish and that's it!

The runtime identifier that we’ve just added was one of many that you can find in the .NET Core Runtime Identifier Catalogue here.

Running the applications on Windows

We now have the following publish results:

  • Results for the framework-dependent app
  • Results for the self-contained app
    • For Windows
    • For macOS

Let’s run them. First, we’ll run the framework-dependent application on Windows:

  1. Open Windows Explorer
  2. Navigate to the folder that contains the publish results for the framework-dependent app
  3. Now go to the navigation bar and type cmd before the path and hit enter, like in the image below. This opens a command prompt with the context of the folder. This only works on Windows 10.


  4. Now type dotnet FrameworkDependentApp.dll (or the name of the dll of your project, if you've named it differently)
  5. That's it! You'll now see Hello World! in the command window as the app output.

Now, let's run the self-contained app. This is easier because it contains all of the bits necessary to run the application. This means that it has packaged the app and .NET Core, including dotnet.exe, into a special executable that you just need to run.

  1. Navigate to the folder with the publish results for the self-contained app for Windows
  2. Double-click the SelfContainedApp.exe
  3. The app runs and shows Hello World in a command window

Running the applications on macOS

Let's see if we can get this to run on macOS as well. The process is very similar to the process on Windows. If you don't have a Mac, but do want to try this out, you can access a Mac for a fee from https://www.macincloud.com

  1. When you are in the macOS operating system, copy the publish results of the framework-dependent app and the ones for the self-contained app for macOS to the file system.

Now, let’s run the framework-dependent app on macOS:

  1. First, we need to install .NET Core, as we need it to run the framework-dependent app. We can get it from https://www.microsoft.com/net/download/macos. You don't have to install the SDK; we only need the runtime
  2. Now, open a terminal window (macOS version of the command line)
  3. In the terminal window, navigate to the folder that contains the framework-dependent app files
  4. Type dotnet FrameworkDependentApp.dll and hit enter.
  5. The app runs and shows Hello World!

Now to run the self-contained app on macOS:

  1. In order to run the self-contained app, we first need to grant the executable permission to run. Open a terminal window
  2. In the terminal window, navigate to the folder that contains the self-contained app files for macOS
  3. Type sudo chmod +x selfcontainedapp and hit enter
  4. Type in the password of your admin account and hit enter to grant the executable permission to run


  5. Now type in open selfcontainedapp and hit enter
  6. The app now runs in a new terminal window and says Hello World!
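The chmod step is worth understanding on its own. The sketch below reproduces it with a stand-in shell script in place of the real published binary (which you would get from dotnet publish); the mechanics of the execute bit are the same:

```shell
# Stand-in for the published macOS executable, created locally for illustration
printf '#!/bin/sh\necho "Hello World!"\n' > selfcontainedapp

# Freshly written files are not executable; grant the execute bit
# (the article uses sudo, which is only needed if you don't own the file)
chmod +x selfcontainedapp

# Now the file can be run directly
./selfcontainedapp   # prints: Hello World!
```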

Conclusion

One of the key differentiators of .NET Core is that you can use it to run an application on multiple platforms. This can be vital for your application.

It is very simple to create a framework-dependent app that can run on multiple platforms. In order to run, it only needs the .NET Core runtime to be installed.

Creating a completely self-contained application is also simple: you just need to add Runtime Identifiers and publish for a specific platform. Publishing for a specific platform results in platform-specific files. For Windows, it produces an .exe file with all its dependencies, but for macOS, it produces a macOS executable file, which is very different and can't run on Windows.

One thing that we didn’t discuss is that you can run into platform differences when you use platform-specific functionalities, like when you access the file system of the computer. You need to keep this in mind and create platform-specific implementations for these things.

With APM, server health metrics, and error log integration, improve your application performance with Stackify Retrace. Try your free two-week trial today

]]>