ASP.NET Razor Pages vs MVC: How Do Razor Pages Fit in Your Toolbox?
https://stackify.com/asp-net-razor-pages-vs-mvc/ (Sat, 26 Aug 2023)

As part of the release of .NET Core 2.0, there are also some updates to ASP.NET. Among these is the addition of a new web framework for creating a “page” without the full complexity of ASP.NET MVC. New Razor Pages are a slimmer version of the MVC framework and, in some ways, an evolution of the old “.aspx” WebForms.

In this article, we are going to delve into some of the finer points of using ASP.NET Razor Pages versus MVC.

  • The basics of Razor Pages
  • ASP.NET MVVM vs MVC
  • Pros and cons of Razor Pages and MVC
  • Using Multiple GET or POST Actions via Handlers
  • Why you should use Razor Pages for everything
  • Code comparison of ASP.NET Razor Page vs. MVC

The Basics: What are ASP.NET Razor Pages?

A Razor Page is very similar to the view component that ASP.NET MVC developers use. It has all the same syntax and functionality.

The key difference is that the model and controller code are also included within the Razor Page. It is more of an MVVM (Model-View-ViewModel) framework. It enables two-way data binding and a more straightforward development experience with isolated concerns.

Here is a basic example of a Razor Page using inline code within a @functions block. It is actually recommended to put the PageModel code in a separate file. This is akin to how we did code-behind files with ASP.NET WebForms.

@page
@model IndexModel
@using Microsoft.AspNetCore.Mvc.RazorPages

@functions {
    public class IndexModel : PageModel
    {
        public string Message { get; private set; } = "In page model: ";

        public void OnGet()
        {
            Message += $" Server seconds {DateTime.Now.Second}";
        }
    }
}

<h2>In page sample</h2>
<p>
    @Model.Message
</p>
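For comparison, here is a sketch of the same page with the PageModel moved into a separate code-behind file. The Pages/Index.cshtml.cs naming below follows the usual Razor Pages convention; the markup file then keeps only the @page and @model directives plus the HTML.

```csharp
// Pages/Index.cshtml.cs -- the same IndexModel, outside the @functions block
using System;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class IndexModel : PageModel
{
    public string Message { get; private set; } = "In page model: ";

    public void OnGet()
    {
        Message += $" Server seconds {DateTime.Now.Second}";
    }
}
```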

We Have Two Choices Now: ASP.NET MVVM or MVC

You could say that we now get to choose between an MVC and an MVVM framework. I’m not going to go into all the details of MVC vs MVVM. This article does a good job of that with some examples. MVVM frameworks are most noted for two-way data binding of the data model.

MVC works well with apps with many dynamic server views, single-page apps, REST APIs, and AJAX calls. Razor Pages are perfect for simple pages that are read-only or do basic data input.

MVC has been all the rage recently for web applications across most programming languages. It definitely has its pros and cons. ASP.NET WebForms was designed as an MVVM solution. You could argue that Razor Pages are an evolution of the old WebForms.

Pros and Cons of Razor Pages

As someone who has been doing ASP.NET development for about 15 years, I am pretty conversant with all the ASP.NET frameworks. Based on my experimentation with the new Razor Pages, here are my views on the pros and cons and how I envisage using them.

Pro: More organized and less magical

I don’t know about you, but the first time I ever used ASP.NET MVC, I spent a lot of time figuring out how it worked. The naming of things and the dynamically created routes caused a lot of magic that I wasn’t used to. The fact that /Home/ goes to HomeController.Index() that loads a view file from “Views\Home\Index.cshtml” is a lot of magic to get comfortable with when starting.

Razor Pages don’t have any of that “magic” and the files are more organized. You have a Razor view and a code-behind file, just like WebForms did, versus MVC having separate files in different directories for the controller, view, and model.

Compare simple MVC and Razor Page projects. (We will show more code differences later in this article.)

[Screenshot: comparison of MVC vs Razor Page project files]

Pro: Single Responsibility

If you have ever used an MVC framework before, you have likely seen some huge controller classes that are filled with many different actions. They are like a virus that grows over time as things get added.

With Razor Pages, each page is self-contained, with its view and code organized together. This follows the Single Responsibility Principle.

Con: Requires New Learning 

Since Razor Pages represent a new way of doing things, developers used to the MVC framework might have a learning curve to overcome.

Con: Limitations for Complex Scenarios

While Razor Pages are great for simple pages, there might be better choices for complex scenarios that require intricate routing, multiple views, or complex state management.

Pros and Cons of MVC

Pro: Flexibility

MVC is incredibly flexible and can accommodate a variety of scenarios. It works well for applications with many dynamic server views, single-page apps, REST APIs, and AJAX calls.

Pro: Familiarity

Since MVC has been the mainstay for web applications across most programming languages, many developers are already familiar with its structure and functioning.

Con: Complexity

Due to its flexibility, MVC can become quite complex, especially for beginners needing help understanding the interactions between the model, view, and controller.

Con: Risk of Bloated Controllers

With MVC, there’s a risk of having substantial controller classes filled with many different actions, making the code harder to maintain and reason with.

Using Multiple GET or POST Actions via Handlers

In a default setup, a Razor Page is designed to have a single OnGetAsync and OnPostAsync method. If you want to have different actions within your single page, you need to use a feature called a handler. If your page has AJAX callbacks, multiple possible form submissions, or other scenarios, you will need this.

So, for example, if you were using a Kendo grid and wanted the grid to load via an AJAX call, you would need to use a handler to handle that AJAX callback. Any single-page application would use a lot of handlers, or you could instead point all of those AJAX calls at an MVC controller.

I made an additional method called OnGetHelloWorldAsync() on my page. How do I invoke it?

From my research, there seem to be three different ways to use handlers:

  1. Querystring – Example: “/managepage/2177/?handler=helloworld”
  2. Define as a route in your view: @page "{handler?}" and then use /helloworld in the URL
  3. Define on your input submit button in your view. Example: <input type="submit" asp-page-handler="JoinList" value="Join" />

You can learn more about multiple-page handlers here.
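To make the handler naming concrete, here is a sketch of a PageModel with an extra OnGetHelloWorldAsync handler. The page, route, and return value here are hypothetical, for illustration only:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class ManagePageModel : PageModel
{
    // Default GET handler: /managepage/2177
    public void OnGet(int id)
    {
        // load the page data here
    }

    // Named handler, reached via "?handler=helloworld",
    // or via "/helloworld" if the view declares @page "{handler?}"
    public Task<JsonResult> OnGetHelloWorldAsync()
        => Task.FromResult(new JsonResult("Hello World"));
}
```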

Special thanks to those who left comments about this! Article updated!


Why you should use Razor Pages for everything! (maybe?)

One could argue that Razor Pages are the ideal solution for anything that is essentially a web page within your app. It would draw a clear line in the sand that any HTML “pages” in your app are actual pages. Currently, an MVC action could return an HTML view, JSON, a file, or anything. Using Pages would force a separation between how you load the page and what services the AJAX callbacks.

Think about it. This solves a lot of problems with this forced separation.

  • Razor Pages – HTML views
  • MVC/Web API – REST API calls, SOA

This would prevent MVC controllers that contain tons of actions that are a mix of not only different “pages” in your app but also a mixture of AJAX callbacks and other functions.

Of course, I haven’t actually implemented this strategy yet. It could be terrible or brilliant. Only time will tell how the community ends up using Razor Pages.

Code Comparison of ASP.NET Razor Page vs MVC

While experimenting with Razor Pages, I built a straightforward form in both MVC and as a Razor Page. Let’s delve into a comparison of how the code looks in each case. It is just a text box with a submit button.

Here is my MVC view:

@model RazorPageTest.Models.PageClass

<form asp-action="ManagePage">
    <div class="form-horizontal">
        <h4>Client</h4>
        <hr />
        <div asp-validation-summary="ModelOnly" class="text-danger"></div>
        <input type="hidden" asp-for="PageDataID" />
        <div class="form-group">
            <label asp-for="Title" class="col-md-2 control-label"></label>
            <div class="col-md-10">
                <input asp-for="Title" class="form-control" />
                <span asp-validation-for="Title" class="text-danger"></span>
            </div>
        </div>
      
        <div class="form-group">
            <div class="col-md-offset-2 col-md-10">
                <input type="submit" value="Save" class="btn btn-default" />
            </div>
        </div>
    </div>
</form>

Here is my MVC controller. (My model is PageClass which just has two properties and is really simple.)

   public class HomeController : Controller
    {
        public IConfiguration Configuration;

        public HomeController(IConfiguration config)
        {
            Configuration = config;
        }

        public async Task<IActionResult> ManagePage(int id)
        {
            PageClass page;

            using (var conn = new SqlConnection(Configuration.GetConnectionString("contentdb")))
            {
                await conn.OpenAsync();

                var pages = await conn.QueryAsync<PageClass>("select * FROM PageData Where PageDataID = @p1", new { p1 = id });

                page = pages.FirstOrDefault();
            }

            return View(page);
        }

        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> ManagePage(int id, PageClass page)
        {

            if (ModelState.IsValid)
            {
                try
                {
                    //Save to the database
                    using (var conn = new SqlConnection(Configuration.GetConnectionString("contentdb")))
                    {
                        await conn.OpenAsync();
                        await conn.ExecuteAsync("UPDATE PageData SET Title = @Title WHERE PageDataID = @PageDataID", new { page.PageDataID, page.Title});
                    }
                }
                catch (Exception)
                {
                   //log it
                }
                return RedirectToAction("Index", "Home");
            }
            return View(page);
        }
    }

Now let’s compare that to my Razor Page.

My Razor Page:

@page "{id:int}"
@model RazorPageTest2.Pages.ManagePageModel

<form asp-action="ManagePage">
    <div class="form-horizontal">
        <h4>Manage Page</h4>
        <hr />
        <div asp-validation-summary="ModelOnly" class="text-danger"></div>
        <input type="hidden" asp-for="PageDataID" />
        <div class="form-group">
            <label asp-for="Title" class="col-md-2 control-label"></label>
            <div class="col-md-10">
                <input asp-for="Title" class="form-control" />
                <span asp-validation-for="Title" class="text-danger"></span>
            </div>
        </div>

        <div class="form-group">
            <div class="col-md-offset-2 col-md-10">
                <input type="submit" value="Save" class="btn btn-default" />
            </div>
        </div>
    </div>
</form>

Here is my Razor PageModel, aka code behind:

   public class ManagePageModel : PageModel
    {
        public IConfiguration Configuration;

        public ManagePageModel(IConfiguration config)
        {
            Configuration = config;
        }

        [BindProperty]
        public int PageDataID { get; set; }
        [BindProperty]
        public string Title { get; set; } 

        public async Task<IActionResult> OnGetAsync(int id)
        {
            using (var conn = new SqlConnection(Configuration.GetConnectionString("contentdb")))
            {
                await conn.OpenAsync();
                var pages = await conn.QueryAsync("select * FROM PageData Where PageDataID = @p1", new { p1 = id });

                var page = pages.FirstOrDefault();

                this.Title = page.Title;
                this.PageDataID = page.PageDataID;
            }

            return Page();
        }

        public async Task<IActionResult> OnPostAsync(int id)
        {

            if (ModelState.IsValid)
            {
                try
                {
                    //Save to the database
                    using (var conn = new SqlConnection(Configuration.GetConnectionString("contentdb")))
                    {
                        await conn.OpenAsync();
                        await conn.ExecuteAsync("UPDATE PageData SET Title = @Title WHERE PageDataID = @PageDataID", new { PageDataID, Title });
                    }
                }
                catch (Exception)
                {
                   //log it
                }
                return RedirectToPage("/");
            }
            return Page();
        }
    }

Deciphering the comparison

The code between the two is nearly identical. Here are the key differences:

  • The MVC view part of the code is exactly the same except for the inclusion of “@page” in the Razor Page.
  • ManagePageModel has OnGetAsync and OnPostAsync, which replaced the two MVC controller “ManagePage” actions.
  • ManagePageModel includes my two properties that were in the separate PageClass before.

In MVC for an HTTP POST, you pass in your object to the MVC action (i.e., “ManagePage(int id, PageClass page)”). With a Razor Page, you are instead using two-way data binding. I annotated my two properties (PageDataID, Title) with [BindProperty] to get Razor Pages to work correctly with two-way data binding. My OnPostAsync method only has a single id input since the other properties are automatically bound.
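One related caveat: by default, [BindProperty] binds on POST and other non-GET verbs only; if you also want a property bound from the route or query string on GET requests, you have to opt in with SupportsGet. A sketch, reusing the same property names as above:

```csharp
public class ManagePageModel : PageModel
{
    // Bound on POST (and other non-GET verbs) by default
    [BindProperty]
    public string Title { get; set; }

    // SupportsGet = true also binds this from the route/query string on GET
    [BindProperty(SupportsGet = true)]
    public int PageDataID { get; set; }
}
```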

Will it Prefix?

Do Razor Pages work with Prefix? Yes! Our free ASP.NET Profiler, Prefix, supports ASP.NET Razor Pages. Our Retrace and Prefix products have full support for ASP.NET Core.

Summary

I really like Razor Pages and can definitely see using them in an ASP.NET Core project I am working on. I like the idea of Razor Pages being the true pages in my app and implementing all AJAX/REST API functions with MVC. I’m sure there are also other use cases that Razor Pages don’t work for. The good news is MVC is super flexible, but that is what also makes it more complex. The true beauty of Razor Pages is their simplicity.

IIS Error Logs and Other Ways to Find ASP.Net Failed Requests
https://stackify.com/beyond-iis-logs-find-failed-iis-asp-net-requests/ (Thu, 27 Jul 2023)

As exciting as it can be to write new features in your ASP.NET Core application, your users inevitably encounter failed requests. Do you know how to troubleshoot IIS or ASP.NET errors on your servers? It can be tempting to bang on your desk and proclaim your annoyance.

However, Windows and ASP.NET Core provide several different logs where failed requests are logged. This goes beyond simple IIS logs and can give you the information you need to combat failed requests.

Get to Know the 4 Different IIS Logs

If you have been dealing with ASP.NET Core applications for a while, you may be familiar with normal IIS logs. Such logs are only the beginning of your troubleshooting toolbox.

There are some other places to look if you are looking for more detailed error messages or can’t find anything in your IIS log file.

1. Standard IIS Logs

Standard IIS logs will include every single web request that flows through your IIS site.

Via IIS Manager, you can see a “Logging” feature. Click on this, and you can verify that your IIS logs are enabled and observe where they are being written to.

[Screenshot: IIS Manager logging settings]

You should find your logs in folders that are named by your W3SVC site ID numbers.

Need help finding your logs? Check out: Where are IIS Log Files Located?

By default, each logged request in your IIS log will include several key fields including the URL, querystring, and error codes via the status, substatus and win32 status.

These status codes can help identify the actual error in more detail.

#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2019-09-13 21:45:10 ::1 GET /webapp2 - 80 - ::1 Mozilla/5.0 - 500 0 0 5502
2019-09-13 21:45:10 ::1 GET /favicon.ico - 80 - ::1 Mozilla/5.0 http://localhost/webapp2 404 0 2 4

The “sc-status” and “sc-substatus” fields are the standard HTTP status code of 200 for OK, 404, 500 for errors, etc.

The “sc-win32-status” can provide more details that you won’t know unless you look up the code. They are basic Win32 error codes.

You can also see the endpoint the log message is for under “cs-uri-stem”. For example, “/webapp2.” This can instantly direct you to problem spots in your application.

Another key piece of info to look at is “time-taken.” This gives you the roundtrip time in milliseconds of the request and its response.
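Because W3C logs are space-delimited text in the order given by the #Fields: line, the interesting fields are easy to pull apart with a quick script. This sketch parses the 500-error sample line shown earlier:

```csharp
using System;

class ParseIisLogLine
{
    static void Main()
    {
        // Token order matches the #Fields: directive shown earlier
        var line = "2019-09-13 21:45:10 ::1 GET /webapp2 - 80 - ::1 Mozilla/5.0 - 500 0 0 5502";
        var f = line.Split(' ');

        Console.WriteLine($"cs-uri-stem: {f[4]}");     // /webapp2
        Console.WriteLine($"sc-status:   {f[11]}");    // 500
        Console.WriteLine($"time-taken:  {f[14]} ms"); // 5502 ms
    }
}
```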

By the way, if you are using Retrace, you can also use it to query across all of your IIS logs as part of its built-in log management functionality.

2. Can’t Find Your Request in the IIS Log? HTTPERR is Your IIS Error Log.

Every single web request should show in your IIS log. If it doesn’t, it is possible that the request never made it to IIS, or IIS wasn’t running.

It is also possible that IIS logging is disabled. If IIS is running, but you still are not seeing the log events, it may be going to HTTPERR.

Incoming requests to your server first route through HTTP.SYS before being handed to IIS. These types of errors get logged in HTTPERR.

Common errors are 400 Bad Request, timeouts, 503 Service Unavailable and similar types of issues. The built-in error messages and error codes from HTTP.SYS are usually very detailed.

Where are the HTTPERR error logs?

C:\Windows\System32\LogFiles\HTTPERR

3. Look for ASP.NET Core Exceptions in Windows Event Viewer

By default, ASP.NET will log unhandled 500-level exceptions to the Windows Application EventLog. This is handled by the ASP.NET health monitoring feature. You can control its settings via the system.web/healthMonitoring section in your web.config file.

Very few people realize that the number of errors written to the Application EventLog is rate limited. So you may not find your error!

By default, it will only log the same type of error once a minute. You can also disable writing any errors to the Application EventLog.
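For reference, that rate limiting comes from the health monitoring rules in web.config; a sketch that adjusts the one-per-minute interval might look like this (the attribute values shown are illustrative):

```xml
<system.web>
  <healthMonitoring enabled="true">
    <rules>
      <!-- minInterval controls how often the same event type is re-logged -->
      <add name="All Errors Default"
           eventName="All Errors"
           provider="EventLogProvider"
           profile="Default"
           minInterval="00:01:00" />
    </rules>
  </healthMonitoring>
</system.web>
```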

[Screenshot: IIS errors in the Windows Application EventLog]

Can’t find your exception?

You may not be able to find your exception in the EventLog. Depending on whether you are using WebForms, MVC, Core, WCF or another framework, ASP.NET may not write any errors to the EventLog at all due to compatibility issues with the health monitoring feature.

By the way, if you install Retrace on your server, it can catch every single exception that is ever thrown in your code. It knows how to instrument into IIS features.

4. Failed Request Tracing for Advanced IIS Error Logs

Failed request tracing (FRT) is probably one of the least used features in IIS. It is, however, incredibly powerful. 

It provides robust IIS logging and works as a great IIS error log. FRT is enabled in IIS Manager and can be configured via rules for all requests, slow requests, or just certain response status codes.

You can configure it via the “Actions” section for a website.

The only problem with FRT is it is incredibly detailed. Consider it the stenographer of your application. It tracks every detail and every step of the IIS pipeline. You can spend a lot of time trying to decipher a single request.

5. Make ASP.NET Core Show the Full Exception…Temporarily

If other avenues fail you and you can reproduce the problem, you could temporarily modify your application configuration to see exceptions.

Typically, server-side exceptions are disabled from being visible within your application for important security reasons. Instead, you will see a yellow screen of death (YSOD) or your own custom error page.

You can modify your application config files to make exceptions visible.

[Screenshot: ASP.NET yellow screen of death error page]

ASP.NET

You could use remote desktop to access the server and set customErrors to “RemoteOnly” in your web.config so you can see the full exception via “localhost” on the server. This would ensure that no users would see the full exceptions but you would be able to.

If you are OK with the fact that your users may now see a full exception page, you could set customErrors to “Off.”
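A sketch of those two web.config options side by side (pick one; RemoteOnly is the safer choice):

```xml
<system.web>
  <!-- Full errors on localhost only; custom page for remote users -->
  <customErrors mode="RemoteOnly" defaultRedirect="~/Error" />

  <!-- Or, if you accept users seeing full exception pages: -->
  <!-- <customErrors mode="Off" /> -->
</system.web>
```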

.NET Core

Compared to previous versions of ASP.NET, .NET Core has completely changed how error handling works. You now need to use the DeveloperExceptionPage in your middleware.  

.NET Core gives you unmatched flexibility in how you want to see and manage your errors. It also makes it easy to wire in instrumentation like Retrace.
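A minimal sketch of that wiring, in the Startup.Configure style of ASP.NET Core 2.x (later versions moved this setup into Program.cs):

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        // Show full exception details only while developing
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // Friendly error page for everyone else
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseMvc();
}
```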

6. Using a .NET Profiler to Find ASP.NET Core Exceptions

.NET profilers like Prefix (which is free!) can collect every single exception that .NET throws in your code, even if they are hidden in your code.

Prefix is a free ASP.NET Core profiler designed to run on your workstation to help you optimize your code as you write it. Prefix can also show you your SQL queries, HTTP calls, and much, much more.

[Screenshot: an ASP.NET IIS error profiled in Prefix]

Get Proactive About Tracking Application Errors!

Trying to reproduce an error in production or chasing down IIS logs/IIS error logs is not fun. Odds are, there are probably many more errors going on that you aren’t even aware of. When a customer contacts you and says your site is throwing errors, you better have an easy way to see them!

Tracking application errors is one of the most important things every development team should do. If you need help, be sure to try Retrace which can collect every single exception across all of your apps and servers.

Also, check out our detailed guide on C# Exception Handling Best Practices.

If you are using Azure App Services, also check out this article: Where to Find Azure App Service Logs.

Best Practices for Error Handling in ASP.NET MVC
https://stackify.com/aspnet-mvc-error-handling/ (Wed, 12 Jul 2023)

How easy would our lives be, as programmers, if our applications behaved correctly 100% of the time? Unfortunately, that’s not the case: things go wrong, often in serious ways.

In order to offer the user the best possible experience—and also understand what went wrong so you can fix it—your application needs to handle its errors, and that’s why error handling is an important part of any application.

Most web frameworks provide ways for their users to handle errors, and ASP.NET is no exception. In this article, we will review MVC error handling best practices.

We’ll start by covering some ASP.NET MVC fundamentals. Then, we move on to explain why handling errors is essential in most non-trivial applications and give examples of common errors you might encounter in your web app.

Finally, we move on to discuss five ways in which you can do error handling in ASP.NET MVC. Let’s get started!

ASP.NET MVC Fundamentals

First of all, let’s cover some MVC fundamentals. What is MVC, and why should you care about it?

MVC stands for Model-View-Controller. It’s a design pattern developers use to manage the user interface of their applications in such a way that different concerns (user output, connection to the database) are kept segregated so they can be easily managed.

ASP.NET MVC, in a nutshell, is a web framework that implements MVC and is part of .NET.


Why Should You Handle Errors?

Writing an application and releasing it, despite not being a walk in the park, is just the first step in a never-ending journey. The real challenge is to maintain and evolve an application once it’s deployed.

Once an application is in production, “anything” could happen. And unfortunately, bad things do happen quite a lot. When they do, your code needs to be able to deal graciously with such problems, and that’s where error handling comes in handy.

Through error handling, an application developer can ensure the users get a consistent experience even in the face of failure. Also, a great error-handling policy can be used to generate logs and other types of instrumentation so that developers can, later on, investigate in order to discover the root cause of the problem and fix it.

Example of Errors

What are some errors that could need handling in web applications? Here’s a non-exhaustive list:

  • Database errors
  • Data conversion/parsing errors (e.g., attempting to serialize a malformed JSON.)
  • Server errors (such as the 500 status code, among others)
  • Errors caused by file uploads
  • Authentication/authorization issues

5 Ways to Do MVC Error Handling

Between .NET, ASP.NET, and MVC there are several potential ways to handle application errors.

  • Web.Config customErrors
  • MVC HandleErrorAttribute
  • Controller.OnException method
  • HttpApplication Application_Error event
  • Collect exceptions via .NET profiling with Retrace

There are some pros and cons to all of these ways to handle errors. You probably need to use a combination of them to properly handle and log errors.

There are two critical things that you need to accomplish with error handling:

  1. Gracefully handle errors and show your users a friendly error page
  2. Log errors so that you are aware of them and can monitor them

Must-Have: Global Error Page With Web.Config <customErrors>

The last thing you ever want your users to see is a “yellow screen of death” type error. If you don’t know what that is, I’m referring to the standard yellow ASP.NET error screen.

For any application, I would always recommend specifying a custom error page in your Web.Config. Worst case scenario, your users will see this page if an unhandled exception occurs.

<system.web>
    <customErrors mode="On" defaultRedirect="~/ErrorHandler/Index">
        <error statusCode="404" redirect="~/ErrorHandler/NotFound"/>
    </customErrors>
</system.web>

MORE: How to Use Web.Config customErrors for ASP.NET

Use MVC HandleErrorAttribute to Customize Responses

The HandleErrorAttribute inherits from FilterAttribute and can be applied to an entire controller or individual controller action methods.

It can only handle 500-level errors that happen within an MVC action method. It does not catch exceptions that happen outside of the MVC pipeline. Exceptions may occur in other HTTP modules, MVC routing, etc.

When to Use HandleErrorAttribute

Since it does not provide a way to collect all exceptions that could ever happen, it is a bad solution for a global unhandled error handler.

It works perfectly for tailoring specific error pages for a particular MVC controller or action method, while specifying an error page in your Web.config <customErrors> works well as a universal error page. The HandleErrorAttribute gives you fine-grained control if you need it.

Note: HandleErrorAttribute requires customErrors to be enabled in your Web.Config.

For example, if you wanted to show a particular MVC view when a SqlException happens, you can do it with the code below:

[HandleError(ExceptionType = typeof(SqlException), View = "SqlExceptionView")]
public string GetClientInfo(string username)
{
	return "true";
}

The problem with HandleErrorAttribute is it doesn’t provide a way to log the exception!

Use MVC Controller OnException to Customize Responses

OnException is similar to HandleErrorAttribute but provides more flexibility. It works with all HTTP status codes, and not just 500-level responses. It also gives you the ability to log the errors!

public class UserMvcController : Controller
{
   protected override void OnException(ExceptionContext filterContext)
   {
      filterContext.ExceptionHandled = true;

	  //Log the error!!
      _Logger.Error(filterContext.Exception);

      //Redirect or return a view, but not both.
      filterContext.Result = RedirectToAction("Index", "ErrorHandler");
      // OR 
      filterContext.Result = new ViewResult
      {
         ViewName = "~/Views/ErrorHandler/Index.cshtml"
      };
   }
}

 

When to Use OnException for MVC Error Handling

If you want a way to present your users with custom MVC views or custom log exceptions, OnException is a good solution for you. It provides more flexibility than HandleErrorAttribute and does not require customErrors to be enabled in your Web.Config file.

Note: OnException gets called for all HTTP status codes. So be careful how you handle simple issues like a 404 caused by a bad URL.
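For instance, you might let 404-style HttpExceptions fall through to your normal <customErrors> page instead of treating them like real failures. A sketch, building on the OnException example above:

```csharp
protected override void OnException(ExceptionContext filterContext)
{
    // Let "not found" errors fall through to normal customErrors handling
    var httpException = filterContext.Exception as HttpException;
    if (httpException != null && httpException.GetHttpCode() == 404)
    {
        return;
    }

    filterContext.ExceptionHandled = true;
    _Logger.Error(filterContext.Exception);
    filterContext.Result = RedirectToAction("Index", "ErrorHandler");
}
```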

Use HttpApplication Application_Error as Global Exception Handler

So far we have covered three different ways to customize the response that your users see if an exception occurs. Only within OnException can you potentially log exceptions.

To log all unhandled exceptions that may occur within your application, you should implement basic error logging code, as shown below.

public class MvcApplication : System.Web.HttpApplication
{
   protected void Application_Start()
   {
      AreaRegistration.RegisterAllAreas();
      FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
      RouteConfig.RegisterRoutes(RouteTable.Routes);
      BundleConfig.RegisterBundles(BundleTable.Bundles);
   }

   protected void Application_Error()
   {
      var ex = Server.GetLastError();
      //log the error!
      _Logger.Error(ex);
   }
}

When to Use Application_Error

Always! HttpApplication’s Error event provides the best mechanism to collect and log all unhandled application errors.

Collect All .NET Exceptions with Stackify Retrace

Stackify’s APM solution, Retrace, taps into the .NET profiling APIs to track the performance of your app down to the code level. As part of that, it can automatically collect all unhandled exceptions or be configured to receive all exceptions ever thrown, even if they are handled and discarded. Retrace doesn’t require any code changes either!

Retrace allows you to view and monitor all of your application errors. Check out our error monitoring features to learn more.

[Screenshot from Retrace: MVC error monitoring]

Summary on MVC Error Handling

There are several ways to do MVC error handling. You should always specify a default error page via your web.config <customErrors> and log unhandled exceptions via your HttpApplication Application_Error event.

You can use HandleErrorAttribute or OnException to provide fine-grained control of how you display error type messages to your users.

If you want to track all of your application exceptions, be sure to check our Retrace and our error monitoring features. You can also view all application exceptions on your workstation for free with our free profiler, Prefix.

How to Deploy ASP.NET Core to IIS & How ASP.NET Core Hosting Works
https://stackify.com/how-to-deploy-asp-net-core-to-iis/ (Thu, 27 Apr 2023)

Previously, we discussed the differences between Kestrel vs IIS. In this article, we will review how to deploy an ASP.NET Core application to IIS.

Deploying an ASP.NET Core app to IIS isn’t complicated. However, ASP.NET Core hosting is different compared to hosting with ASP.NET, because ASP.NET Core uses different configurations. You may read more about ASP.NET Core in this entry.

On the other hand, IIS is a web server that runs within the Windows OS. The purpose of IIS, in this context, is to host applications built on ASP.NET Core. There’s more information on IIS and ASP.NET in our previous blog, “What is IIS?”

In this entry, we’ll explore how to make both ASP.NET Core and IIS work together. Without further ado, let’s explore the steps on how we can deploy ASP.NET Core to IIS.

How to Configure Your ASP.NET Core App For IIS

The first thing you will notice when creating a new ASP.NET Core project is that it’s a console application. Your project now contains a Program.cs file, just like a console app, plus the following code:

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

What is the WebHostBuilder?

All ASP.NET Core applications require a WebHost object that essentially serves as the application and web server. In this case, a WebHostBuilder is used to configure and create the WebHost. You will normally see UseKestrel() and UseIISIntegration() in the WebHostBuilder setup code.

What do these do?

  • UseKestrel() – Registers the IServer interface for Kestrel as the server that will be used to host your application. In the future, there could be other options, including WebListener, which is Windows-only.
  • UseIISIntegration() – Tells ASP.NET that IIS will be working as a reverse proxy in front of Kestrel and specifies some settings around which port Kestrel should listen on, forward headers and other details.

If you are planning to deploy your application to IIS, UseIISIntegration() is required.
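For reference, ASP.NET Core 2.0 templates collapse this setup into WebHost.CreateDefaultBuilder, which calls UseKestrel() and UseIISIntegration() for you. Here is a sketch of the equivalent Program.cs, assuming the default Startup class from the template:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // CreateDefaultBuilder wires up Kestrel, IIS integration,
        // the content root, configuration and logging in one call.
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}
```

Either form works for IIS deployment; the explicit WebHostBuilder version simply makes the individual pieces visible.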

What is AspNetCoreModule?

You may have noticed that ASP.NET Core projects create a web.config file. This is only used when deploying your application to IIS and registers the AspNetCoreModule as an HTTP handler.

Default web.config for ASP.NET Core:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
    </handlers>
    <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false"/>
  </system.webServer>
</configuration>

AspNetCoreModule handles all incoming traffic to IIS, then acts as the reverse proxy that knows how to hand traffic off to your ASP.NET Core application. You can view the source code of it on GitHub. AspNetCoreModule also ensures that your web application is running and is responsible for starting your process up.

Install .NET Core Windows Server Hosting Bundle

Before you deploy your application, you need to install the .NET Core Windows Server Hosting bundle for IIS, which includes the .NET Core runtime, libraries and the ASP.NET Core module for IIS.

After installation, you may need to do a “net stop was /y” and “net start w3svc” to ensure all the changes are picked up for IIS.

Download: .NET Core Windows Server Hosting <- Make sure you pick “Windows Server Hosting”

Steps to Deploy ASP.NET Core to IIS

Before you deploy, you need to make sure that WebHostBuilder is configured properly for Kestrel and IIS. Your web.config file should also exist and look similar to our example above.

Step 1: Publish to a File Folder
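The original post showed this step through Visual Studio’s publish dialog; if you prefer the command line, the same folder output can be produced with the .NET CLI (the output path here is just an example):

```shell
# Publish a Release build to a local folder, ready to copy to IIS
dotnet publish -c Release -o ./publish
```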

Step 2: Copy Files to Preferred IIS Location

Now you need to copy your publish output to where you want the files to live. If you are deploying to a remote server, you may want to zip up the files and move to the server. If you are deploying to a local dev box, you can copy them locally.

For our example, I am copying the files to C:\inetpub\wwwroot\AspNetCore46

You will notice that with ASP.NET Core, there is no bin folder and it potentially copies over a ton of different .NET DLLs. Your application may also be an EXE file if you are targeting the full .NET Framework. This little sample project had over 100 DLLs in the output.

Step 3: Create Application in IIS

While creating your application in IIS is listed as a single “Step,” you will take multiple actions. First, create a new IIS Application Pool under the .NET CLR version of “No Managed Code”. Since IIS only works as a reverse proxy, it isn’t actually executing any .NET code.

Second, you can create your application under an existing or a new IIS Site. Either way, you will want to pick your new IIS Application Pool and point it to the folder you copied your ASP.NET publish output files to.
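If you prefer scripting these IIS actions over clicking through the IIS Manager UI, the same steps can be sketched with appcmd (the pool, site, and path names here are examples):

```shell
REM Create an app pool with no managed runtime (IIS is only a reverse proxy here)
%windir%\system32\inetsrv\appcmd add apppool /name:AspNetCore46Pool /managedRuntimeVersion:""

REM Create the application under the default site, pointing at the publish folder
%windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" /path:/AspNetCore46 /physicalPath:C:\inetpub\wwwroot\AspNetCore46

REM Assign the application to the new pool
%windir%\system32\inetsrv\appcmd set app "Default Web Site/AspNetCore46" /applicationPool:AspNetCore46Pool
```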

Step 4: Load Your App!

At this point, your application should load just fine. If it does not, check its output logging. Your web.config file defines how IIS starts up your ASP.NET Core process. Enable output logging by setting stdoutLogEnabled="true". You may also want to change the log output location configured in stdoutLogFile. Check out the example web.config above to see where they are set.
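For example, with logging switched on, the aspNetCore element from the earlier web.config would look like this (the log path is just an example):

```xml
<aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%"
            stdoutLogEnabled="true"
            stdoutLogFile=".\logs\stdout"
            forwardWindowsAuthToken="false" />
```

Note that IIS will not create the .\logs folder for you; it must exist before stdout logging will produce a file.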

Advantages of Using IIS with ASP.NET Core Hosting

Microsoft recommends using IIS with any public facing site for ASP.NET Core hosting. IIS provides additional levels of configurability, management, security and logging, among many other things.

Check out our blog post about Kestrel vs IIS to see a whole matrix of feature differences. The post goes into more depth about what Kestrel is and why you need both Kestrel and IIS.

One of the big advantages of using IIS is process management. IIS will automatically start your app and restart it if a crash occurs. If you were running your ASP.NET Core app as a Windows Service or console app, you would not have that safety net to start and monitor the process for you.

Speaking of safety nets, your application performance should be a top priority, which is why you need an application performance monitoring tool that helps you deploy robust applications.

Try Retrace for APM! Retrace is an application performance monitoring tool compatible with multiple development platforms. You can easily track deployments and improvements through the insight-based dashboards. The tool provides key metrics so you can easily see which areas need attention.

Get your Free 14-Day Trial today!

]]>
ASP.NET Core Testing Tools and Strategies https://stackify.com/asp-net-core-testing-tools/ Fri, 17 Mar 2023 08:25:00 +0000 https://stackify.com/?p=20933 Don’t be that developer who is woken up in the middle of the night because of some problem with the web application. After all, you need your beauty sleep – some of us more than others. The best way to avoid problems with your application is to test thoroughly. Of course, there is a cost to testing, and it is easy to go too deep and test every piece of code far more thoroughly than its risk warrants. Finding the right balance of just what to test and which ASP.NET Core testing tools to use is something that comes with experience.

Over-testing simple, low-impact code is not as bad as failing to test complex, high-impact code, but it will still hurt the success of your project. Complicating the problem is that we have a number of ways to test code. You may be familiar with the idea of the testing triangle or, for those who think a bit higher dimensionally, the testing pyramid.


The Testing Triangle

The purpose of the triangle is to demonstrate the number of tests you should have of each type. The bulk of your tests should be unit tests: these are tests which test a single public method on a class. As you move up the testing triangle, the number of tests at each level decreases; conversely, the scope of each test increases. The cost of an individual test also increases toward the top of the triangle.

In this article, we’ll look at testing tools which can help us out on each level of the pyramid. Some of these tools will be specific to .NET, but others will be portable to almost any development platform, and we’ll call out which is which as we progress. You will notice that as we increase the level of abstraction by moving up the triangle, the testing technologies become broader and applicable to more web development technologies.

The testing triangle is a good guide for your ASP.NET Core testing strategies.

Unit Tests

Unit tests are the smallest sort of tests that you’ll write. Ideally, they exercise a single method and should be trivial to write. In fact, many people suggest that if you find unit tests difficult to write then it is an indication that the code being tested is doing too much and should be split up. I use unit tests as an indicator that methods should be split up into multiple methods or even split into separate classes. These are the sorts of tests you should create during test-driven development.

Unit testing tools are not new in .NET or in any modern language. The migration to .NET Core has brought with it most of the first-class unit testing tools. Unit testing tools are divided into a number of categories: test frameworks, test runners and assertion libraries. The frameworks are a set of attributes that allow you to decorate your code so that tests can be found and run.

Typically the testing frameworks also include a runner which will run the tests without starting up your entire application. Finally, there are assertion libraries that are used to check the conditions inside a test – most people stick with the assertions provided by their testing frameworks. All of these tools can run against any .NET project, not just ASP.NET.


Test frameworks

My personal favourite testing framework is xUnit, which is now part of the open source .NET Foundation. This ensures that it will have a long life and is well recognized in the community. xUnit is a fairly opinionated framework that spends a lot of effort on being fast. Its terminology is slightly different from what you might have seen in the past, substituting terms like Fact and Theory in place of Test and ParameterizedTest. During the incubation of .NET Core, xUnit was the only unit testing framework that kept up with the constant betas.
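To make the terminology concrete, here is a minimal sketch of both styles (the Calculator class and the test names are invented for the example):

```csharp
using Xunit;

public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    // A Fact is a test with a single fixed input.
    [Fact]
    public void Add_ReturnsSum()
    {
        Assert.Equal(4, Calculator.Add(2, 2));
    }

    // A Theory is a parameterized test, run once per InlineData row.
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(-1, 1, 0)]
    public void Add_ReturnsSum_ForManyInputs(int a, int b, int expected)
    {
        Assert.Equal(expected, Calculator.Add(a, b));
    }
}
```

A [Fact] always runs once; a [Theory] runs once per [InlineData] row, which keeps parameterized cases compact.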

Recently Microsoft open sourced their MSTest framework as v2. MSTest has been around for years and has been sorely in need of updating for much of that time. However, the new version is quite good and remarkably fast. Even two years ago I would not have even considered MSTest over xUnit, but it is now quite competitive.

Erik has an excellent post on this very blog doing a deeper comparison of several .NET unit testing frameworks.

Test runners

Test runners execute the tests in your suite and present the results in an understandable format. If you’re using Visual Studio, then there is no need to look any further than the built-in unit test runner. It works very well for ASP.NET Core testing.

This part of the tooling has received a lot of love from the team over the last few releases. The speed of running tests and of test discovery is remarkably better. In addition to the standard runner, there are now live unit tests. This tooling runs your tests continuously as you write your code to tighten up the feedback loop between writing code and getting feedback.

Command-line tooling for running tests is also excellent on .NET Core. Tests can be run as easily as running dotnet test. The official documentation has a thorough entry on running command-line tests, and this approach is suitable for running on a build server. xUnit will actually detect that it is running in a continuous integration (CI) environment and alter its output format to one that the CI server can parse.
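For example, on a build server you might run something like this from the project directory (the filter value is just an illustration):

```shell
# Run every test in the project; --filter narrows the run to matching tests
dotnet test --filter "FullyQualifiedName~CalculatorTests"
```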


Assertion libraries

Almost everybody makes use of the built-in assertion libraries that come with the testing frameworks. However if, like me, you’re particular about the way your assertions are constructed there are some really nice alternatives. Shouldly and Fluent Assertions provide a more BDD style of assertion. What does that mean? Instead of writing

Assert.That(location.X, Is.EqualTo(1337));

we can write the much more legible

location.X.ShouldBe(1337);

The error messages produced by these libraries are also much easier to read, saving you time tracking down testing failures.

Unit tests for other languages

Because these tools run on the .NET framework, they can be used to test F# code as well, just as F# code can be used to test C# code. There are some very nice testing tools in the F# space which, if you’re feeling adventurous, are worth looking at in more detail. F# for Fun and Profit has a wonderful series of entries on F# and unit testing.

Integration Tests

Integration tests are larger than unit tests and typically cross the boundaries between modules. What is a module, you might ask? That’s a great question.

A module could be as small as a single class, so an integration test could be as small as just exercising the interactions between two classes. Conversely, a module may cross process boundaries, and be an integration between a piece of code and a database server.

While the catchword for unit tests is speed (so you can run them rapidly on your development box without interrupting your flow), the catchword for integration tests is parallelization. Each test is inherently going to take a long time, so to compensate we find something for all those dozens of cores you can get on your machine these days to do.

Request-based testing tools

More often than not, the integration tests on a web project will involve submitting requests to the web server and seeing what comes back. Previously, integration tests of this sort have been quite tricky to write. For ASP.NET Core testing, the situation has been improved with the introduction of the TestServer. This server allows you to submit requests to an in-memory HTTP server. This is much faster and provides a more realistic representation of what a fully-fledged server would return. You can read a much more in-depth article on integration testing in ASP.NET Core in the official docs.
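A TestServer-based integration test usually looks something like this sketch (Startup here stands in for your application’s real startup class):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Xunit;

public class HomePageTests
{
    [Fact]
    public async Task Root_ReturnsSuccess()
    {
        // Host the app in memory; no sockets or IIS involved.
        var server = new TestServer(new WebHostBuilder().UseStartup<Startup>());
        HttpClient client = server.CreateClient();

        HttpResponseMessage response = await client.GetAsync("/");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```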

One handy tool for integration tests which examine the HTML returned from an endpoint is AngleSharp. AngleSharp provides an API for parsing and exploring the DOM which makes checking attributes of the returned HTML a snap.


Database integration testing

In the past, I’ve written some pretty impressive pieces of code that stand up and tear down databases against which integration tests can be run. Although it was a pretty snazzy piece of code at the time, the speed at which the tests could run was quite limited. Fortunately, we’ve moved a little bit out of the dark ages with Entity Framework Core. Just like the TestServer, EF Core provides an in-memory implementation of a database. This database can be used to test LINQ based queries with great rapidity. The one shortcoming of this approach is that queries written in SQL for performance or clarity reasons cannot be tested with the in-memory implementation. In those cases, I recommend reading Dave Paquette’s article on integration testing with EF Core and full SQL Server.
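Here is a rough sketch of the in-memory approach (the Order entity and StoreContext are invented for the example):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class Order
{
    public int Id { get; set; }
    public bool Shipped { get; set; }
}

public class StoreContext : DbContext
{
    public StoreContext(DbContextOptions<StoreContext> options) : base(options) { }
    public DbSet<Order> Orders { get; set; }
}

public class OrderQueryTests
{
    [Fact]
    public void UnshippedOrders_ExcludesShippedOnes()
    {
        // Each named in-memory database is an isolated store.
        var options = new DbContextOptionsBuilder<StoreContext>()
            .UseInMemoryDatabase("UnshippedOrdersTest")
            .Options;

        using (var context = new StoreContext(options))
        {
            context.Orders.Add(new Order { Shipped = false });
            context.Orders.Add(new Order { Shipped = true });
            context.SaveChanges();
        }

        using (var context = new StoreContext(options))
        {
            // The LINQ query runs against the in-memory provider.
            Assert.Single(context.Orders.Where(o => !o.Shipped));
        }
    }
}
```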

Acceptance Tests

Acceptance tests cross-module boundaries like integration tests, but they are written from a user’s point of view. These tests are usually written in a way that describes the behaviour of the system rather than the function. By this, I mean that your tests simulate a user’s experience with the application.

In .NET, there are two major flavours of tools that facilitate these tests: SpecFlow and NSpec. The primary difference between these tools is their approach to describing the tests. SpecFlow uses the Gherkin language from the Cucumber tool to describe tests. NSpec uses its own dialect to describe similar tests. Either of these tools will do a good job building tests, but I find SpecFlow to be a more usable tool; your mileage may vary.
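As a rough sketch of the SpecFlow style, a Gherkin scenario binds to C# step definitions like these (the scenario, cart, and class names are invented for the example):

```csharp
using System.Collections.Generic;
using TechTalk.SpecFlow;
using Xunit;

// The matching feature file (Gherkin) would read:
//   Scenario: Adding an item to the cart
//     Given an empty shopping cart
//     When the user adds a widget
//     Then the cart contains 1 item

[Binding]
public class ShoppingCartSteps
{
    private readonly List<string> _cart = new List<string>();

    [Given("an empty shopping cart")]
    public void GivenAnEmptyShoppingCart() => _cart.Clear();

    [When("the user adds a widget")]
    public void WhenTheUserAddsAWidget() => _cart.Add("widget");

    [Then("the cart contains 1 item")]
    public void ThenTheCartContainsOneItem() => Assert.Single(_cart);
}
```

SpecFlow matches each Gherkin line to a step method by its attribute text, so the plain-language scenario stays readable by non-developers while the behavior is verified in code.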

UI Tests

The highest level of automated tests are UI tests. These tests actually drive the web browser in an automated fashion, performing mouse clicks, typing text, and clicking links. In the .NET space, I actually like Canopy more than any other UI testing tool. In my experience, UI tests tend to be incredibly fragile and inconsistent. I’d like to believe that it is just my own incompetence that leads them to be such, but in discussions with other people I’ve yet to uncover any counterexamples. Much of the issue is that web browsers are simply not designed for automation and are missing hooks to program against. Typically, testing ASP.NET web apps becomes an exercise in adding sleeps in the hopes that the UI has updated itself. Recently I’ve been hearing good things about a new tool called Cypress, but I’ve yet to use it. If you have, I’d love to hear about it in the comments.

UI test tools are portable to any web application and not limited to ASP.NET.

Manual Tests

Manual testing is often seen as terrible drudgery. A lot of companies try to avoid manual testing in favor of cheaper automated tests. However, automated tests cannot tell you what the user experience of your website is like. It is important to get a real variety of typical users to look at your site. The problems that manual testers are able to find are different from the ones you will find using unit, integration, or any other automated test.

A screen showing a manual test run in VSTS

Bonus: Performance Testing ASP.NET Core

ASP.NET Core is really, really fast. The benchmarks for it place it in the top two or three web frameworks. This is a very impressive accomplishment considering that the team started their performance optimizations near the bottom of the list. However, as performant as the framework is, its performance can easily be ruined by running poor code on top of it. There are two classes of tests which can be run in this space: low-level and load tests.

Low-level tests

Low-level performance tests answer questions like “what is the quickest way to deserialize this data stream?”, or “will heavy load in this section of the code cause too many memory allocations?”. These questions are rarely asked and for most applications that simply retrieve data from the database and display it, they don’t matter a great deal. But as soon as your code starts doing anything computationally expensive or memory intensive, they become important. The most popular tool in this space, and the one which you’ll frequently see used by the .NET Framework team itself, is BenchmarkDotNet.


This ASP.NET Core testing tool provides attributes and a test runner, not unlike xUnit, but focused on benchmarking. The output looks something like this:

Benchmarkdotnet output example

As you can see in this example, the code is tested on a number of platforms with varying sample sizes. These tests can be used to find issues and then kept to ensure that there aren’t any regressions in performance as your application matures.
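A minimal benchmark looks something like this sketch (the class and method contents are invented for the example):

```csharp
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class SumBenchmarks
{
    private readonly int[] _data = Enumerable.Range(0, 1000).ToArray();

    [Benchmark(Baseline = true)]
    public int SumWithLoop()
    {
        int total = 0;
        for (int i = 0; i < _data.Length; i++) total += _data[i];
        return total;
    }

    [Benchmark]
    public int SumWithLinq() => _data.Sum();
}

public class Program
{
    // BenchmarkRunner discovers the [Benchmark] methods and prints a results table.
    public static void Main() => BenchmarkRunner.Run<SumBenchmarks>();
}
```

Running the console app produces a results table with mean, error, and standard deviation per benchmarked method.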

Load testing

The best thing that could happen to your web application is that it becomes very popular and people flock to it; it can also be the worst thing. Knowing how your application performs under load will help keep your nights and weekends uninterrupted. The best tool in this space, and one that I’ve seen recommended even by those who won’t otherwise touch the Microsoft stack, is Visual Studio Team Services’ load testing tool. Using the power of the millions of machines in Azure, VSTS is able to spin up very high-scale load tests that can stress your website far beyond what any desktop tool could. It is priced very reasonably and I highly recommend it.

VSTS load testing uses Azure for testing; consequently, it scales to the moon

Request Tracing

Both load testing and low-level tests are great tools for gauging the performance of your application before launching it. However, nobody can predict what actual user load on your website might look like. It is really useful to have a tool like Retrace which is able to drill into the performance of your live site highlighting slow queries, code bottlenecks and hidden errors.


During development, Prefix, the sister tool to Retrace, is invaluable at showing a high-level overview of application performance. Load testing will only tell you that your site is slow, while Prefix will give you the why. The price of Prefix (free!) is certainly attractive too.

Double Bonus: Accessibility Testing

Accessibility testing unlocks your application to more people. Your users may be blind, have limited mobility, or be color blind, and your site should be usable by all of them. There are as many tools in this space as there are different impairments. This article is too short to do a full analysis, but others have done so.

A Zillion Choices

Decision fatigue is a real problem in the modern world and the variety of testing tools available to you will not make the problem any better. I hope that this article will at least point you in a direction that resembles success. If you’re testing ASP.NET Core at all, then that is a huge win, since any tests are better than none. Go forth and testify!

]]>
Advanced ASP.NET Trace Viewer – WebForms, MVC, Web API, WCF https://stackify.com/asp-net-trace-viewer/ Fri, 09 Aug 2019 15:58:53 +0000 https://stackify.com/?p=12034 Software is a complex thing. As soon as you deploy an application to production—especially when you don’t have any control over the environment it’s running on—anything could happen.

You’ve created this “monster” and set it free. It’s now beyond your control. How do you tame this beast before it creates havoc?

The first step is to trace its steps (no pun intended).

In order to do that, you must apply techniques that will amplify your vision, so to speak. Application logging, tracing, and profiling are the primary ways that developers can do that.

In this article, we’ll review ASP.NET tracing and how to view your tracing statements with Prefix.

Intro to ASP.NET Tracing

Defining Software Tracing

According to Wikipedia:

Tracing involves a specialized use of logging to record information about a program’s execution. This information is typically used by programmers for debugging purposes, and additionally, depending on the type and detail of information contained in a trace log, by experienced system administrators or technical-support personnel and by software monitoring tools to diagnose common problems with software. 

In short, tracing is the process of recording what an application did. Such information can be really useful afterward, especially for debugging and fixing issues.

With detailed tracing information, it should be possible to “go back in time”, reenacting the sequence of actions performed by the application.

You might be thinking that this sounds suspiciously like logging. You wouldn’t be the first to make that connection.

To be fair, there are many similarities between the two techniques. But they’re still different, and the next section will explore those differences.

Tracing vs Logging

How do tracing and logging differ?

Logging—particularly application logging—is generally used to record higher-level information. In other words, logging records events that are relevant to the business logic, or the domain of the application.

A log shouldn’t have too much noise. That would make it hard to read, parse, and extract actionable information from log entries.

On the other hand, traces can—and should—be noisier than logs. While logs provide a high-level overview of an event, tracing offers a continuous view of an app, following the progression of data in the program.

With tracing, there is much more information involved, and that’s by design. 

However, the distinction between tracing and logging isn’t always that clear. Much of that is due to the fact that the approaches share tools, vocabulary, and techniques, similar to what happens with the whole “unit tests vs integration tests” debate.

For instance, by applying logging levels, it’s possible to reduce or increase the granularity of a given logger. Determining at which point the logging approach becomes “tracing” or vice-versa might still be open to debate. 

With the more general definitions out of the way, it’s time to get to specifics. Let’s turn our focus to ASP.NET tracing.

Meet ASP.NET Tracing

Tracing is built into the .NET framework and has been available for years. Microsoft describes ASP.NET tracing as a way to view diagnostic information about a single request.

It lets you see the page’s execution path, web request details, and much more.

Configuration

To enable ASP.NET tracing, you need to modify your web.config as shown below.

<configuration>
  <system.web>
    <trace enabled="true" requestLimit="40" localOnly="true" />
  </system.web>
</configuration>

If you are using MVC or Web API you also need to configure the WebPageTraceListener.

<system.diagnostics>
  <trace>
    <listeners>
      <add name="WebPageTraceListener"
            type="System.Web.WebPageTraceListener, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
    </listeners>
  </trace>
</system.diagnostics>

How to Write Traces

By default, ASP.NET collects a lot of details like cookies, headers, response status code and more. These basics can be very helpful, but being able to see your own tracing statements is even more valuable.

You can log your own statements at multiple tracing levels, including warning, information, error, and plain trace.

You can easily write your own tracing statements as shown in this example.

[System.Web.Http.HttpGet]
[System.Web.Http.ActionName("KitchenAsync")]
public async Task<HttpResponseMessage> KitchenAsync()
{
	Trace.WriteLine("Starting KitchenAsync - Call redis");
	await SERedisAsync();
	Trace.WriteLine("Call DB");
	await DBTestAsync();
	Trace.WriteLine("Call Webclient");
	await WebClientAsync();
	Trace.WriteLine("Finished KitchenAsync");
	return Request.CreateResponse(HttpStatusCode.OK, Guid.NewGuid().ToString());
}

Viewing ASP.NET Traces

Built into ASP.NET is a trace viewer. You can access it via your web browser by going to /trace.axd within your application.

This will load a list of the most recent web requests.

View ASP.NET Traces via Trace

By clicking “View Details” you can view what is captured for a specific web request.

Here is an example of the one from the C# code above. You can see the Trace.WriteLine messages that were logged.

ASP.NET Trace Detail

ASP.NET Trace Viewer With Prefix

The built-in viewer for ASP.NET tracing is pretty cool. However, if you want the ultimate experience, you want Prefix.

What is Prefix?

Prefix is a free developer tool from Stackify. It is installed on your workstation and runs as a lightweight ASP.NET profiler.

It tracks key methods being called by your code and provides an excellent user interface to view .NET traces, logs, SQL queries, exceptions, and much more.

Prefix automatically tracks hundreds of key methods across dozens of common frameworks and dependencies, such as SQL Server, MongoDB, and Redis. To view a complete list of what Prefix tracks, please visit our docs.

View ASP.NET Traces

Prefix collects a lot of the same details as the standard ASP.NET trace view. It also includes many more.

Not only does it automatically pick up the Trace.WriteLine statements, it also supports Debug.WriteLine and logging statements via popular logging frameworks like log4net, NLog, etc.

You can also see how the code interacts with Redis, runs two SQL queries, and makes an external HTTP web service call.

ASP.NET Trace View With Prefix

ASP.NET Tracing: It’s Essential, and it’s Easier With Prefix

Understanding what your code is doing is essential to validate that your code works. Tools like Prefix help answer that critical question of “what did my code just do?”.

Prefix gives you instant visibility to what your code is doing, way beyond just basic ASP.NET tracing.

Prefix is an amazing tool that is free to download! It shows off the power of lightweight ASP.NET profiling. You can get this same type of functionality with Stackify Retrace for your servers.

]]>
Top 10 .NET Debugging Tips https://stackify.com/debugging-tips-net/ Thu, 11 Oct 2018 13:41:00 +0000 https://stackify.com/?p=22571 The best-laid plans of mice and men still go off the rails sometimes. Even when you’ve been rigorous and put unit tests in place, there are times when you’ll want to jump in and debug an application or a unit test. In this article, we’ll take a look at 10 tips for .NET debugging.

1. Setting Breakpoints

A breakpoint is one of the fundamental units of debugging. It is a hint to the debugging environment that it should stop the execution of the application at a certain point. If you’re using the full Visual Studio IDE, then adding a breakpoint is simple. Click in the gutter next to the line of code on which you want to halt execution.


Breakpoints can be added to almost any line of code that is executed. This means that you can’t breakpoint on the [Fact] attribute in this code or on the actual function name. In the former case, you can drop into the definition of the attribute, if you need to break on it. In the latter, breakpointing on the first line of the function will stop execution before the function has been executed. Breakpoints can be added before you execute the code or while you’re debugging.

2. Breakpoints with conditions

Sometimes a piece of code may be executed a bunch of times before you encounter the conditions you’re looking for. For instance, you might be working with a collection and the 93rd element is the one you need to debug. You could sit and hit F5 92 times to get to the record in which you’re interested, or you could use a conditional breakpoint. Conditional breakpoints will only cause the process to halt when a condition is met. The conditions can be very simple like counting the number of times a line has been hit. They can also be more complex like checking if some value is true.
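As a sketch of when a condition helps, suppose you are iterating a collection (the loop body here is invented); a conditional breakpoint with the condition i == 92 halts only on the 93rd element:

```csharp
using System;
using System.Linq;

class ConditionalBreakpointDemo
{
    static void Main()
    {
        var records = Enumerable.Range(0, 100).ToList();

        for (int i = 0; i < records.Count; i++)
        {
            // Set a breakpoint here with the condition "i == 92" to stop
            // only when the 93rd element is being processed.
            Console.WriteLine(records[i]);
        }
    }
}
```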


One underused feature in the conditions dialog is setting a breakpoint when a value changes. This can be really useful if you’re trying to track down what is changing the value of some variable. Additionally there is a checkbox for performing an action when the breakpoint is hit. You can quickly add temporary logging to a running application in this way.

3. Viewing return values

In most cases, you can use the mouse to hover over a variable when debugging to get an idea of what value is held in the variable. One place where you can’t do this is for getting the return value from a function. You can’t simply hover over the return statement and get a preview of what is going to be returned from the function. To see such a value, you can look in two places: the Watch and Locals windows.

In the locals window, you can see the return value by looking for the name of the function (fully qualified, of course) followed by the word “returned”. I put a breakpoint on the closing brace from the function to see the value which would be returned.

locals

In the watch window, you can refer to the current return value by examining a special variable called $ReturnValue.

watch 1

4. Editing values

Being able to see a value in the watch panel is one thing, but unlike my kids in any store that sells china, you’re allowed to poke at things. If you need to simulate some return value or check what would happen with a specific, difficult-to-reproduce set of variables, then it can be done by poking into the values shown in the locals window. This technique is particularly useful if you have a boundary condition you need to test out, but aren’t sure how to set up the inputs to get the outputs you need. In the video below, you can see a preview of the sum of first and last as provided by our next debugging tip: OzCode.


5. OzCode

Visual Studio is the Cadillac of IDEs when it comes to debugging, as compared with most any other IDE. Adding OzCode straps a set of rocket boosters to it. It provides a rich set of tools that make debugging even easier. I’m particularly fond of the changes it makes to the display of code during a debugging session. You can see, in the video above, the values of variables are shown inline without having to hover over them. In addition, you can search within a complex object or collections of objects for specific values.

image

The tool is not free, but it is a worthwhile addition to your toolbelt. If you’re working in a code base that is not easily tested with unit tests, the purchase is doubly worthwhile.

6. Add a unit test

Sometimes the best form of debugging is not debugging. The thing with debugging is that it tends to be quite a slow process. You need to set up a test case, set some breakpoints, and run through the application to get to where you need to be to exercise the behavior you wish to debug. I’ve found this to be a very frustrating experience in the past. Graphical applications frequently require spending a bunch of time clicking on buttons to get to the location with the problem.

Instead of struggling with this, it is likely better that you set up a unit test to exercise the function in question with exactly the values you need. The techniques presented thus far in this article may be useful in getting you to a state where you know which values are causing a problem. These can then be extracted into a unit test to tighten up the debugging cycle. You can, of course, debug into the unit tests to really zero in on the problem.

There are countless great unit testing tools, some of which can even generate some tests for you. Leveraging these tools and building a suite of unit tests may reduce the number of times that you have to drop to debugging in the future. The end result is that you’ll feel more comfortable about changing code in the future, knowing you have a safety net of tests to fall back on.
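As a framework-agnostic sketch of the idea, here is a boundary condition pinned down by a test. The PriceCalculator class and its discount rule are made up for illustration, and the hand-rolled Check helper stands in for the assertions a real test framework like xUnit or NUnit would provide:

```csharp
using System;

// Hypothetical production code with a boundary condition worth pinning down.
static class PriceCalculator
{
    public static decimal ApplyDiscount(decimal price, int quantity)
        => quantity >= 10 ? price * 0.9m : price;
}

static class PriceCalculatorTests
{
    // In a real project these would be [Fact] (xUnit) or [Test] (NUnit)
    // methods; plain assertions keep the sketch self-contained.
    public static void Run()
    {
        Check(PriceCalculator.ApplyDiscount(100m, 10) == 90m, "discount applies exactly at the boundary");
        Check(PriceCalculator.ApplyDiscount(100m, 9) == 100m, "no discount just below the boundary");
        Console.WriteLine("All tests passed");
    }

    static void Check(bool condition, string name)
    {
        if (!condition) throw new Exception("Failed: " + name);
    }
}

class Program
{
    static void Main() => PriceCalculatorTests.Run();
}
```

Once the exact failing inputs are captured like this, rerunning the scenario takes milliseconds instead of a click-through session, and the test stays behind as a regression guard.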

Bonus Tip: Check in with Prefix

You might think of Prefix as just a tool for getting some insight into what your code is doing, but it's more than that. You can use Prefix to find everything from N+1 errors to hidden exceptions, and even to get code suggestions. Frequently, looking at the logs as surfaced by Prefix will be enough to point you in the right direction to fix your bug. Prefix can run all the time, even during initial development, so you can catch bugs while they're still on your computer and you don't have to debug problems in production.

Read Why APM Usage is Shifting Left to Development and QA.

Difficult to test code

If your codebase is difficult to test, and there are some of those out there, then you might want to pick up a copy of Michael C. Feathers’ book “Working Effectively with Legacy Code”. The appendix in this book offers concrete advice about how to extract testable code from a jumble of untestable code.

7. Breakpointing in LINQ

I am a LINQ addict. There, I’ve come out and said it. There is no application I build that doesn’t make heavy use of LINQ. I like the functional nature of being able to sort, filter, and project in a very terse way. My brain is used to parsing and understanding complex queries, and I know that one of my shortcomings as a developer is writing unapproachable LINQ statements. Traditionally, debugging LINQ has been pretty complex and the advice in the Microsoft documentation is pretty scant. You can debug by selecting chunks of the query at a time and evaluating them in the quick watch or watch windows. This leaves something to be desired.

OzCode has a nice visualizer that will show how the data in a LINQ query is altered at each step. Here you can see a query that takes a bunch of words, filters them for length, grabs the first letter, groups them, and finds the largest count.

image

The query is displayed as a series of panels showing the state of the data after a step has been taken. There is currently only support for visualizing full-framework LINQ queries but support for .NET Core is on its way.
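The query described above can be recreated as a sketch (the word list here is made up). Splitting the chain into one operator per line is also what makes it debugger-friendly without a visualizer: you can highlight any prefix of the chain and evaluate it in the Quick Watch window.

```csharp
using System;
using System.Linq;

class LinqDemo
{
    // Mirrors the query described above: filter words by length, take the
    // first letter of each, group the letters, and find the largest group.
    public static char BusiestFirstLetter(string[] words) =>
        words
            .Where(w => w.Length >= 4)          // filter for length
            .Select(w => w[0])                  // grab the first letter
            .GroupBy(c => c)                    // group the letters
            .OrderByDescending(g => g.Count())  // find the largest count
            .First()
            .Key;

    static void Main()
    {
        var words = new[] { "apple", "avocado", "banana", "cherry", "apricot", "fig" };
        Console.WriteLine(BusiestFirstLetter(words)); // prints "a"
    }
}
```

Assigning each step to its own local variable instead of chaining gives you something to breakpoint on and inspect in the Locals window, which is the manual version of what the visualizer shows.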

8. Evaluating functions with no side effects

A well-designed method in functional programming has no side effects. That is to say, you can run the same function multiple times with the same inputs and the results will be identical. This is called a pure function and it enables all sorts of fun with caching and optimizing. Unfortunately, most functions we encounter are not pure and may well have a side effect. Consider this function:

static int hitCounter = 0;
public static int GetAndIncrement()
{
    return hitCounter++;
}

Every time this function is called, the result is different: 0, 1, 2, 3. This is true if you’re evaluating the function in a quick watch window too.

quick watch

Every time the function is evaluated, there is a side effect: the hit counter is incremented. This is an easy condition to hit when debugging your code. To avoid changing the state of the running program while we're debugging, we can add ", nse" (short for "no side effects") to the expression.

Adding nse to the expression prevents evaluation

9. .NET Debugging in VSCode

If you’re working on a project that isn’t using the full version of Visual Studio, then you can still get a rich debugging experience by leveraging Visual Studio Code. The debugging experience in VSCode isn’t quite as good as that of full VS, but it is still jolly good. The swish tools found in OzCode haven’t made an appearance in VS Code yet. You can debug a whole host of different languages using VSCode: from Java to Python to C#, and all of them are supported through a series of plugins. The documentation on how to start debugging is also great and might be the only starting point you need.

10. Talk it out

This is probably the single most important thing you can do to debug your code. Find a friend, a coworker, or an inanimate object, and talk about the problem you’re having. I find that simply the act of getting my thoughts together sufficiently to explain the problem to somebody else is enough to point me in the direction of a solution. Sometimes the simplest solutions are better than a world of slick debugging tools.

]]>
Serilog Tutorial for .NET Logging: 16 Best Practices and Tips https://stackify.com/serilog-tutorial-net-logging/ Wed, 15 Aug 2018 14:08:20 +0000 https://stackify.com/?p=21406 Serilog is a newer logging framework for .NET. It was built with structured logging in mind. It makes it easy to record custom object properties and even output your logs to JSON.

Note: You can actually check out our other tutorials for NLog and log4net to learn how to do structured logging with them also!

In this article, we are going to review some of the key features, benefits, and best practices of Serilog.

What is Serilog? Why should you use it, or any C# logging framework?

Logging is one of the most basic things that every application needs. It is fundamental to troubleshooting any application problem.

Logging frameworks make it easy to send your logs to different places via simple configurations. Serilog uses what are called sinks to send your logs to a text file, a database, a log management solution, or potentially dozens of other places, all without changing your code.

How to install Serilog via Nuget and get started

Starting with Serilog is as easy as installing a Serilog Nuget package. You will also want to pick some logging sinks to direct where your log messages should go, including the console and a text file.

If you are new to Serilog, check out their website: Serilog.net

Install-Package Serilog
Install-Package Serilog.Sinks.Console

Logging sinks: What they are and common sinks you need to know

Sinks are how you direct where you want your logs sent. The most popular of the standard sinks are the File and Console sinks. I would also try the Debug sink if you want to see your log statements in the Visual Studio Debug window, so you don't have to open a log file.

Serilog’s sinks are configured in code when your application first starts. Here is an example:

using (var log = new LoggerConfiguration()
    .WriteTo.Console()
    .CreateLogger())
{
    log.Information("Hello, Serilog!");
    log.Warning("Goodbye, Serilog.");
}

If you are using a Console, you should check out the ColoredConsole sink:

Serilog colored console target

How to enable Serilog’s own internal debug logging

If you are having any problems with Serilog, you can subscribe to its internal events and write them to your debug window or the console.

Serilog.Debugging.SelfLog.Enable(msg => Debug.WriteLine(msg));
Serilog.Debugging.SelfLog.Enable(Console.Error);

Please note that the internal logging will not write to any user-defined sinks.

Make good use of multiple Serilog logging levels and filter by them

Be sure to use verbose, debug, information, warning, error, and fatal logging levels as appropriate.

This is really valuable if you want to specify only certain levels to be logged to specific logging sinks or to reduce logging in production.

If you are using a central logging solution, you can easily search for logs by the logging level. This makes it easy to find warnings, errors, and fatals quickly.

How to do structured logging, or log an object or properties with a message

When Serilog was first released, this was one of the biggest reasons to use it. It was designed to easily log variables in your code.

As you can see in this example, it is easy to use these custom variables:

Log.Debug("Processing item {ItemNumber} of {ItemCount}", itemNumber, itemCount);

Serilog takes it to the next level because those same variables can also easily be recorded as JSON or sent to a log management solution like Retrace.
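One feature worth knowing here is destructuring: prefixing a property name with @ tells Serilog to capture the object's properties as structured data rather than just calling ToString() on it. A minimal sketch (assumes the Serilog and Serilog.Sinks.Console NuGet packages are installed; the Position object is made up):

```csharp
using Serilog;

class Program
{
    public static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .CreateLogger();

        var position = new { Latitude = 25, Longitude = 134 };

        // Without @, the anonymous object would be rendered with ToString().
        // With @, Latitude and Longitude become individually searchable
        // properties in a JSON sink or log management tool.
        Log.Information("Processed {@Position} in {Elapsed} ms", position, 34);

        Log.CloseAndFlush();
    }
}
```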

If you want to really get the value of structured logging, you will want to send your logs to a log management tool that can index all the fields and enable powerful searching and analytics capabilities.

Warning: Saving logs to a database doesn't scale

Querying a SQL database for logging data is a terrible idea if you aren't using full-text indexing. A database is also an expensive place to store logging data.

You are much better off sending your logs to a log management service that can provide full-text indexing and more functionality with your logs.

Do not send emails on every exception

Everyone has made this mistake only once. Sending an email every time an exception happens quickly leads to all the emails being ignored, or your inbox gets flooded when a spike in errors occurs.

If you love getting flooded with emails, there is an email sink you can use.

How to send alerts for exceptions

If you want to send alerts when a new exception occurs, send your exceptions to an error reporting solution, like Stackify Retrace, that is designed for this. Retrace can deduplicate your errors so you can figure out when an error is truly new or regressed. You can also track its history, error rates, and a bunch of other cool things.

How to search logs across servers

Capturing logs and logging them to a file on disk is great, but it doesn’t scale and isn’t practical on large apps. If you want to search your logs across multiple servers and applications, you need to send all of your logs to a centralized logging server.

You can easily send your logs to Stackify with our custom sink:

var log = new LoggerConfiguration()
    .WriteTo.Stackify()
    .CreateLogger();

Products like Stackify Retrace make it easy to view all of your logs in one place and search across all of them. They also support things like log monitoring, alerts, structured logging, and much more.

Screenshot of Retrace’s log viewer:

Retrace Log Management

Use filters to suppress certain logging statements

Serilog gives you the ability to set a minimum log level per sink, so you can suppress noisier messages for specific outputs. Use the restrictedToMinimumLevel parameter.

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.File("log.txt")
    .WriteTo.Console(restrictedToMinimumLevel: LogEventLevel.Information)
    .CreateLogger();

You can make your own custom sinks

If you want to do something that the standard Serilog sinks do not support, you can search online for one or write your own.

One example could be a target for writing to Azure Storage.

As an example of a custom target, you can review the source code for our Serilog sink for sending logs to Retrace.

Customize the output format of your Logs

With Serilog you can control the format of your logs, such as which fields you include, their order, and so on.

Here is a simple example:

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console(outputTemplate:
        "[{Timestamp:HH:mm:ss} {Level:u3}] {Message:lj}{NewLine}{Exception}")
    .CreateLogger();

The following fields can be used in a custom output template:

  • Exception
  • Level
  • Message
  • NewLine
  • Properties
  • Timestamp

If that isn’t enough, you can implement ITextFormatter to have even more control of the output format.

Serilog also has great support for writing your log files as JSON. It has a built-in JSON formatter that you can use.

Log.Logger = new LoggerConfiguration()
    .WriteTo.File(new CompactJsonFormatter(), "log.txt")
    .CreateLogger();

Enrich your logs with more context

Most logging frameworks provide some way to log additional context values across all logging statements. This is perfect for setting something like a UserId at the beginning of a request and having it included in every log statement.

Serilog implements this by what they call enrichment.

Below is a simple example of how to add the current thread id to the logging data captured.

var log = new LoggerConfiguration()
    .Enrich.WithThreadId()
    .WriteTo.Console()
    .CreateLogger();

To really use enrichment with your own app, you will want to use the LogContext.

var log = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .CreateLogger();

After configuring the log context, you can then use it to push properties into the context of the request. Here is an example:

log.Information("No contextual properties");
using (LogContext.PushProperty("A", 1))
{
    log.Information("Carries property A = 1");
    using (LogContext.PushProperty("A", 2))
    using (LogContext.PushProperty("B", 1))
    {
        log.Information("Carries A = 2 and B = 1");
    }
    log.Information("Carries property A = 1, again");
}

How to correlate log messages by web request transaction

One of the toughest things about logging is correlating multiple log messages to the same web request. This is especially hard in async code.

You can use one of the enrichment libraries to add various ASP.NET values to each of your log messages.

var log = new LoggerConfiguration()
    .WriteTo.Console()
    .Enrich.WithHttpRequestId()
    .Enrich.WithUserName()
    .CreateLogger();

It also has some cool options to add things like WithHttpRequestRawUrl, WithHttpRequestClientHostIP, and other properties.

If you are using the full .NET Framework or .NET Core, check out the Serilog enrichment libraries available on NuGet for your web framework.

Note: There are also some other enrichment libraries for other various ASP.NET frameworks.

How to view Serilog logs by ASP.NET web request

Log files can quickly become a spaghetti mess of log messages. Especially with web apps that have lots of AJAX requests going on that all do logging.

I highly recommend using Prefix, Stackify’s free .NET Profiler to view your logs per web request along with SQL queries, HTTP calls, and much more.

transaction trace annotated

Summary

In this Serilog tutorial, we covered best practices, benefits, and features of Serilog that you should know about. We also showed you how you can use Stackify Prefix (for free) to view your logs while testing your application on your development machine, and briefly showed the power of Stackify Retrace for centralizing all of your logs in one easy-to-use log management solution.

Schedule A Demo
]]>
The Java vs .NET Comparison (Explained with Cats) https://stackify.com/java-vs-net-comparison/ Fri, 26 Jan 2018 14:25:13 +0000 https://stackify.com/?p=15277 Comparing technologies is always fun. Not that we want to start yet another programming language war, but it is really quite interesting to take a fresh look at a familiar technology and put it into perspective. Plus, it’s quite common for developers and business owners to be faced with the choice between two or more options, be it a fresh career move or a tech stack choice for a new project.

In this article, we will focus on the Java vs .NET comparison. Why these two? Well, it’s quite common for Java developers to switch to .NET and vice versa. To spice things up, these two technologies are widely considered to be the major options for complex, large-scale application development in the enterprise domain.

Before we start, let's set the record straight: it's not really an "apples to apples" comparison. Java vs .NET might not be the fairest of matchups: Java is a programming language, while .NET is a framework that can use several languages. Yet one of the languages used with .NET, C#, is considered a direct Java competitor. This makes Java vs C# a popular (and often difficult) choice that business owners and developers regularly need to face.

So, if you are at the crossroads and trying to decide which technology is better for your career or product, this article is for you. Without going into too much detail (Wikipedia already has it covered after all), let’s see how Java and .NET are similar and what makes them differ.

What do Java and .NET have in common?

They are created for heavy lifting

Both Java and .NET framework are perfect enterprise-level technologies. They work fantastically well with high-load systems, complex architectures, and big data applications. Both Java and .NET are scalable and reliable solutions for large-scale projects.

image

They have something to offer for every occasion

Java is known to be an exceptionally multi-purpose language. Remember the slogan "write once, run anywhere"? At the same time, the .NET framework can be used for desktop, server, and mobile applications.

image

They look alike

Another piece of common ground between Java and .NET is their similar syntax. Java is largely influenced by C++. At the same time, C# shares certain syntax specifics with other C-style languages, including C, C++, and Java itself.

image

They are object-oriented

It is hard to find a developer who wouldn’t know about OOP. There is a good reason for that. Object-oriented programming is a common standard for software development, so using a language that supports this approach out of the box is a good call. Both Java and .NET allow for code reuse, are flexible, and offer better troubleshooting opportunities thanks to the modular structure of their code.

image

Both of them could use some help from time to time

With their open-source ecosystems, both Java and .NET offer various third-party frameworks and reusable components. This makes development easier and more efficient: as a developer, you don't need to reinvent the wheel each time you come across a specific problem. You just look up a library or framework that can do the job you need, then simply plug it into your project.

image

What are the main differences between Java and .NET?

They play well with other languages (just different ones)

Java works smoothly with a number of other languages, including Clojure, Groovy, Scala, and Kotlin, while .NET developers can choose between C#, F#, and Visual Basic.

image

Each of them has a home of its own

Both technologies feel at home in different environments. For example, Visual Studio is the primary IDE for building .NET applications, while Java developers have a wider selection of four main IDEs: Eclipse, IntelliJ IDEA, Oracle NetBeans, and Oracle JDeveloper.

image1

They are portable (but one of them is more portable than the other)

Java is known for its backward compatibility: migrating code between Java platforms is easy. While you can still do this with .NET, it is usually more time-consuming and difficult (although, .NET Standard and .NET Core can make this process easier).

image 2

And the winner is…

Neither. While the approaches used by Java and .NET are somewhat different, they have a common goal: Building scalable enterprise solutions and web/desktop apps.

As a result, there is no right or wrong way to go about the choice of programming language. It all depends on your needs/requirements, project specifics, and the availability of talent (for a business owner looking to hire developers).

With APM, server health metrics, and error log integration, improve your application performance with Stackify Retrace. Try your free two-week trial today.

]]>
.NET Standard Explained: How To Share Code https://stackify.com/net-standard-explained/ Tue, 23 Jan 2018 14:15:34 +0000 https://stackify.com/?p=14883 You can learn how the .NET ecosystem works on Stackify. It consists of runtimes (.NET Framework, .NET Core and Mono for Xamarin), class libraries, and a common infrastructure (runtime tools and languages).

In this article, we are going to talk about the thing that makes the runtimes play well together and enables them to share code. Here, you’ll learn what .NET Standard is and what it isn’t.

Class libraries

Class Libraries

The .NET Framework contains the .NET Framework Class Library. It contains all the basic classes, like collections and strings, but also classes to connect with data sources, work asynchronously, and so on.

.NET Core also has a class library. This one is a subset of the .NET Framework library and contains fewer APIs.

And the Xamarin application workloads that run on the Mono runtime also have a class library, again a subset of the .NET Framework class library, geared towards the needs of the Xamarin application workloads.

Having different class libraries with different sets of APIs is not very useful. This makes it difficult to share code between runtimes as the code you write for one runtime might not work in another runtime, because it doesn’t have the APIs that you need.

.NET Standard

To solve the difficulties with having multiple class libraries, there is .NET Standard. This is a set of specifications that tell you which APIs you can use. This specification is implemented by all the runtimes.

.NET Standard is not something that you install – it is a formal specification of APIs that you can use. .NET Standard is an evolution of Portable Class Libraries. Portable Class Libraries are another way of sharing code between projects.

Runtimes, like .NET Core, implement .NET Standard. Specific runtime versions implement specific versions of .NET Standard, thereby implementing a specific set of APIs. For instance, the .NET Framework 4.5 implements .NET Standard 1.1 and with that, all the .NET Standard versions that came before 1.1.

The purpose of .NET Standard is simple: to share code between runtimes. When you want to share code between different runtimes in the .NET ecosystem, use .NET Standard.

Portable class libraries

A word about Portable Class Libraries, or PCLs. A Portable Class Library is just what it sounds like: a class library that is portable. It is a class library that you write and can use in applications that run on multiple platforms. Their purpose is to share code between applications, just like .NET Standard does.

.NET Standard is the evolution of PCLs and will eventually replace PCLs completely. Here are the main differences between .NET Standard and PCLs:

  • .NET Standard is a set of curated APIs, picked by Microsoft; PCLs are not.
    • The APIs that a PCL contains depend on the platforms you choose to target when you create it. This makes a PCL sharable only across the specific targets you chose.
  • .NET Standard is platform-agnostic: it can run anywhere, on Windows, Mac, Linux, and so on.
    • PCLs can also run cross-platform, but they have a more limited reach: they can only target a limited set of platforms.

.NET Standard versioning

Alright, let’s talk a bit about how .NET Standard is versioned.

.NET Standard versioning

Each version of .NET Standard contains a certain set of APIs, like System.Collections and System.IO. Each new version of .NET Standard contains all the APIs of the previous versions and some new ones. This makes .NET Standard backwards compatible.

There are no breaking changes between versions, and once a version is generally available, its contents will never change, so you can rely on the API specification of that version.

Specific runtime versions implement specific .NET Standard versions.

Each higher version of .NET Standard contains more APIs than the one before, but lower versions are supported by more platforms.

So therefore, if you create a .NET Standard library that you want to share, you should always target the lowest version of .NET Standard that you can get away with. This way, you can reach the maximum amount of platforms.

Higher version = more APIs

Lower version = more platforms

Target the lowest version that you can
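In an SDK-style project file, that choice is a single property. A minimal sketch (the netstandard1.4 value here is illustrative; pick the lowest version that has the APIs your library needs):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- The lowest Standard version that still has the APIs this library uses -->
    <TargetFramework>netstandard1.4</TargetFramework>
  </PropertyGroup>
</Project>
```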

.NET Standard tools

There are several tools that can help you to get started with .NET Standard.

.NET API Browser

You can use the .NET API Browser to find out what APIs are in which version of .NET Standard. This is a great page on the Microsoft docs website. Just select the .NET Standard version that you want to browse and either browse or search for specific functionality. This is a really good tool.

.NET Portability Analyzer

You can also analyze your current projects for compatibility with a specific .NET Standard version to see if you can convert them to .NET Standard. To do this, you can use the .NET Portability Analyzer, which is a tool from the Visual Studio Marketplace. You can use it from Visual Studio or from the command line.

Shared projects

Finally, I want to address Shared Projects. You see these in Visual Studio, and they look like another form of code sharing, next to PCLs and .NET Standard.

So what are they? Shared Projects are a project template in Visual Studio that do not result in assemblies when you build them. They are just links between files and projects, much like linked files in previous versions of Visual Studio. They do not provide any APIs like PCLs and .NET Standard do. They just act as a file-sharing mechanism between projects.

solution shared projects

Look at the example in the image above. The ConsoleApp project references the SharedProject1 project, which contains the SharedClass class.

When you build this, from the compiler's viewpoint, ConsoleApp contains the SharedClass class and SharedProject1 doesn't exist.

Shared Projects are nothing more than a tool in Visual Studio that helps you share files. They aren't bad; they are just different from .NET Standard, and they are Visual Studio-specific.

Conclusion

So, we’ve learned that .NET Standard is not a physical thing, but a specification of APIs.

It is the next version of Portable Class Libraries and it can be used by all the runtimes in the .NET ecosystem (.NET Framework, .NET Core, and Mono for Xamarin). Each runtime implements .NET Standard.

It has an additive versioning system, meaning that each new version contains all the APIs of the previous versions, which makes it backwards compatible. And there are never any breaking changes between versions.

With APM, server health metrics, and error log integration, improve your application performance with Stackify Retrace. Try your free two-week trial today.

]]>