Getting Started With Azure Serverless
https://stackify.com/azure-serverless-guide/

Serverless computing represents a paradigm shift in how we build, deploy and scale cloud applications. By decoupling infrastructure and server management from code development, developers are free to focus squarely on fine-tuning their code. The era of serverless computing puts innovation at center stage and removes the traditional constraints of server management. In this post, we’ll see how developers can leverage Azure serverless to simplify the deployment of individual functions or units of code without worrying about the underlying infrastructure.

Overview of Azure Serverless

Serverless computing is a cloud computing model where the management and provisioning of servers are abstracted away from developers. Thus, developers can focus on writing code and defining functions. Popular serverless computing platforms include AWS Lambda, Azure serverless and Google Cloud Functions, among others.

Azure serverless from Microsoft Azure offers various serverless computing services and features that enable developers to build applications without managing the underlying infrastructure. The serverless services that Azure provides are flexible, scalable and cost effective, an ideal combination for a variety of applications ranging from small-scale projects to large-scale enterprise solutions.

Key Components of Azure Serverless Architecture

  • Azure Functions: A serverless compute resource that allows developers to execute event-triggered functions without the need to explicitly provision or manage servers. Azure Functions supports multiple programming languages that developers can use to build microservices, APIs and event-driven applications.
  • Azure Logic Apps: Enable the creation of workflows that automate and orchestrate tasks, integrating with various Azure services and external systems. Logic Apps are useful in business process automation and connecting applications, services and data across cloud and on-premises environments.
  • Azure Event Grid: A fully managed event routing service that simplifies the development of event-driven applications. Azure Event Grid allows you to react to events from Azure services or custom sources and route events to different subscribers, such as Azure Functions and Logic Apps.
  • Azure Cosmos DB Serverless: A globally distributed, multi-model database service offering a serverless option for developers to pay for the resources consumed by each request. Azure Cosmos DB Serverless is suitable for scenarios with intermittent or unpredictable workloads.

Benefits of Using Azure Serverless

Using Azure serverless offers several benefits for developers and organizations, facilitating the development of scalable, cost-effective and efficient applications.

  • Cost efficiency: Azure serverless has a pay-as-you-go pricing model, which means you only pay for what you consume. The model is cost effective compared with provisioning and maintaining dedicated servers.
  • Automatic scaling: Serverless architectures automatically scale as demand grows. As the number of incoming requests/events increases, the platform dynamically allocates resources to ensure optimal performance.
  • Reduced management overhead: Azure serverless abstracts away the complexities of infrastructure management. Developers can therefore focus on writing code and defining functions without the need to worry about server provisioning, maintenance or scaling.
  • Flexibility in language and frameworks: Azure Functions supports multiple programming languages, such as C#, JavaScript, Python and PowerShell. This flexibility allows developers to choose the language they’re most comfortable with or that best fits application requirements.

Also read: https://stackify.com/faas-vs-serverless-resolving-the-dilemma/

Challenges of Using Azure Serverless & Possible Solutions

While Azure serverless offers numerous benefits, its use also comes with challenges. Understanding these challenges and how to overcome them is essential for a successful implementation. The following are some of the challenges and possible solutions.

  • State management: Most serverless functions are stateless by design, which poses a challenge when dealing with applications that require stateful operations. To curb this challenge, consider using Azure Durable Functions, which provides a way to create stateful, long-running workflows.
  • Limited execution time: Serverless functions often have a maximum execution time limit, meaning long-running workflows get terminated. To solve this issue, break down larger tasks into smaller, more manageable functions.
  • Cold start latency: When a serverless function needs to be initialized before responding to a request, the first request experiences a slower response time. To mitigate cold start latency, use warm-up strategies, such as periodic pinging, to keep functions warm (a rough sketch of this follows below).
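As a rough illustration of the periodic-pinging idea, a timer-triggered function can call the HTTP endpoint you want to keep warm. This is only a sketch: it assumes the isolated worker model with the timer extension installed, and the KeepWarm name and target URL are placeholders you would replace with your own values.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;

public class KeepWarm
{
    private static readonly HttpClient _client = new HttpClient();

    // Runs every five minutes and pings the function we want to keep warm.
    [Function("KeepWarm")]
    public async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        // Placeholder URL: point this at your own HTTP-triggered function.
        await _client.GetAsync("https://<your-function-app>.azurewebsites.net/api/HttpExample");
    }
}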

How to Set up Azure Serverless

Setting up Azure Functions involves creating a Function App, defining a function within the app and configuring the necessary settings.

First, we’ll set up Azure serverless.

Sign in to Azure Portal

  1. Visit the Azure Portal.
  2. Sign in with your Azure account or create a new one.

Create a New Function App

  1. Click on Create a resource in the Azure Portal.
  2. In the Search the Marketplace box, type Function and select Function App from the results.
  3. Click on the Create button.

Configure the Function App

  1. Choose your Azure subscription.
  2. Create a new Resource Group or select an existing resource group.
  3. Choose a unique name for your Function App.
  4. Select the appropriate runtime stack (e.g., Node.js, Python, C#).
  5. Choose either Windows or Linux for the operating system.
  6. Select the region closest to your target audience.
  7. Choose the hosting plan. (Consumption Plan is suitable for most scenarios.)
  8. Enable Application Insights or disable it per your preference.
  9. Click on the Review + create button and then Create to provision the Function App.
  10. Once created, navigate to the Function App in the Azure portal.

Create a New Function

  1. In the Function App menu, click on Functions.
  2. Click on the + Create button to add a new function.
  3. Choose a template (e.g., HTTP trigger, Timer trigger) or start from scratch.
  4. Configure the function settings, such as the name, authentication and trigger details.
  5. Click on the Create button.

Write Your Function Code

  1. In the Azure portal, go to the Functions section within your Function App.
  2. Click on the function you created to open the code editor.
  3. Write your function code in the editor. For example, in a JavaScript HTTP trigger, you might handle the HTTP request directly; a comparable C# sketch follows below.
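For reference, here’s a minimal sketch of what such a function might look like in C#. It assumes the isolated worker model, and the HttpExample name is just a placeholder:

using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class HttpExample
{
    // Responds to GET and POST requests with a plain-text greeting.
    [Function("HttpExample")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
        response.WriteString("Welcome to Azure Functions!");
        return response;
    }
}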

Test Your Function

  1. In the function editor, click on the Test/Run button.
  2. Provide any required input parameters and run the test.

Monitor & Debug Your Function

  1. Use the Monitor tab in the Azure Portal to view logs and diagnose issues.

Deploy Your Function

  1. Set up deployment options, such as continuous integration (CI) using Azure DevOps or GitHub Actions.
  2. Deploy your function code to the Azure Function App.

How to Choose the Right Azure Serverless Services

Choosing the right Azure serverless service depends on your specific requirements, application architecture and business goals. The following guidelines can help you make the right decision.

  • Understand your requirements: Clearly define your application requirements, such as scalability, performance, latency and integration needs.
  • Integration and connectivity: Evaluate the integration capabilities of the serverless services. For instance, Azure Logic Apps provides a visual designer for building workflows and supports a wide range of connectors for different services.
  • Event trigger options: Consider the event triggers that your application requires. Azure Functions can be triggered by various events such as HTTP requests, timers, queues and blobs, among others.
  • Scalability and performance: Assess the scalability requirements of your application. Azure Functions automatically scale on demand, making them suitable for highly variable workloads. Also, understand the performance characteristics of your chosen services, especially in terms of response time and execution duration.
  • Cost consideration: Understand the pricing model for the selected Azure serverless services. Consider factors such as memory usage, execution duration and the number of executions.

Best Practices When Using Azure Serverless Services

When using Azure serverless services, it’s important to follow best practices to ensure optimal performance, reliability, security and cost-effectiveness. Below are some of the best practices you should adopt.

  • Adhere to the principles of the Azure Well-Architected Framework, which includes the pillars of operational excellence, security, reliability, performance efficiency and cost optimization.
  • Leverage Managed Identities to securely access other Azure resources without the need for explicit credentials.
  • Use Azure Key Vault to securely store and manage sensitive information, such as connection strings, API keys and other secrets.
  • Use Azure Monitor, Azure Application Insights or other logging solutions to collect and analyze logs for serverless functions.
  • Mitigate cold start latency by using the Premium plan for Azure Functions, which keeps pre-warmed instances ready and runs functions on more powerful, dedicated infrastructure.

Wrapping Up

In conclusion, you’ve embarked on a journey to build scalable, efficient and cost-effective applications using Azure’s serverless services. As a recap, in this post you’ve seen what serverless means, explored the main Azure serverless services, reviewed guidelines for choosing the right ones and covered some best practices for using them.

Your journey with Azure serverless is just the beginning. As you continue to refine and expand your serverless applications, the knowledge and experience gained will contribute to the success of your projects in the ever-evolving world of cloud computing.

Best of luck!

Read on to learn how Retrace helps fulfill your needs when compared to Azure Monitor.

This post was written by Verah Ombuiis, a passionate technical content writer and DevOps practitioner who believes in sharing her insights on IT technologies with the world. Verah believes in learning new technologies through deep, hands-on experience, so she can teach others in the easiest possible way. She has extensive experience with and exposure to popular DevOps technologies, such as Terraform, AWS Cloud, Microsoft Azure, Ansible, Kubernetes, Docker, Jenkins, Linux and more.

How to Use Dependency Injection in Azure Functions
https://stackify.com/how-to-use-dependency-injection-in-azure-functions/

Azure Functions is a powerful function as a service (FaaS) tool in the Microsoft Azure cloud computing service. Built to create event-driven, scalable, serverless computing services and applications, Azure Functions enables developers to focus on code logic without worrying about application infrastructure. The service also simplifies scaling apps and reduces costs, since users only pay for the resources they consume.

However, as applications grow more and more complex over time, how developers manage dependencies becomes a growing concern. Dependency injection in Azure Functions is a great way to optimize applications and improve the developer experience when using Azure Functions.

In this guide, we’ll cover:

  • What dependency injection in Azure Functions is
  • Advantages of using dependency injection with Azure Functions
  • How to use dependency injection in Azure Functions

Let’s get started.

What Is Dependency Injection in Azure Functions?

Dependency injection is a software design methodology where an object supplies the dependencies of another object to help enhance the modularity, robustness, testability and maintainability of your code. This concept isn’t just specific to Azure Functions; it’s pretty popular in other languages and technologies like .NET Core.

The main idea behind dependency injection is to achieve inversion of control (IoC) and loose coupling between classes and their dependencies.

Inverting the control means that a particular portion of an application receives its dependencies and control flow from another portion of the application (such as a framework or container), rather than creating or managing them itself.

Now, taking this concept to Azure Functions, dependency injection enables you to inject dependencies into your Azure Functions, thus ensuring you can write more modular code and manage dependencies much better.

Dependency injection is performed by an assembler (such as a DI container) rather than by the objects themselves, so understanding the dependency injection technique is critical.
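As a minimal, Azure-agnostic sketch of the idea, compare a class that creates its own dependency with one that receives it through its constructor. The names used here (IEmailSender, TightReportService and so on) are purely illustrative:

public interface IEmailSender { void Send(string message); }

public class EmailSender : IEmailSender
{
    public void Send(string message) { /* send the message */ }
}

// Tightly coupled: the class creates its own dependency, so it can't be swapped or mocked.
public class TightReportService
{
    private readonly EmailSender _sender = new EmailSender();

    public void Publish() => _sender.Send("Report ready");
}

// Loosely coupled: the dependency is injected from outside (by a DI container or by test code).
public class LooseReportService
{
    private readonly IEmailSender _sender;

    public LooseReportService(IEmailSender sender) => _sender = sender;

    public void Publish() => _sender.Send("Report ready");
}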

Also read: https://stackify.com/how-to-catch-all-exceptions-in-python/

Why Is Dependency Injection Important?

For starters, dependency injection permits you to make your systems loosely coupled. It decouples a class from the concrete implementations of the classes it depends on.

This becomes possible by separating the use of a class from its creation. This separation further improves reusability and limits how your lower-level classes are impacted.

Other advantages of dependency injection are:

  • Aids in following the SOLID object-oriented design principles, as interfaces are used more and help reduce coupling between components
  • Ensures applications are more testable, maintainable and reusable. By externalizing configuration details into configuration files, the client no longer needs to know how its dependencies are implemented
  • Makes the reuse of business logic implementations easier within different parts of your codebase and gives you more modularized and reusable code
  • Simplifies code maintenance, as reusable code is easier to maintain
  • Reduces the cost of ownership with more straightforward debugging and feature implementations

Implementing this principle in Azure Functions ensures a structured approach to managing dependencies and a robust application. You can learn more about dependency injection with code examples and how it relates to SOLID principles in Stackify’s recent tutorial.

How to Implement Dependency Injection in Azure Functions

To maximize the benefits of the tutorials in this guide, you’ll need the following: 

Creating Azure Functions Using Visual Studio

Let’s start by creating an Azure Functions app in Visual Studio. Azure Functions applications are event-driven, serverless computing platforms.

To do this:

  1. Open Visual Studio Code or Visual Studio. I will be using Visual Studio
  2. Go to File and select New
  3. Search for Azure Functions in the search box. Click it
  4. Give the project a name and click Create. This name should not have underscores, hyphens or other non-alphanumeric characters. I will name my project “SampleStackify”
  5. Select HTTP trigger. Azure Functions supports many trigger points, such as HTTP and queue triggers
  6. Then, click Create to create the Function

These steps will automatically create our first Azure Function. You can open the “Function1.cs” file to see the generated function.

Next, we must add interfaces and classes for the dependency injection.

Creating Interfaces & Classes for Dependency Injection

We must create the service or some classes we want to inject into our Azure Function.

However, before creating the class, we need to create an interface and the class that implements this interface.

To achieve this:

  1. Right-click on the Function project
  2. Then, select Add followed by New Folder. You can name this folder “Services”
  3. Inside this folder, right-click again and select Add
  4. Under Add, select New File to create the interface
  5. Select Interface from the list of templates available
  6. You can name this interface “IGreeterService”

Next, edit the definition of IGreeterService using the syntax below.

public interface IGreeterService
{
    string GetGreeting(string name);
}

The file should look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace SampleStackify.Services
{
    public interface IGreeterService
    {
        string GetGreeting(string name);
    }
}

Then, we must create a new class, “GreeterService,” that implements our “IGreeterService” interface.

To create this class:

  1. Right-click on the “Services” folder
  2. Select Class and name it “GreeterService”

Then, edit the template generated using the syntax below:

public class GreeterService : IGreeterService
{
    public string GetGreeting(string name)
    {
        return $"Hello, {name}. This function executed successfully.";
    }
}

This GreeterService.cs file will look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace SampleStackify.Services
{
    public class GreeterService : IGreeterService
    {
        public string GetGreeting(string name)
        {
            return $"Hello, {name}. This function executed successfully.";
        }
    }
}

Creating a Startup Class in Azure Functions to Inject Dependency

We’ll need to create another class named “Startup” to handle all the dependencies we want to achieve.

You should right-click on your app name, “SampleStackify,” and select Class. Next, name this class “Startup.”

Before creating our startup class, we must add the Microsoft.Azure.Functions.Extensions and Microsoft.Extensions.Http NuGet packages.

You can accomplish this by right-clicking the project and selecting Manage NuGet Packages.

Then, navigate to the Browse tab and search for Microsoft.Azure.Functions.Extensions.

Click on Install.

Similarly, search for Microsoft.Extensions.Http and click on Install. You’ll need to accept the license agreement.

Once done, return to the “Startup” class and write the code below. Doing this will register the service we want to inject and provide the mapping of the interface to the class.

The syntax for our Startup.cs file will look like this:

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
using SampleStackify.Services;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace SampleStackify
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddScoped<IGreeterService, GreeterService>();
        }
    }
}
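Note that the FunctionsStartup approach above applies to the in-process hosting model. If your project targets the isolated worker model instead (the Function1.cs shown below uses Microsoft.Azure.Functions.Worker types, which suggests that model), services are typically registered in Program.cs through the host builder. A minimal sketch of that alternative, assuming the default isolated-worker template:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using SampleStackify.Services;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        // Same registration as in the Startup class above.
        services.AddScoped<IGreeterService, GreeterService>();
    })
    .Build();

host.Run();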

To inject the dependency, we need to add the code below to Function1.cs. First, we declare two fields to hold the injected logger and greeter service.

private readonly ILogger<Function1> _logger;
private readonly IGreeterService _greeterService;

The final Function1.cs will look like this:

using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using SampleStackify.Services;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

namespace SampleStackify
{
    public class Function1
    {
        private readonly ILogger<Function1> _logger;
        private readonly IGreeterService _greeterService;

        public Function1(ILogger<Function1> logger, IGreeterService greeterService)
        {
            _logger = logger;
            _greeterService = greeterService;
        }

        [Function("Function1")]
        public async Task<HttpResponseData> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req,
            FunctionContext context)
        {
            _logger.LogInformation("C# HTTP trigger function processed a request.");

            var response = req.CreateResponse(HttpStatusCode.OK);
            response.Headers.Add("Content-Type", "text/plain; charset=utf-8");

            var name = req.Query["name"];
            var requestBody = await req.ReadAsStringAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            string responseMessage = string.IsNullOrEmpty(name)
                ? "Welcome to Azure Functions!"
                : _greeterService.GetGreeting(name);

            response.WriteString(responseMessage);

            return response;
        }
    }
}

We can now run our app to start the Azure Function.

That’s it. We’ve now injected dependency into our Azure Function app.

Challenges & Troubleshooting Common Issues

One common challenge is the misconfiguration of services. This can be remediated by ensuring the FunctionsStartup class is correctly configured with the necessary service registrations. You should also ensure that your dependencies have the appropriate constructors, as mismatches can lead to runtime errors.

Another challenge is the incorrect scoping of dependencies and resolving circular dependencies. These issues can be avoided by verifying that the lifetime of each registration (transient, scoped or singleton) aligns with your Function’s requirements and by analyzing the relationships between injected services.

You can also troubleshoot these issues by checking the Azure Functions logs for diagnostic insights and any initialization errors or failures in service resolution.

Best Practices When Using Dependency Injection in Azure Functions

Let’s explore best practices you should follow when using dependency injection in Azure Functions.

1. Unit testing helps in the early detection of issues, especially when we make changes to or redesign our Functions. By creating more unit-testable Functions, we can ensure that individual units of our code work as expected without testing the entire application.

In Azure Functions, unit testing comes in handy because we often have small, independent units of code. 

var response1 = await function.Run(request1, context);

Assert.AreEqual(HttpStatusCode.OK, response1.StatusCode);
Assert.IsTrue(response1.Headers.TryGetValue("Content-Type", out var contentType1));
Assert.AreEqual("text/plain; charset=utf-8", contentType1);

var responseBody1 = await response1.ReadAsStringAsync();
Assert.AreEqual("Welcome to Azure Functions!", responseBody1);

The script above is an example of a unit test that verifies our Azure Function (Function1) is executed with a specific request (request1). The test also checks that our Function’s response has the expected HTTP status code, content type and response body content.
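Because the greeter dependency is registered against an interface, the service itself can also be tested in isolation, without spinning up the Functions host at all. Here’s a minimal sketch, assuming an MSTest test project that references the SampleStackify project:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using SampleStackify.Services;

[TestClass]
public class GreeterServiceTests
{
    [TestMethod]
    public void GetGreeting_ReturnsGreetingContainingName()
    {
        // Arrange: use the concrete implementation directly.
        var service = new GreeterService();

        // Act
        var result = service.GetGreeting("Stackify");

        // Assert
        Assert.AreEqual("Hello, Stackify. This function executed successfully.", result);
    }
}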

2. Log when possible! Logging can track the process right from the initialization of dependencies. These logs provide insights that can be used for troubleshooting and monitoring. For example, if an injected service fails to initialize or execute, logs can reveal the specific point of failure, identify bottlenecks and make diagnosing and fixing problems easier.

3. Leveraging the FunctionsStartup class in Azure Functions helps organize our service configuration for dependency injection. One of the best ways to do this is by registering your services using the IFunctionsHostBuilder provided by the Azure Functions runtime inside the Configure method of the FunctionsStartup class. Here, you can define the dependencies to avoid scattering them throughout different parts of your application.

4. Apply the [assembly: FunctionsStartup(typeof(MyNamespace.Startup))] attribute at the assembly level to specify the type of your FunctionsStartup class. This attribute informs the Azure Functions runtime about the custom startup class to use when configuring the function app, acting as the bridge between your startup logic and the Azure Functions runtime.

Wrapping Up

Azure Functions lets you build serverless, compute-on-demand, lightweight applications. By combining the capabilities of Azure Functions with a solid technical understanding of dependency injection, developers can create more efficient and robust applications.

Of course, we’ve just scratched the surface of what can be done with dependency injection in Azure Functions. As you build, you should monitor your applications and Azure Functions with solutions like Retrace. A comprehensive, developer-friendly APM solution, Retrace monitors the performance of your virtual machines, Azure Stack and deployments.

You can give Retrace a go by starting with a free trial today!

Also Read-https://stackify.com/rack-mini-profiler-a-complete-guide-on-rails-performance/

This post was written by Ifeanyi Benedict Iheagwara. Ifeanyi is a data analyst and Power Platform developer who is passionate about technical writing, contributing to open source organizations, and building communities. Ifeanyi writes about machine learning, data science, and DevOps, and enjoys contributing to open-source projects and the global ecosystem in any capacity.

C# Read File: A Beginner’s Guide to Reading Text Files in C#
https://stackify.com/c-read-file-a-beginners-guide-to-reading-text-files-in-c/

File manipulation is an incredibly common programming task with endless applications. It has two main sides: reading and writing. This post will focus on the “reading” bit, so if you’ve just googled “C# read file” and ended up here, search no more—we have the guide you need!

We’ll start by covering some prerequisites you need to follow, then dive right into the guide.

Prerequisites

To follow along with the tutorial, we assume you have the following:

With that out of the way, let’s dig in!

How Do I Read a File in C#?

Let’s start our guide with the simplest scenario: reading a whole file. Start by creating a new project of type “console application” using the method that’s most comfortable for you.

After that, create a file and place it in some easy location on your computer. For instance, if you’re on Linux, store it in your home directory. On Windows, I suggest creating a folder at the root of C: and placing the file there.

The contents of the file don’t matter; just write something. Make sure you create the file as a plain-text file. That is, don’t use text processors such as Microsoft Word or Apache Open Office Writer. Instead, use a simple text editor such as Notepad, Notepad++, or Gedit.

The file extension shouldn’t matter either, though using the .txt extension makes things easier on Windows—it’ll make Notepad automatically able to open the file.

Is your file ready to go? Great, you can now start writing some code. Remember that C# is an object-oriented language, so it shouldn’t come as a surprise that we’ll use a class to perform our file operations. The class is appropriately named “File,” and it lives in the System.IO namespace.

using System.IO;

Start by adding “using System.IO;” to your list of using statements. Then, add the following line:

string fileContents = File.ReadAllText("C:\\files\\example.txt");

The code calls the ReadAllText method on the File class, passing the path to the file as an argument. Notice the double backslashes? Since I’m on Windows, I have to use backslash as a path separator, but since it’s a special character inside a C# string, I need to escape it. If I were on Linux, I might have used a path like this: /home/carlosschults/files/example.txt. Of course, replace that path with the actual path to the file on your computer.

Anyway, my entire program currently looks like this:

using System.IO;
string fileContents = File.ReadAllText("C:\\files\\example.txt");

If you’re using an older version of C#, before top-level statements were introduced, your code might look like this:

using System.IO;
internal class Program
{
    private static void Main(string[] args)
    {
        string fileContents = File.ReadAllText("C:\\files\\example.txt");
    }
}

The end result is the same. As a next step, let’s display the contents of the file:

Console.WriteLine("Here are the contents of the file: ");
Console.WriteLine(fileContents);

Finally, run the application. If you did everything right, you should see the contents of your file being displayed.

Load a File’s Lines to an Array

In the first example, you learned how to read a whole file and load its contents into a single string variable. Often, you need to do some kind of processing to a file that requires you to handle each of its lines separately—parsing a CSV file is an example that comes to mind.

Let’s learn how to do that. First, change your file so it has several lines of text.

Now, let’s make two tiny changes to our original file reading code:

  • First, switch the type of the fileContents variable from string to string[]—that is, an array of strings
  • Then, replace the method call to ReadAllText with ReadAllLines.

The new code should look like this:

string[] fileContents = File.ReadAllLines("C:\\files\\example.txt");

You might be familiar with type inference—that is, using the var keyword instead of the type name when declaring a variable. In a normal situation, I’d use it, but I chose not to do it here in order to make the types as explicit as possible.

You now have an array of strings containing the lines of the file. Let’s use a for loop to iterate over those lines:

for (var i = 0; i < fileContents.Length; i++)
{
	Console.WriteLine($"Line #{i + 1}: ");
	Console.WriteLine(fileContents[i]);
	Console.WriteLine();
}

After running the application, you should now see something like this:

Line #1:
<Contents of the first line>

Line #2:
<Contents of the second line>

Line #n:
<Contents of the n-th line>

Read a Text File, Line by Line

Both the methods from the File class we’ve used so far are easy to understand and use. However, they can give you a headache if you come across large files. Those methods load the entirety of the file to memory, which means you can run out of memory with a large enough file.

OK, so how to proceed in such cases?

Let’s change the code again:

  • We’ll change the variable type from string[] to IEnumerable<string>.
  • Also, we’ll use the ReadLines method instead of the ReadAllLines.
  • Finally, we’ll replace the for loop with a foreach one.

Here’s the new code:

IEnumerable<string> fileContents = File.ReadLines("C:\\files\\example.txt");
var index = 1;

foreach (string line in fileContents)
{
	Console.WriteLine($"Line #{index}: ");
	Console.WriteLine(line);
	Console.WriteLine();
	index++; // without this increment, every line would be reported as line #1
}

If you run the code, you’ll see that the result is exactly the same. So, what has changed, if anything? The previous method, ReadAllLines, returned an array with all of the lines. So, you have the whole content of the file loaded in memory.

ReadLines, on the other hand, returns an IEnumerable. It doesn’t contain all lines loaded in memory. Instead, it’s more like a promise that if you request the next item, it will provide it to you as long as there are further items.

And how do you request the next item? You do that every time you iterate over the result using the foreach. The result is that the lines are being loaded one by one to memory.

Does all of the above sound a little fuzzy? I know, I understand it can be weird to wrap your head around those concepts. For now, do this:

  • Remember that ReadLines is the “safe” method to use if you don’t want to load the whole file to memory.
  • Research the concept called “Lazy Evaluation” and how it relates to the IEnumerable interface and LINQ; the short example below gives a taste of it.
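As a small taste of lazy evaluation, the sketch below combines ReadLines with LINQ. Because the enumeration is lazy, reading stops as soon as a match is found, so the rest of the file is never loaded into memory. The path and the search string are just examples:

using System;
using System.IO;
using System.Linq;

// Reads lines lazily and stops at the first line containing "ERROR".
string firstError = File.ReadLines("C:\\files\\example.txt")
    .FirstOrDefault(line => line.Contains("ERROR"));

Console.WriteLine(firstError ?? "No error line found.");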

What If the File Can’t Be Found?

If you try to open a file and it doesn’t exist on the path you provided, you’ll get an exception. So, how to handle this possibility?

An option would be to use the File.Exists method. Here’s our first example, updated so it validates the existence of the file:

string path = "C:\\files\\example404.txt";
if (File.Exists(path))
{
    string fileContents = File.ReadAllText(path);
    Console.WriteLine("Here are the contents of the file: ");
    Console.WriteLine(fileContents);
}
else
{
    Console.WriteLine("File wasn't found!");
}

As you can see, here we use the method to check whether a file exists on that location. If it does, we display its contents. Otherwise, we show a fallback message.

This approach presents a problem, though. It’s a scenario called a race condition. You see, it’s possible that after our code has confirmed the existence of the file, some other process in the operating system deletes the file. Then, when the code reaches the point where it tries to read the file, it throws an exception anyway.

In this scenario, I’d recommend simply catching the exception and not bothering trying to validate the file’s existence:

try
{
    string fileContents = File.ReadAllText(path);
    Console.WriteLine("Here are the contents of the file: ");
    Console.WriteLine(fileContents);
}
catch (FileNotFoundException ex)
{ 
    // do something 
}

There are several other exceptions that can occur when reading files, many of them inheriting from the System.IO.IOException class. Head to the .NET docs to learn more about these exceptions.

Read a File, Read More Articles, Don’t Stop Learning!

File manipulation is a staple of programming, and in this post, we’ve covered just the tip of the iceberg. You’ve learned:

  • how to load a whole file to a single string
  • how to load a whole file to an array of strings, representing the lines of the file
  • how to load a file, line by line, in a lazy way, without compromising memory

Sometimes you don’t find what you’re looking for, and often the file you want to read isn’t there. Luckily, you’ve learned that you can either check for the presence of the file or simply don’t bother and handle the exception that may arise.

Before departing, I invite you to hang around at the Stackify blog and explore the .NET/C# posts we have there. They surely will be a great resource on your C# learning journey. Thanks for reading!

This post was written by Carlos Schults. Carlos is a consultant and software engineer with experience in desktop, web, and mobile development. Though his primary language is C#, he has experience with a number of languages and platforms. His main interests include automated testing, version control, and code quality.

C# Sleep: A Detailed Guide
https://stackify.com/c-sleep-a-detailed-guide/

Ah, the sweet allure of a well-rested application. No, I’m not talking about kicking back and letting your software take a nap. I’m diving deep into C# and its Sleep method.

Have you ever wondered, “Is there a Sleep function in C#?” We’ve got answers. By the end of this post, you’ll know all about the ins, outs and potential pitfalls of using Sleep in C#. So, grab your favorite beverage, and let’s get into it!

What Is C# Sleep?

In the vast and varied landscape of C#, Sleep stands out as one of the pivotal methods provided by the Thread class. Nestled within the System.Threading namespace, this method might seem simple at a glance, but it’s a powerful tool in the hands of a seasoned developer.

At its core, the Sleep method is designed to pause or suspend the execution of the current thread for a specified duration, measured in milliseconds. It’s like telling a part of your program, “Hey, take a short break!” But why would you want to do that? Well, the reasons can be multifaceted, ranging from simulating delays to managing resource allocation.

However, it’s essential to understand that Sleep doesn’t make the entire application or process sleep. Instead, it halts only the specific thread it’s called upon. In a multithreaded environment, where multiple threads run concurrently, using Sleep on one thread doesn’t affect the execution of other threads. They continue to run unimpeded.

Consider this analogy to give you a clearer picture: Imagine a bustling kitchen with several chefs (threads) working simultaneously. If one chef decides to take a short break (sleep), it doesn’t mean the entire kitchen stops. The other chefs continue their tasks without interruption.

This distinction is crucial, because in modern software development, applications often rely on multithreading to perform multiple operations concurrently, enhancing efficiency and responsiveness. By selectively pausing specific threads using the Sleep method, developers can fine-tune the flow of their applications, ensuring that critical tasks get the CPU time they need while less crucial tasks can wait their turn.

Why Use C# Sleep?

Well, there are various scenarios:

  1. Simulating delays: Whether you’re mocking up a long-running process or testing how your application behaves under certain conditions, Sleep offers a straightforward way to introduce artificial delays.
  2. Throttling: Sometimes you need to limit how often a particular section of code runs, especially when dealing with external systems that impose rate limits.
  3. UI responsiveness: In GUI applications, you might use Sleep to prevent the UI from becoming unresponsive during intensive tasks.

So, why is it important? The answer lies in resource management. By controlling when and how threads execute, developers can optimize applications for performance and responsiveness.

How to Use C# Sleep

Let’s get our hands dirty with some code!

Basic Sleep

Want to sleep for 1 second in C#? Here’s how:

using System.Threading;

// ... other code ...

Thread.Sleep(1000); // Sleeps for 1000 milliseconds (or 1 second)

In this snippet, the Sleep method is called with a parameter of 1000, representing the sleep duration in milliseconds. So, the current thread will be paused for a second before it resumes its tasks.

Simple, right? Let’s move on to a more complex example.

Conditional Sleep

Imagine fetching data from an API that allows five requests per minute. You might use Sleep to throttle your requests:

for (int i = 0; i < 5; i++)
{
    FetchDataFromAPI();

    if (i < 4) // To avoid sleeping after the last request
        Thread.Sleep(12000); // Sleeps for 12 seconds between each request
}

Check out this in-depth guide on C# threading and multithreading for more intricate threading scenarios.

Potential Issues with Using Sleep

While Sleep can be super handy, it’s not without its quirks.

  1. Unpredictability: Relying heavily on Sleep can lead to unpredictability, especially if you’re not accounting for other processes that might be running.
  2. Resource hogging: Threads, even when sleeping, hold onto resources. Overusing Sleep might lead to resource contention.
  3. Nonresponsiveness: In GUI applications, misuse of Sleep on the main thread can cause the application to become unresponsive.

When and Why to Consider Alternatives

While Sleep is great, there are situations where alternatives like Task.Delay or Timer might be more appropriate. For example, in asynchronous operations, Task.Delay allows for nonblocking waits.
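For instance, a minimal sketch of a non-blocking wait with Task.Delay might look like this (the class and method names are just examples):

using System;
using System.Threading.Tasks;

public static class DelayExample
{
    // Unlike Thread.Sleep, await Task.Delay releases the thread while waiting.
    public static async Task RunAsync()
    {
        Console.WriteLine("Starting...");
        await Task.Delay(1000); // waits 1 second without blocking the calling thread
        Console.WriteLine("Done after a non-blocking one-second wait.");
    }
}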

Moreover, understanding the underlying mechanisms, like how C# handles reflection, can provide insights into optimizing your application’s behavior.

Wrapping Up with Benefits

Understanding the mechanics of a feature or tool is undeniably essential. Still, the actual value emerges when we translate these technical nuances into tangible benefits for developers and end users. Let’s dissect the advantages of mastering tools like Sleep in C# and related monitoring tools.

  1. Optimized performance: At the heart of every application lies the desire for speed and efficiency. By effectively controlling the flow of execution using methods like Sleep, developers can fine-tune the application’s responsiveness. This ensures that crucial processes get prioritized while less urgent tasks are artfully managed to avoid overburdening the system. The result? An application that’s nimble and agile, delivering a seamless experience.
  2. Enhanced user experience: Remember the last time you used a laggy application? Frustrating, right? Responsiveness is a cornerstone of user satisfaction. By managing thread execution effectively, developers can prevent those dreaded moments of unresponsiveness or lag. A smooth, consistent user experience fosters trust and encourages users to spend more time with your application, ultimately boosting engagement and loyalty.
  3. Resource efficiency: In the world of computing, resources aren’t infinite. Every byte of memory and every CPU cycle counts. Methods like Sleep empower developers to make judicious use of these resources. By orchestrating when specific tasks should run and when they should pause, you ensure that your application doesn’t waste valuable resources, leading to a more cost-effective and environmentally friendly operation.
  4. Predictable scalability: As your application grows, so do its demands. Developers can better predict how their applications will behave under increased loads by understanding and effectively using tools like Sleep. This foresight allows for more accurate scaling strategies, ensuring that as your user base grows, your application remains as sprightly as ever.
  5. Empowered debugging and troubleshooting: Let’s face it, bugs are an inevitable part of development. However, with tools like Netreo and Retrace, which excel in capturing critical metrics, logs and traces, developers can gain a bird’s-eye view of their application’s behavior. Diagnosing issues becomes a breeze when you can observe and analyze how your application behaves. And as the integration between these platforms progresses, we’re looking at an all-in-one observability platform that can revolutionize app debugging, while optimizing your entire IT infrastructure.

Conclusion

By weaving together the technical understanding of methods like Sleep with comprehensive monitoring tools, developers can craft applications that aren’t just functional but are efficient, user friendly, and resilient. In this digital transformation era, where user expectations are sky-high, such mastery can be the difference between a good and genuinely great application.

This post was written by Juan Reyes. As an entrepreneur, skilled engineer, and mental health champion, Juan pursues sustainable self-growth, embodying leadership, wit, and passion. With over 15 years of experience in the tech industry, Juan has had the opportunity to work with some of the most prominent players in mobile development, web development, and e-commerce in Japan and the US.

Log4net for .NET Logging: The Only Tutorial and 14 Tips You Need to Know
https://stackify.com/log4net-for-net-logging-the-only-tutorial-and-14-tips-you-need-to-know/

If you’ve been writing code for any reasonable amount of time, then it’s virtually impossible that you haven’t handled logging in any way, since it’s one of the most essential parts of modern, “real life” app development.

If you’re a .NET developer, then you’ve probably used some of the many famous logging frameworks available for use on this platform. Today’s post will cover one of these frameworks: log4net.

Getting started with logging—and also the concept of a logging framework—can be a daunting task. This post will feature a gentle but complete introduction to log4net.

After following this tutorial, you’ll have a firm grasp of what log4net is about, how to install and use it, and what the most important best practices are that you should try to adopt. Let’s get started.

What Is log4net and Why Should You Use It, or Any C# Logging Framework?

Before we dive into the nuts and bolts of how to use log4net, we need to understand what this thing is about.

So, what is log4net?

Log4net is a logging framework for the .NET platform. It’s definitely not the only one, but it’s one of the most popular frameworks out there.

A logging framework is a tool that can dramatically reduce the burden of dealing with logs. Logging is an essential aspect of software development.

Whether you’re developing a simple application or a complex enterprise-level system, effective logging can help you troubleshoot issues, monitor application behavior, and track events.

When you employ a framework, it takes care of many of the important yet annoying aspects of logging: where to log in, whether to append to an existing file or create a new one, the formatting of the log message, and more.

Another very important issue that a logging framework takes care of for you is log targets. By adopting a logging framework, it becomes easy to write your logs to different places by simply changing your configuration.

You can write your .NET logs to a file on disk, a database, a log management system, or potentially dozens of other places, all without changing your code.

Getting Started: How to Install log4net Using Nuget

1. Add log4net Package

The easiest way to add Log4net to your project is by using NuGet Package Manager. Open your Visual Studio project, right-click on your project in Solution Explorer, and select “Manage NuGet Packages.” Search for “Log4net” and install the package.

You can also run this quick command from the Package Manager Console:

PM> Install-Package log4net

2. Add log4net.config File

Add a new file to your project in Visual Studio called log4net.config and be sure to set a property for the file. Set Copy to Output Directory to Copy Always. This is important because we need the log4net.config file to be copied to the bin folder when you build and run your app.

To get you started quickly, copy this log4net config and put it in your new log4net.config file. This will log messages to the console and a log file. We will discuss more about logging appenders further down.

 <log4net>
    <root>
      <level value="ALL" />
      <appender-ref ref="console" />
      <appender-ref ref="file" />
    </root>
    <appender name="console" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %level %logger - %message%newline" />
      </layout>
    </appender>
    <appender name="file" type="log4net.Appender.RollingFileAppender">
      <file value="myapp.log" />
      <appendToFile value="true" />
      <rollingStyle value="Size" />
      <maxSizeRollBackups value="5" />
      <maximumFileSize value="10MB" />
      <staticLogFileName value="true" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %level %logger - %message%newline" />
      </layout>
    </appender>
  </log4net>

3. Tell log4net to Load Your Config

The next thing we need to do is tell log4net where to load its configuration so that it actually works. I suggest putting this in your AssemblyInfo.cs file.

You can find it under the Properties section in your project:

Add this to the bottom of your AssemblyInfo file.

[assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config")]

4. Log Something!

Now you can modify your app to log something and try it out!

    class Program
    {
        private static readonly log4net.ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

        static void Main(string[] args)
        {
            log.Info("Hello logging world!");
            Console.WriteLine("Hit enter");
            Console.ReadLine();
        }
    }

Log Appenders: What They Are and Which Ones You Need to Know

Appenders are how you direct where you want your logs sent. The most popular of the standard appenders are most likely the ConsoleAppender, File Appender, Database Appender, and RollingFileAppender.

Console Appender

The Console Appender is one of the simplest and most commonly used appenders. It’s primarily intended for development and debugging, as it outputs log messages to the console, making them easily visible during the development phase.

File Appender

The File Appender is crucial in production environments, where logs need to be persisted for analysis and auditing. It logs messages to text files, allowing for later inspection and tracking of application behavior.

Database Appender

For applications that require centralized log storage and compliance with auditing requirements, the Database Appender is invaluable. It logs messages to a database, ensuring that log data is stored securely and can be accessed for analysis.

RollingFile Appender

The RollingFile Appender is a specialized version of the file appender. It helps manage log files by creating new files when a certain size or time threshold is reached. This prevents log files from growing excessively and becoming unwieldy.

I would also try the DebugAppender if you want to see your log statements in the Visual Studio Debug window so you don’t have to open a log file.

If you are using a Console, check out the ColoredConsoleAppender.

Make Good Use of Multiple Log Levels and Filter by Them

Be sure to use Debug, Info, Warning, Error, and Fatal logging levels as appropriate within your code. Don’t log everything as Debug. Be sure to think about how you would be viewing the logs and what you want to see later when coding your logging statements.

You can specify in your log4net config which log4net logging levels you want to log.

This is really valuable if you want to specify only certain levels to be logged to a specific log appender or to reduce logging in production. This allows you to log more or less data without changing your code.

log4net levels:

  • All: Logs everything, regardless of its severity. Not commonly used as it generates a vast volume of log data, making it challenging to analyze and manage.
  • Debug: For detailed debugging information.
  • Info: General operational information. This information helps developers understand what the application is doing without cluttering the log with excessive detail.
  • Warn: Indicates potential issues that need attention.
  • Error: Logs errors that affect functionality but do not necessarily cause the application to terminate.
  • Fatal: Records critical errors that may terminate the application.
  • Off: Disables logging altogether. (don’t log anything)

Advanced Topics & 14 log4net Best Practices

1. Define Your LogManager Object as Static

Declaring any variable in your code has overhead. During past profiling sessions to optimize code, I noticed that the constructors on the LogManager object can use a lot of CPU.

Declare it as static and use this little trick so you don’t have to hard code the class type.

   private static readonly log4net.ILog log 
       = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

2. How to Enable log4net’s Own Internal Debug Logging

From time to time, you may have problems with a specific appender, or issues working with it.

To help resolve these issues, enable internal log4net logging via your web.config file.

<configuration>
   <appSettings>
      <add key="log4net.Internal.Debug" value="true"/>
   </appSettings>
</configuration>

You can then specify where the logging is written to.

<configuration>
...

<system.diagnostics>
    <trace autoflush="true">
        <listeners>
            <add 
                name="textWriterTraceListener" 
                type="System.Diagnostics.TextWriterTraceListener" 
                initializeData="C:\tmp\log4net.txt" />
        </listeners>
    </trace>
</system.diagnostics>

...
</configuration>

3. Do Not Send Your Logs to a Database Table with the AdoNetAppender

Trying to query logs in SQL is very difficult if you log any real volume. You are much better off sending your logs to Elasticsearch or a log management service that can provide full-text indexing and more functionality with your logs.

4. Do Not Send Emails on Every Exception

The last thing you want to do is send any sort of email from an appender. The emails either get ignored over time, or something starts throwing a lot of exceptions and your app ends up sending thousands of error emails. That said, there is an SmtpAppender if you really want to do this.

5. How to Send Alerts for Exceptions

If you want to send alerts about exceptions, send your exceptions to an error-tracking product, like Retrace, which is designed for this.

They can also de-dupe your errors so you can figure out when an error is truly new, track its history, and track error rates.

6. Send Your Logs to a Log Management System to View Them Across Servers

Capturing logs and logging them to a file on disk is great. But if you want to search your logs across multiple servers and applications, you need to send all of your logs to a central repository.

There are a lot of log management solutions that can help you with this, or you can even set up your own Elasticsearch cluster for it.

If you want to query all the files on disk, consider using VisualLogParser.

7. Use Filters to Suppress Certain Logging Statements

Filters can be configured to suppress specific log messages. Take a look at these examples.

Here’s how you can filter by the text on the log messages.

<filter type="log4net.Filter.StringMatchFilter">
  <stringToMatch value="test" /> <!-- Can filter by string or regex -->
</filter>

Here, you can filter by the log level:

<filter type="log4net.Filter.LevelRangeFilter">
   <levelMin value="INFO" />
   <levelMax value="FATAL" />
</filter>

8. You Can Make Your Own Custom log4net Appenders

If you want to do something that the standard appenders do not support, you can search online for one or write your own.

One example could be an appender for writing to Azure Storage. Once upon a time, we wrote one to send our logs to Azure Table storage to centralize them. We couldn’t really query them due to the lack of full-text indexing, but we could view them.

As an example of a custom appender, you can review the source code for our appender for sending logs to Retrace.

9. Customize Your Layout in Your Logs with log4net Pattern Layouts

You can modify your configuration to change what fields are outputting and in what format using pattern layouts.

    <appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender">
      <param name="File" value="stackify.log" />
      <param name="AppendToFile" value="true" />
      <rollingStyle value="Size" />
      <maxSizeRollBackups value="10" />
      <maximumFileSize value="10MB" />
      <staticLogFileName value="true" />
      <layout type="log4net.Layout.PatternLayout">
        <param name="ConversionPattern" value="%-5p %d{MM-dd hh:mm:ss.ffff}  [%thread]  %m%n" />
      </layout>
    </appender>

The layout above writes the level (%p), the current date and time (%d), the thread number (%thread), the message (%m) and a new line (%n). The -5 in %-5p sets the width of the field to 5 characters.

Here are some other notable fields you can log, although they can have a big performance impact on your app and would not be recommended for high-volume logging on a production application.

  • %method: name of the method where the log message was written
  • %stacktrace{level}: output a stack trace to show where the log message was written
  • %type: type of the caller issuing the log request. Most likely your class name
  • %line: the line number from where your logging statement was logged

A layout like this:

<layout type="log4net.Layout.PatternLayout">
        <param name="ConversionPattern" value="%-5p%d{ yyyy-MM-dd HH:mm:ss} – [%thread] %m method:%method %n stacktrace:%stacktrace{5} %n type:%type %n line: %line %n" />
</layout>

Produces a log message like this:

ERROR 2017-02-06 09:38:10 – [10] Error downloading web request method:ThrowWebException 
 stacktrace:Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly > System.AppDomain.ExecuteAssembly > System.AppDomain._nExecuteAssembly > ConsoleApplication1.Program.Main > ConsoleApplication1.Class1.ThrowWebException 
 type:ConsoleApplication1.Class1 
 line: 26 

10. Use the Diagnostic Contexts to Log Additional Fields

You can also log custom fields to help give some more context about the user, customer, or transaction related to the log statements.

The below example sets a custom property called customer. You can then modify your log4net pattern layout to include %property{customer} to output it in your logs.

            log4net.ThreadContext.Properties["customer"] = "My Customer Name";

            log.Debug("We are going to try and do a web request");

            try
            {
                Class1.ThrowWebException();
            }
            catch (Exception ex)
            {
                log.Error("Error trying to do something", ex);
            }
            log.Debug("We are done with the web request");

11. How to Correlate Log Messages by Web Request Transaction

Additionally, you can assign objects to the contexts to use what log4net calls “active property values.” When the log message is written, the object’s ToString() method is called, which can do something dynamically.

This can be used to write transaction IDs to help correlate messages to a web request or transaction!

        //Create our little helper class
        public class ActivityIdHelper
        {
            public override string ToString()
            {
                if (Trace.CorrelationManager.ActivityId == Guid.Empty)
                {
                    Trace.CorrelationManager.ActivityId = Guid.NewGuid();
                }

                return Trace.CorrelationManager.ActivityId.ToString();
            }
        }

In your global.asax or Startup.cs class, subscribe to an event for when a request first starts.

        public override void Init()
        {
            base.Init();
            this.Error += WebApiApplication_Error;
            this.BeginRequest += WebApiApplication_BeginRequest;
            this.EndRequest += WebApiApplication_EndRequest;

        }

        void WebApiApplication_BeginRequest(object sender, EventArgs e)
        {
            //set the property to our new object
            log4net.LogicalThreadContext.Properties["activityid"] = new ActivityIdHelper();

            log.Debug("WebApiApplication_BeginRequest");
        }

If you add %property{activityid} to your pattern layout, you can now see a transaction ID in your logs like so.

Your log messages may still look like spaghetti, but at least you can easily see which ones go together.

DEBUG 02-06 02:51:58.6347 – a69640f7-d47d-4aa4-99c9-13cfd9ab93c2 WebApiApplication_BeginRequest 
DEBUG 02-06 02:51:58.6382 – a69640f7-d47d-4aa4-99c9-13cfd9ab93c2 Starting KitchenAsync - Call redis
DEBUG 02-06 02:51:58.9315 – b8a3bcee-e82e-4298-b27f-6481b256b5ad Finished KitchenAsync
DEBUG 02-06 02:51:59.1285 – a69640f7-d47d-4aa4-99c9-13cfd9ab93c2 Call Webclient
DEBUG 02-06 02:51:59.1686 – 54218fab-bd1b-4c77-9ff8-ebef838dfb82 WebApiApplication_BeginRequest
DEBUG 02-06 02:51:59.1746 – 54218fab-bd1b-4c77-9ff8-ebef838dfb82 Starting KitchenAsync - Call redis
DEBUG 02-06 02:51:59.4378 – a69640f7-d47d-4aa4-99c9-13cfd9ab93c2 Finished KitchenAsync
DEBUG 02-06 02:51:59.6450 – 54218fab-bd1b-4c77-9ff8-ebef838dfb82 Call Webclient
DEBUG 02-06 02:51:59.9294 – 54218fab-bd1b-4c77-9ff8-ebef838dfb82 Finished KitchenAsync

12. How to Log ASP.NET Request Details

You could use the same strategy as above to dynamically grab ASP.NET request info to add to your log message.

        public class WebRequestInfo
        {
            public override string ToString()
            {
                return HttpContext.Current?.Request?.RawUrl + ", " + HttpContext.Current?.Request?.UserAgent;
            }
        }

        void WebApiApplication_BeginRequest(object sender, EventArgs e)
        {
            log4net.LogicalThreadContext.Properties["activityid"] = new ActivityIdHelper();
            log4net.LogicalThreadContext.Properties["requestinfo"] = new WebRequestInfo();

            log.Debug("WebApiApplication_BeginRequest");
        }

13. How to Do Structured Logging, or Log an Object or Properties with a Message

By default, you can log an object and log4net will serialize it with its default renderers.

log.Debug(new {color="red", int1 = 1});

Output:

DEBUG 2017-02-06 15:07:25 – [8] { color = red, int1 = 1 }

But what if you want to log your entire log message as JSON?

There are several NuGet packages related to log4net and JSON, but the support and docs for all of them seem a little sketchy.

I would recommend just making your own JsonLayout class that does it. There is a good sample on GitHub. You could then control exactly how you log the JSON and which fields you log.

Output from the GitHub JsonLayout:

{
	"processSessionId" : "225ba696-6607-4abc-95f6-df8e0438e898",
	"level" : "DEBUG",
	"messageObject" : "Finished KitchenAsync",
	"renderedMessage" : "Finished KitchenAsync",
	"timestampUtc" : "2017-02-06T21:20:07.5690494Z",
	"logger" : "WebApp2.Controllers.TestController",
	"thread" : "69",
	"exceptionObject" : null,
	"exceptionObjectString" : null,
	"userName" : "IIS APPPOOL\\WebApp2",
	"domain" : "/LM/W3SVC/1/ROOT/WebApp2-10-131308895921693643",
	"identity" : "",
	"location" : "WebApp2.Controllers.TestController+d__27.MoveNext(C:\\BitBucket\\PrefixTests\\WebApp2\\Controllers\\TestController.cs:477)",
	"pid" : 14428,
	"machineName" : "LAPTOP-1UJ70V4E",
	"workingSet" : 352481280,
	"osVersion" : "Microsoft Windows NT 10.0.14393.0",
	"is64bitOS" : true,
	"is64bitProcess" : true,
	"properties" : {
		"requestinfo" : {},
		"activityid" : {},
		"log4net:UserName" : "IIS APPPOOL\\WebApp2",
		"log4net:Identity" : "",
		"log4net:HostName" : "LAPTOP-1UJ70V4E"
	}
}
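
If you'd rather roll a smaller layout of your own than use the full GitHub sample, a minimal sketch might look like this; it assumes log4net's LayoutSkeleton base class and the Newtonsoft.Json package, so treat the field list as a starting point:

using System.IO;
using log4net.Core;
using log4net.Layout;
using Newtonsoft.Json;

public class SimpleJsonLayout : LayoutSkeleton
{
    public override void ActivateOptions()
    {
        // Nothing to configure in this sketch
    }

    public override void Format(TextWriter writer, LoggingEvent loggingEvent)
    {
        // Pick out the fields you care about and serialize them as one JSON line
        var entry = new
        {
            level = loggingEvent.Level?.DisplayName,
            timestampUtc = loggingEvent.TimeStamp.ToUniversalTime(),
            logger = loggingEvent.LoggerName,
            thread = loggingEvent.ThreadName,
            renderedMessage = loggingEvent.RenderedMessage,
            exception = loggingEvent.GetExceptionString()
        };

        writer.WriteLine(JsonConvert.SerializeObject(entry));
    }
}

You would then point the <layout> element of your appender at this class, the same way the PatternLayout examples above are wired up.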

If you want to really get the value of structured logging, you will want to send your logs to a log management tool that can index all the fields and enable powerful searching and analytics capabilities.

Learn more here: What is structured logging and why developers need it.

14. How to View log4net C# Logs by ASP.NET Web Request

Log files can quickly become a spaghetti mess of log messages. Especially with web apps that have lots of AJAX requests going on that all do logging.

I highly recommend using Prefix, Stackify’s FREE .NET Profiler to view your logs per web request, along with SQL queries, HTTP calls, and much more.


Start Logging ASAP

Logging is an essential part of modern software development. Deploying a piece of software to a production environment without any type of logging would be unthinkable nowadays. Doing so would amount to taking a walk in a huge city, during rush hour, blindfolded.

Software is a very complex thing. When you release an application, and deploy it to a (potentially) unknown environment, you can’t know for sure that everything is going to work as intended.

If something goes wrong—and it will—logging is one of the few ways you can use to “go back in time,” understand the problem, and fix it.

Conclusion

Today’s post covered the log4net logging framework. You learned what a logging framework is, and how it can relieve the burden you might face as a developer having to come up with a logging strategy.

We also shared a list of best practices and tips you can start employing right away to make your journey with log4net easier.

Thanks for reading, and see you next time.

C# Delegates: Definition, Types & Examples https://stackify.com/c-delegates-definition-types-examples/ Wed, 01 Nov 2023 22:38:12 +0000 https://stackify.com/?p=42452 The C# delegate is an essential “construct” in the C# programming language. Delegates are essential for event handling, LINQ queries, asynchronous programming and more. And you can, of course, make use of delegates to make your code simpler and more concise.

This post offers you a guide to this incredibly useful tool in C#. By the end of the post, you’ll have learned:

  • what a C# delegate is
  • what types it has
  • how a C# delegate works
  • why you need C# delegates in the first place

Let’s get started.

What Is a Delegate In C#?

A C# delegate is an object that represents a method. The C# delegate allows you to treat a method as a value, assigning the method to a variable, passing it to other methods as parameters, adding it to a collection, and so on. Delegates are similar—regarding their behavior—to what some other languages call a function pointer, the difference being that delegates are fully object-oriented.

As a next step, let’s understand why you’d need delegates. 

Why Are Delegates Useful?

A common scenario in programming is when the part of the code that knows the action that needs to be executed isn’t the same part of the code that performs the execution. In such situations, you need a way to encapsulate an action inside an object and pass it around.

How to solve this issue? C# delegate to the rescue! By instantiating a delegate, you can express an action as an object and hand that object over (or delegate it) to the code that’s actually able to execute the action.

In C#, delegates are particularly useful for event handling. Through delegates, you subscribe to an event. Delegates are also essential in LINQ, which is honestly a big part of what makes programming in C# so pleasurable.
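
To make the event-handling connection concrete, here's a minimal sketch; all of the names are hypothetical. A delegate type defines the handler's signature, a class exposes an event of that type, and subscribers attach methods or lambdas to it:

using System;

public delegate void OrderShippedHandler(string orderId);

public class OrderService
{
    // The event is backed by the delegate type declared above
    public event OrderShippedHandler OrderShipped;

    public void Ship(string orderId)
    {
        // Raising the event invokes every subscribed handler, if there are any
        OrderShipped?.Invoke(orderId);
    }
}

public static class Demo
{
    public static void Run()
    {
        var service = new OrderService();
        // Subscribing attaches a lambda that matches the delegate signature
        service.OrderShipped += id => Console.WriteLine($"Order {id} shipped.");
        service.Ship("A-42");
    }
}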

C# Delegate Basic Examples

Let’s declare a delegate type that represents a function that takes a string as a parameter and returns an integer:

public delegate int StrToInt(string s);

The line above is a delegate declaration. We essentially defined a new type. Now, we can create delegate instances that match that declaration. Suppose we have the following method:

public static int ConvertToNumber(string s)
{
	if (int.TryParse(s, out int result))
		return result;
	return 0;
}

Let’s say we also have this method:

public static int GetLength(string text)
{
    return text.Length;
}

Not the two most useful methods ever written, but both fit the bill. That is to say, both match the definition of our delegate, so we can assign them to instances of that type:

StrToInt myAction = ConvertToNumber;
StrToInt otherAction = GetLength;

Now, we can use the delegate instances to execute the actions:

var someText = "Hello World!";
var length = otherAction.Invoke(someText);
Console.WriteLine($"The text '{someText}' has a length of {length}.");

The code above uses the Invoke method to run the method encapsulated by the delegate instance. You can also just call a delegate the same way you would a normal method:

var numberAsText = "10";
var number = myAction(numberAsText);

Instantiating a Delegate

In the examples above, we’ve assigned “normal” methods to the delegate instances. These assignments work just fine and might be a good option if you already have the right method defined. But sometimes you don’t, and having a concise syntax to quickly instantiate a delegate instance can be a lifesaver. Anonymous methods come in handy in this scenario:

StrToInt myAction = delegate(string s) {  return s.Length; };

As you can see, it’s possible to use the delegate keyword to create a function on the fly and assign it to the delegate instance. But why stop there? We can go a step further:

StrToInt myAction = s => s.Length;

In the example above, we use a lambda expression to define a function that gets converted to a delegate instance. You’re probably used to seeing lambdas around, as they’re certainly the most common way to instantiate delegates.

Types of Delegates

You’ve seen the what and why of C# delegates, and even some examples. Let’s now go deeper into the “how.”

Single cast

You can leverage different types of delegates in C#. The ones you’ve seen so far are single-cast: delegates that point to a single method. Having a delegate hold two or more methods is possible and often quite valuable.

Multicast

Delegates that hold more than one method are called multicast delegates. You can use the plus (+) operator to add more methods to a delegate instance, and the minus (-) operator to remove a method. When you invoke the delegate instance, it executes all of the methods in its invocation list in the order in which they were added.

Let’s see a multicast delegate in action. First, consider the following delegate declaration:

public delegate void DisplaySomething(int number);

Let’s now instantiate this type and use a lambda expression to create an anonymous method that displays a message showing whether the number is even or odd:

// yep, I know about bitwise operators, wrote like this for readability
DisplaySomething displayEvenOrOdd = x => Console.WriteLine(x % 2 == 0 ? "even" : "odd");

Now we’ll create another instance that takes a number and displays its value in binary:

DisplaySomething displayBinary = x => Console.WriteLine(Convert.ToString(x, 2));

Let’s now create yet another instance that displays as many asterisks as the number informed:

DisplaySomething displayNAsteriks = x => Console.WriteLine(new String('*', x));

Finally, let’s combine all three into a multicast delegate and invoke it:

DisplaySomething displayAll = displayEvenOrOdd + displayBinary + displayNAsteriks;
displayAll(10);

And voilà:

even
1010
**********

Now, let’s remove the “display binary” function from the invocation list and invoke the multicast delegate again:

displayAll -= displayBinary;
displayAll(9);

And this is what we get:

odd
**********

Generic Delegates

As it turns out, working with delegates can be even easier than what we’ve shown you up to now. That’s because .NET offers some built-in generic delegates that can spare you the work of having to declare your own delegate types. Let’s explore them right now.

Func

The first generic delegate in our list is Func<T, TResult>. We can use it to quickly define a delegate that takes a parameter (expressed by T) and returns a value (defined by TResult). Recall our first declaration example:

public delegate int StrToInt(string s);

Using Func, we wouldn’t need to declare that type and could’ve gone straight away with the instantiation:

Func<string, int> myAction = word => word.Length;

How to use this? Same as before: use the Invoke method, or simply call it the same way you call a regular method:

var result = myAction("C# is awesome!");

You can, of course, declare Func delegates with more than one parameter. For instance, one that takes two ints and returns an int:

Func<int, int, int> sum = (a, b) => a + b;

What about a method with no argument but that returns a value? No problem:

Func<DateTime> whatTimeIsIt = () => DateTime.Now;

Action

What if you need to declare a void method? Then Action<T> is your friend:

Action<int> displayEvenOrOdd = x => Console.WriteLine(x % 2 == 0 ? "even" : "odd");

And to declare a void method with no arguments, simply use Action with no type parameters:

Action displayHelloWorld = () => Console.WriteLine("Hello World!");

Predicate

Finally, let’s cover Predicate<T>. In a nutshell, this generic delegate is for methods that perform a kind of check, based on some criteria, and return either true or false. Suppose you have a list of numbers:

var numbers = Enumerable.Range(1, 10).ToList();

You want to get a list with just the even numbers. You can use a Predicate to define a method that does the checking:

Predicate<int> isEven = x => x % 2 == 0;

Then, as a next step, you could use Func to define a function that does the filtering:

 Func<List<int>, Predicate<int>, List<int>> filter = (input, predicate) =>
{
	var result = new List<int>();
	foreach (var i in input)
	{
		if (predicate(i)) result.Add(i);
	}
	return result;
};

Of course, in real life you wouldn’t write code like this and would use the Where LINQ extension method instead. Interestingly, Where takes a Func<T, bool> rather than a Predicate<T>.
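
For comparison, here is the same filtering with Where, assuming the numbers list from above and a using System.Linq directive:

var evens = numbers.Where(x => x % 2 == 0).ToList();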

Don’t Delegate Your C# Learning To Anyone Else

In this post, we’ve offered you a comprehensive introduction to C# delegates. As you’ve seen, they’re a powerful part of programming in C#, which can help you write simpler, more concise code.

For instance, a few days ago, I was struggling with code duplication in a codebase at work. I had the same exception-handling logic popping up in a lot of places. That’s when I decided to leverage the power of delegates. I extracted the exception handling code to a dedicated method and then used Func in order to pass the action to be executed to that method. While just an example from work, I’m sure you’ll find plenty of opportunities to use delegates to improve your code.

Before departing, a final suggestion. If you want to continue your C# learning journey, the Stackify blog is a great place to hang out. With plenty of resources for you to improve your .NET developer skills, grab a coffee, look around, learn and enjoy!

This post was written by Carlos Schults. Carlos is a consultant and software engineer with experience in desktop, web, and mobile development. Though his primary language is C#, he has experience with a number of languages and platforms. His main interests include automated testing, version control, and code quality.

What is an Unhandled Exception and How to Find Them https://stackify.com/refresh-what-is-an-unhandled-exception-and-how-to-find-them/ Fri, 13 Oct 2023 05:26:38 +0000 https://stackify.com/?p=42318 In the world of programming, exceptions are inevitable. They represent unexpected or exceptional events that can occur during the execution of a program. While some exceptions might be anticipated and handled gracefully, others might be unexpected, leading to application crashes or unexpected behavior. This guide delves into the nuances of exceptions in C#, focusing on the importance of handling an unhandled exception and the tools available for the same.


What is an Exception?

An exception represents a runtime error or an unexpected event that can occur during the execution of a program. In C#, an exception is a known type of error. When the application code does not handle these errors properly, we term it as an “unhandled exception.”

For instance, consider a scenario where you attempt to open a file on your disk, but the file doesn’t exist. In such cases, the .NET Framework will throw a FileNotFoundException. This situation exemplifies a foreseeable problem that can be addressed within the code.

What is an Unhandled Exception?

An unhandled exception arises when developers fail to anticipate or handle potential exceptions. Look at the code sample below. Here, the developer presumes that a valid file path will be provided in “args.” The code then proceeds to load the file’s contents. If no file path is given or if the specified file is missing, the code will throw exceptions, leading to unhandled exceptions.

static void Main(string[] args)
{
    string fileContents = File.ReadAllText(args[0]);

    //do something with the file contents
}

This code can easily throw several types of exceptions and lacks exception handling best practices. Here are some of the most common best practices to follow.

Best Practices for Exception Handling in C#

a. Use Specific Exceptions: Instead of using the general `Exception` class, it’s better to catch more specific exceptions. This allows for more precise error handling and debugging.

b. Avoid Catching Exceptions You Can’t Handle: If you can’t resolve an exception, it’s often better to let it propagate up the call stack to a level where it can be adequately addressed.

c. Use `finally` Blocks for Cleanup: The `finally` block ensures that resources get released, regardless of whether an exception occurs.

d. Don’t Rethrow Exceptions Incorrectly: Use `throw;` instead of `throw ex;` to preserve the original exception stack trace.

e. Log Exceptions: Always log exceptions for future analysis. This can be invaluable for debugging and identifying recurring issues.
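
To make a few of these concrete, here's a minimal sketch that catches a specific exception type, cleans up in a finally block and rethrows with throw; to preserve the stack trace; the class name and file handling are hypothetical:

using System;
using System.IO;

public static class ConfigLoader
{
    public static string Load(string path)
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader(path);
            return reader.ReadToEnd();
        }
        catch (FileNotFoundException ex)
        {
            // Log the specific failure, then rethrow without losing the stack trace
            Console.Error.WriteLine($"Config file missing: {ex.FileName}");
            throw;
        }
        finally
        {
            // Cleanup runs whether or not an exception occurred
            reader?.Dispose();
        }
    }
}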

You can find a more in-depth exploration on C# Exception best practices in this article.

Why Do We Need to Handle Exceptions?

Exception handling is crucial for several reasons:

  1. User Experience: Unhandled exceptions can lead to application crashes, affecting the user experience. A gracefully handled exception can provide informative feedback to the user rather than abruptly terminating the program.
  2. Data Integrity: Exceptions can occur during data processing or transactions. Failing to handle such exceptions can lead to data corruption or loss.
  3. Debugging and Maintenance: Proper exception handling can provide detailed error logs, making it easier for developers to debug and maintain the application.
  4. Resource Management: Unhandled exceptions can lead to resource leaks, such as open database connections or file handles that never get closed.
  5. Security: Revealing detailed error information, especially in web applications, can expose vulnerabilities. Proper exception handling can prevent exposing sensitive information.

Types of Unhandled Exceptions in C#

In C#, there are several exceptions that, if not handled, can lead to significant issues in applications. Some of the common unhandled exceptions include:

  • FileNotFoundException: Triggered when trying to access a file that doesn’t exist.
  • NullReferenceException: Occurs when attempting to use an object reference that is null.
  • IndexOutOfRangeException: Thrown when trying to access an array or collection with an index that’s out of its bounds.
  • DivideByZeroException: Arises when there’s an attempt to divide by zero.
  • InvalidOperationException: Occurs when a method call is invalid for the object’s current state.

This is just a small subset of potential unhandled exceptions. Developers must be vigilant and incorporate robust exception handling mechanisms to catch and handle such scenarios.

How to Catch Unhandled Exceptions in C#

The .NET Framework offers events that catch unhandled exceptions. Register for these events once when your application starts. In ASP.NET, use the Startup class or Global.asax. For Windows applications, insert the registration in the first few lines of the Main() method.

static void Main(string[] args)
{
  Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException);
  AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);

  string fileContents = File.ReadAllText(args[0]);
  //do something with the file contents
}

static void Application_ThreadException(object sender, ThreadExceptionEventArgs e)
{
  // Log the exception, display it, etc
  Debug.WriteLine(e.Exception.Message);
}

static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
  // Log the exception, display it, etc
  Debug.WriteLine((e.ExceptionObject as Exception).Message);
}

MORE: AppDomain.UnhandledException Event (MSDN)

View Unhandled Exceptions in Windows Event Viewer

If your application encounters unhandled exceptions, the Windows Event Viewer may log them under the “Application” category. This feature assists developers in diagnosing sudden application crashes.

The Windows Event Viewer might log two entries for the same exception: one as a .NET Runtime error and another as a generic Windows Application Error.

From the .NET Runtime:

Application: Log4netTutorial.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.IndexOutOfRangeException
   at Log4netTutorial.Program.Main(System.String[])

Logged under Application Error:

Faulting application name: Log4netTutorial.exe, version: 1.0.0.0, time stamp: 0x58f0ea6b
Faulting module name: KERNELBASE.dll, version: 10.0.14393.953, time stamp: 0x58ba586d
Exception code: 0xe0434352
Fault offset: 0x000da882
Faulting process id: 0x4c94
Faulting application start time: 0x01d2b533b3d60c50
Faulting application path: C:\Users\matt\Documents\Visual Studio 2015\Projects\Log4netTutorial\bin\Debug\Log4netTutorial.exe
Faulting module path: C:\WINDOWS\System32\KERNELBASE.dll
Report Id: 86c5f5b9-9d0f-4fc4-a860-9457b90c2068
Faulting package full name: 
Faulting package-relative application ID: 

Also read: https://stackify.com/the-linq-join-operator-a-complete-tutorial/

Custom Exceptions in C#

Sometimes, the built-in exception types don’t cater to specific needs. In such cases, C# allows developers to define custom exceptions. Custom exceptions can provide more context or handle domain-specific errors.

To create a custom exception:
1. Derive from the `Exception` class.
2. Provide a public constructor to initialize the exception.
3. (Optional) Override the `ToString` method to customize the error message.
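
As a minimal sketch, the class below follows those three steps; the name MyException and the ErrorCode property are hypothetical, chosen to line up with the filter example in the next section:

using System;

public class MyException : Exception
{
    public int ErrorCode { get; }

    public MyException(string message, int errorCode)
        : base(message)
    {
        ErrorCode = errorCode;
    }

    public override string ToString()
    {
        return $"MyException (code {ErrorCode}): {Message}";
    }
}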


Exception Filters in C# 6 and Later

Starting with C# 6, exception filters allow developers to specify a condition with the `catch` clause. The catch block executes only if the condition evaluates to `true`.

Example:

try
{
    // Code that might throw exceptions
}
catch (MyException ex) when (ex.ErrorCode == 404)
{
    // Handle only if ErrorCode is 404
}

Performance Implications of Exceptions

While exceptions are a powerful tool for managing errors, they can impact performance if misused. It’s essential to understand that exception handling should not be a mechanism for regular control flow in your application. Throwing exceptions incurs a performance cost, so use them judiciously.
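
To make the control-flow point concrete, prefer a Try-style API over catching an exception when bad input is an expected, frequent case; a small sketch:

string input = "not-a-number";

// Exception-based parsing pays the cost of a throw on every bad input:
// try { var n = int.Parse(input); } catch (FormatException) { /* handle */ }

// TryParse reports failure through its return value instead:
if (int.TryParse(input, out int number))
{
    Console.WriteLine(number);
}
else
{
    Console.WriteLine("Invalid number");
}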

Handling Exceptions in Multithreaded and Asynchronous Code

With the rise of asynchronous programming, especially with `async` and `await` in C#, handling exceptions becomes a bit more nuanced. Exceptions thrown in an asynchronous method need to be caught in the calling method when the task is awaited.
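
As a minimal sketch (the Downloader class and URL handling are hypothetical), note that the exception thrown inside the async call only surfaces where the task is awaited, so that's where the try/catch belongs:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class Downloader
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task RunAsync(string url)
    {
        try
        {
            // The exception thrown inside GetStringAsync is observed here, at the await
            string html = await Client.GetStringAsync(url);
            Console.WriteLine($"Downloaded {html.Length} characters");
        }
        catch (HttpRequestException ex)
        {
            Console.Error.WriteLine($"Download failed: {ex.Message}");
        }
    }
}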

Find Unhandled Exceptions with Retrace

Retrace boasts impressive error monitoring capabilities. It can automatically collect all .NET exceptions that your application encounters, including unhandled exceptions and all thrown exceptions or first-chance exceptions.

ViewBag 101: How It Works, When It’s Used, Code Examples, and More https://stackify.com/refresh-viewbag-101-how-it-works-when-its-used-code-examples-and-more/ Thu, 12 Oct 2023 17:14:28 +0000 https://stackify.com/?p=42343 ViewBag is a property – considered a dynamic object – that enables you to share values dynamically between the controller and view within ASP.NET MVC applications. Let’s take a closer look at ViewBag, when it’s used, some limitations and other possible options to consider.

Ways to Pass Data from the Controller to the View

In the case of ASP.NET MVC, you have three ways to pass data from the controller to the view. These are:

  1. ViewBag
  2. ViewData
  3. TempData

ViewBag and ViewData are highly similar in the way they pass data from the controller to the view, and both are considered a way to communicate between the view and the controller within a single server call. Both of these objects work well when you use external data.

How does one differ from the other? 

ViewData is a dictionary of objects that you can put data into, and that data is then accessible to the view. It is based on the ViewDataDictionary class. ViewBag is a dynamic wrapper around ViewData, so you can assign dynamic properties to it, making it more flexible.

With ViewData, you need to typecast values and check for null; with ViewBag, you do not need to typecast complex data types.
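
A minimal sketch of the difference, as it would appear inside a controller action and its view (the User class is hypothetical):

// In the controller, both lines store the same object:
ViewData["CurrentUser"] = new User { Name = "Jane" };
ViewBag.CurrentUser = new User { Name = "Jane" };

// In the view, ViewData needs a cast (and ideally a null check); ViewBag does not:
// var userFromViewData = ViewData["CurrentUser"] as User;
// var userFromViewBag = ViewBag.CurrentUser;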


What is ViewBag?

The Microsoft Developer Network writes that the ViewBag property allows you to share values dynamically to the view from the controller. As such, ViewBag is considered a dynamic object without pre-set properties.

The syntax for using it in C# is:

public object ViewBag { get; }

For C++:

public:
property Object^ ViewBag {
	Object^ get();
}

For F#:

public object ViewBag { get; }

For VB:

Public ReadOnly Property ViewBag As Object

You can define whatever properties you want simply by assigning them, and you need to use the same property names to retrieve those values in the view.

Here is a nifty example for that. In the controller, you can set up properties such as:

public ActionResult Index() //We'll set the ViewBag values in this action
{
    ViewBag.Title = "Put your page title here";
    ViewBag.Description = "Put your page description here";

    ViewBag.UserNow = new User()
    {
        Name = "Your Name",
        ID = 4,
    };

    return View();
}

To display these properties in view, you would need to use the same property names.

<h3>@ViewBag.Title</h3>

<p>@ViewBag.Description</p>

Your name:

<div>
<dl>
<dt>Name:</dt>
<dd>@ViewBag.UserNow.Name</dd>
<dt>ID:</dt>
<dd>@ViewBag.UserNow.ID</dd>
</dl>
</div>

A step-by-step demonstration is available at C# Corner.

When is It Used?

ViewBag lets you share values dynamically. Some values are not known before the program runs and only become available at runtime, and this is where ViewBag shines, because you can put just about anything you want into it.

You can use ViewBag objects for transferring small amounts of data from the controller to the view, in cases such as:

  • Shopping carts
  • Dropdown lists options
  • Widgets
  • Aggregated data

ViewBag is a great way to access data that you use but that may reside outside the data model. ViewBag is easy to use, because it’s implemented as a property of both controllers and views.

There is also one instance when ViewBag is mandatory, and that is when you are specifying a page title on any given view. For instance:

@{

ViewBag.PageTitle = "Page title will be displayed on the browser tab";

}

Some Limitations

As you may have guessed, ViewBag is not suitable for bigger sets of data or more complicated ones. For instance, complex relational data, big sets of aggregate data, data coming from a variety of sources and dashboards.

Also, there are some potential problems you may encounter. Because of ViewBag’s dynamic nature, you can create property names according to your liking, and dynamic properties, unlike normal types, are not checked during compilation. What does this mean? You will not find out whether the name you used in the controller matches the name you specified in the view until the code runs.

For instance, you have assigned the following properties:

public ActionResult Index() //We'll set the ViewBag values in this action
{
    ViewBag.Titl = "Enter your title here";

    return View();
}

Then use this code for the view:

<h3>@ViewBag.Title</h3>

In this example, you assigned the property Titl in the controller, which is a perfectly valid name for a dynamic property. But the view is looking for the name “Title.” When you compile your program, you will not be alerted to this mismatch. Imagine having a lot of names to check manually!

ViewBag vs. TempData

TempData, on the other hand, is also a dictionary, based on the TempDataDictionary class. TempData keeps information only until it is read in a subsequent request, which makes it perfect for redirects and a few other scenarios because of its temporary nature.

ViewModel

As discussed in the limitations section, there are several types of data where you cannot use ViewBag, primarily those big and complex data sets. For these types of data, you can use ViewModel if you are using ASP.NET MVC.

ViewModel also has a distinct advantage in that it is strongly typed. This means that unlike in ViewBag, ViewModel does not confuse one type with another type. For example, using ViewBag, the compiler will not detect an error when you use a DateTime as if it were a string.

public ActionResult Index() //We'll set the ViewBag values in this action
{
    ViewBag.PageCreationDate = DateTime.Now;
    ViewBag.LastIndexOfP = ViewBag.PageCreationDate.LastIndexOf('p');
    return View();
}

The ViewBag property, LastIndexOfP in this case, is trying to get ‘p’ based on the PageCreationDate. In this example, you are treating a DateTime as a string, and compiling your program will not detect that error. Using ViewModel, you have better type protection, and errors such as this one are detected by the compiler.
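
For contrast, here's a minimal sketch of the strongly typed ViewModel approach; the PageViewModel class and its properties are hypothetical. A misspelled property name, or treating the DateTime as a string, now fails at compile time instead of at runtime:

public class PageViewModel
{
    public string Title { get; set; }
    public DateTime PageCreationDate { get; set; }
}

public ActionResult Index()
{
    var model = new PageViewModel
    {
        Title = "Put your page title here",
        PageCreationDate = DateTime.Now // misspelling this property would not compile
    };

    return View(model);
}

In the view, you would declare @model PageViewModel at the top and read the values with @Model.Title and @Model.PageCreationDate.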


Summary

In general, ViewBag is a way to pass data from the controller to the view. It is of type object and is exposed as a dynamic property on the controller base class. It works similarly to ViewData but is known to be a bit slower, and it was introduced in ASP.NET MVC 3.0 (ViewData was introduced in MVC 1.0).

The use of ViewBag has some limitations in that the compiler cannot check dynamic types and you would need to run your program first to find the errors when you use it. As such, using ViewModel is highly recommended in some instances.

ViewBag is a way to pass data from the controller to the view, but it does have a few limitations. You can build better when you understand ViewBag, when to use it, and what other options are available that may be better suited for certain use cases – such as ViewData (which is a bit faster) and ViewModel (when you need to check dynamic types).

If you want to write better ASP.NET applications, Stackify’s Prefix is a lightweight ASP.NET profiler that can help you write better software. And, if you want to take it up a notch, our APM tool offers the best in ASP.NET application monitoring. Check out our other tutorials for doing more with ASP.NET, such as understanding ASP.NET performance for reading incoming data, and find out what we learned while converting from ASP.NET to .NET Core in this post.


What Is NullReferenceException? Object reference not set to an instance of an object. https://stackify.com/nullreferenceexception-object-reference-not-set/ Fri, 29 Sep 2023 11:15:00 +0000 https://stackify.com/?p=10673

“Object reference not set to an instance of an object.” Let anyone who never struggled with this error message as a beginner C# / .NET programmer cast the first stone.

This infamous and dreaded error message appears when you get a NullReferenceException. This exception is thrown when you attempt to access a member (for instance, a method or a property) on a variable that currently holds a null reference.

But what does null reference exactly mean? What exactly are references? And how can you stop the NullReferenceException occurring in your code? Let’s take a look.

We’ll start with fundamentals by briefly explaining what references are in C# / .NET. After that, you’ll learn what null references are.

Following our exploration of theoretical definitions, we will dig into a more hands-on approach. This will guide you on preventing the occurrence of the NullReferenceException in real-world applications.

What Are References?

We already know that a null reference causes the NullReferenceException. But in what way does it differ from a non-null reference?

In .NET, data types are categorized into two groups: value types and reference types. A value type variable stores the actual value, whereas a reference type variable holds a reference pointing to the location of the object in memory. This reference functions like a link or shortcut, providing access to a web page or file on your computer, helping you understand where the object resides.

Types such as int (and the other numerical primitive types), DateTime, and bool are value types. In short, structs are value types, and classes are reference types.

So, a reference is what a variable of a reference type contains. Variables can become empty, which we call a null reference: a reference that doesn’t point to any object. When you try calling a method or another member on an empty variable, you get the NullReferenceException.
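
A minimal sketch of the distinction:

int count = 0;        // value type: the variable stores the value itself
// int other = null;  // won't compile: a plain value type can't hold null (int? can)

string title = null;  // reference type: the variable holds a null reference
// title.ToUpper();   // would throw NullReferenceException at runtime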

Understanding the NullReferenceException

Null reference errors are responsible for a good percentage of all application bugs. They usually arise from a lack of logic to verify that objects hold valid values before using them. Here are some ways a NullReferenceException can occur:

Invoking a Method on a Null Object Reference

If the variable “text” is passed in as null to the method MyMethod, the following code will throw a NullReferenceException. You cannot invoke the ToUpper() method on a string that is null.

public void MyMethod(string text)
{
    //Throws exception if text == null
    if (text.ToUpper() == "Hello World")
    {
        //do something
    }
}

You can also get null reference exceptions because any object type can be null, not just strings. For example, the code below declares a SqlCommand variable but never assigns it an instance.

A null string might be something you can just ignore and move on from. But failing to run a SQL query would be a serious problem for your application. Sometimes, especially with objects like SqlCommand, a null reference indicates a serious issue.

SqlCommand command = null;
//Exception! Object reference is not set to an instance of an object
command.ExecuteNonQuery();

Simple Examples of Null Values Causing Problems

Some of the most common causes are settings, database calls or API-type calls not returning expected values. For example, you add a new field to your database and don’t populate default values for every record. The code doesn’t account for the fact that it might query one of those records, in which case the new field is null. KA-BOOM: Object reference not set to an instance of an object.

How to Avoid the NullReferenceException?

Use the Null Conditional Operator to Avoid NullReferenceExceptions

The null conditional operator is one of the best additions to C#. Instead of writing many “variable != null” checks, use “?.” to return null instead of throwing an exception. This will make more sense with some examples below:

text?.ToUpper(); //from previous example, would return null
int? length = customerList?.Length; // null if customerList is null  
Customer first = customerList?[0]; // null if customerList is null 
int? count = customerList?[0]?.Orders?.Count(); // null if customerList, the first customer, or Orders is null

Use Null Coalescing to Avoid NullReferenceExceptions

Another great feature is null coalescing, or the “??” operator. It works great for providing a default value for a null variable and works with all nullable data types.

The following code throws an exception if we don’t use null coalescing. Adding “?? new List<string>()” prevents the “Object reference not set to an instance of an object” exception.

List<string> values = null;
foreach (var value in values ?? new List<string>())
{
    Console.WriteLine(value);
} 
 

Avoiding NullReferenceException With C# 8.0’s Nullable Types

Bugs with null references happen because, in C#, any reference type object can be null at any time. What if, as a developer, you could ensure that a specific string will never be null?

What if the compiler prevented the accidental assignment of null to a variable? Sounds amazing? Good news, then: this is a real feature of the eighth version of C# called, obviously, nullable types.

The feature works ingeniously and powerfully by redefining the reference types as non-nullable by default—as many argue they should’ve been from the start.

To understand better, take a look at the following example:

static int Add(string numbers)
{
    return numbers.Split(",").Select(int.Parse).Sum();
}

In the pre-8.0 version of C#, the code above is dangerous. The numbers variable could be null, which would cause a NullReferenceException when trying to use the Split method.

The feature of nullable reference types in C# 8.0 would ensure your safety. The variable could never be null, and the call to the Split method would never throw an exception. Any attempt to pass null to the Add method would result in a compilation error.

But what if you wished to permit null values in numbers? In that case, you’d just have to add a question mark after the type’s name:

static int Add(string? numbers)
{
    return numbers.Split(",").Select(int.Parse).Sum();
}

There has been a significant shift: numbers can now be null. The compiler will warn you to check the variable’s value. You can even make the warning a compiler error for extra safety.


The compiler indicates that “numbers” has the potential to be null. Possible solutions include:

  • Using an if-statement to ensure the variable has a valid reference
  • Using the already-mentioned null-coalescing operator when calling the Split method
  • Making the “numbers” variable non-nullable again by removing the question mark

Keep in mind that this feature is opt-in. It comes disabled by default, and you must activate it in your project’s configuration. The reason is that shipping the feature already enabled would cause breaking changes in most code bases.

C# 10: Non-Nullable Reference Types (NRT) Enhancements

C# 10 builds upon the nullable reference types feature introduced in C# 8.0, making it more robust and providing better tools for preventing null reference exceptions.

  • Global Usings for Nullable Attributes: C# 10 introduced global usings for nullable attributes, allowing you to import attributes like [NotNull] and [MaybeNull] globally in your project to make annotating types easier.
  • Improved Warnings and Analysis: C# 10 improves warnings related to nullable reference types, making it easier to identify potential null reference issues at compile-time.

Example:

// Enable nullable annotations globally for the entire project
global using System;
global using System.Diagnostics.CodeAnalysis; // Import the nullable attributes globally

#nullable enable // Enable nullable reference types

public class Example
{
    public string? GetName()
    {
        return "John"; // This method can return null (string?)
    }

    public void PrintName()
    {
        string? name = GetName(); // Nullable reference type
        if (name is not null)
        {
            Console.WriteLine($"Name: {name}");
        }
        else
        {
            Console.WriteLine("Name is null");
        }
    }

    public void UseNotNullAttribute([NotNull] string value)
    {
        // This method expects a non-null value
        Console.WriteLine($"Value: {value}");
    }

    public void Demo()
    {
        string? name = GetName();
        UseNotNullAttribute(name); // Compiler will issue a warning
    }
}

In this example:

  1. We enable nullable reference types with #nullable enable globally for the entire project.
  2. We use the global using directive to import System.Diagnostics.CodeAnalysis, which contains the nullable attributes [NotNull] and [MaybeNull], globally for the project.
  3. The GetName() method returns a nullable string (string?), indicating that it can return null.
  4. In the UseNotNullAttribute method, we apply the [NotNull] attribute to the value parameter, indicating that it expects a non-null argument.
  5. In the Demo method, we call UseNotNullAttribute(name), which passes the nullable name variable as an argument. The compiler will issue a warning because we’re passing a nullable value to a method that expects a non-null argument, thanks to the global usings for nullable attributes.

Global usings for nullable attributes help you catch potential null reference issues at compile time, improving code safety and readability. Language features and syntax continue to evolve, so it’s a good idea to consult the official C# documentation for the most up-to-date information on global usings and nullable attributes in the latest C# versions.

The Golden Rule of Programming

I’ve had a motto for several years that I consistently share with my team. I refer to it as the golden rule of coding. I think every new programmer needs a tattoo that says it.

“If it can be null, it will be null”

Adding extra logic and code can help prevent null reference errors by checking if objects are null before using them. Developers should always assume everything is invalid and be very defensive in their code.

Always pretend every database call is going to fail. Assume the data you get back is likely to be messy and disorganized. Good exception-handling best practices are critical.

What Are the Next Steps?

Null reference exceptions are common in .NET and most programming languages. Fortunately, we can all point fingers at Tony Hoare. He invented null references and even calls it the billion-dollar mistake.

What are the strategic measures we can implement to prevent this issue? One option is to adhere to my golden rule: if it can be null, it will be null!!

Nowadays, fortunately, we have the help of the compiler itself when fighting against the NullReferenceException. Turn on the “nullable reference types” feature in C# 8.0 to avoid null values if desired. The principle is: If it can’t be null, it’ll never be null. The compiler will forbid it!

Do you want to know more about C#, exceptions, and other related topics? If so, stay in tune with the Stackify blogs, since we’re always publishing posts on these topics and more.

Profiler & APM Tools Developers Can Trust

Before you push your code, you must improve the user experience and optimize the bottlenecks. To make this possible, make sure you leverage the power of the tools at your disposal.

For example – tools from Stackify by Netreo, like Prefix. Affectionately known as the Developer’s Sidekick, Prefix is a lightweight code profiler that validates the performance of your code while you write it. As a result, Prefix helps even the most experienced developers push better code to test and receive fewer support tickets. How, you ask?

Prefix captures detailed snapshots of web requests, enabling you to quickly identify and resolve hidden exceptions, slow queries, and other performance issues. Prefix natively supports web applications in .NET, Java, PHP, Node.js, Python or Ruby. And with the recent addition of OpenTelemetry data ingestion, Prefix now handles twice the programming languages, adding C++, Erlang/Elixir, Go, Javascript, Rust and Swift. And you can download Prefix for Free and try it today!

Maximize App Performance with Retrace APM

Once in production, DevOps can ensure ongoing app performance with Retrace, our full lifecycle APM solution. With more than 1,000 customers, Retrace helps software development teams monitor, troubleshoot and optimize the performance of their applications. Retrace provides real-time APM insights, enabling developers to quickly identify, diagnose and resolve errors, crashes, slow requests and more. For more on the features of Retrace, start your Free Trial today!

Python Garbage Collection: What It Is and How It Works https://stackify.com/python-garbage-collection/ Thu, 21 Sep 2023 11:41:48 +0000 https://stackify.com/?p=24081 Python is one of the most popular programming languages and its usage continues to grow. It has ranked first in the TIOBE index throughout 2022 and 2023, thanks to its continued growth. Python’s ease of use and large community have made it a popular fit for data analysis, web applications, and task automation.

In this post, we’ll cover:

  • Basics of Memory Management
  • Why we need Garbage Collection
  • How Python Implements Garbage Collection

We’ll take a practical look at how you should think about garbage collection when writing your Python applications.

What is Python garbage collection and why do we need it?

If Python is your first programming language, the whole idea of garbage collection might be foreign to you. Let’s start with the basics.

Memory management

A programming language uses objects in its programs to perform operations. Objects include simple variables, like strings, integers, or booleans. They also include more complex data structures like lists, hashes, or classes.

The values of your program’s objects are stored in memory for quick access. In many programming languages, a variable in your program code is simply a pointer to the address of the object in memory. When a variable is used in a program, the process will read the value from memory and operate on it.

In early programming languages, most developers were responsible for all memory management in their programs. This meant before creating a list or an object, you first needed to allocate the memory for your variable. After you were done with your variable, you then needed to deallocate it to “free” that memory for other users.

This led to two problems:

  1. Forgetting to free your memory. If you don’t free your memory when you’re done using it, it can result in memory leaks. This can lead to your program using too much memory over time. For long-running applications, this can cause serious problems.
  2. Freeing your memory too soon. The second type of problem consists of freeing your memory while it’s still in use. This can cause your program to crash if it tries to access a value in memory that doesn’t exist, or it can corrupt your data. A variable that refers to memory that has been freed is called a dangling pointer.

These problems were undesirable, and so some newer languages added automatic memory management.

Automatic memory management and garbage collection

With automatic memory management, programmers no longer needed to manage memory themselves. Rather, the runtime handled this for them.

There are a few different methods for automatic memory management. The popular ones use reference counting. With reference counting, the runtime keeps track of all of the references to an object. When an object has zero references to it, it’s unusable by the program code and can be deleted.

For programmers, automatic memory management adds a number of benefits. It’s faster to develop programs without thinking about low-level memory details. Further, it can help avoid costly memory leaks or dangerous dangling pointers.

However, automatic memory management comes at a cost. Your program will need to use additional memory and computation to track all of its references. What’s more, many programming languages with automatic memory management use a “stop-the-world” process for garbage collection where all execution stops while the garbage collector looks for and deletes objects to be collected.

With the advances in computer processing from Moore’s law and the larger amounts of RAM in newer computers, the benefits of automatic memory management usually outweigh the downsides. Thus, most modern programming languages like Java, Python, and Golang use automatic memory management.

For long-running applications where performance is critical, some languages still have manual memory management. The classic example of this is C++. We also see manual memory management in Objective-C, the language used for macOS and iOS. Among newer languages, Rust takes a different approach: instead of a garbage collector, the compiler manages memory through its ownership and borrowing rules.

Now that we know about memory management and garbage collection in general, let’s get more specific about how Python garbage collection works.

Try Stackify’s free code profiler, Prefix, to write better code on your workstation. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python.

How Python implements garbage collection

In this section, we’ll cover how garbage collection works in Python.

This section assumes you’re using the CPython implementation of Python. CPython is the most widely used implementation. However, there are other implementations of Python, such as PyPy, Jython (Java-based) and IronPython (C#-based).

To see which Python you’re using, run the following command in your terminal (Linux):

>>>python -c 'import platform; print(platform.python_implementation())'

Or, you can run these lines in the REPL on both Linux and Windows:

>>> import platform
>>> print(platform.python_implementation())
CPython

There are two aspects to memory management and garbage collection in CPython:

  • Reference counting
  • Generational garbage collection

Let’s explore each of these below. 

Reference counting in CPython

The main garbage collection mechanism in CPython is through reference counts. Whenever you create an object in Python, the underlying C object has both a Python type (such as list, dict, or function) and a reference count.

At a very basic level, a Python object’s reference count is incremented whenever the object is referenced, and it’s decremented when an object is dereferenced. If an object’s reference count is 0, the memory for the object is deallocated.

Your program’s code can’t disable Python’s reference counting. This is in contrast to the generational garbage collector discussed below.

Some people claim reference counting is a poor man’s garbage collector. It does have some downsides, including an inability to detect cyclic references as discussed below. However, reference counting is nice because you can immediately remove an object when it has no references.

Viewing reference counts in Python

You can use the sys module from the Python standard library to check reference counts for a particular object. There are a few ways to increase the reference count for an object, such as 

  • Assigning an object to a variable.
  • Adding an object to a data structure, such as appending to a list or adding as a property on a class instance.
  • Passing the object as an argument to a function.

Let’s use a Python REPL and the sys module to see how reference counts are handled.

First, in your terminal, type python to enter into a Python REPL.

Second, import the sys module into your REPL. Then, create a variable and check its reference count:

>>> import sys
>>> a = 'my-string'
>>> sys.getrefcount(a)
2

Notice that there are two references to our variable a. One is from creating the variable. The second is when we pass the variable a to the sys.getrefcount() function.

If you add the variable to a data structure, such as a list or a dictionary, you’ll see the reference count increase:

>>> import sys
>>> a = 'my-string'
>>> b = [a] # Make a list with a as an element.
>>> c = { 'key': a } # Create a dictionary with a as one of the values.
>>> sys.getrefcount(a)
4

As shown above, the reference count of a increases when added to a list or a dictionary.

In the next section, we’ll learn about the generational garbage collector, which is the second tool Python uses for memory management.

Generational garbage collection

In addition to the reference counting strategy for memory management, Python also uses a method called a generational garbage collector.

The easiest way to understand why we need a generational garbage collector is by way of example.

In the previous section, we saw that adding an object to an array or object increased its reference count. But what happens if you add an object to itself?

>>> class MyClass(object):
...     pass
...
>>> a = MyClass()
>>> a.obj = a
>>> del a

In the example above, we defined a new class. We then created an instance of the class and assigned the instance to be a property on itself. Finally, we deleted the instance.

By deleting the instance, it’s no longer accessible in our Python program. However, Python didn’t destroy the instance from memory. The instance doesn’t have a reference count of zero because it has a reference to itself.

We call this type of problem a reference cycle, and you can’t solve it by reference counting. This is the point of the generational garbage collector, which is accessible by the gc module in the standard library.

Generational garbage collector terminology

There are two key concepts to understand with the generational garbage collector.

  1. The first concept is that of a generation.
  2. The second key concept is the threshold.

The garbage collector is keeping track of all objects in memory. A new object starts its life in the first generation of the garbage collector. If Python executes a garbage collection process on a generation and an object survives, it moves up into a second, older generation. The Python garbage collector has three generations in total, and an object moves into an older generation whenever it survives a garbage collection process on its current generation.

For each generation, the garbage collector module has a threshold number of objects. If the number of objects exceeds that threshold, the garbage collector will trigger a collection process. For any objects that survive that process, they’re moved into an older generation.

Unlike the reference counting mechanism, you may change the behavior of the generational garbage collector in your Python program. This includes changing the thresholds for triggering a garbage collection process in your code. Additionally, you can manually trigger a garbage collection process, or disable the garbage collection process altogether.

Let’s see how you can use the gc module to check garbage collection statistics or change the behavior of the garbage collector.

Using the GC module

In your terminal, enter python to drop into a Python REPL.

Import the gc module into your session. You can then check the configured thresholds of your garbage collector with the get_threshold() method:

>>> import gc
>>> gc.get_threshold()
(700, 10, 10)

By default, Python has a threshold of 700 for the youngest generation and 10 for each of the two older generations.

You can check the number of objects in each of your generations with the get_count() method:

>>> import gc
>>> gc.get_count()
(596, 2, 1)

In this example, we have 596 objects in our youngest generation, two objects in the next generation, and one object in the oldest generation.

As you can see, Python creates a number of objects by default before you even start executing your program. You can trigger a manual garbage collection process by using the gc.collect() method:

>>> gc.get_count()
(595, 2, 1)
>>> gc.collect()
577
>>> gc.get_count()
(18, 0, 0)

Running a garbage collection process cleans up a huge number of objects: before the call there were 595 objects in the youngest generation and three across the two older generations, and afterwards only 18 remain. The number gc.collect() returns, 577 here, is the count of unreachable objects the collector found.
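
gc.collect() also accepts a generation number if you only want to collect the younger generations, which is a handy way to watch promotion happen. The exact counts are omitted below because they depend on your interpreter state and Python version:

>>> gc.get_count()   # e.g. (n, 0, 0); exact numbers vary
>>> gc.collect(0)    # collect only the youngest generation (0, 1 and 2 are valid)
>>> gc.get_count()   # surviving objects have been promoted to the next generation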

You can alter the thresholds for triggering garbage collection by using the set_threshold() method in the gc module:

>>> import gc
>>> gc.get_threshold()
(700, 10, 10)
>>> gc.set_threshold(1000, 15, 15)
>>> gc.get_threshold()
(1000, 15, 15)

In the example above, we increase each of our thresholds from their defaults. Increasing the threshold will reduce the frequency at which the garbage collector runs. This will be less computationally expensive in your program at the expense of keeping dead objects around longer.
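
If you want to confirm how often collections actually run before and after tuning the thresholds, the gc module can print per-collection statistics to stderr. A minimal sketch; the exact output format varies between Python versions:

>>> import gc
>>> gc.set_debug(gc.DEBUG_STATS)   # print statistics to stderr on every collection
>>> gc.collect()                   # each run now reports per-generation counts and timing
>>> gc.set_debug(0)                # switch the debug output back off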

Now that you know how both reference counting and the garbage collector module work, let’s discuss how you should use this when writing Python applications.

What does Python’s garbage collector mean for you as a developer

We’ve spent a fair bit of time discussing memory management generally and its implementation in Python. Now it’s time to make it useful. How should you use this information as a developer of Python programs?

General rule: Don’t change garbage collector behavior

As a general rule, you probably shouldn't think about Python's garbage collection too much. One of the key benefits of Python is that it enables developer productivity, and part of the reason is that it's a high-level language that handles a number of low-level details for the developer.

Manual memory management is more relevant for constrained environments. If you do find yourself with performance limitations that you think may be related to Python’s garbage collection mechanisms, it will probably be more useful to increase the power of your execution environment rather than to manually alter the garbage collection process. In a world of Moore’s law, cloud computing, and cheap memory, more power is readily accessible.

This is especially true given that Python generally doesn't release freed memory back to the underlying operating system, so any manual garbage collection pass you run to free memory may not give you the results you want. For more details in this area, refer to this post on memory management in Python.

Disabling the garbage collector

With that caveat aside, there are situations where you may want to manage the garbage collection process. Remember that reference counting, the main garbage collection mechanism in Python, can't be disabled. The only behavior you can alter is that of the generational garbage collector, via the gc module.
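
The simplest knobs are gc.disable() and gc.enable(), which switch the automatic generational collector off and on; reference counting keeps working regardless. A minimal sketch:

>>> import gc
>>> gc.isenabled()
True
>>> gc.disable()     # stop automatic generational collections; refcounting still runs
>>> gc.isenabled()
False
>>> gc.enable()      # turn automatic collection back on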

One of the more interesting examples of altering the generational garbage collector came from Instagram disabling the garbage collector altogether.

Instagram uses Django, the popular Python web framework, for its web applications. It runs multiple instances of its web application on a single compute instance. These instances are run using a master-child mechanism where the child processes share memory with the master.

The Instagram dev team noticed that the shared memory would drop sharply soon after a child process spawned. When digging further, they saw that the garbage collector was to blame.

The Instagram team disabled the garbage collector module by setting the thresholds for all generations to zero. This change led to their web applications running 10% more efficiently.
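
A minimal sketch of that approach (not Instagram's actual code); setting the first threshold to zero is enough to stop automatic collections:

>>> import gc
>>> gc.set_threshold(0, 0, 0)   # a threshold0 of zero disables automatic collection
>>> gc.get_threshold()
(0, 0, 0)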

While this example is interesting, make sure you're in a similar situation before following the same path. Instagram is a web-scale application serving many millions of users. For them, it's worth using some non-standard behavior to squeeze every last bit of performance out of their web applications. For most developers, Python's standard behavior around garbage collection is sufficient.

If you think you may want to manually manage garbage collection in Python, make sure you understand the problem first. Use tools like Stackify’s Retrace to measure your application performance and pinpoint issues. Once you fully understand the problem, then take steps to fix it.

Wrapping up

In this post, we learned about Python garbage collection. We started by covering the basics of memory management and the creation of automatic memory management. We then looked at how garbage collection is implemented in Python, through both automatic reference counting and a generational garbage collector. Finally, we reviewed how this matters to you as a Python developer.

While Python handles most of the hard parts of memory management for you, it's still helpful to know what's happening under the hood. From reading this post, you now know that you should avoid reference cycles in Python and where to look if you need greater control over the Python garbage collector.
