Docker just made container technology easy for people to use. This is why Docker is a must-have in most development workflows today. Most likely, your dream company is using Docker right now.
Docker’s official documentation has a lot of moving parts. Honestly, it can be overwhelming at first. You could find yourself needing to glean information here and there to build that Docker image you’ve always wanted to build.
Maybe building Docker images has been a daunting task for you, but it won’t be after you read this post. Here, you’ll learn how to build—and how not to build—Docker images. You’ll be able to write a Dockerfile and publish Docker images like a pro.
First, you’ll need to install Docker. Docker runs natively on Linux. That doesn’t mean you can’t use Docker on Mac or Windows. In fact, there’s Docker for Mac and Docker for Windows. I won’t go into details on how to install Docker on your machine in this post. If you’re on a Linux machine, this guide will help you get Docker up and running.
Now that you have Docker set up on your machine, you're one step closer to building images with Docker. Most likely, you'll come across two terms, "containers" and "images," that can be confusing.
Docker containers are runtime instances of Docker images, whether running or stopped. In fact, one of the major differences between Docker containers and images is that containers have a writable layer and it’s the container that runs your software. You can think of a Docker image as the blueprint of a Docker container.
When you create a Docker container, you're adding a writable layer on top of the Docker image. You can run many Docker containers from the same Docker image.
It’s time to get our hands dirty and see how Docker build works in a real-life app. We’ll generate a simple Node.js app with an Express app generator. Express generator is a CLI tool used for scaffolding Express applications. After that, we’ll go through the process of using Docker build to create a Docker image from the source code.
We start by installing the express generator as follows:
$ npm install express-generator -g
Next, we scaffold our application using the following command:
$ express docker-app
Now, change into the project directory and install the package dependencies:
$ cd docker-app
$ npm install
Start the application with the command below:
$ npm start
If you point your browser to http://localhost:3000, you should see the application default page, with the text “Welcome to Express.”
Mind you, the application is still running on your machine, and you don’t have a Docker image yet. Of course, there are no magic wands you can wave at your app and turn it into a Docker container all of a sudden. You’ve got to write a Dockerfile and build an image out of it.
Docker’s official docs define Dockerfile as “a text document that contains all the commands a user could call on the command line to assemble an image.” Now that you know what a Dockerfile is, it’s time to write one.
Docker builds images by reading the instructions in a Dockerfile. Each instruction has two components: the instruction itself and its argument.
An instruction can be written as:
RUN npm install
Here, RUN is the instruction and npm install is the argument. There are many Dockerfile instructions, but below are the ones you'll come across most often, along with what they do. Mind you, we'll use some of them in this post.
Dockerfile Instruction | Explanation |
---|---|
FROM | Specifies the base image to start building from. |
RUN | Runs commands during the image build process. |
ENV | Sets environment variables within the image, available both during the build process and while the container is running. If you only need build-time variables, use the ARG instruction instead. |
COPY | Copies a file or folder from the host system into the Docker image. |
EXPOSE | Documents the port the container listens on at runtime. |
ADD | A more advanced form of COPY. It can copy files from the host system into the Docker image, download files from a URL into a destination in the image, and even extract a tarball from the host system straight into a destination in the image. |
WORKDIR | Sets the working directory for the instructions that follow. |
VOLUME | Creates a mount point for a volume in the Docker container. |
USER | Sets the user name and UID to use when running the container. You can use this instruction to run the container as a non-root user. |
LABEL | Adds metadata to the Docker image. |
ARG | Defines build-time variables using key-value pairs. ARG variables are not available when the container is running; to keep a variable available in a running container, use ENV instead. |
CMD | Sets the default command to execute when the container starts. Only the last CMD instruction takes effect if multiple are present. |
ENTRYPOINT | Specifies the command that executes when the Docker container starts. If you don't specify an ENTRYPOINT, shell-form commands default to running via "/bin/sh -c". |
Enough of all the talk. It's time to write the Docker instructions we need for this project. At the root directory of your application, create a file with the name "Dockerfile."
$ touch Dockerfile
There’s an important concept you need to internalize—always keep your Docker image as lean as possible. This means packaging only what your applications need to run. Please don’t do otherwise.
In reality, source code usually contains other files and directories like .git, .idea, .vscode, or ci.yml. Those are essential for our development workflow, but won’t stop our app from running. It’s a best practice not to have them in your image—that’s what .dockerignore is for. We use it to prevent such files and directories from making their way into our build.
Create a file with the name .dockerignore at the root folder with this content:
.git
.gitignore
node_modules
npm-debug.log
Dockerfile*
docker-compose*
README.md
LICENSE
.vscode
A Dockerfile usually starts from a base image. As defined in the [Docker documentation](https://docs.docker.com/engine/reference/builder/), a base or parent image is the image your own image is built on. It's your starting point. It could be an Ubuntu OS, Red Hat, MySQL, Redis, etc.
Base images don’t just fall from the sky. They’re created—and you too can create one from scratch. There are also many base images out there that you can use, so you don’t need to create one in most cases.
We add the base image to Dockerfile using the FROM command, followed by the base image name:
# Filename: Dockerfile
FROM node:18-alpine
Let’s instruct Docker to copy our source during Docker build:
# Filename: Dockerfile
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
First, we set the working directory using WORKDIR. We then copy files using the COPY instruction. The first argument is the source path, and the second is the destination path on the image file system. We copy package.json and install our project dependencies with npm install, which creates the node_modules directory inside the image (the one on the host was excluded via .dockerignore).
You might be wondering why we copied package.json before the rest of the source code. Docker images are made up of layers, created from the output of each instruction. Since package.json does not change as often as our source code, we don't want to keep rebuilding node_modules each time we run docker build.
Copying the files that define our app dependencies and installing them first lets us take advantage of the Docker build cache. The main benefit is a quicker build time. There's a really nice blog post that explains this concept in detail.
Exposing port 3000 informs Docker which port the container is listening on at runtime. Let's modify the Dockerfile and expose port 3000.
# Filename: Dockerfile
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
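Why 3000? The app scaffolded by the Express generator listens on the PORT environment variable and falls back to 3000. The generated startup script (bin/www, which npm start runs) contains logic roughly like this simplified sketch:

```javascript
// bin/www (generated by express-generator), simplified.
// The server listens on process.env.PORT if it's set, otherwise on 3000,
// which is why the Dockerfile exposes port 3000.
var app = require('../app');
var http = require('http');

var port = process.env.PORT || '3000';
app.set('port', port);

var server = http.createServer(app);
server.listen(port);
```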
The CMD instruction tells Docker how to run the application we packaged in the image. It follows the format CMD ["command", "argument1", "argument2"].
# Filename: Dockerfile
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
With the Dockerfile written, you can build the image using the following command:
$ docker build .
We can see the image we just built using the command docker images.
$ docker images
If you run the command above, you will see something similar to the output below.
REPOSITORY     TAG       IMAGE ID       CREATED         SIZE
none           none      7b341adb0bf1   2 minutes ago   83.2MB
When you have many images, it becomes difficult to know which image is what. Docker provides a way to tag your images with friendly names of your choosing. This is known as tagging. Let’s proceed to tag the Docker image we just built. Run the command below:
$ docker build . -t yourusername/example-node-app
If you run the command above, you should have your image tagged already. Running docker images again will show your image with the name you’ve chosen.
$ docker images
The output of the above command should be similar to this:
REPOSITORY                      TAG      IMAGE ID       CREATED         SIZE
yourusername/example-node-app   latest   be083a8e3159   7 minutes ago   83.2MB
You run a Docker image by using the docker run command, as follows:
$ docker run -p 80:3000 yourusername/example-node-app
The command is pretty simple. We supplied the -p argument to map a port on the host machine (80) to the port the app is listening on in the container (3000). Now you can access your app from your browser at http://localhost.
To run the container in detached mode, supply the -d argument:
$ docker run -d -p 80:3000 yourusername/example-node-app
A big congrats to you! You just packaged an application that can run anywhere Docker is installed.
The Docker image you built still resides on your local machine. This means you can’t run it on any other machine outside your own—not even in production! To make the Docker image available for use elsewhere, you need to push it to a Docker registry.
A Docker registry is where Docker images live. One of the most popular Docker registries is Docker Hub. You'll need an account to push Docker images to Docker Hub, and you can create one [here](https://hub.docker.com/).
With your [Docker Hub](https://hub.docker.com/) credentials ready, you need only to log in with your username and password.
$ docker login
Enter your Docker Hub username and a Docker Hub access token or password to authenticate.
Retag the image with a version number:
$ docker tag yourusername/example-node-app yourdockerhubusername/example-node-app:v1
Then push with the following:
$ docker push yourdockerhubusername/example-node-app:v1
If you’re as excited as I am, you’ll probably want to poke your nose into what’s happening in this container, and even do cool stuff with Docker API.
You can list Docker containers:
$ docker ps
And you can inspect a container:
$ docker inspect CONTAINER_ID
Replace CONTAINER_ID with the container ID or name shown by docker ps.
You can view the logs of a Docker container:
$ docker logs CONTAINER_ID
And you can stop a running container:
$ docker stop CONTAINER_ID
Logging and monitoring are as important as the app itself. You shouldn’t put an app in production without proper logging and monitoring in place, no matter what the reason. Retrace provides first-class support for Docker containers. This guide can help you set up a Retrace agent.
The whole concept of containerization is all about taking away the pain of building, shipping, and running applications. In this post, we’ve learned how to write Dockerfile as well as build, tag, and publish Docker images. Now it’s time to build on this knowledge and learn about how to automate the entire process using continuous integration and delivery. Here are a few good posts about setting up continuous integration and delivery pipelines to get you started:
Many software professionals think that they can just leave all the RDBMS settings as they came by default. They're wrong. Often, the default settings your RDBMS comes configured with are far from being the optimal ones. Not optimizing such settings results in performance issues that you could easily avoid.
Some programmers, on the other hand, believe that even though SQL performance tuning is important, only DBAs should do it. They’re wrong as well.
First of all, not all companies will even have a person with the official title “DBA.” It depends on the size of the company, more than anything.
But even if you have a dedicated DBA on the team, that doesn’t mean you should overwhelm them with tasks that could’ve been performed by the developers themselves. If a developer can diagnose and fix a slow query, then there’s no reason why they shouldn’t do it. The relevant word here, though, is can—most of the time, they can’t.
How do we fix this problem? Simple: We equip developers with the knowledge they need to find slow SQL queries and do performance tuning in SQL Server. In this post, we’ll give you seven tips to do just that.
Before we show you our list of tips you can use to do SQL performance tuning in your software organization, I figured we should define SQL performance tuning.
So what is SQL performance tuning? I bet you already have an idea, even if it’s a vague one.
In a nutshell, SQL performance tuning consists of making queries of a relational database run as fast as possible.
As you’ll see in this post, SQL performance tuning is not a single tool or technique. Rather, it’s a set of practices that makes use of a wide array of techniques, tools, and processes.
Without further ado, here are seven ways to find slow SQL queries in SQL Server.
In order to diagnose slow queries, it’s crucial to be able to generate graphical execution plans, which you can do by using SQL Server Management Studio. Actual execution plans are generated after the queries run. But how do you go about generating an execution plan?
Begin by clicking “Database Engine Query” on the SQL Server Management Studio toolbar.
After that, enter the query and click “Include Actual Execution Plan” on the Query menu.
Finally, it’s time to run your query. You do that by clicking on the “Execute” toolbar button or pressing F5. Then, SQL Server Management Studio will display the execution plan in the results pane, under the “Execution Plan” tab.
Resource usage is an essential factor when it comes to SQL database performance. Since you can’t improve what you don’t measure, you definitely should monitor resource usage.
So how can you do it?
If you’re using Windows, use the System Monitor tool to measure the performance of SQL Server. It enables you to view SQL Server objects, performance counters, and the behavior of other objects.
Using System Monitor allows you to monitor Windows and SQL Server counters simultaneously, so you can verify if there’s any correlation between the performance of the two.
Another important technique for SQL performance tuning is to analyze the performance of Transact-SQL statements that are run against the database you intend to tune.
You can use the Database Engine Tuning Advisor to analyze the performance implications.
But the tool goes beyond that: it also recommends actions you should take based on its analysis. For instance, it might advise you to create or remove indexes.
One of the great features of SQL Server is all of the dynamic management views (DMVs) that are built into it. There are dozens of them and they can provide a wealth of information about a wide range of topics.
There are several DMVs that provide data about query stats, execution plans, recent queries and much more. You can use them together to get some amazing insights.
For example, the query below can be used to find the queries that use the most reads, writes, worker time (CPU), etc.
SELECT TOP 10
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(qt.TEXT)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2)+1),
    qs.execution_count,
    qs.total_logical_reads,
    qs.last_logical_reads,
    qs.total_logical_writes,
    qs.last_logical_writes,
    qs.total_worker_time,
    qs.last_worker_time,
    qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
    qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
    qs.last_execution_time,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC -- logical reads
-- ORDER BY qs.total_logical_writes DESC -- logical writes
-- ORDER BY qs.total_worker_time DESC -- CPU time
The result of the query will look something like this below. The image below is from a marketing app I made. You can see that one particular query (the top one) takes up all the resources.
By looking at this, I can copy that SQL query and see if there is some way to improve it, add an index, etc.
Pros: Always available basic rollup statistics.
Cons: Doesn’t tell you what is calling the queries. Can’t visualize when the queries are being called over time.
One of the great features of application performance management (APM) tools is the ability to track SQL queries. For example, Retrace tracks SQL queries across multiple database providers, including SQL Server. Retrace tells you how many times a query was executed, how long it takes on average, and what transactions called it.
This is valuable information for SQL performance tuning. APM solutions collect this data by doing lightweight performance profiling against your application code at runtime.
Below is a screenshot from Retrace’s application dashboard showing which SQL queries take the longest for a particular application.
Retrace collects performance statistics about every SQL query being executed. You can search for specific queries to find potential problems.
By selecting an individual query, you see how often that query is called over time and how long it takes. You also see what webpages use the SQL query and how it impacts their performance.
Since Retrace is a lightweight code profiler and captures ASP.NET request traces, it even shows you the full context of what your code is doing.
Below is a captured trace showing all the SQL queries and other details about what the code was doing. Retrace even shows log messages within this same view. Also, notice that it shows the server address and database name that’s executing the query. You can also see the number of records it returns.
As you can see, Retrace provides comprehensive SQL reporting capabilities as part of its APM capabilities. It also provides multiple monitoring and alerting features around SQL queries.
Pros: Detailed reporting across apps, per app, and per query. Shows transaction trace details of how queries are used. Starts at just $10 a month. It's always running once installed.
Cons: Doesn’t provide the number of reads or writes per query.
The SQL Server Profiler has been around for a very long time. It's a very useful tool for seeing in real time which SQL queries are being executed against your database, but it's currently deprecated. Microsoft replaced it with SQL Server Extended Events.
This is sure to anger a lot of people, but I can understand why Microsoft is doing it. Extended Events works via Event Tracing for Windows (ETW).
This has been the common way for all Microsoft-related technologies to expose diagnostic data. ETW provides much more flexibility. As a developer, I could easily tap into ETW events from SQL Server to collect data for custom uses. That’s really cool and really powerful.
MORE: Introducing SQL Server Extended Events
Pros: Easier to enable and leave running. Easier to develop custom solutions with.
Cons: Since it is fairly new, most people may not be aware of it.
I’m going to assume that SQL Azure’s performance reporting is built on top of Extended Events. Within the Azure Portal, you can get access to a wide array of performance reporting and optimization tips that are very helpful.
Note: These reporting capabilities are only available for databases hosted on SQL Azure.
In the screenshot below, you can see how SQL Azure makes it easy to find the queries that use the most CPU, Data IO, and Log IO. It has some great basic reporting built into it.
You can also select an individual query and get more details to help with SQL performance tuning.
Pros: Great basic reporting.
Cons: Only works on Azure. No reporting across multiple databases.
Next time you need to do some performance tuning with SQL Server, you’ll have a few options at your disposal to consider. Odds are that you’ll use more than one of these tools depending on what you are trying to accomplish.
Try Stackify’s free code profiler, Prefix, to write better code on your workstation. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python.
If you’re using an APM solution like Retrace, be sure to check what kind of SQL performance functionality it has built-in. If you don’t have an APM solution or aren’t sure what it is, be sure to read this: What is Application Performance Management and 10 critical features that developers need in APM.
But what if you want to measure the maximum performance of your application? Or what if you want to know how the application behaves under extreme stress?
To answer these questions, you can pursue these types of testing:
- Performance testing
- Load testing
- Stress testing
These types of tests are ideal for answering the above questions. However, the difference between those testing types is subtle.
This article will guide you through each of those testing types. You’ll find out about each type of testing and learn about the differences between them.
Performance testing is an umbrella term for both load and stress testing. Performance testing refers to all testing related to verifying the system’s performance and monitoring how it behaves under stress. Therefore we can say that performance testing is concerned with the following metrics:
Performance testing is often linked to a customer's requirements. Imagine a client who asks you to develop a service that handles ticket sales for events. For example, the client expects the application to be able to handle up to 50,000 requests per minute. This is a non-functional requirement that performance testing helps to validate.
The goal of performance testing is not to find bugs but to find performance bottlenecks. Why is this important? A single performance bottleneck can have a huge impact on the overall application’s performance. Therefore, it’s crucial to conduct performance testing to detect such issues.
In addition, this type of testing also verifies the performance in different environments to make sure the application works well for different setups and operating systems. To give an example, an application might work fine on a Linux server but have performance issues on a Windows server. Performance testing should help you rule out such problems.
In short, the goal of performance testing is to gather insights into the application’s performance and communicate these performance metrics to the stakeholders.
It’s always a good idea to measure the performance of an application. Delivering an application that hasn’t been performance tested is the same as delivering a bike with brakes that haven’t been tested.
Performance testing helps you with the following aspects:
Next, let’s get into the details of load testing.
Load testing specifically tries to identify how the application behaves under expected loads. Therefore, you should first know what load you expect for your application. Once you know this, you can start load testing the application.
Often, load testing includes a scenario where 100 extra requests hit the application every 30 seconds. Next, you’ll want to increase the number of requests up to the expected load for the application. Once the expected load has been reached, most testing engineers prefer to continue hitting the application for a couple more minutes to detect possible memory or CPU issues. These kinds of issues might only pop up after hitting the application for a while and are rarely visible from the beginning.
Moreover, the goal is to gather statistics about important metrics, such as response time, reliability, and stability.
To summarize, load testing is generally concerned with collecting all this data and analyzing it to detect anomalies. The idea of load testing is to create an application that behaves stably under an expected load. You don’t want to see ever-increasing memory usage, as that might indicate you have a memory leak.
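As an illustration of the ramp-up scenario described above, here's a minimal sketch using the open-source load testing tool k6. The target URL and the stage values are placeholders; point them at your own service and expected load:

```javascript
// load-test.js (run with: k6 run load-test.js)
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 100 }, // ramp up to 100 virtual users
    { duration: '5m', target: 100 }, // hold the expected load to surface memory/CPU issues
    { duration: '1m', target: 0 },   // ramp back down
  ],
};

export default function () {
  http.get('http://localhost:3000/'); // placeholder URL for the service under test
  sleep(1); // each virtual user pauses a second between requests
}
```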
Load testing helps you to get a better understanding of the expected load your application can handle. And understanding those limits helps you to reduce the risk of failure.
Let’s say your application can handle 5,000 requests per minute. Because you know this limit, your organization can take precautions to scale the application in case the number of requests per minute gets close to this limit. By taking these precautions, you’re reducing the risk of failure.
In addition, load testing gives you good insights into the memory usage and CPU time of your application. This data is of great value for measuring the stability of your application. Ideally, the memory usage for your application should remain stable when you’re performing load testing.
Last, let’s explore the true meaning of stress testing.
Stress testing helps you detect the breaking point of an application. It also allows a testing engineer to find the maximum load an application can handle. In other words, it lets you determine the upper limit of the application.
To give an example, let’s say a certain application programming interface can handle 5,000 simultaneous requests, but it will fail if it has to process more requests for the given setup. This limit is important for companies to know because it allows them to scale their application when needed.
In addition, it’s also a common practice to increase “stress” on the application by closing database connections, removing access to files, or closing network ports. The idea here is to evaluate how the application reacts under such extreme conditions. Therefore, this type of testing is extremely useful when you want to evaluate the robustness of your application.
Stress testing provides the following benefits for you and your organization:
Finally, let’s compare the above three testing types and learn about the differences between performance testing, load testing, and stress testing.
Key | Stress Testing | Load Testing |
---|---|---|
Purpose | Stress testing tests the system's performance under extreme load. | Load testing tests the system's performance under the expected load on the application. |
Threshold | Stress testing is conducted above threshold limits. | Load testing is conducted at threshold limits. |
Result | Stress testing allows you to evaluate how robust your application is. | Load testing ensures that the system can handle the expected load. |
To finish off, let’s do a quick recap of performance testing, load testing, and stress testing so that you see how these tests are related.
To conclude, don't neglect performance testing. It's a great tool for increasing customer satisfaction because you can guarantee that your application works well for them in all usage scenarios. To help you with conducting performance tests, you can use dedicated tools such as Stackify's Retrace. An application monitoring tool gives you insights into memory usage and CPU time. In addition, for Node.js specifically, it can give you insights into the Node.js event loop and how long it's blocked. An event loop that's continuously blocked is often a bad sign.
Before we cover some of the most important application performance metrics you should be tracking, let's speak briefly about what application performance metrics are.
Application performance metrics are the indicators used to measure and track the performance of software applications. Performance here includes, but is not limited to, the availability, end-user experience, resource utilization, reliability, and responsiveness of your software application.
The act of continuously monitoring your application’s metrics is called Application Performance Monitoring.
But why is this important?
For starters, application performance metrics allow you and your team to address issues affecting your application proactively. This is particularly important in situations when the application is the business itself.
Monitoring application performance metrics helps your team:
However, as important as it is to monitor application performance metrics, trying to monitor everything will be a time-consuming, ineffective, and unproductive use of resources. Thus, tracking the right metrics is much more important as this will provide better insights and understanding of the technical functionality of your application.
Here are some of the most important application performance metrics you should be tracking.
The application performance index, or Apdex score, has become an industry standard for tracking the relative performance of an application.
It works by specifying a goal for how long a specific web request or transaction should take.
Those transactions are then bucketed into satisfied (fast), tolerating (sluggish), too slow, and failed requests. A simple math formula is then applied to provide a score from 0 to 1.
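The standard Apdex formula counts satisfied requests in full, tolerating requests at half weight, and too-slow or failed requests not at all. A quick sketch:

```javascript
// Apdex = (satisfied + tolerating / 2) / total samples
function apdexScore(satisfied, tolerating, totalSamples) {
  return (satisfied + tolerating / 2) / totalSamples;
}

// Example: 800 satisfied, 150 tolerating, 50 too slow or failed out of 1,000 requests
console.log(apdexScore(800, 150, 1000)); // 0.875
```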
Retrace automatically tracks satisfaction scores for every one of your applications and web requests. We convert the number to a 0-100 instead of 0-1 representation to make it easier to understand.
Let me start by saying that averages suck. I highly recommend using the aforementioned user satisfaction Apdex scores as a preferred way to track overall performance. That said, averages are still a useful application performance metric.
The last thing you want your users to see are errors. Monitoring error rates is a critical application performance metric.
There are potentially 3 different ways to track application errors:
It is common to see thousands of exceptions being thrown and ignored within an application. Hidden application exceptions can cause a lot of performance problems.
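For example, in an Express app like the one scaffolded earlier, a final error-handling middleware can make sure exceptions are at least logged and counted instead of silently swallowed. This is only a sketch, assuming app is your Express application; in practice you'd send these to your log management or APM tool rather than the console:

```javascript
// Register after all routes and other middleware in app.js.
let errorCount = 0;

app.use(function (err, req, res, next) {
  errorCount += 1; // feeds an error-rate metric
  console.error(new Date().toISOString(), req.method, req.originalUrl, err.stack);
  res.status(err.status || 500).send('Something went wrong');
});
```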
If your application scales up and down in the cloud, it is important to know how many server/application instances you have running. Auto-scaling can help ensure your application scales to meet demand and saves you money during off-peak times. This also creates some unique monitoring challenges.
For example, if your application automatically scales up based on CPU usage, you may never see your CPU get high. You would instead see the number of server instances get high. (Not to mention your hosting bill going way up!)
Understanding how much traffic your application receives will impact the success of your application. Potentially all other application performance metrics are affected by increases or decreases in traffic.
Request rates can be useful to correlate to other application performance metrics to understand the dynamics of how your application scales.
Monitoring the request rate can also be good to watch for spikes or even inactivity. If you have a busy API that suddenly gets no traffic at all, that could be a really bad thing to watch out for.
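As a rough sketch of the idea (an APM agent would normally track this for you), a request rate can come from a simple counter in the Express app, again assuming app is your Express application:

```javascript
// Count every incoming request and report requests per minute.
let requestCount = 0;

app.use(function (req, res, next) {
  requestCount += 1;
  next();
});

setInterval(function () {
  console.log('Request rate:', requestCount, 'requests/minute');
  requestCount = 0; // start a new one-minute window
}, 60 * 1000);
```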
A similar but slightly different metric to track is the number of concurrent users. This is another interesting metric to track to see how it correlates.
If the CPU usage on your server is extremely high, you can guarantee you will have application performance problems. Monitoring the CPU usage of your server and applications is a basic and critical metric.
Virtually all server and application monitoring tools can track your CPU usage and provide monitoring alerts. It is important to track them per server but also as an aggregate across all the individually deployed instances of your application.
Monitoring and measuring if your application is online and available is a key metric you should be tracking. Most companies use this as a way to measure uptime for service level agreements (SLA).
If you have a web application, the easiest way to monitor application availability is via a simple scheduled HTTP check.
Retrace can run these types of HTTP “ping” checks every minute for you. It can monitor response times, status codes, and even look for specific content on the page.
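Under the hood, such a check is just a scheduled HTTP request that verifies the response. A minimal Node.js sketch (assumes Node 18+ for the built-in fetch; the URL is a placeholder):

```javascript
// Request the page once a minute; any non-2xx status or failed request counts as downtime.
const PAGE_URL = 'https://your-app.example.com/';

async function checkAvailability() {
  const startedAt = Date.now();
  try {
    const res = await fetch(PAGE_URL);
    const elapsedMs = Date.now() - startedAt;
    console.log(res.ok ? `UP ${res.status} in ${elapsedMs} ms` : `DOWN ${res.status}`);
  } catch (err) {
    console.log(`DOWN (${err.message})`);
  }
}

setInterval(checkAvailability, 60 * 1000);
checkAvailability();
```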
If your application is written in .NET, Java, or another programming language that uses garbage collection, you are probably aware of the performance problems that can arise from it.
When garbage collection occurs, it can cause your process to suspend and can use a lot of CPU.
Garbage collection metrics may not be one of the first things you think about key application performance metrics. It can be a hidden performance problem that is always a good idea to keep an eye on.
For .NET, you can monitor this via the Performance Counter of “% GC Time”. Java has similar capabilities via JMX metrics. Retrace can monitor these via its application metrics capabilities.
Memory usage is vital because it helps one gauge how an application manages and consumes resources during execution. However, it’s important to recognize that memory usage has technical and financial implications.
From a technical standpoint, high memory usage, memory leaks, or insufficient memory significantly affect application performance and scalability. This will result in slower response time, increased latency, frequent crashes, and potential downtime. However, on a financial front, high memory usage might require additional infrastructure costs like hardware upgrades, cloud service expenses, or additional resources to accommodate your needs.
Thus, monitoring memory effectively is crucial to ensure optimal application performance while minimizing financial impacts.
Throughput measures the number of transactions or requests an application can process within a given timeframe. It indicates how well an application handles a high volume of workloads. Thus a high throughput generally shows better performance and scalability.
It is, however, important to know that various factors like memory, disk I/O, and network bandwidth influence throughput. Regardless, it is an interesting metric to track, especially since you can use it for benchmarking, identifying resource limitations, and ensuring optimal resource utilization. It is also a decision-maker as it is great for accessing and making performance comparisons of systems and applications on different loads.
Application performance measurement is necessary for all types of applications. Depending on your type of application, there could be many other monitoring needs.
Retrace can help you monitor a broad range of web application performance metrics. Retrace collects critical metrics about your applications, servers, code level performance, application errors, logs, and more. These can be used for measuring and monitoring the performance of your application.
However, Windows and ASP.NET Core provide several different logs where failed requests are logged. This goes beyond simple IIS logs and can give you the information you need to combat failed requests.
If you have been dealing with ASP.NET Core applications for a while, you may be familiar with normal IIS logs. Such logs are only the beginning of your troubleshooting toolbox.
There are some other places to look if you are looking for more detailed error messages or can’t find anything in your IIS log file.
Standard IIS logs will include every single web request that flows through your IIS site.
Via IIS Manager, you can see a “Logging” feature. Click on this, and you can verify that your IIS logs are enabled and observe where they are being written to.
You should find your logs in folders that are named by your W3SVC site ID numbers.
Need help finding your logs? Check out: Where are IIS Log Files Located?
By default, each logged request in your IIS log will include several key fields including the URL, querystring, and error codes via the status, substatus and win32 status.
These status codes can help identify the actual error in more detail.
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2019-09-13 21:45:10 ::1 GET /webapp2 - 80 - ::1 Mozilla/5.0 - 500 0 0 5502
2019-09-13 21:45:10 ::1 GET /favicon.ico - 80 - ::1 Mozilla/5.0 http://localhost/webapp2 404 0 2 4
The “sc-status” and “sc-substatus” fields are the standard HTTP status codes: 200 for OK, 404 for not found, 500 for server errors, and so on.
The “sc-win32-status” can provide more details that you won’t know unless you look up the code. They are basic Win32 error codes.
You can also see the endpoint the log message is for under “cs-uri-stem”. For example, “/webapp2.” This can instantly direct you to problem spots in your application.
Another key piece of info to look at is “time-taken.” This gives you the roundtrip time in milliseconds of the request and its response.
By the way, if you are using Retrace, you can also use it to query across all of your IIS logs as part of its built-in log management functionality.
Every single web request should show in your IIS log. If it doesn’t, it is possible that the request never made it to IIS, or IIS wasn’t running.
It is also possible that IIS logging is disabled. If IIS is running but you still are not seeing the log events, they may be going to HTTPERR.
Incoming requests to your server first route through HTTP.SYS before being handed to IIS. These types of errors get logged in HTTPERR.
Common errors are 400 Bad Request, timeouts, 503 Service Unavailable and similar types of issues. The built-in error messages and error codes from HTTP.SYS are usually very detailed.
Where are the HTTPERR error logs?
C:\Windows\System32\LogFiles\HTTPERR
By default, ASP.NET will log unhandled 500-level exceptions to the Windows Application EventLog. This is handled by the ASP.NET Health Monitoring feature (which doesn't exist in ASP.NET Core). You can control its settings via system.web/healthMonitoring in your web.config file.
Very few people realize that the number of errors written to the Application EventLog is rate limited. So you may not find your error!
By default, it will only log the same type of error once a minute. You can also disable writing any errors to the Application EventLog.
Can’t find your exception?
You may not be able to find your exception in the EventLog. Depending on whether you are using WebForms, MVC, Core, WCF, or other frameworks, you may find that no errors are written to the Application EventLog at all due to compatibility issues with the health monitoring feature.
By the way, if you install Retrace on your server, it can catch every single exception that is ever thrown in your code. It knows how to instrument into IIS features.
Failed request tracing (FRT) is probably one of the least used features in IIS. It is, however, incredibly powerful.
It provides robust IIS logging and works as a great IIS error log. FRT is enabled in IIS Manager and can be configured via rules for all requests, slow requests, or just certain response status codes.
You can configure it via the “Actions” section for a website:
The only problem with FRT is it is incredibly detailed. Consider it the stenographer of your application. It tracks every detail and every step of the IIS pipeline. You can spend a lot of time trying to decipher a single request.
If other avenues fail you and you can reproduce the problem, you can change your application's error-handling configuration so that exceptions are displayed.
Typically, server-side exceptions are disabled from being visible within your application for important security reasons. Instead, you will see a yellow screen of death (YSOD) or your own custom error page.
You can modify your application config files to make exceptions visible.
ASP.NET
You could use remote desktop to access the server and set customErrors to “RemoteOnly” in your web.config so you can see the full exception via “localhost” on the server. This would ensure that no users would see the full exceptions but you would be able to.
If you are OK with the fact that your users may now see a full exception page, you could set customErrors to “Off.”
.NET Core
Compared to previous versions of ASP.NET, .NET Core has completely changed how error handling works. You now need to use the DeveloperExceptionPage in your middleware.
.NET Core gives you unmatched flexibility in how you want to see and manage your errors. It also makes it easy to wire in instrumentation like Retrace.
.NET profilers like Prefix (which is free!) can collect every single exception that .NET throws in your code, even if they're hidden by your code.
Prefix is a free ASP.NET Core profiler designed to run on your workstation to help you optimize your code as you write it. Prefix can also show you your SQL queries, HTTP calls, and much, much more.
Trying to reproduce an error in production or chasing down IIS logs/IIS error logs is not fun. Odds are, there are probably many more errors going on that you aren’t even aware of. When a customer contacts you and says your site is throwing errors, you better have an easy way to see them!
Tracking application errors is one of the most important things every development team should do. If you need help, be sure to try Retrace which can collect every single exception across all of your apps and servers.
Also, check out our detailed guide on C# Exception Handling Best Practices.
If you are using Azure App Services, also check out this article: Where to Find Azure App Service Logs.
Logs are not an easy thing to deal with, but they're nonetheless an important aspect of any production system. When you're faced with a difficult issue, it's much easier to use a log management tool than it is to wade through endless text files spread throughout your system environment.
Try Stackify’s free code profiler, Prefix, to write better code on your workstation. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python.
The big advantage of log management tools is that they can help you easily pinpoint the root cause of any application or software error, within a single query. The same applies to security-related concerns, where many of the following tools are capable of helping your IT team prevent attacks even before they happen. Another factor is having a visual overview of how your software is being used globally by your user base — getting all this crucial data in one single dashboard is going to make your productivity rise substantially.
When picking the right log management tool for your needs, evaluate your current business operation. Decide whether you're still a small operation looking to get basic data out of your logs, or whether you plan to enter the enterprise level, which will require a more powerful logging system and efficient tools to tackle large-scale log management.
We built Retrace to address the need for a cohesive, comprehensive developer tool that combines APM, errors, logs, metrics, and monitoring in a single dashboard. When it comes to log management tools, they run the gamut from stand-alone tools to robust solutions that integrate with your other go-to tools, analytics, and more. We put together this list of 52 useful log management tools (listed below in no particular order) to provide an easy reference for anyone wanting to compare the current offerings to find a solution that best meets your needs.
Tired of chasing bugs in the dark? Thanks to Retrace, you don’t have to. Retrace your code, find bugs, and improve application performance with this suite of essential tools that every developer needs, including logging, error monitoring, and code level performance.
Key Features:
Cost:
Logentries is a cloud-based log management platform that makes any type of computer-generated type of log data accessible to developers, IT engineers, and business analysis groups of any size. Logentries’ easy onboarding process ensures that any business team can quickly and effectively start understanding their log data from day one.
Key Features:
Cost:
GoAccess is a real-time log analyzer software intended to be run through the terminal of Unix systems, or through the browser. It provides a rapid logging environment where data can be displayed within milliseconds of it being stored on the server.
Key Features:
Cost: Free (Open-Source)
Logz.io uses machine-learning and predictive analytics to simplify the process of finding critical events and data generated by logs from apps, servers, and network environments. Logz.io is a SaaS platform with a cloud-based back-end that’s built with the help of ELK Stack – Elasticsearch, Logstash & Kibana. This environment provides a real-time insight of any log data that you’re trying to analyze or understand.
Key Features:
Cost:
Graylog is a free and open-source log management tool that supports in-depth log collection and analysis. Used by teams in network security, IT Ops, and DevOps, Graylog can discern potential security risks, helps you follow compliance rules, and helps you understand the root cause of any particular error or problem your apps are experiencing.
Key Features:
Cost:
Splunk’s log management tool focuses on enterprise customers who need concise tools for searching, diagnosing and reporting any events surrounding data logs. Splunk’s software is built to support the process of indexing and deciphering logs of any type, whether structured, unstructured, or sophisticated application logs, based on a multi-line approach.
Key Features:
Cost:
Logmatic is an extensive log management tool that integrates seamlessly with any language or stack. Logmatic works equally well with front-end and back-end log data and provides a painless online dashboard for tapping into valuable insights and facts of what is happening within your server environment.
Key Features:
Cost:
Logstash from Elastic is one of the most renowned open-source log management tools for managing, processing, and transporting your log data and events. Logstash works as a data processor that can combine and transform data from multiple sources at the same time, then send it over to your favorite log management platform, such as Elasticsearch.
Key Features:
Cost:
Sumo Logic is a unified logs and metrics platform that helps you analyze your data in real time using machine learning. Sumo Logic can quickly pinpoint the root cause of any particular error or event, and it can be set up to constantly watch what is happening to your apps in real time. Sumo Logic's strong point is its ability to work with data at a rapid pace, removing the need for external data analysis and management tools.
Key Features:
Cost:
Papertrail is a snazzy hosted log management tool that takes care of aggregating, searching, and analyzing any type of log files, system logs, or basic text log files. Its real-time features allow for developers and engineers to monitor live happenings for apps and servers as they are happening. Papertrail offers seamless integration with services like Slack, Librato and Email to help you set up alerts for trends and any anomalies.
Key Features:
Cost:
Fluentd collects events from various data sources and writes them to files, RDBMS, NoSQL, IaaS, SaaS, Hadoop and so on. Fluentd helps you unify your logging infrastructure. Fluentd’s flagship feature is an extensive library of plugins which provide extended support and functionality for anything related to log and data management within a concise developer environment.
Key Features:
Cost:
Syslog is an open-source log management tool that helps engineers and DevOps teams collect log data from a large variety of sources, process it, and eventually send it to a preferred log analysis tool. With Syslog, you can effortlessly collect, filter, categorize, and correlate log data from your existing stack and push it forward for analysis.
Key Features:
Cost: Free
Rsyslog is a blazing-fast system built for log processing. It offers great performance benchmarks, tight security features, and a modular design for custom modifications. Rsyslog has grown from a singular logging system to be able to parse and sort logs from an extended range of sources, which it can then transform and provide an output to be used in dedicated log analysis software.
Key Features:
Cost: Free
LOGalyze is a simple-to-use, centralized log collection and analysis system with low operational costs, capable of gathering log data from a wide range of operating systems. LOGalyze does predictive event detection in real time while giving system admins and management personnel the right tools to index and search through piles of data effortlessly.
Key Features:
Cost: Free & Open-Source
Sentry is a modern platform for managing, logging, and aggregating any potential errors within your apps and software. Sentry's state-of-the-art algorithm helps teams detect potential errors within the app infrastructure that could be critical to production operations. Sentry essentially helps teams avoid dealing with a problem when it's too late to fix, instead using its technology to inform teams about any potential rollbacks or fixes that would sustain the health of the software.
Key Features:
Cost:
Apache Flume is an elegantly designed service for helping its users stream data directly into Hadoop. Its core architecture is based on streaming data flows, which can ingest data from a variety of sources and link up directly with Hadoop for further analysis and storage. Flume's enterprise customers use the service to stream data into Hadoop's HDFS; generally, this data includes data logs, machine data, geo-data, and social media data.
Key Features:
Cost: Free, Open-Source
Cloudlytics is a SaaS startup designed to improve the analysis of log data, billing data, and cloud services. In particular, it is targeted at AWS Cloud services such as CloudFront, S3, and CloudTrail; using Cloudlytics, customers can get in-depth insights and pattern discovery based on the data provided by those services. With three management modules, Cloudlytics gives its users the flexibility to monitor resources in their environment, analyze monthly bills, or analyze AWS logs.
Key Features:
Cost: Upon request.
Octopussy is a Perl-based, open-source log management tool that can do alerting and reporting, and visualization of data. Its basic back-end functionality is to analyze logs, generate reports based on log data, and alert the administration to any relevant information.
Key Features:
Cost: Free
Today’s environment of IT departments can provide a layer of challenges when it comes to truly in-depth understanding of why events occur and what logs are reporting. With thousands of log entries from a plethora of sources, and with the demand for logs to be analyzed real-time, there can arise difficulties in knowing how to manage all of the data in a centralized environment. NXLog strives to provide the required tools for concise analysis of logs from a variety of platforms, sources, and formats. NXLog can collect logs from files in various formats, receive logs from the network remotely over UDP, TCP or TLS/SSL on all supported platforms.
Key Features:
Cost: Free (Community Edition), Enterprise (Upon request)
NetIQ is an enterprise software company that focuses on products related to application management, software operations, and security and log management resources. The Sentinel Log Manager is a bundle of software applications that allow for businesses to take advantage of features like effortless log collector, analysis services, and secure storage units to keep your data accessible and safe. Sentinel’s cost-effective and flexible log management platforms make it easy for businesses to audit their logs in real-time for any possible security risks, or application threats that could upset production software.
Key Features:
Cost: Custom quote upon request.
XpoLog seeks out new and innovative ways to help its customers better understand and master their IT data. With their leading technology platform, XpoLog focuses on helping customers analyze their IT data using unique patents and algorithms that are affordable for all operation sizes. The platform drastically reduces time to resolution and provides a wealth of intelligence, trends, and insights into enterprise IT environments.
Key Features:
Cost:
EventTracker provides its customers with business-optimal services that help to correlate and identify system changes that potentially affect the overall performance, security, and availability of IT departments. EventTracker uses SIEM to create a powerful log management environment that can detect changes through concise monitoring tools, and provides USB security protection to keep IT infrastructure protected from emerging security attacks. EventTracker SIEM collates millions of security and log events and provides actionable results in dynamic dashboards so you can pinpoint indicators of a compromise while maintaining archives to meet regulatory retention requirements.
Key Features:
Cost: Upon request.
Getting your focus lost in an ocean of log data can be detrimental to your work and business productivity. You know the information you need is somewhere in those logs, but you don't quite have the power to pick it out from the rest. LogRhythm is a next-generation log management platform that does all the work of unfolding your data for you. Using comprehensive algorithms and the integration of Elasticsearch, anyone can identify crucial insights about business and IT operations. LogRhythm focuses on making sure that all of your data is understood, versus merely collecting it and taking from it only what you need.
Key Features:
Cost: Upon request.
McAfee is a household name in IT and network security and has long provided modern, technology-optimized tools for businesses and corporations of all sizes. The McAfee Enterprise Log Manager is an automated log management and analysis suite for all types of logs: event, database, application, and system logs. The software's built-in features can identify and validate logs for their authenticity, a truly necessary feature for compliance reasons. Organizations have been using McAfee to ensure that their infrastructure is in compliance with the latest security policies; McAfee Enterprise complies with more than 240 standards.
Key Features:
Cost: Upon request.
AlienVault USM (Unified Security Management) reaches far beyond the capabilities of SIEM solutions, using a powerful all-in-one security approach and a comprehensive threat analysis algorithm to identify threats in your physical or cloud locations. Resource-dependent IT teams that rely on SIEM alone risk delaying their ability to detect and analyze threats as they happen. AlienVault USM combines the powerful features of SIEM and integrates them with direct log management and other security features, such as asset discovery, vulnerability assessment, and direct threat detection, all of which give you one centralized platform for security monitoring.
Key Features:
Cost: Upon request.
Not everyone is in need of an enterprise solution for log management, in fact, many of today’s most well-known businesses operate solely on mobile-only platforms, which is a market that Bugfender is trying to impact with its high-quality log application for cloud-based analysis of general log and user behavior within your mobile apps.
Key Features:
Cost:
Mezmo, formerly LogDNA, prides itself on being the easiest log management tool you'll ever put your hands on. Mezmo's cloud-based log services enable engineers, DevOps, and IT teams to funnel any app or system logs into one simple dashboard. Using the CMD or web interface, you can search, save, tail, and store all of your logs in real time. With Mezmo, you can diagnose issues, identify the source of server errors, analyze customer activity, monitor Nginx, Redis, and more. A live-streaming tail makes surfacing difficult-to-find bugs easy.
Key Features:
Cost:
Prometheus is a systems and service monitoring system that collects metrics from configured targets at specified intervals, evaluates rule expressions, displays results and triggers alerts when pre-defined conditions are met. With customers like DigitalOcean, SoundCloud, Docker, CoreOS and countless others, the Prometheus repository is a great example of how open-source projects can compete with leading technology and innovate in the field of systems and log management.
Key Features:
Cost: Free, Open-Source.
Scout is a language specific monitoring app that helps Ruby on Rails developers identify code errors, memory leaks, and more. Scout has been renowned for its simple yet advanced UI that provides an effortless experience of understanding what is happening with your Ruby on Rails apps in real-time. A recent business expansion also enabled Scout to expand its functionality for Elixir-built apps.
Key Features:
Cost: $59/server/month
Motadata does more than just manage your logs; it can correlate, integrate, and visualize nearly any of your IT data using native applications built into the platform. On top of world-class log management, Motadata can monitor the status and health of your network, servers, and apps. Contextual alerts let you sleep well rested, because any critical event or breached threshold will notify you or your team through frequently used channels such as email, messaging, or chat applications.
Key Features:
Cost: Upon request.
InTrust gives your IT department a flexible set of tools for collecting, storing, and searching through huge amounts of data from general data sources, server systems, and user devices in a single dashboard. InTrust delivers a real-time view of what your users are doing with your products and how those actions affect security, compliance, and operations in general. With InTrust you can understand who is doing what within your apps and software, allowing you to make crucial data-driven decisions when necessary.
Key Features:
Cost:
Nagios provides a complete log management and monitoring solution which is based on its Nagios Log Server platform. With Nagios, a leading log analysis tool in this market, you can increase the security of all your systems, understand your network infrastructure and its events, and gain access to clear data about your network performance and how it can be stabilized.
Key Features:
Cost: Starting at $1995.
If enterprise-level log management tools feel overwhelming by now, you may want to look into LNAV, an advanced log data manager intended for smaller-scale IT teams. With direct terminal integration, it can stream log data as it arrives in real time. You don't have to worry about setting anything up or even getting an extra server; it all happens live on your existing server, and it's beautiful. To run LNAV, you will need the following packages: libpcre, sqlite, ncurses, readline, zlib, and bz2.
Key Features:
Cost: Open-Source
Seq is a log management tool built specifically for .NET applications. Developers can easily use Seq to monitor log data and performance from the start of development all the way to production. Search application-specific logs from a simple events dashboard, and understand how your apps progress and perform as you push toward your final iteration.
Key Features:
Cost:
Logary is a high-performance, multi-target logging, metrics, tracing, and health-check library for Mono and .NET. As a next-generation logging library, Logary uses the history of your app's progress to build models.
Key Features:
Cost: Open-Source
EventSentry is an award-winning monitoring solution that includes a new NetFlow component for visualizing, measuring, and investigating network traffic. This log management tool helps SysAdmins and network professionals achieve more uptime and security.
Key Features:
Cost:
A full-featured, all-in-one SIEM solution that unifies log management, security analytics, and compliance, Logsign is a next-generation solution that increases awareness and allows SysAdmins and network professionals to respond in real time.
Key Features:
Cost: FREE trial available. Contact for a quote
IT Operations Management (ITOM) provides AI-powered log analysis for watching over your digital systems, so you can prevent and fix IT issues before they become problems. ITOM's advanced AI analytics platform predicts and prevents problems in digital business by connecting to your digital assets, continually monitoring and learning about them, reading logs, and detecting when something seems likely to go off course.
Key Features:
Cost:
SolarWinds offers IT management software and monitoring tools such as its Log & Event Manager. This log management tool handles security, compliance, and troubleshooting by normalizing your log data to quickly spot security incidents and make troubleshooting a breeze.
Key Features:
Cost: FREE trial available. Starts at $2,877
ManageEngine creates comprehensive IT management software for all of your business needs. Its EventLog Analyzer is IT compliance and log management software for SIEM, and one of the most cost-effective options on the market today.
Key Features:
Cost:
PagerDuty helps developers, ITOps, DevOps teams, and businesses protect their brand reputation and customer experience. An incident resolution platform, PagerDuty automates your resolutions, provides full-stack visibility, and delivers actionable insights for better customer experiences.
Key Features:
Cost: FREE trial available for 14 days
BLËSK Event Log Manager is an intuitive, comprehensive, and cost-effective IT and network management software solution. With BLËSK, you can collect log and event data automatically with zero installation and zero configuration.
Key Features:
Cost: FREE trial available. Contact for a quote
Alert Logic offers full stack security and compliance. Their Log Manager with ActiveWatch is a Security-as-a-Service solution that meets compliance requirements and identifies security issues anywhere in your environment, even in the public cloud.
Key Features:
Cost: Contact for a quote.
WhatsUp Gold Network Monitoring is a log management tool that delivers advanced visualization features, enabling IT teams to make faster decisions and improve productivity. With WhatsUp Gold, you can deliver network reliability, keep performance optimized, and minimize downtime while continually monitoring your networks.
Key Features:
Cost: FREE trial available for 30 days
Loggly is a cloud-based log management service that can dig deep into extensive collections of log data in real time while surfacing the most crucial information on how to improve your code and deliver a better customer experience. Loggly's log data collection lets you use open standards like HTTP and syslog rather than installing complicated log collector software on each server separately; a small sketch of that approach follows the cost line below.
Key Features:
Cost:
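As a rough illustration of that collector-free approach, the sketch below sends a single JSON event to Loggly's HTTP/S event endpoint using only the Python standard library. The token and tag are placeholders, and the endpoint format follows Loggly's documented HTTP input at the time of writing, so check the current docs before relying on it.

import json
import urllib.request

# Placeholder token; substitute your own Loggly customer token.
LOGGLY_TOKEN = "YOUR-CUSTOMER-TOKEN"
URL = f"https://logs-01.loggly.com/inputs/{LOGGLY_TOKEN}/tag/http/"

event = {"level": "error", "service": "checkout", "message": "payment gateway timeout"}

request = urllib.request.Request(
    URL,
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    # A 2xx status means Loggly accepted the event.
    print(response.status)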
ChaosSearch has developed a brand-new approach to delivering data analytics and insights at scale. Its platform connects to and indexes the data within customers' cloud storage environments (i.e., AWS S3), rendering all of their data fully searchable and available for analysis with the data visualization and analysis tools they already use. Whereas other solutions require complex data pipelines built around parsing or schema changes, ChaosSearch indexes all data as-is, without transformation, while auto-detecting native schemas.
Key Features:
Cost:
Stackify by Netreo received top honors from SD Times for Performance Monitoring in the 2023 SD Times 100. Each year, SD Times editors recognize leaders in the industry across 10 different categories and designate companies with “Best in Show” honors. Retrace APM is a full lifecycle APM solution and the driving force behind the successful placement within the SD Times 100 each of the past 5 years!
When I was working at AWS, one of the things that inspired me the most was the company's founding principles, as articulated by Andy Jassy, then CEO of AWS.
AWS laid the foundation of a cloud infrastructure that works like Lego blocks and can be provisioned immediately. As customers’ businesses grew, they could scale effortlessly, paying only for what they needed rather than making large upfront investments.
Democratizing access to tools and technologies has many advantages. Easy, low-cost access to critical tools and technologies fuels global innovation. Developers from diverse backgrounds and experiences bring new perspectives and ideas to the table, leading to new solutions and approaches. Providing accessible tools fosters a culture of empowerment and continuous feedback, motivating high-performing teams to deliver the best for customers.
With access to the resources they need, development teams collaborate more effectively and create higher quality products. Most importantly, democratizing software development reduces costs by eliminating the need for expensive licenses or infrastructure. All this makes it easier for developers to access the tools and technologies they need to do their jobs, regardless of their financial situation.
Stackify's purpose is to put APM/ELM capability in the hands of every developer in the world. We believe developing high-quality, high-performance software solutions should not be limited to a select few. Inspired by the founding vision of AWS and our commitment to democratizing access to Retrace, Stackify is launching a full-featured, all-inclusive $9.99/month consumption plan for our popular, cloud-based Application Performance Monitoring (APM) and Errors and Log Management (ELM) solution.
With more than 1,000 customers, Retrace helps software development teams monitor, troubleshoot and optimize the performance of their applications. Providing real-time APM insights, Retrace enables developers to quickly identify, diagnose and resolve errors, crashes, slow requests and more. Retrace also supports a range of programming languages, frameworks and platforms, including .NET, Java, .NET Core, Node.js, PHP, Python and Ruby.
With our new starter consumption plan, developers, DevOps teams, and companies of all sizes can easily optimize code quality and application performance with affordable access to our all-inclusive, full-featured Retrace solution. At the $9.99 price point, Stackify is the only company to provide support from real people who can help customers resolve their queries.
Retrace consumption pricing delivers “APM and ELM for All” without restrictive contracts for up to 1 million logs and 10 thousand traces. An unlimited number of users get complete resource and server monitoring for an unlimited number of servers, plus seven-day data retention and stellar support services during business hours. This is a significant opportunity for developers and development teams worldwide to improve their application management and provide a world-class experience to their customers.
Huntington Beach, Calif. – May 24, 2023 – Netreo, the award-winning provider of IT infrastructure monitoring and observability solutions and one of Inc. 5000's fastest-growing companies, today announced new Retrace consumption-based pricing designed to deliver APM and ELM for All. Consumption-based pricing is designed to disrupt the market by enabling DevOps teams of all sizes to benefit from the full-featured Retrace application performance monitoring plus error and log management solution for only $9.99 per month (US dollars).
Currently, DevOps teams are forced to buy separate monitoring products or high-cost modules to get the same application performance, error and log management functionality offered by Retrace. Such go-to-market strategies from competitors effectively restrict purchasing to larger organizations that can afford the higher cost of piece part solutions. By removing pricing barriers and providing more functionality, Netreo is disrupting the market by enabling application developers, DevOps teams and companies of all sizes to benefit from the advantages of the integrated Retrace solution.
“All developers have the same goal of building high quality, high performing applications; and companies invest in multiple high-cost and often disconnected tools to ensure applications are defect free and optimized,” said Netreo APM Business Unit General Manager, Sanjeev Mittal. “Retrace is already unique in the market by offering application performance monitoring, errors and log management and more in our core APM solution. By removing price barriers, Netreo is enabling DevOps teams to collaborate more effectively and create higher quality products, while reducing expensive licenses and infrastructure costs.”
Starter consumption plans provide complete access to all Retrace functions without restrictive contracts. Customers pay monthly and get unlimited users, complete resource and server monitoring for an unlimited number of servers, seven-day data retention, and technical support during business hours.
The Retrace full lifecycle APM solution delivers robust APM capabilities combined with the top tools and capabilities that developers and IT teams need most to eliminate application bugs and performance issues before they impact users. Turning detailed application tracing, centralized logging, critical metrics monitoring and more into actionable insights, Retrace enhances troubleshooting and optimizes performance throughout the entire lifecycle of enterprise applications. Supporting a wide range of programming languages, frameworks and platforms, including .NET, Java, .NET Core, Node.js, PHP, Python and Ruby, Retrace enables developers and DevOps teams worldwide to improve their application management and provide a world-class experience to their customers.
Netreo’s full-stack IT infrastructure management (ITIM), application performance monitoring (APM) and digital experience monitoring (DEM) solutions empower enterprise ITOps, developers and IT leaders with AIOps-driven observability, actionable insights, process automation and accelerated issue resolution. By having real-time intelligence on all resources, devices and applications deployed in cloud, on-premises and hybrid networks, Netreo’s users have the confidence to deliver more reliable and innovative internal and external customer digital experiences. Netreo is available via subscription, and in on-premises and cloud deployment models. Netreo is one of Inc. 5000’s fastest-growing companies and is trusted worldwide by thousands of private and public entities, managing half a billion resources per day.
Try Retrace and Prefix for free or connect with Stackify by Netreo on Twitter, LinkedIn and Facebook.
Request a demo of Netreo or connect with Netreo on Twitter, LinkedIn, and Facebook.
Media Contact:
Kyle Biniasz
Vice President of Marketing
(949) 769-5705
This article will explain how SDLC works, dive deeper into each of the phases, and provide examples to help you better understand each one.
SDLC, or the Software Development Life Cycle, is a process that produces software with the highest quality and lowest cost in the shortest time possible. SDLC provides a well-structured flow of phases that helps an organization quickly produce high-quality software that is well tested and ready for production use.
The SDLC involves six phases, which we'll explore in detail below. Popular SDLC models include the waterfall model, spiral model, and Agile model.
So, how does the Software Development Life Cycle work?
SDLC works by lowering the cost of software development while simultaneously improving quality and shortening production time. SDLC achieves these apparently divergent goals by following a plan that removes the typical pitfalls of software development projects. That plan starts by evaluating existing systems for deficiencies.
Next, it defines the requirements of the new system. It then creates the software through the stages of analysis, planning, design, development, testing, and deployment. By anticipating costly mistakes like failing to ask the end-user or client for feedback, SLDC can eliminate redundant rework and after-the-fact fixes.
It's also important to know that there is a strong focus on the testing phase. Because the SDLC is a repetitive methodology, you have to ensure code quality at every cycle. Many organizations tend to spend little effort on testing, even though a stronger focus on testing can save them a lot of rework, time, and money. Be smart and write the right types of tests.
Next, let’s explore the different stages of the Software Development Life Cycle.
Following the best practices and stages of the SDLC ensures the process works in a smooth, efficient, and productive way.
“What are the current problems?” This stage of the SDLC means getting input from all stakeholders, including customers, salespeople, industry experts, and programmers. Learn the strengths and weaknesses of the current system with improvement as the goal.
“What do we want?” In this stage of the SDLC, the team determines the cost and resources required for implementing the analyzed requirements. It also details the risks involved and provides sub-plans for mitigating those risks.
In other words, the team should determine the feasibility of the project and how they can implement the project successfully with the lowest risk in mind.
“How will we get what we want?” This phase of the SDLC starts by turning the software specifications into a design plan called the Design Specification. All stakeholders then review this plan and offer feedback and suggestions. It’s crucial to have a plan for collecting and incorporating stakeholder input into this document. Failure at this stage will almost certainly result in cost overruns at best and the total collapse of the project at worst.
“Let’s create what we want.”
At this stage, the actual development starts. It’s important that every developer sticks to the agreed blueprint. Also, make sure you have proper guidelines in place about the code style and practices.
For example, define a nomenclature for files, or define a variable naming style such as camelCase. This will help your team produce organized and consistent code that is easier to understand and also easier to test during the next phase; a brief sketch follows below.
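As a minimal illustration, here is what such an agreement might look like in Python, where the usual convention is snake_case rather than the camelCase common in Java or JavaScript; the names themselves are hypothetical.

MAX_RETRIES = 3  # constants: UPPER_SNAKE_CASE

class InvoiceGenerator:  # classes: PascalCase
    def build_invoice(self, order_items):  # functions and variables: snake_case
        line_total = sum(order_items)
        return {"total": line_total, "retries_allowed": MAX_RETRIES}

Whatever convention the team picks, the point is that everyone follows the same one, so reviewers and testers can read any file without relearning its style.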
“Did we get what we want?” In this stage, we test for defects and deficiencies. We fix those issues until the product meets the original specifications.
In short, we want to verify if the code meets the defined requirements.
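For example, if a hypothetical requirement states that orders over $100 receive a 10% discount, the test for that requirement can be written directly against the code. The sketch below uses pytest-style assertions; calculate_total and the cart module are assumptions made for illustration.

# test_cart.py -- run with pytest
import pytest
from cart import calculate_total  # hypothetical module under test

def test_discount_applied_above_threshold():
    # Requirement: orders over $100 get a 10% discount.
    assert calculate_total([60, 50]) == pytest.approx(99.0)

def test_no_discount_below_threshold():
    assert calculate_total([40, 50]) == pytest.approx(90.0)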
Try Stackify’s free code profiler, Prefix, to write better code on your workstation. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python.
“Let’s start using what we got.”
At this stage, the goal is to deploy the software to the production environment so users can start using the product. However, many organizations choose to move the product through different deployment environments such as a testing or staging environment.
This allows stakeholders to safely play with the product before releasing it to the market. It also allows any final mistakes to be caught before the product is released.
“Let’s get this closer to what we want.” The plan almost never turns out perfect when it meets reality. Further, as conditions in the real world change, we need to update and advance the software to match.
The DevOps movement has changed the SDLC in some ways. Developers are now responsible for more and more steps of the entire development process. We also see the value of shifting left. When development and Ops teams use the same toolset to track performance and pin down defects from inception to the retirement of an application, this provides a common language and faster handoffs between teams.
Application performance monitoring (APM) tools can be used in a development, QA, and production environment. This keeps everyone using the same toolset across the entire development lifecycle.
Read More: 3 Reasons Why APM Usage is Shifting Left to Development & QA
Security is an essential aspect of any software development process. However, unlike traditional approaches that treat security as a separate stage, a modern SDLC addresses security every step of the way through DevSecOps practices.
DevSecOps, an extension of DevOps, is a methodology that emphasizes the integration of security assessments throughout the entire SDLC. It ensures that the software is secure from initial design to final delivery and can withstand any potential threat. During DevSecOps, the team undergoes security assurance activities such as code review, architecture analysis, penetration testing, and automated detection, which are integrated into IDEs, code repositories, and build servers.
By following some best practices, DevSecOps can be integrated into SDLC in various ways.
The most common SDLC examples or SDLC models are listed below.
This SDLC model is the oldest and most straightforward. With this methodology, we finish one phase and then start the next. Each phase has its own mini-plan and each phase “waterfalls” into the next. The biggest drawback of this model is that small details left incomplete can hold up the entire process.
The Agile SDLC model separates the product into cycles and delivers a working product very quickly. This methodology produces a succession of releases. Testing of each release feeds back info that’s incorporated into the next version. According to Robert Half, the drawback of this model is that the heavy emphasis on customer interaction can lead the project in the wrong direction in some cases.
This SDLC model emphasizes repetition. Developers create a version very quickly and for relatively little cost, then test and improve it through rapid and successive versions. One big disadvantage here is that it can eat up resources fast if left unchecked.
An extension of the waterfall model, this SDLC methodology tests at each stage of development. As with waterfall, this process can run into roadblocks.
This high-risk SDLC model throws most of its resources at development and works best for small projects. It lacks the thorough requirements definition stage of the other methods.
The most flexible of the SDLC models, the spiral model is similar to the iterative model in its emphasis on repetition. The spiral model goes through the planning, design, build and test phases over and over, with gradual improvements at each pass.
Each SDLC model offers a unique process for your team’s various project challenges. The project’s specifications and intended results significantly influence which model to use. For example, the waterfall model works best for projects where your team has no or limited access to customers to provide constant feedback. However, the Agile model’s flexibility is preferred for complex projects with constantly changing requirements.
Hence, the Agile SDLC model has recently become increasingly popular and in demand. This demand can be primarily linked to the agile model’s flexibility and core principles. By its core principles, we mean adaptability, customer involvement, lean development, teamwork, time, sustainability, and testing, with its two primary elements being teamwork and time (faster delivery). So rather than creating a timeline for the project, agile breaks the project into individual deliverable ‘time-boxed’ pieces called sprints. This model prioritizes flexibility, adaptability, collaboration, communication, and quality while promoting early and continuous delivery. Ultimately, all this ensures that the final product meets customer needs and can quickly respond to market demands.
However, regardless of the model you pick, there are a lot of tools and solutions, like Stackify’s Retrace tool, to assist you every step of the way.
SDLC done right can allow the highest level of management control and documentation. Developers understand what they should build and why. All parties agree on the goal upfront and see a clear plan for arriving at that goal. Everyone understands the costs and resources required.
Several pitfalls can turn an SDLC implementation into more of a roadblock to development than a tool that helps us. Failure to take into account the needs of customers and all users and stakeholders can result in a poor understanding of the system requirements at the outset. The benefits of SDLC only exist if the plan is followed faithfully.
Want to improve application quality and monitor application performance at every stage of the SDLC? Try out Stackify's Retrace tool for free and experience how it can help your organization produce higher-quality software.