Docker Build: A Beginner’s Guide to Building Docker Images
https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/ (Fri, 29 Sep 2023)

Docker has changed the way we build, package, and deploy applications. But this concept of packaging apps in containers isn’t new—it existed long before Docker.

Docker just made container technology easy for people to use. This is why Docker is a must-have in most development workflows today. Most likely, your dream company is using Docker right now.

Docker’s official documentation has a lot of moving parts. Honestly, it can be overwhelming at first. You could find yourself needing to glean information here and there to build that Docker image you’ve always wanted to build.

Maybe building Docker images has been a daunting task for you, but it won’t be after you read this post. Here, you’ll learn how to build—and how not to build—Docker images. You’ll be able to write a Dockerfile and publish Docker images like a pro.

Install Docker

First, you’ll need to install Docker. Docker runs natively on Linux. That doesn’t mean you can’t use Docker on Mac or Windows. In fact, there’s Docker for Mac and Docker for Windows. I won’t go into details on how to install Docker on your machine in this post. If you’re on a Linux machine, this guide will help you get Docker up and running.

Now that you have Docker set up on your machine, you’re one step closer to building images with Docker. Most likely, you’ll come across two terms, “containers” and “images,” that can be confusing.


Docker Images and Containers

Docker containers are runtime instances of Docker images, whether running or stopped. In fact, one of the major differences between Docker containers and images is that containers have a writable layer and it’s the container that runs your software. You can think of a Docker image as the blueprint of a Docker container.

When you create a Docker container, you’re adding a writable layer on top of the Docker image. You can run many Docker containers from the same Docker image.
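For instance, here’s a quick illustration in the shell, assuming the public nginx image is available (any image you have locally works the same way):

$ docker pull nginx
$ docker run -d --name web-a nginx
$ docker run -d --name web-b nginx
$ docker ps   # two separate containers, both created from the same image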

Building your first Docker image

It’s time to get our hands dirty and see how Docker build works in a real-life app. We’ll generate a simple Node.js app with an Express app generator. Express generator is a CLI tool used for scaffolding Express applications. After that, we’ll go through the process of using Docker build to create a Docker image from the source code.

We start by installing the express generator as follows:

$ npm install express-generator -g

Next, we scaffold our application using the following command:

$ express docker-app

Now we move into the project directory and install package dependencies:

$ cd docker-app
$ npm install

Start the application with the command below:

$ npm start

If you point your browser to http://localhost:3000, you should see the application default page, with the text “Welcome to Express.”

Dockerfile

Mind you, the application is still running on your machine, and you don’t have a Docker image yet. Of course, there are no magic wands you can wave at your app and turn it into a Docker container all of a sudden. You’ve got to write a Dockerfile and build an image out of it.

Docker’s official docs define Dockerfile as “a text document that contains all the commands a user could call on the command line to assemble an image.” Now that you know what a Dockerfile is, it’s time to write one.

Docker builds images by reading the instructions in a Dockerfile. Each instruction has two components: the instruction itself and its arguments.

A Docker instruction can be written as:

RUN npm install

Here, RUN is the instruction and “npm install” is the argument. There are many Dockerfile instructions, but below are the ones you’ll come across most often, with an explanation of each. We’ll use some of them in this post.

Docker Instructions

  • FROM: Specifies the base image we want to start from.
  • RUN: Runs commands during the image build process.
  • ENV: Sets environment variables within the image, making them accessible both during the build process and while the container is running. If you only need build-time variables, use the ARG instruction instead.
  • COPY: Copies a file or folder from the host system into the Docker image.
  • EXPOSE: Documents the port the container listens on at runtime.
  • ADD: An advanced form of COPY. It can also copy files from a URL into a destination in the image, and it automatically extracts a local tarball into its destination in the image.
  • WORKDIR: Sets the current working directory for the instructions that follow.
  • VOLUME: Creates a mount point for volumes attached to the Docker container.
  • USER: Sets the user name and UID used when running the container. You can use this instruction to run the container as a non-root user.
  • LABEL: Specifies metadata for the Docker image.
  • ARG: Defines build-time variables as key-value pairs. ARG variables are not accessible in the running container; to keep a variable available at runtime, use ENV instead.
  • CMD: Provides the default command executed when the container starts. Only one CMD instruction takes effect; if multiple are present, only the last one is used.
  • ENTRYPOINT: Specifies the executable that runs when the Docker container starts. If you don’t specify an ENTRYPOINT, shell-form CMD instructions run under “/bin/sh -c”.

Creating a Dockerfile

Enough of all the talk. It’s time to write the Dockerfile for this project. At the root directory of your application, create a file with the name “Dockerfile.”

$ touch Dockerfile

Dockerignore

There’s an important concept you need to internalize—always keep your Docker image as lean as possible. This means packaging only what your applications need to run. Please don’t do otherwise.

In reality, source code usually contains other files and directories like .git, .idea, .vscode, or ci.yml. Those are essential for our development workflow, but won’t stop our app from running. It’s a best practice not to have them in your image—that’s what .dockerignore is for. We use it to prevent such files and directories from making their way into our build.

Create a file with the name .dockerignore at the root folder with this content:

.git
.gitignore
node_modules
npm-debug.log
Dockerfile*
docker-compose*
README.md
LICENSE
.vscode

The Base Image

A Dockerfile usually starts from a base image. As defined in the [Docker documentation](https://docs.docker.com/engine/reference/builder/), a base image (or parent image) is the image your image is built upon. It’s your starting point. It could be an Ubuntu OS, Red Hat, MySQL, Redis, etc.

Base images don’t just fall from the sky. They’re created—and you too can create one from scratch. There are also many base images out there that you can use, so you don’t need to create one in most cases.

We add the base image to the Dockerfile using the FROM instruction, followed by the base image name:

# Filename: Dockerfile
FROM node:18-alpine

Copying source code

Let’s instruct Docker to copy our source during Docker build:

# Filename: Dockerfile
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .

First, we set the working directory using WORKDIR. We then copy files using the COPY instruction. The first argument is the source path on the host, and the second is the destination path on the image filesystem. We copy package.json and install our project dependencies using npm install. This creates, inside the image, the node_modules directory we excluded in .dockerignore.

You might be wondering why we copied package.json before the source code. Docker images are made up of layers, created from the output of each instruction. Since package.json does not change as often as our source code, we don’t want to keep rebuilding node_modules each time we run docker build.

Copying over the files that define our app dependencies and installing them immediately enables us to take advantage of the Docker layer cache. The main benefit here is quicker build time. There’s a really nice blog post that explains this concept in detail.

Exposing a port

Exposing port 3000 informs Docker which port the container is listening on at runtime. Let’s modify the Dockerfile and expose port 3000.

# Filename: Dockerfile
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000

Docker CMD

The CMD instruction tells Docker how to run the application we packaged in the image. CMD follows the format CMD ["command", "argument1", "argument2"].

# Filename: Dockerfile
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

Building Docker images

With Dockerfile written, you can build the image using the following command:

$ docker build .

We can see the image we just built using the command docker images.

$ docker images

If you run the command above, you will see something similar to the output below. Notice that the image has no repository name or tag yet:

REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
<none>       <none>   7b341adb0bf1   2 minutes ago   83.2MB

Tagging a Docker image

When you have many images, it becomes difficult to know which image is what. Docker provides a way to tag your images with friendly names of your choosing. This is known as tagging. Let’s proceed to tag the Docker image we just built. Run the command below:

$ docker build . -t yourusername/example-node-app

If you run the command above, you should have your image tagged already. Running docker images again will show your image with the name you’ve chosen.

$ docker images

The output of the above command should be similar to this:

REPOSITORY                      TAG      IMAGE ID       CREATED         SIZE
yourusername/example-node-app   latest   be083a8e3159   7 minutes ago   83.2MB

Running or Testing a Docker image

You run a Docker image using the docker run command:

$ docker run -p 80:3000 yourusername/example-node-app

The command is pretty simple. We supplied the -p argument to map port 80 on the host machine to port 3000, the port the app listens on inside the container. Now you can access your app from your browser at http://localhost.

To run the container in a detached mode, you can supply argument -d:

$ docker run -d -p 80:3000 yourusername/example-node-app

A big congrats to you! You just packaged an application that can run anywhere Docker is installed.

Pushing a Docker image to the Docker repository

The Docker image you built still resides on your local machine. This means you can’t run it on any other machine outside your own—not even in production! To make the Docker image available for use elsewhere, you need to push it to a Docker registry.

A Docker registry is where Docker images live. One of the popular Docker registries is Docker Hub. You’ll need an account to push Docker images to Docker Hub, and you can create one [here.](https://hub.docker.com/)

With your [Docker Hub](https://hub.docker.com/) credentials ready, you need only to log in with your username and password.

$ docker login

Enter your Docker Hub username and a Docker Hub access token or password to authenticate.

Retag the image with a version number:

$ docker tag yourusername/example-node-app yourdockerhubusername/example-node-app:v1

Then push with the following:

$ docker push yourdockerhubusername/example-node-app:v1

If you’re as excited as I am, you’ll probably want to poke your nose into what’s happening in this container, and even do cool stuff with the Docker CLI.

You can list Docker containers:

$ docker ps

And you can inspect a container:

$ docker inspect <container-id>

You can view Docker logs in a Docker container:

$ docker logs <container-id>

And you can stop a running container:

$ docker stop <container-id>

Logging and monitoring are as important as the app itself. You shouldn’t put an app in production without proper logging and monitoring in place, no matter what the reason. Retrace provides first-class support for Docker containers. This guide can help you set up a Retrace agent.

Conclusion

The whole concept of containerization is all about taking away the pain of building, shipping, and running applications. In this post, we’ve learned how to write a Dockerfile as well as build, tag, and publish Docker images. Now it’s time to build on this knowledge and learn how to automate the entire process using continuous integration and delivery.

SQL Performance Tuning: 7 Practical Tips for Developers
https://stackify.com/performance-tuning-in-sql-server-find-slow-queries/ (Fri, 22 Sep 2023)

Being able to execute SQL performance tuning is a vital skill for software teams that rely on relational databases. Vital isn’t the only adjective that we can apply to it, though. Rare also comes to mind, unfortunately.

Many software professionals think that they can just leave all the RDBMS settings as they came by default. They’re wrong. Often, the default settings your RDBMS comes configured with are far from being the optimal ones. Not optimizing such settings results in performance issues that you could easily avoid.

Some programmers, on the other hand, believe that even though SQL performance tuning is important, only DBAs should do it. They’re wrong as well.

First of all, not all companies will even have a person with the official title “DBA.” It depends on the size of the company, more than anything.

But even if you have a dedicated DBA on the team, that doesn’t mean you should overwhelm them with tasks that could’ve been performed by the developers themselves. If a developer can diagnose and fix a slow query, then there’s no reason why they shouldn’t do it. The relevant word here, though, is can—most of the time, they can’t.

How do we fix this problem? Simple: We equip developers with the knowledge they need to find slow SQL queries and do performance tuning in SQL Server. In this post, we’ll give you seven tips to do just that.

What Is SQL Performance Tuning?

Before we show you our list of tips you can use to do SQL performance tuning in your software organization, we should define SQL performance tuning.

So what is SQL performance tuning? I bet you already have an idea, even if it’s a vague one.

In a nutshell, SQL performance tuning consists of making queries of a relational database run as fast as possible.

As you’ll see in this post, SQL performance tuning is not a single tool or technique. Rather, it’s a set of practices that makes use of a wide array of techniques, tools, and processes.

7 Ways to Find Slow SQL Queries

Without further ado, here are seven ways to find slow SQL queries in SQL Server.

1. Generate an Actual Execution Plan

In order to diagnose slow queries, it’s crucial to be able to generate graphical execution plans, which you can do by using SQL Server Management Studio. Actual execution plans are generated after the queries run. But how do you go about generating an execution plan?

Begin by clicking on “Database Engine Query”, on the SQL Server Management Studio toolbar.

After that, enter the query and click “Include Actual Execution Plan” on the Query menu.

Finally, it’s time to run your query. You do that by clicking on the “Execute” toolbar button or pressing F5. Then, SQL Server Management Studio will display the execution plan in the results pane, under the “Execution Plan” tab.
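If you prefer to stay in a query window, T-SQL can also return the actual plan as XML alongside your results. Here’s a minimal sketch; the table name is purely illustrative:

SET STATISTICS XML ON;

-- Your slow query goes here
SELECT TOP 10 * FROM Sales.Orders ORDER BY OrderDate DESC;

SET STATISTICS XML OFF;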

2. Monitor Resource Usage

Resource usage is an essential factor when it comes to SQL database performance. Since you can’t improve what you don’t measure, you definitely should monitor resource usage.

So how can you do it?

If you’re using Windows, use the System Monitor tool to measure the performance of SQL Server. It enables you to view SQL Server objects, performance counters, and the behavior of other objects.

Using System Monitor allows you to monitor Windows and SQL Server counters simultaneously, so you can verify if there’s any correlation between the performance of the two.
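If you prefer the command line, Windows also ships with typeperf, which samples the same counters System Monitor displays. A rough sketch; exact SQL Server counter names vary by instance:

C:\> typeperf "\Processor(_Total)\% Processor Time" "\SQLServer:Buffer Manager\Buffer cache hit ratio" -si 5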

3. Use the Database Engine Tuning Advisor

Another important technique for SQL performance tuning is to analyze the performance of Transact-SQL statements that are run against the database you intend to tune.

You can use the Database Engine Tuning Advisor to analyze the performance implications.

But the tool goes beyond that: it also recommends actions you should take based on its analysis. For instance, it might advise you to create or remove indexes.

4. Find Slow Queries With SQL DMVs

One of the great features of SQL Server is all of the dynamic management views (DMVs) that are built into it. There are dozens of them and they can provide a wealth of information about a wide range of topics.

There are several DMVs that provide data about query stats, execution plans, recent queries and much more. You can use them together to get some amazing insights.

For example, the query below can be used to find the queries that use the most reads, writes, worker time (CPU), etc.

SELECT TOP 10 SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(qt.TEXT)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)+1),
qs.execution_count,
qs.total_logical_reads, qs.last_logical_reads,
qs.total_logical_writes, qs.last_logical_writes,
qs.total_worker_time,
qs.last_worker_time,
qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
qs.last_execution_time,
qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC -- logical reads
-- ORDER BY qs.total_logical_writes DESC -- logical writes
-- ORDER BY qs.total_worker_time DESC -- CPU time

The result of the query will look something like the example below. The image is from a marketing app I made. You can see that one particular query (the top one) takes up all the resources.

By looking at this, I can copy that SQL query and see if there is some way to improve it, add an index, etc.

Find slow SQL queries with DMVs

Pros: Always available basic rollup statistics.
Cons: Doesn’t tell you what is calling the queries. Can’t visualize when the queries are being called over time.

5. Query Reporting via APM Solutions

One of the great features of application performance management (APM) tools is the ability to track SQL queries. For example, Retrace tracks SQL queries across multiple database providers, including SQL Server. Retrace tells you how many times a query was executed, how long it takes on average, and what transactions called it.

This is valuable information for SQL performance tuning. APM solutions collect this data by doing lightweight performance profiling against your application code at runtime.

Below is a screenshot from Retrace’s application dashboard showing which SQL queries take the longest for a particular application.

SQL Performance Tuning With Retrace Top Queries
Retrace Top SQL Queries

Retrace collects performance statistics about every SQL query being executed. You can search for specific queries to find potential problems.

Retrace View All SQL Queries
Retrace View All SQL Queries

By selecting an individual query, you see how often that query is called over time and how long it takes. You also see what webpages use the SQL query and how it impacts their performance.

Retrace SQL Performance Over Time
Retrace SQL Performance Over Time

Since Retrace is a lightweight code profiler and captures ASP.NET request traces, it even shows you the full context of what your code is doing.

Below is a captured trace showing all the SQL queries and other details about what the code was doing. Retrace even shows log messages within this same view. Also, notice that it shows the server address and database name that’s executing the query. You can also see the number of records it returns.

Retrace Web Transaction Trace
Retrace Web Transaction Trace

As you can see, Retrace provides comprehensive SQL reporting capabilities as part of its APM capabilities. It also provides multiple monitoring and alerting features around SQL queries.

Pros: Detailed reporting across apps, per app, and per query. Shows transaction trace details of how queries are used. Starts at just $10 a month and is always running once installed.

Cons: Doesn’t provide the number of reads or writes per query.

6. SQL Server Extended Events

SQL Server Profiler has been around for a very long time. It was a very useful tool for seeing in real time what SQL queries are being executed against your database, but it’s now deprecated. Microsoft replaced it with SQL Server Extended Events.

This is sure to anger a lot of people, but I can understand why Microsoft is doing it. Extended Events works via Event Tracing for Windows (ETW).

This has been the common way for all Microsoft-related technologies to expose diagnostic data. ETW provides much more flexibility. As a developer, I could easily tap into ETW events from SQL Server to collect data for custom uses. That’s really cool and really powerful.
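As a small sketch of what this looks like in practice, the T-SQL below creates and starts a session that captures statements slower than one second. The session name, threshold, and file target are all illustrative:

CREATE EVENT SESSION LongRunningQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 1000000 -- duration is measured in microseconds
)
ADD TARGET package0.event_file (SET filename = N'LongRunningQueries.xel');

ALTER EVENT SESSION LongRunningQueries ON SERVER STATE = START;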

SQL Server Extended Events

MORE: Introducing SQL Server Extended Events

Pros: Easier to enable and leave running. Easier to develop custom solutions with.

Cons: Since it is fairly new, most people may not be aware of it.

7. SQL Azure Query Performance Insights

I’m going to assume that SQL Azure’s performance reporting is built on top of Extended Events. Within the Azure Portal, you can get access to a wide array of performance reporting and optimization tips that are very helpful.

Note: These reporting capabilities are only available for databases hosted on SQL Azure.

In the screenshot below, you can see how SQL Azure makes it easy to see the queries that use the most CPU, Data IO, and Log IO. It has some great basic reporting built into it.

SQL Azure Top Queries

You can also select an individual query and get more details to help with SQL performance tuning.

SQL Azure Query Details

Pros: Great basic reporting.
Cons: Only works on Azure. No reporting across multiple databases.

Summary

Next time you need to do some performance tuning with SQL Server, you’ll have a few options at your disposal to consider. Odds are that you’ll use more than one of these tools depending on what you are trying to accomplish.

Try Stackify’s free code profiler, Prefix, to write better code on your workstation. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python.

If you’re using an APM solution like Retrace, be sure to check what kind of SQL performance functionality it has built-in. If you don’t have an APM solution or aren’t sure what it is, be sure to read this: What is Application Performance Management and 10 critical features that developers need in APM.

Load Testing vs. Performance Testing vs. Stress Testing
https://stackify.com/load-testing-vs-performance-testing-vs-stress-testing/ (Sat, 05 Aug 2023)

Just conducting one type of testing is generally not enough. For example, let’s say you decide to perform unit testing only. However, unit tests only verify business logic. Many other types of tests exist, such as integration tests, which verify the integration between components.

But what if you want to measure the maximum performance of your application? Or what if you want to know how the application behaves under extreme stress?

To answer these questions, you can pursue these types of testing:

  • Load testing
  • Performance testing
  • Stress testing

These types of tests are ideal for answering the above questions. However, the difference between those testing types is subtle.

This article will guide you through each of those testing types. You’ll find out about each type of testing and learn about the differences between them.

What Is Performance Testing?

Performance testing is an umbrella term for both load and stress testing. Performance testing refers to all testing related to verifying the system’s performance and monitoring how it behaves under stress. Therefore we can say that performance testing is concerned with the following metrics:

  • Reliability: Determine the error rate and how it changes under higher loads.
  • Stability: You can measure this through memory and CPU usage.
  • Response time: Measure the average response time for requests.
  • Scalability: Determine how the application behaves under different types of loads.

Performance testing is often linked to a customer’s non-functional requirements. Imagine a client who asks you to develop a service that handles ticket sales for events and expects the application to handle up to 50,000 requests per minute. This is a non-functional requirement that performance testing helps to validate.

The goal of performance testing is not to find bugs but to find performance bottlenecks. Why is this important? A single performance bottleneck can have a huge impact on the overall application’s performance. Therefore, it’s crucial to conduct performance testing to detect such issues.

In addition, this type of testing also verifies the performance in different environments to make sure the application works well for different setups and operating systems. To give an example, an application might work fine on a Linux server but have performance issues on a Windows server. Performance testing should help you rule out such problems.

In short, the goal of performance testing is to gather insights into the application’s performance and communicate these performance metrics to the stakeholders.

Benefits of Performance Testing

It’s always a good idea to measure the performance of an application. Delivering an application that hasn’t been performance tested is the same as delivering a bike with brakes that haven’t been tested.

Performance testing helps you with the following aspects:

  • Measure the stability of the software.
  • Assess how your application behaves under a normal load, as this is key information for the client.
  • Find performance bottlenecks early on in the development life cycle.
  • Improve performance further, because measured data helps you tailor component configurations to make them more streamlined.

Next, let’s get into the details of load testing.

What Is Load Testing?

Load testing specifically tries to identify how the application behaves under expected loads. Therefore, you should first know what load you expect for your application. Once you know this, you can start load testing the application.

Often, load testing includes a scenario where 100 extra requests hit the application every 30 seconds. Next, you’ll want to increase the number of requests up to the expected load for the application. Once the expected load has been reached, most testing engineers prefer to continue hitting the application for a couple more minutes to detect possible memory or CPU issues. These kinds of issues might only pop up after hitting the application for a while and are rarely visible from the beginning.
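As a rough sketch, a single constant-load stage is easy to produce with a tool like Apache Bench, assuming your app listens locally on port 3000; ramp-up profiles like the one described above are better served by dedicated load testing tools such as k6 or JMeter:

$ ab -n 10000 -c 100 http://localhost:3000/   # 10,000 requests at a concurrency of 100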

Moreover, the goal is to gather statistics about important metrics, such as response time, reliability, and stability.

To summarize, load testing is generally concerned with collecting all this data and analyzing it to detect anomalies. The idea of load testing is to create an application that behaves stably under an expected load. You don’t want to see ever-increasing memory usage, as that might indicate you have a memory leak.

Benefits of Load Testing

Load testing helps you to get a better understanding of the expected load your application can handle. And understanding those limits helps you to reduce the risk of failure.

Let’s say your application can handle 5,000 requests per minute. Because you know this limit, your organization can take precautions to scale the application in case the number of requests per minute gets close to this limit. By taking these precautions, you’re reducing the risk of failure.

In addition, load testing gives you good insights into the memory usage and CPU time of your application. This data is of great value for measuring the stability of your application. Ideally, the memory usage for your application should remain stable when you’re performing load testing.

Last, let’s explore the true meaning of stress testing.


What Is Stress Testing?

Stress testing helps you detect the breaking point of an application. Also, it allows a testing engineer to find the maximum load an application can handle. In other words, it lets you determine the upper limit of the application.

To give an example, let’s say a certain application programming interface can handle 5,000 simultaneous requests, but it will fail if it has to process more requests for the given setup. This limit is important for companies to know because it allows them to scale their application when needed.

In addition, it’s also a common practice to increase “stress” on the application by closing database connections, removing access to files, or closing network ports. The idea here is to evaluate how the application reacts under such extreme conditions. Therefore, this type of testing is extremely useful when you want to evaluate the robustness of your application.

Benefits of Stress Testing

Stress testing provides the following benefits for you and your organization:

  • Find the exact breaking point for your application.
  • Evaluate the robustness of an application.
  • Determine which components are most likely to break first when putting the application under extreme stress and how to handle this type of failure accordingly.

Finally, let’s compare the above three testing types and learn about the differences between performance testing, load testing, and stress testing.

Key         Stress Testing                                        Load Testing
Purpose     Tests the system’s performance under extreme load.    Tests the system’s performance under the expected load.
Threshold   Conducted above threshold limits.                     Conducted at threshold limits.
Result      Lets you evaluate how robust your application is.     Ensures the system can handle the expected load.

Differences Between Performance, Load, and Stress Testing

To finish off, let’s do a quick recap of performance testing, load testing, and stress testing so that you see how these tests are related.

  • Performance testing is an umbrella term that includes both load testing and stress testing. Performance testing is concerned with evaluating the overall system’s performance and collecting metrics such as availability, response time, and stability. Basically, performance testing regards how well your application works: not in terms of features per se, but in terms of how those features behave in different performance-related scenarios. Those can involve the number of users, connection quality, disk usage, the number of jobs to be processed, and so forth.
  • Load testing is part of performance testing. It is a technique that verifies whether the application can handle the expected load on the application. Load testing works well for detecting performance bottlenecks, as they can have a big impact on the overall performance.
  • A testing engineer uses stress testing to find the breaking point of an application. In addition, this type of testing verifies how the application behaves in extreme stress situations, such as losing database connectivity or not being able to access an application programming interface. The goal is to find out how the application behaves in such stress situations to determine its robustness. Both load and stress testing are an integral part of performance testing and should be carried out each time the application performance needs to be evaluated.

Conclusion

To conclude, don’t neglect performance testing. It’s a great tool for increasing customer satisfaction because you can guarantee that your application works well for your customers in all usage scenarios. To help you conduct performance tests, you can use dedicated tools such as Stackify’s Retrace. An application monitoring tool gives you insights into memory usage and CPU time. In addition, for Node.js specifically, it can give you insights into the Node.js event loop and how long it’s blocked. An event loop that’s continuously blocked is often a bad sign.

10 Key Application Performance Metrics & How to Measure Them
https://stackify.com/application-performance-metrics/ (Thu, 27 Jul 2023)

If you are trying to figure out how to measure the performance of your application, you are in the right place. We spend a lot of time at Stackify thinking about application performance, especially about how to monitor and improve it. In this article, we cover some of the most important application performance metrics you should be tracking.

Before we cover those metrics, let’s speak briefly about what application performance metrics are.

What Are Application Performance Metrics?

Application performance metrics are the indicators used to measure and track the performance of software applications. Performance here includes, but is not limited to, the availability, end-user experience, resource utilization, reliability, and responsiveness of your software application.

The act of continuously monitoring your application’s metrics is called Application Performance Monitoring. 

But why is this important?

Why Do You Need Application Performance Metrics?

For starters, application performance metrics allow you and your team to address issues affecting your application proactively. This is particularly important in situations when the application is the business itself.

Monitoring application performance metrics helps your team:

  • Avoid downtime.
  • Identify anomalies, troubleshoot, and remediate issues early and before they impact the end users.
  • Ensure the application’s performance is always optimal.
  • Keep your end users satisfied.
  • Drive business growth and better scale your business.

However, as important as it is to monitor application performance metrics, trying to monitor everything is a time-consuming, ineffective, and unproductive use of resources. Tracking the right metrics matters much more, as it provides better insight into and understanding of the technical functioning of your application.

Key Application Performance Metrics

Here are some of the most important application performance metrics you should be tracking.

1. User Satisfaction / Apdex Scores

The application performance index, or Apdex score, has become an industry standard for tracking the relative performance of an application.

It works by specifying a goal for how long a specific web request or transaction should take.

Those transactions are then bucketed into satisfied (fast), tolerating (sluggish), too slow, and failed requests. A simple math formula is then applied to provide a score from 0 to 1.
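The standard formula counts each tolerating request as half a satisfied one:

Apdex = (Satisfied + Tolerating / 2) / Total samples

For example, 600 satisfied and 300 tolerating requests out of 1,000 total samples give (600 + 150) / 1,000 = 0.75.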


Retrace automatically tracks satisfaction scores for every one of your applications and web requests. We convert the number to a 0-100 instead of 0-1 representation to make it easier to understand.

Web Application Performance Metrics for Satisfaction
Retrace Satisfaction Chart

2. Average Response Time

Let me start by saying that averages suck. I highly recommend using the aforementioned user satisfaction Apdex scores as a preferred way to track overall performance. That said, averages are still a useful application performance metric.

Application Performance Metrics Averages

3. Error Rates

The last thing you want your users to see are errors. Monitoring error rates is a critical application performance metric.

There are potentially 3 different ways to track application errors:

  • HTTP Error % – Number of web requests that ended in an error
  • Logged Exceptions – Number of unhandled and logged errors from your application
  • Thrown Exceptions – Number of all exceptions that have been thrown
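The first of these, for example, is computed as a simple ratio over a time window: HTTP error % = (failed requests / total requests) × 100, so 50 errors out of 10,000 requests is an error rate of 0.5%.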

It is common to see thousands of exceptions being thrown and ignored within an application. Hidden application exceptions can cause a lot of performance problems.

4. Count of Application Instances

If your application scales up and down in the cloud, it is important to know how many server/application instances you have running. Auto-scaling can help ensure your application scales to meet demand and saves you money during off-peak times. This also creates some unique monitoring challenges.

For example, if your application automatically scales up based on CPU usage, you may never see your CPU get high. You would instead see the number of server instances get high. (Not to mention your hosting bill going way up!)


5. Request Rate

Understanding how much traffic your application receives will impact the success of your application. Potentially all other application performance metrics are affected by increases or decreases in traffic.

Request rates can be useful to correlate to other application performance metrics to understand the dynamics of how your application scales.

Monitoring the request rate can also be good for spotting spikes or even inactivity. If you have a busy API that suddenly gets no traffic at all, that sudden silence is something to watch out for.

A similar but slightly different metric to track is the number of concurrent users. This is another interesting metric to track to see how it correlates.

6. Application & Server CPU

If the CPU usage on your server is extremely high, you can guarantee you will have application performance problems. Monitoring the CPU usage of your server and applications is a basic and critical metric.

Virtually all server and application monitoring tools can track your CPU usage and provide monitoring alerts. It is important to track them per server but also as an aggregate across all the individually deployed instances of your application.

7. Application Availability

Monitoring and measuring if your application is online and available is a key metric you should be tracking. Most companies use this as a way to measure uptime for service level agreements (SLA).

If you have a web application, the easiest way to monitor application availability is via a simple scheduled HTTP check.

Retrace can run these types of HTTP “ping” checks every minute for you. It can monitor response times, status codes, and even look for specific content on the page.
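You can approximate this kind of check yourself with curl. A minimal sketch, assuming a hypothetical /health endpoint:

$ curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" https://yourapp.example.com/health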

8. Garbage Collection

If your application is written in C#, Java, or another programming language that uses garbage collection, you are probably aware of the performance problems that can arise from it.

When garbage collection occurs, it can cause your process to suspend and can use a lot of CPU.

Garbage collection may not be one of the first things you think of as a key application performance metric. Still, it can be a hidden performance problem that is always a good idea to keep an eye on.

For .NET, you can monitor this via the “% Time in GC” performance counter. Java has similar capabilities via JMX metrics. Retrace can monitor these via its application metrics capabilities.
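As a quick sketch, you can sample that counter from the command line with typeperf (counter and instance names may vary by environment):

C:\> typeperf "\.NET CLR Memory(_Global_)\% Time in GC" -si 5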


9. Memory Usage

Memory usage is vital because it helps one gauge how an application manages and consumes resources during execution. However, it’s important to recognize that memory usage has technical and financial implications.

From a technical standpoint, high memory usage, memory leaks, or insufficient memory significantly affect application performance and scalability. This will result in slower response time, increased latency, frequent crashes, and potential downtime. However, on a financial front, high memory usage might require additional infrastructure costs like hardware upgrades, cloud service expenses, or additional resources to accommodate your needs.

Thus, monitoring memory effectively is crucial to ensure optimal application performance while minimizing financial impacts.

10. Throughput

Throughput measures the number of transactions or requests an application can process within a given timeframe. It indicates how well an application handles a high volume of work, so high throughput generally indicates better performance and scalability.

It is, however, important to know that various factors like memory, disk I/O, and network bandwidth influence throughput. Regardless, it is an interesting metric to track, especially since you can use it for benchmarking, identifying resource limitations, and ensuring optimal resource utilization. It also supports decision-making, as it is great for assessing and comparing the performance of systems and applications under different loads.

Summary

Application performance measurement is necessary for all types of applications. Depending on your type of application, there could be many other monitoring needs.

Retrace can help you monitor a broad range of web application performance metrics. Retrace collects critical metrics about your applications, servers, code level performance, application errors, logs, and more. These can be used for measuring and monitoring the performance of your application.

IIS Error Logs and Other Ways to Find ASP.Net Failed Requests
https://stackify.com/beyond-iis-logs-find-failed-iis-asp-net-requests/ (Thu, 27 Jul 2023)

As exciting as it can be to write new features in your ASP.NET Core application, your users inevitably encounter failed requests. Do you know how to troubleshoot IIS or ASP.NET errors on your servers? It can be tempting to bang on your desk and proclaim your annoyance.

However, Windows and ASP.NET Core provide several different logs where failed requests are logged. This goes beyond simple IIS logs and can give you the information you need to combat failed requests.

Get to Know the 4 Different IIS Logs

If you have been dealing with ASP.NET Core applications for a while, you may be familiar with normal IIS logs. Such logs are only the beginning of your troubleshooting toolbox.

There are some other places to look if you are looking for more detailed error messages or can’t find anything in your IIS log file.

1. Standard IIS Logs

Standard IIS logs will include every single web request that flows through your IIS site.

Via IIS Manager, you can see a “Logging” feature. Click on this, and you can verify that your IIS logs are enabled and observe where they are being written to.

iis logs settings

You should find your logs in folders that are named by your W3SVC site ID numbers.

Need help finding your logs? Check out: Where are IIS Log Files Located?

By default, each logged request in your IIS log will include several key fields including the URL, querystring, and error codes via the status, substatus and win32 status.

These status codes can help identify the actual error in more detail.

#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2019-09-13 21:45:10 ::1 GET /webapp2 - 80 - ::1 Mozilla/5.0 - 500 0 0 5502
2019-09-13 21:45:10 ::1 GET /favicon.ico - 80 - ::1 Mozilla/5.0 http://localhost/webapp2 404 0 2 4

The “sc-status” and “sc-substatus” fields are the standard HTTP status code of 200 for OK, 404, 500 for errors, etc.

The “sc-win32-status” can provide more details that you won’t know unless you look up the code. They are basic Win32 error codes.

You can also see the endpoint the log message is for under “cs-uri-stem”. For example, “/webapp2.” This can instantly direct you to problem spots in your application.

Another key piece of info to look at is “time-taken.” This gives you the roundtrip time in milliseconds of the request and its response.
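When you just need a quick answer, plain-text search works too. As a rough sketch, this pulls 500 responses out of a site’s log folder; W3SVC1 is the default site’s ID, and the search string is crude and can over-match, but it surfaces errors fast:

C:\> cd C:\inetpub\logs\LogFiles\W3SVC1
C:\> findstr /C:" 500 " u_ex*.log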

By the way, if you are using Retrace, you can also use it to query across all of your IIS logs as part of its built-in log management functionality.

2. Can’t Find Your Request in the IIS Log? HTTPERR is Your IIS Error Log.

Every single web request should show in your IIS log. If it doesn’t, it is possible that the request never made it to IIS, or IIS wasn’t running.

It is also possible IIS logging is disabled. If IIS is running but you still are not seeing the log events, they may be going to HTTPERR.

Incoming requests to your server first route through HTTP.SYS before being handed to IIS. These types of errors get logged in HTTPERR.

Common errors are 400 Bad Request, timeouts, 503 Service Unavailable and similar types of issues. The built-in error messages and error codes from HTTP.SYS are usually very detailed.

Where are the HTTPERR error logs?

C:\Windows\System32\LogFiles\HTTPERR

3. Look for ASP.NET Core Exceptions in Windows Event Viewer

By default, ASP.NET will log unhandled 500-level exceptions to the Windows Application EventLog. This is handled by the ASP.NET Health Monitoring feature. You can control its settings via system.web/healthMonitoring in your web.config file.

Very few people realize that the number of errors written to the Application EventLog is rate limited. So you may not find your error!

By default, it will only log the same type of error once a minute. You can also disable writing any errors to the Application EventLog.

iis error logs in eventlog

Can’t find your exception?

You may not be able to find your exception in the EventLog. Depending on whether you are using WebForms, MVC, Core, WCF, or other frameworks, ASP.NET may not write any errors to the EventLog at all due to compatibility issues with the health monitoring feature.

By the way, if you install Retrace on your server, it can catch every single exception that is ever thrown in your code. It knows how to instrument into IIS features.

4. Failed Request Tracing for Advanced IIS Error Logs

Failed request tracing (FRT) is probably one of the least used features in IIS. It is, however, incredibly powerful. 

It provides robust IIS logging and works as a great IIS error log. FRT is enabled in IIS Manager and can be configured via rules for all requests, slow requests, or just certain response status codes.

You can configure it via the “Actions” section for a website:

The only problem with FRT is it is incredibly detailed. Consider it the stenographer of your application. It tracks every detail and every step of the IIS pipeline. You can spend a lot of time trying to decipher a single request.

5. Make ASP.NET Core Show the Full Exception…Temporarily

If other avenues fail you and you can reproduce the problem, you could modify your application’s configuration to show full exceptions.

Typically, server-side exceptions are disabled from being visible within your application for important security reasons. Instead, you will see a yellow screen of death (YSOD) or your own custom error page.

You can modify your application config files to make exceptions visible.

asp net error ysod

ASP.NET

You could use remote desktop to access the server and set customErrors to “RemoteOnly” in your web.config so you can see the full exception via “localhost” on the server. This would ensure that no users would see the full exceptions but you would be able to.

If you are OK with the fact that your users may now see a full exception page, you could set customErrors to “Off.”
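A minimal web.config sketch of the safer option looks like this:

<configuration>
  <system.web>
    <customErrors mode="RemoteOnly" />
  </system.web>
</configuration>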

.NET Core

Compared to previous versions of ASP.NET, .NET Core has completely changed how error handling works. You now need to use the DeveloperExceptionPage in your middleware.  

.NET Core gives you unmatched flexibility in how you want to see and manage your errors. It also makes it easy to wire in instrumentation like Retrace.
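A minimal sketch of the middleware wiring, using the .NET 6+ minimal hosting model (names vary slightly by template and version):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    // Show full exception details in the browser, for development only
    app.UseDeveloperExceptionPage();
}

app.Run();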

6. Using a .NET Profiler to Find ASP.NET Core Exceptions

.NET profilers like Prefix (which is free!) can collect every single exception that .NET throws in your code, even those that are hidden in your code.

Prefix is a free ASP.NET Core profiler designed to run on your workstation to help you optimize your code as you write it. Prefix can also show you your SQL queries, HTTP calls, and much, much more.

profiled asp.net iis error log

Get Proactive About Tracking Application Errors!

Trying to reproduce an error in production or chasing down IIS logs/IIS error logs is not fun. Odds are, there are probably many more errors going on that you aren’t even aware of. When a customer contacts you and says your site is throwing errors, you better have an easy way to see them!

Tracking application errors is one of the most important things every development team should do. If you need help, be sure to try Retrace which can collect every single exception across all of your apps and servers.

Also, check out our detailed guide on C# Exception Handling Best Practices.

If you are using Azure App Services, also check out this article: Where to Find Azure App Service Logs.

Best Log Management Tools: Useful Tools for Log Management, Monitoring, Analytics, and More
https://stackify.com/best-log-management-tools/ (Wed, 12 Jul 2023)

Gone are the days of painful plain-text log management. While plain-text data is still useful in certain situations, when it comes to doing extended analysis to gather insightful infrastructure data – and improve the quality of your code – it pays to invest in reliable log management tools and systems that can empower your business workflow.

Logs are not an easy thing to deal with, but they are nevertheless an important aspect of any production system. When you are faced with a difficult issue, it’s much easier to use a log management tool than it is to weave through endless text files spread throughout your system environment.

Try Stackify’s free code profiler, Prefix, to write better code on your workstation. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python.

The big advantage of log management tools is that they can help you easily pinpoint the root cause of any application or software error, within a single query. The same applies to security-related concerns, where many of the following tools are capable of helping your IT team prevent attacks even before they happen. Another factor is having a visual overview of how your software is being used globally by your user base — getting all this crucial data in one single dashboard is going to make your productivity rise substantially.

When picking the right log management tool for your needs, evaluate your current business operation. Decide whether you’re still a small operation looking to get basic data out of your logs, or you plan to enter the enterprise level – which will require a more powerful logging system and efficient tools to tackle large-scale log management.

We built Retrace to address the need for a cohesive, comprehensive developer tool that combines APM, errors, logs, metrics, and monitoring in a single dashboard. When it comes to log management tools, they run the gamut from stand-alone tools to robust solutions that integrate with your other go-to tools, analytics, and more. We put together this list of useful log management tools (listed below in no particular order) to provide an easy reference for anyone wanting to compare the current offerings and find a solution that best meets your needs.

46 Useful Log Management Tools, Monitoring, and Analytics

1. Retrace

@Stackify

Retrace

Tired of chasing bugs in the dark? Thanks to Retrace, you don’t have to. Retrace your code, find bugs, and improve application performance with this suite of essential tools that every developer needs, including logging, error monitoring, and code level performance.

Key Features: 

  • Combines logs, errors, and APM
  • Structured/semantic logging
  • Advanced searching and filtering capabilities
  • View and search custom log properties
  • Automatic color-coding to draw attention to errors and warnings
  • Tracking and reporting on where your log messages originated in your code
  • Detailed traces on web requests and transactions
  • View full application error details
  • Explore all your logging fields
  • Log analytics
  • Real-time log tailing
  • Use tags (highlighted in your logs)
  • Supports a variety of application and server logs

Cost:

2. Logentries

@Logentries

Logentries

Logentries is a cloud-based log management platform that makes any type of computer-generated type of log data accessible to developers, IT engineers, and business analysis groups of any size. Logentries’ easy onboarding process ensures that any business team can quickly and effectively start understanding their log data from day one.

Key Features:

  • Real-time search and monitoring; contextual view, custom tags, and live-tail search.
  • Dynamic scaling for different types and sizes of infrastructure.
  • In-depth visual analysis of data trends.
  • Custom alerts and reporting of pre-defined queries.
  • Modern security features to protect your data.
  • Flawless integration with leading chat and performance management tools.

Cost:

  • Essential: $3.82 per asset per month
  • Advanced: $6.36 per asset per month
  • Ultimate: $8.21 per asset per month
  • Enterprise: Custom quote.
    Note: this pricing is representative of environments with 250k assets

3. GoAccess

@GoAccess

GoAccess

GoAccess is a real-time log analyzer intended to be run through the terminal of Unix systems, or through the browser. It provides a rapid logging environment where data can be displayed within milliseconds of being stored on the server.

Key Features:

  • Truly real-time; updates log data within milliseconds within the terminal environment.
  • Custom log strings.
  • Monitor pages for their response time; ideal for apps.
  • Effortless configuration; select your log file and run GoAccess.
  • Understand your website visitor data in real-time.

Cost: Free (Open-Source)

4. Logz.io

@Logzio

Logz

Logz.io uses machine-learning and predictive analytics to simplify the process of finding critical events and data generated by logs from apps, servers, and network environments. Logz.io is a SaaS platform with a cloud-based back-end that’s built with the help of ELK Stack – Elasticsearch, Logstash & Kibana. This environment provides a real-time insight of any log data that you’re trying to analyze or understand.

Key Features:

  • Use ELK stack as a Service; analyze logs in the cloud.
  • Cognitive analysis provides critical log events before they reach production.
  • Fast set-up; five minutes to production.
  • Dynamic scaling accommodates businesses of all sizes.
  • AWS-built data protection to ensure your data stays safe and intact.

Cost:

  • Free: $0
  • Pro log management:  starting at $0.92 per ingested GB per day with 7-month data retention.
  • Enterprise: Custom quote.

5. Graylog

@Graylog2

Graylog

Graylog is a free and open-source log management tool that supports in-depth log collection and analysis. Used by teams in Network Security, IT Ops, and DevOps, you can count on Graylog to discern potential security risks, follow compliance rules, and understand the root cause of any particular error or problem your apps are experiencing.

Key Features:

  • Enrich and parse logs using a comprehensive processing algorithm.
  • Search through unlimited amounts of data to find what you need.
  • Custom dashboards for visual output of log data and queries.
  • Custom alerts and triggers to monitor any data failures.
  • Centralized management system for team members.
  • Custom permission management for users and their roles.
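
As a rough sketch of how data reaches Graylog, assuming you have configured a GELF HTTP input on its default port 12201 (the hostname here is a placeholder), a single event can be posted with curl:

$ curl -X POST "http://graylog.example.com:12201/gelf" \
    -H "Content-Type: application/json" \
    -d '{"version":"1.1","host":"web-01","short_message":"User login failed","level":4}'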

Cost:

  • Graylog Open: Free and self-managed.
  • Graylog Operations: $1250/mo – Cloud or Self-Managed
  • Graylog Security: $1550/mo – Cloud or Self-Managed

6. Splunk

@Splunk

Splunk

Splunk’s log management tool focuses on enterprise customers who need concise tools for searching, diagnosing and reporting any events surrounding data logs. Splunk’s software is built to support the process of indexing and deciphering logs of any type, whether structured, unstructured, or sophisticated application logs, based on a multi-line approach.

Key Features:

  • Splunk understands machine data of any type: servers, web servers, networks, exchanges, mainframes, security devices, etc.
  • Flexible UI for searching and analyzing data in real-time.
  • Drill-down algorithm for finding anomalies and familiar patterns across log files.
  • Monitoring and alert system for keeping an eye on important events and actions.
  • Visual reporting using an automated dashboard output.
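
To give a feel for Splunk’s search language (SPL), here is a hedged example; the index and sourcetype names are assumptions that depend on how your data was onboarded. It counts server-error responses per host and ranks the noisiest hosts first:

index=web sourcetype=access_combined status>=500
| stats count by host
| sort -count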

Cost:

7. Logmatic

@Logmatic

Logmatic

Logmatic is an extensive log management tool that integrates seamlessly with any language or stack. Logmatic works equally well with front-end and back-end log data and provides a painless online dashboard for tapping into valuable insights and facts of what is happening within your server environment.

Key Features:

  • Upload & Go — share any type of logs or metrics, and Logmatic will automagically sort them for you.
  • Custom parsing rules let you weed through tons of complicated data to find patterns.
  • Powerful algorithm for pinpointing logs back to their origin.
  • Dynamic dashboards for scaling up time series, pie charts, calculated metrics, flow charts, etc.

Cost:

  • Starting at $0.10 per ingested or scanned GB per month
  • $1.70 per million log events per month with 15-day retention

8. Logstash

@Elastic

Elastic

Logstash from Elastic is one of the most renowned open-source log management tools for managing, processing and transporting your log data and events. Logstash works as a data processor that can combine and transform data from multiple sources at the same time, then send it over to your favorite log management platform, such as Elasticsearch.

Key Features:

  • Ingest data from a varied set of sources (logs, metrics, web apps, data stores, AWS) without losing concurrency.
  • Real-time data parsing.
  • Create structure from unstructured data.
  • Pipeline encryption for data security.
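
A minimal pipeline sketch shows the input, filter, and output stages; the log path and Elasticsearch host are assumptions for illustration:

input {
  file { path => "/var/log/nginx/access.log" }
}
filter {
  # Parse each raw line into structured fields
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}

Because every stage is pluggable, swapping the output for another destination is a one-line change.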

Cost:

  • Standard: $95 per month.
  • Gold: $109 per month.
  • Platinum: $125 per month.
  • Enterprise: $175 per month.

9. Sumo Logic

@SumoLogic

SumoLogic

Sumo Logic is a unified logs and metrics platform that helps you analyze your data in real-time using machine learning. Sumo Logic can quickly depict the root cause of any particular error or event, and it can be set up to stand constantly on guard over what is happening to your apps in real-time. Sumo Logic’s strong point is its ability to work with data at a rapid pace, removing the need for external data analysis and management tools.

Key Features:

  • Unified platform for all logs and metrics.
  • Advanced analytics using machine learning and predictive algorithms.
  • Quick setup.
  • Support for high-resolution metrics.
  • Multi-tenant: single instance can serve groups of users.

Cost:

  • Upon request.

10. Papertrail

@PapertrailApp

Papertrail

Papertrail is a snazzy hosted log management tool that takes care of aggregating, searching, and analyzing any type of log files, system logs, or basic text log files. Its real-time features allow developers and engineers to monitor live events for apps and servers as they happen. Papertrail offers seamless integration with services like Slack, Librato and Email to help you set up alerts for trends and any anomalies.

Key Features:

  • Simple and user-friendly interface.
  • Easy setup; direct logs to a link provided by the service.
  • Log events and searches are updated in real-time.
  • Full-text search across message, metadata, even substrings.
  • Graph with Librato, Geckoboard, or your own service.
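
Setup really is mostly a matter of pointing your syslog daemon at the endpoint Papertrail assigns to your account. With rsyslog, for example, it comes down to a single line (the host and port below are placeholders; use the values from your Papertrail settings):

# /etc/rsyslog.conf: forward everything to Papertrail
*.* @logsN.papertrailapp.com:XXXXX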

Cost:

  • Free: 100MB/month
  • Pro: Starting at $7/month for 1GB of data

11. Fluentd

@Fluentd

Fluentd

Fluentd collects events from various data sources and writes them to files, RDBMS, NoSQL, IaaS, SaaS, Hadoop and so on. Fluentd helps you unify your logging infrastructure. Fluentd’s flagship feature is an extensive library of plugins which provide extended support and functionality for anything related to log and data management within a concise developer environment.

Key Features:

  • Unified logging layer that can decouple data from multiple sources.
  • Gives structure to unstructured logs.
  • Flexible, but simple. Takes a couple of minutes to get it going.
  • Compatible with a majority of modern data sources.
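
As a minimal sketch of Fluentd’s configuration model, the following tails an Nginx access log and prints the parsed events to stdout; the paths are assumptions:

<source>
  @type tail
  path /var/log/nginx/access.log
  pos_file /var/log/fluentd/nginx-access.pos
  tag nginx.access
  <parse>
    @type nginx
  </parse>
</source>

<match nginx.access>
  @type stdout
</match>

In practice you would swap the stdout match for one of Fluentd’s many output plugins (Elasticsearch, S3, Kafka, and so on).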

Cost:

  • Free: Open-Source
  • Enterprise: Upon request.

12. syslog-ng

@sngOSE

syslog-ng

syslog-ng is an open-source log management tool that helps engineers and DevOps teams collect log data from a large variety of sources, process it, and eventually send it over to a preferred log analysis tool. With syslog-ng, you can effortlessly collect, filter, categorize and correlate your log data from your existing stack and push it forward for analysis.

Key Features:

  • Open-source with a large community following.
  • Flexible scaling with any size infrastructure.
  • Plugin support for extended functionality.
  • PatternDB for finding patterns in complex data logs.
  • Data can be inserted into common database choices.
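
A minimal configuration sketch shows the source, destination, and log-path model; the destination host is a placeholder:

# Collect local system and syslog-ng internal messages
source s_local { system(); internal(); };

# Forward them to a central server over TCP
destination d_remote { network("logs.example.com" transport("tcp") port(514)); };

log { source(s_local); destination(d_remote); };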

Cost: Free

13. rsyslog

@RGerhards

rsyslog

Rsyslog is a blazing-fast system built for log processing. It offers great performance benchmarks, tight security features, and a modular design for custom modifications. Rsyslog has grown from a singular logging system to be able to parse and sort logs from an extended range of sources, which it can then transform and provide an output to be used in dedicated log analysis software.

Key Features:

  • Easy to implement in common web hosts.
  • Lets you create custom parse methods.
  • Online config builder.
  • Regex generator and checker.
  • Custom development available for hire.
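
As a hedged sketch, the following watches an application log with the imfile module and forwards everything to a central server over TCP; the file path and server name are placeholders:

# /etc/rsyslog.d/myapp.conf
module(load="imfile")
input(type="imfile" File="/var/log/myapp/app.log" Tag="myapp:")

# Forward all messages (@@ = TCP, @ = UDP)
*.* @@central-log-server.example.com:514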

Cost: Free

14. LOGalyze

@LOGalyze

LOGalyze

LOGalyze is a simple-to-use, centralized log collection and analysis system with low operational costs, capable of gathering log data from a wide range of operating systems. LOGalyze performs predictive event detection in real-time while giving system admins and management personnel the right tools for indexing and searching through piles of data effortlessly.

Key Features:

  • High-performance and high-speed processing of logs.
  • Log-definitions for breaking down and indexing log lines.
  • Integrated front-end dashboard for efficient online access.
  • Secure log forwarding to chosen applications.
  • Automated reporting in PDF.
  • Compatible with Syslog, Rsyslog.
  • It breaks down the incoming log to fields and names them.

Cost: Free & Open-Source

15. Sentry

@GetSentry

Sentry

Sentry is a modern platform for managing, logging, and aggregating any potential errors within your apps and software. Sentry’s state-of-the-art algorithm helps teams detect potential errors within the app infrastructure that could be critical to production operations. Sentry essentially helps teams avoid the hassle of dealing with a problem when it’s too late to fix, instead using its technology to inform teams about any potential rollbacks or fixes that would sustain the health of the software.

Key Features:

  • Detailed error reporting: URLs, used parameters, and header information.
  • Graphical interface for understanding the nature of certain errors and where they originate, so that you can fix them.
  • Dynamic alerts and notifications using SMS, Email, and Chat services.
  • Real-time error reporting as you deploy a new version of your app, so errors can be monitored as they happen and ultimately prevented before it’s too late.
  • User-feedback system to compare error reports against the user’s actual experience.
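
A hedged illustration of how little wiring Sentry needs in a Node.js app; the DSN is a placeholder taken from your project settings:

const Sentry = require("@sentry/node");

// Initialize once at startup; unhandled errors are then reported automatically.
Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" });

// Errors can also be captured explicitly.
Sentry.captureException(new Error("payment service timeout"));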

Cost:

  • Free: 10k/events per month.
  • Team: Starting at $26/ month.
  • Business: Starting at $80/ month.
  • Enterprise: Upon request.

16. Flume

@TheASF

Flume

Apache Flume is an elegantly designed service for helping its users stream data directly into Hadoop. Its core architecture is based on streaming data flows — these can be used to ingest data from a variety of sources to directly link up with Hadoop for further analysis and storage purposes. Flume’s enterprise customers use the service to stream data into Hadoop’s HDFS; generally, this includes data logs, machine data, geo-data, and social media data.

Key Features:

  • Multi-server support for ingesting data from multiple sources.
  • Collection can be done in real-time or collectively using batch modes.
  • Allows the ingestion of large data sets from common social and eCommerce networks for real-time analysis.
  • Scalable by adding more machines to transfer more events.
  • Reliable back-end built with durable storage and failover protection.

Cost: Free, Open-Source

17. Cloudlytics

@Cloudlytics

Cloudlytics

Cloudlytics is a SaaS startup designed to improve the analysis of log data, billing data, and cloud services. In particular, it is targeted at AWS Cloud services such as CloudFront, S3 and CloudTrail — using Cloudlytics, customers can get in-depth insights and pattern discovery based on the data provided by those services. With three management modules, Cloudlytics gives its users the flexibility to choose between monitoring resources in their environment, analyzing monthly bills, or analyzing AWS logs.

Key Features:

  • Real-time alerts of errors as soon as they appear.
  • Billing analytics let you closely watch over your consumption of resources.
  • Sophisticated user interfaces for getting a truly in-depth view of your data.
  • File download analytics including GEO data.
  • Automated cloud management for back-ups and service status.

Cost:  Upon request.

18. Octopussy

Octopussy

Octopussy is a Perl-based, open-source log management tool that handles alerting, reporting, and visualization of data. Its basic back-end functionality is to analyze logs, generate reports based on log data, and alert the administration to any relevant information.

Key Features:

  • Lightweight Directory Access Protocol (LDAP) support for maintaining a user list.
  • Custom alert notifications through email, Jabber, Nagios and Zabbix.
  • Generate custom reports and export them using FTP, SCP, or Email.
  • Create custom maps for understanding the architecture of your back-end.
  • Custom support for popular services and software: Cisco, Postfix, MySQL, Syslog, etc.
  • Custom templates for interfaces and reports.

Cost: Free

19. NXLog

NXLog

Today’s IT environments pose real challenges when it comes to a truly in-depth understanding of why events occur and what logs are reporting. With thousands of log entries from a plethora of sources, and with the demand for logs to be analyzed in real-time, it can be difficult to manage all of the data in a centralized environment. NXLog strives to provide the tools required for concise analysis of logs from a variety of platforms, sources, and formats. NXLog can collect logs from files in various formats and receive logs remotely over the network via UDP, TCP or TLS/SSL on all supported platforms.

Key Features:

  • Multi-platform support for Linux, GNU, Solaris, BSD, Android, and Windows.
  • Modular environment through pluggable plugins.
  • Scalable and high-performance with the ability to collect logs at 500,000 EPS or more.
  • Message queuing enables you to buffer and prioritize logs so they don’t get lost in the pipeline.
  • Task scheduling and log rotation.
  • Offline log processing capabilities for conversions, transfers, and general post processing.
  • Secure network transport over SSL.

Cost: Free (Community Edition), Enterprise (Upon request)

20. Sentinel Log Manager

@NetIQ

Sentinel Log Manager

NetIQ is an enterprise software company that focuses on products related to application management, software operations, and security and log management resources. Sentinel Log Manager is a bundle of software applications that allows businesses to take advantage of features like effortless log collection, analysis services, and secure storage units to keep data accessible and safe. Sentinel’s cost-effective and flexible log management platform makes it easy for businesses to audit their logs in real-time for any possible security risks or application threats that could upset production software.

Key Features:

  • Distributed search — find comprehensive details about events on your local or global Sentinel Log Manager servers.
  • Instant reports — create detailed one-click reports based on your search queries.
  • Sentinel Log Manager comes with reports needed for common regulatory reporting. These predefined reports reduce the time you must spend on compliance.
  • Choose from traditional text-oriented search or build custom, more complex search queries yourself.
  • Support for non-proprietary storage systems.
  • Intuitive storage analysis to let you know when you can expect to need more storage availability, based on the current rate of consumption.
  • Log encryption over the network to provide a hardened layer of security for your log data.

Cost: Custom quote upon request.

21. XpoLog

@XpoLog

XpoLog

XpoLog seeks out new and innovative ways to help its customers better understand and master their IT data. With their leading technology platform, XpoLog focuses on helping customers analyze their IT data using unique patents and algorithms that are affordable for all operation sizes. The platform drastically reduces time to resolution and provides a wealth of intelligence, trends, and insights into enterprise IT environments.

Key Features:

  • Agent-less technology for collecting live data over an SSH connection.
  • Collect log events via traditional choices like HTTP or Syslog, or Fluentd and LogStash.
  • XpoLog’s technology can interpret any log format, including that of archived files.
  • Choose from dynamic or automated parsing rules.
  • Dynamic search platform that provides comprehensive search features within a Google-like search environment.
  • Search across live log data for application problems, IDs, IPs, errors, exceptions, and more.
  • Using search functions, users can filter and investigate logs and apply complex functions to aggregate and correlate events in the indexed data.

Cost:

  • Free: 500MB/day
  • Pro: Starting at $39/month.
  • Enterprise: Custom quote.

22. EventTracker

@LogTalk

EventTracker

EventTracker provides its customers with business-optimal services that help to correlate and identify system changes that potentially affect the overall performance, security, and availability of IT departments. EventTracker uses SIEM to create a powerful log management environment that can detect changes through concise monitoring tools, and provides USB security protection to keep IT infrastructure protected from emerging security attacks. EventTracker SIEM collates millions of security and log events and provides actionable results in dynamic dashboards so you can pinpoint indicators of a compromise while maintaining archives to meet regulatory retention requirements.

Key Features:

  • Malware detection and automated audit using MD5 and VirusTotal.
  • Network-wide threat hunting based on patterns.
  • Builds on top of the success of Snort and OpenVAS, providing a user-friendly environment to use both for extensive security measurements and audits.
  • Straightforward deployment of software to have it up and running quickly.
  • Pre-configured alerts for hundreds of security and operational conditions.

Cost: Upon request.

23. LogRhythm

@LogRhythm

LogRhythm

Getting lost in an ocean of log data can be detrimental to your work and business productivity. You know the information you need is somewhere in those logs, but you don’t quite have the power to pick it out from the rest. LogRhythm is a next-generation log management platform that does all the work of unfolding your data for you. Using comprehensive algorithms and the integration of Elasticsearch, anyone can identify crucial insights about business and IT operations. LogRhythm focuses on making sure that all of your data is understood, versus merely collecting it and taking from it only what you need.

Key Features:

  • Smart data collection technology allows you to collect, analyze and parse virtually any kind of data.
  • Elasticsearch back-end for running simple or sophisticated search queries through your data at lightning-fast speeds.
  • Critical attack monitoring down to the very first and last second of occurrence.
  • Advanced visual dashboard to help you quickly understand how data is originating and whether a threat is present.
  • Meet compliance and data retention requirements by archiving data at a low cost. 

Cost: Upon request.

24. McAfee Enterprise

@IntelSec_Biz

McAfee Enterprise

McAfee is a household name in IT and network security and has been known to provide modern, technology-optimized tools for businesses and corporations of all sizes. The McAfee Enterprise Log Manager is an automated log management and analysis suite for all types of logs: Event, Database, Application, and System logs. The software’s built-in features can identify and validate logs for their authenticity, a truly necessary feature for compliance reasons. Organizations have been using McAfee to ensure that their infrastructure is in compliance with the latest security policies. McAfee Enterprise complies with more than 240 standards.

Key Features:

  • Keep your compliance costs low with automated log collection, management, and storage.
  • Native support for collecting, compressing, signing, and storing all root events so that they can be traced back to their origin.
  • Custom storage and retention options for individual log sources.
  • Option to choose from local or network storage areas.
  • Supports chain of custody and forensics.
  • Storage pools for flexible, long-term log storage. 

Cost: Upon request.

25. AlienVault USM (Unified Security Management): SIEM software & solutions

@attcyber

AlienVault

AlienVault USM (Unified Security Management) reaches far beyond the capabilities of SIEM solutions, using powerful All-in-One (AIO) security precautions and a comprehensive threat-analysis algorithm to identify threats in your physical or cloud locations. Resource-dependent IT teams that rely on SIEM alone risk delaying their ability to detect and analyze threats as they happen. AlienVault USM combines the powerful features of SIEM and integrates them with direct log management and other security features, such as asset discovery, vulnerability assessment, and direct threat detection, all of which give you one centralized platform for security monitoring.

Key Features:

  • Cost-effective by integrating third-party security tools.
  • Pre-written configs let you detect threats right from the get-go.
  • Comprehensive security intelligence as provided by AlienVault Labs.
  • Kill-chain taxonomy for quick assessment of threats, their intent, and strategy.
  • Granular methods for in-depth search and security data analysis.
  • Network & Host IDS.

Cost: Upon request.

26. Bugfender

@BugfenderApp

Bugfender

Not everyone needs an enterprise solution for log management. In fact, many of today’s most well-known businesses operate solely on mobile platforms, a market that Bugfender aims to serve with its high-quality logging application for cloud-based analysis of general logs and user behavior within your mobile apps.

Key Features:

  • Intuitive bug analysis lets you track your app errors and get them patched up before they reach production.
  • Customer history to provide better and more precise customer support.
  • Remote logging sends all log data directly to the cloud services provided by Bugfender.
  • Custom logging options for individual devices.
  • Offline data storage for transmission to live servers once the device is back online.
  • Extended device information for all logging sessions.

Cost:

  • Free: 100K log lines per day
  • Basic: $19 /month
  • Pro: $109 /month
  • Enterprise: $479 /month

27. Mezmo, formerly LogDNA

@mezmodata

LogDNA

Mezmo, formerly LogDNA, prides itself on being the easiest log management tool you’ll ever put your hands on. Mezmo’s cloud-based log services enable engineers, DevOps, and IT teams to funnel any app or system logs into one simple dashboard. Using the CLI or web interface, you can search, save, tail, and store all of your logs in real-time. With Mezmo, you can diagnose issues, identify the source of server errors, analyze customer activity, monitor Nginx, Redis, and more. A live-streaming tail makes surfacing difficult-to-find bugs easy.

Key Features:

  • Gather logs from your favorite systems including Linux, Mac, Windows, Docker, Node, Python, Fluentd, and much more.
  • Easy to use and experiment with demo environment for a real-time product preview.
  • Powerful algorithm to identify and detect the core relationship between data and issues at hand.
  • Real-time data search, filter, and debug.
  • Built by an ambitious group of people who are keen to work on implementing new features and sets of tools.
  • Has a close relationship with the open-source community to provide transparency.

Cost:

  • Free: Unlimited / Single User
  • Pro: Starting at $0.80/GB.
  • Enterprise: Upon request.

28. Prometheus

@PrometheusIO

Prometheus

Prometheus is a systems and service monitoring system that collects metrics from configured targets at specified intervals, evaluates rule expressions, displays results and triggers alerts when pre-defined conditions are met. With customers like DigitalOcean, SoundCloud, Docker, CoreOS and countless others, the Prometheus repository is a great example of how open-source projects can compete with leading technology and innovate in the field of systems and log management.

Key Features:

  • A custom-built query language for digging deep into your data that can then be used to create graphs, charts, tables, and custom alerts.
  • A selection of data visualization methods: Grafana, console templates, and a built-in expression browser.
  • Efficient storage techniques to scale data appropriately.
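
A minimal scrape configuration gives a feel for the model; it assumes a node_exporter is already running on localhost:9100:

# prometheus.yml
scrape_configs:
  - job_name: "node"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9100"]

From there, a PromQL query such as rate(node_cpu_seconds_total[5m]) turns the raw counters into per-second rates for graphs and alerts.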

Cost: Free, Open-Source.

29. ScoutApp

@ScoutApp

ScoutApp

Scout is a language-specific monitoring app that helps Ruby on Rails developers identify code errors, memory leaks, and more. Scout has been renowned for its simple yet advanced UI that provides an effortless experience of understanding what is happening with your Ruby on Rails apps in real-time. A recent business expansion also enabled Scout to expand its functionality for Elixir-built apps.

Key Features:

  • Memory leak detection.
  • Slow database query analysis.
  • Powerful integration with GitHub.
  • Automatic dependency instrumentation.

Cost: $59/server/month

  • Basic: $161/month.
  • Plus: $499/month.
  • Pro: $499/month.
  • Enterprise: Upon request.

30. Motadata

@MotadataSystems

Motadata

Motadata does more than just manage your logs; it can correlate, integrate and visualize nearly any of your IT data using native applications built into the platform. On top of world-class log management, Motadata is capable of monitoring the status and health of your network, servers, and apps. Contextual alerts ensure that you can sleep well-rested, as any critical events or pre-defined thresholds will notify you or your team via frequently used platforms like Email, Messaging, or Chat applications.

Key Features:

  • Extensive log sourcing options: Firewalls, Routers, Switches, Servers (Web, App, Sys), Databases, Anti-Malware Software, Mail Servers, and more.
  • Gather essential data quickly in the event of a security breach. 
  • In-depth keyword search that pinpoints specific terms across all of your logs.
  • Audit analysis to discover crucial insights and trends that stem across your log data.
  • Native integration with apps like Jira, Jetty, AWS, IIS, Oracle, Microsoft, and much more.

Cost: Upon request.

31. InTrust

@Quest

InTrust

InTrust gives your IT department a flexible set of tools for collecting, storing, and searching through huge amounts of data that comes from general data sources, server systems, and usability devices within a single dashboard. InTrust delivers a real-time outlook on what your users are doing with your products, and how those actions affect security, compliance, and operations in general. With InTrust you can understand who is doing what within your apps and software, allowing you to make crucial data-driven decisions when necessary.

Key Features:

  • Security and Forensic analysis using pre-built templates and algorithms.
  • Concise and dynamic investigations in data about your users, files, and events.
  • Run smart searches on auditing data from Enterprise Reporter and Change Auditor to improve security, compliance, and operations while eliminating information silos from other tools.
  • Easily forward your Windows system data to a SIEM solution for deeper analysis.

Cost:

  • Free Trial.
  • Enterprise solution upon request.

32. Nagios

@NagiosInc

Nagios

Nagios provides a complete log management and monitoring solution which is based on its Nagios Log Server platform. With Nagios, a leading log analysis tool in this market, you can increase the security of all your systems, understand your network infrastructure and its events, and gain access to clear data about your network performance and how it can be stabilized.

Key Features:

  • A powerful out-of-the-box dashboard that gives customers a way to filter, search, and conduct a comprehensive analysis of any incoming log data.
  • Extended availability through multiple server clusters so data isn’t lost in case of an outage.
  • Custom alert assignments based on queries and IT department in charge.
  • Tap into the live stream of your data as it’s coming through the pipes.
  • Easy management of clusters lets you add more power and performance to your existing log management infrastructure.

Cost: Starting at $1995.

33. lnav

@LnavApp

lnav

If enterprise-level log management tools feel overwhelming by now, you may want to look into lnav, an advanced log data manager intended for smaller-scale IT teams. With direct terminal integration, it can stream log data in real-time as it comes in. You don’t have to worry about setting anything up or even getting an extra server; it all happens live on your existing server, and it’s beautiful. To run lnav, you will need the following packages: libpcre, sqlite, ncurses, readline, zlib, and bz2.

Key Features:

  • Runs directly in your server terminal; easy to open, close, and manage.
  • Point and shoot concept, specify the log directory and start monitoring.
  • Custom filters automatically filter out the ‘garbage’ portion of your log data.
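
Getting started is as direct as the tool promises; point it at a file or a whole log directory (the path is an assumption):

$ lnav /var/log/nginx

lnav merges, colorizes, and tails everything it finds there in a single view.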

Cost: Open-Source

34. Seq

@GetSeq_Net

Seq

Seq is a log management tool built specifically for .NET applications. Developers can easily use Seq to monitor log data and performance from development all the way to production. Search application logs from a simple events dashboard, and understand how your apps progress or perform as you push towards your final iteration.

Key Features:

  • Structured logging provides a rich outlook on events and how they relate to each other.
  • Intuitive filters allow developers to use SQL-like expressions or an equivalent of JavaScript and C# operators.
  • Full-text support.
  • Filters database for creating and saving filters based on what you’re searching for.
  • Custom analysis and charting using SQL syntax.

Cost:

  • Single-Use License: Free
  • Team: $690/ year.
  • Business: $1990/ year.
  • Enterprise: $7990/ year.

35. Logary

@LogaryLib

Logary

Logary is a high-performance, multi-target logging, metric, tracing and health-check library for Mono and .NET. As next-generation logging software, Logary uses the history of your app’s progress to build models from.

Key Features:

  • Logging from a class module.
  • Custom logging fields and templating capabilities.
  • Custom adapters: EventStore, FsSQL, Suave, Topshelf.

Cost: Open-Source

36. EventSentry

@netikus

EventSentry

EventSentry is an award-winning monitoring solution that includes a new NetFlow component for visualizing, measuring, and investigating network traffic. This log management tool helps SysAdmins and network professionals achieve more uptime and security.

Key Features:

  • See all traffic metadata that passes through network devices that support NetFlow.
  • Utilize network traffic data for troubleshooting purposes.
  • Map network traffic to a geo-location.
  • Communicate and document your network by adding notes or uploading documents in the web reports; @ mention the computer name so the web reports can associate the update with the appropriate device on the network.
  • Automatically extracts IP addresses from events and supplements them with reverse lookup and/or Geo IP lookup data.
  • Central collector service supports data collection over insecure mediums through strong TLS encryption.

Cost:

  • Full License: $85/Windows device + free year of maintenance and $15.30 for each additional year – Price decreases when purchasing multiple licenses at a time
  • Network Device Licenses: Starting at $58 + free year of maintenance – Price decreases when purchasing multiple licenses at a time
  • NetFlow License: $1,299/collector + free year of maintenance and $233.82 for each additional year

37. Logsign

@logsign

Logsign

A full feature, all-in-one SIEM solution that unifies log management, security analytics, and compliance, Logsign is a next-generation solution that increases awareness and allows SysAdmins and network professionals to respond in real time.

Key Features:

  • With its flexible and scalable architecture, Logsign provides high availability and redundancy.
  • Able to reach millions of records within seconds via its HDFS-based NoSQL architecture.
  • Threat Intelligence embedded correlation.
  • Discovers next-gen threats and takes precautions.
  • Detects internal and external threats and vulnerabilities.
  • High-capacity log classification.
  • Multi-machine correlation architecture.
  • Hundreds of pre-defined dashboards and reports.
  • Optimizes compliance (PCI DSS, ISO 27001, HIPAA, SOX, NERC…) and information security processes.

Cost: FREE trial available. Contact for a quote

38. IT Operations Management (ITOM) (formerly Loom System)

@ServiceNow

IT Operations Management (ITOM) provides AI-powered log analysis for watching over your digital systems. This way, you can prevent and fix IT issues before they become problems. ITOM’s advanced AI analytics platform predicts and prevents problems in digital business by connecting to your digital assets and continually monitoring and learning about them by reading logs and detecting when something seems likely to go off course.

Key Features:

  • Automated log parsing for any type of application.
  • Problem prediction and cross-applicative correlation.
  • Automated root cause analysis and recommended resolutions.
  • Stream all logs from any application, and Loom automatically parses and analyzes them in real time.
  • Leverages AI to provide root causes of issues in real time.
  • Integrates seamlessly with your preferred monitoring tools, cloud platforms, serverless infrastructure, and asset management software.

Cost: 

  • Standard, professional, and AIOps enterprise. All upon request.

39. SolarWinds Log & Event Manager

@solarwinds

SolarWinds Log & Event Manager

SolarWinds offers IT management software and monitoring tools such as their Log & Event Manager. This log management tool handles security, compliance, and troubleshooting by normalizing your log data so you can quickly spot security incidents and make troubleshooting a breeze.

Key Features:

  • Node-based licensing.
  • Real-time event correlation.
  • Real-time remediation.
  • File integrity monitoring.
  • USB defender.
  • Configurable dashboard.
  • Scheduled searches.
  • User defined groups.
  • Custom email templates.
  • Threat intelligence feed.

Cost: FREE trial available. Starts at $2,877

40. ManageEngine EventLog Analyzer

@manageengine

ManageEngine EventLog Analyzer

ManageEngine creates comprehensive IT management software for all of your business needs. Their EventLog Analyzer is an IT compliance and log management software for SIEM that is one of the most cost-effective on the market today.

Key Features:

  • Automate the entire process of managing terabytes of machine-generated logs by collecting, analyzing, correlating, searching, reporting, and archiving from one centralized console.
  • Monitor file integrity.
  • Conduct log forensics analysis.
  • Monitor privileged users.
  • Comply with various compliance regulatory bodies.
  • Analyzes logs to instantly generate a number of reports including user activity reports, historical trend reports, and more.

Cost:

  • FREE trial available.
  • Premium: $595/year.
  • Distributed: $2495/year.

41. PagerDuty

@pagerduty

PagerDuty

PagerDuty helps developers, ITOps, DevOps, and businesses protect their brand reputation and customer experiences. An incident resolution platform, PagerDuty automates your resolutions, provides full-stack visibility, and delivers actionable insights for better customer experiences.

Key Features:

  • Visualize each dimension of the customer experience.
  • Gain event intelligence and understand the context of disruptions across your infrastructure with actionable, time-series visualizations of correlated events.
  • Response orchestration to enable better collaboration and rapid resolution.
  • Discover patterns in performance and view post-mortem reports to analyze system efficiency.

Cost: FREE trial available for 14 days

  • Free: $0/ month.
  • Pro: $21 per user/ month.
  • Business: $41 per user/ month.
  • Custom: Upon request

42. BLËSK

@bleskcanada

BLËSK

BLËSK Event Log Manager is an intuitive, comprehensive, and cost-effective IT and network management software solution. With BLËSK, you can collect log and event data automatically with zero installation and zero configuration.

Key Features:

  • Store logs and event data in a single place.
  • Centralize, analyze, and control logs from all of the equipment on your network and more.
  • Lightning fast access to millions of log entries on your network.
  • Collect log and event data in real-time from any device.
  • Fast, easy log collection for addressing different scaling needs.

Cost: FREE trial available. Contact for a quote

43. Alert Logic Log Manager

@alertlogic

Alert Logic Log Manager

Alert Logic offers full stack security and compliance. Their Log Manager with ActiveWatch is a Security-as-a-Service solution that meets compliance requirements and identifies security issues anywhere in your environment, even in the public cloud.

Key Features:

  • Collects, processes, and analyzes data while the ActiveWatch team unlocks the insights in your log data.
  • 24×7 expert monitoring and analysis.
  • Cloud-based log management.
  • Increased visibility, rapid custom reporting, and scalable, real-time log collection and log management.
  • Easy-to-use web interface with intuitive search interface.
  • Over 4,000 parsers available with new log format support added frequently.
  • Advanced correlation capabilities.

Cost: Contact for a quote.

44. WhatsUp Gold Network Monitoring

@Ipswitch

WhatsUp Gold Network Monitoring

WhatsUp Gold Network Monitoring is a log management tool that delivers advanced visualization features that enable IT teams to make faster decisions and improve productivity. With WhatsUp Gold, you can deliver network reliability and ensure optimized performance while minimizing downtime and continually monitoring networks.

Key Features:

  • Monitor applications, network, servers, VMs, and traffic flows with one flexible license.
  • Visualize your end-to-end network with interactive network maps.
  • Find problems and troubleshoot them more quickly to provide optimal availability and low MTTRs.
  • Unique, affordable consumption-based licensing approach.
  • Application monitoring, network traffic analysis, configuration management, discovery and network monitoring, and virtual environment monitoring.

Cost: FREE trial available for 30 days

  • WhatsUp Gold Basic: Starting at $1,755/license – Network monitoring essentials
  • WhatsUp Gold Pro: Starting at $2,415/license – Proactive server and network monitoring
  • WhatsUp Gold Total: Starting at $3,495/license – Visibility across your infrastructure and apps

45. Loggly

@Loggly

Loggly

Loggly is a cloud-based log management service that can dig deep into extensive collections of log data in real-time while giving you the most crucial information on how to improve your code and deliver a better customer experience. Loggly’s flagship log data collection environment means that you can use traditional standards like HTTP and Syslog, versus having to install complicated log collector software on each server separately.

Key Features:

  • Collects and understands text logs from any sources, whether server or client side.
  • Keeps track of your logs even if you exceed your account limitations. (Pro & Enterprise)
  • Automatically parses logs from common web software; Apache, NGINX, JSON, etc.
  • Custom tags let you find related errors throughout your log data.
  • State of the art search algorithm for doing a global search, or individual based on set values.
  • Data analysis dashboard to give you a visual glimpse of your log data.
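
As an illustration of the no-agent approach, a single JSON event can be sent straight to Loggly’s HTTP endpoint; the customer token below is a placeholder from your account settings:

$ curl -H "content-type:application/json" \
    -d '{"level":"error","message":"payment service timeout"}' \
    "https://logs-01.loggly.com/inputs/YOUR-CUSTOMER-TOKEN/tag/http/"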

Cost:

  • Lite: Free
  • Standard: $79
  • Pro: $159
  • Enterprise: $279

46. Chaos Search

@ChaosSearch

ChaosSearch has developed a brand new approach to delivering data analytics and insights at scale. The platform connects to and indexes the data within its customers’ cloud storage environments (i.e., AWS S3), rendering all of their data fully searchable and available for analysis with the existing data visualization/analysis tools they are already using. Whereas other solutions require complex data pipelines consisting of parsing or schema changes, ChaosSearch indexes all data as-is, without transformation, while auto-detecting native schemas.

Key Features:

  • Massive Scale: unlimited Ingest, data retention and queries
  • Huge cost savings – up to 80% less cost than an ELK Stack
  • Fully managed SaaS Service
  • Amazon S3 REST and Elasticsearch API support
  • Multi-user, SSO/OAuth, PCI compliant
  • Integrated Kibana with enhancements
  • Alerting notification with webhook integrations
  • Timelion time series data visualization
  • Customer dashboard for data analytics and tracking
  • Enhanced query management with burst/cancel
  • Index once and eliminate re-indexing

Cost:

  • Free Trial: $0
  • Cloud Deployment (i.e., AWS S3): $0.30/GB + tenant cost at $144,000 at 1,000 GB of average daily ingest.
  • Virtual Private Cloud (VPC): Custom Quote
Stackify by Netreo Receives “Best in Show” for Performance Monitoring in the 2023 SD Times 100 https://stackify.com/stackify-by-netreo-receives-best-in-show-for-performance-monitoring-in-the-2023-sd-times-100/ Wed, 14 Jun 2023 22:07:52 +0000 https://stackify.com/?p=40935 Fifth Straight Year Retrace Earns a Spot on the SD Times 100!

Stackify by Netreo received top honors from SD Times for Performance Monitoring in the 2023 SD Times 100. Each year, SD Times editors recognize leaders in the industry across 10 different categories and designate companies with “Best in Show” honors. Retrace APM is a full lifecycle APM solution and the driving force behind the successful placement within the SD Times 100 each of the past 5 years!

Democratizing Software Development with Retrace https://stackify.com/democratizing-software-development-with-retrace/ Wed, 24 May 2023 13:19:51 +0000 https://stackify.com/?p=40288 Creating High-Quality, High-Performance Applications for All

When I was working at AWS, one of the things that inspired me the most was the company’s founding principles. As articulated by Andy Jassy, then CEO of AWS:

We had a mental image of a college kid in his dorm room having the same access, the same scalability and same infrastructure costs as the largest businesses in the world.

AWS laid the foundation of a cloud infrastructure that works like Lego blocks and can be provisioned immediately. As customers’ businesses grew, they could scale effortlessly, paying only for what they needed rather than making large upfront investments.

Democratizing the access to tools and technologies has many advantages. Easy, low-cost access to critical tools and technologies fuels global innovation. Developers from diverse backgrounds and experiences bring new perspectives and ideas to the table, leading to new solutions and approaches. Providing accessible tools fosters a culture of empowerment and continuous feedback, motivating high-performing teams to deliver the best for customers.

With access to the resources they need, development teams collaborate more effectively and create higher quality products. Most importantly, democratizing software development reduces costs by eliminating the need for expensive licenses or infrastructure. All this makes it easier for developers to access the tools and technologies they need to do their jobs, regardless of their financial situation. 

New Retrace Starter Consumption Plan

Stackify’s purpose is to put APM/ELM capability in the hands of every developer in this world. We believe developing high-quality, high-performance software solutions should not be limited to a select few. Inspired by the founding vision of AWS and our commitment to democratizing access to Retrace, Stackify is launching a full-featured, all-inclusive $9.99/month consumption plan for our popular, cloud-based Application Performance Monitoring (APM) and Errors and Log Management (ELM) solution.

With more than 1,000 customers, Retrace helps software development teams monitor, troubleshoot and optimize the performance of their applications. Providing real-time APM insights, Retrace enables developers to quickly identify, diagnose and resolve errors, crashes, slow requests and more. Retrace also supports a range of programming languages, frameworks and platforms, including .NET, Java, .NET Core, Node.js, PHP, Python and Ruby.

With our new starter consumption plan, developers, DevOps teams and companies of all sizes can easily optimize code quality and application performance with affordable access to our all-inclusive, full-featured Retrace solution. On a $9.99 plan, Stackify is the only company to provide support where customers can interact with real people and resolve their queries. 

Retrace consumption pricing delivers “APM and ELM for All” without restrictive contracts for up to 1 million logs and 10 thousand traces. An unlimited number of users get complete resource and server monitoring for an unlimited number of servers, plus seven-day data retention and stellar support services during business hours. This is a significant opportunity for developers and development teams worldwide to improve their application management and provide a world-class experience to their customers.

Netreo Disrupts the APM Market with New Retrace Consumption-Based Pricing that Delivers “APM for All” https://stackify.com/netreo-disrupts-the-apm-market-with-new-retrace-consumption-based-pricing-that-delivers-apm-for-all/ Wed, 24 May 2023 12:13:00 +0000 https://stackify.com/?p=40293 Full-Featured, Fully Supported Retrace APM, Errors & Log Management for $9.99/mo.

Huntington Beach, Calif. – May 24, 2023 – Netreo, the award-winning provider of IT infrastructure monitoring and observability solutions and one of Inc. 5000’s fastest growing companies, today announced new, Retrace consumption-based pricing designed to deliver APM and ELM for All. Consumption-based pricing is designed to disrupt the market by enabling DevOps teams of all sizes to benefit from the full-featured Retrace application performance monitoring plus error and log management solution for only $9.99 per month (US dollars).

Currently, DevOps teams are forced to buy separate monitoring products or high-cost modules to get the same application performance, error and log management functionality offered by Retrace. Such go-to-market strategies from competitors effectively restrict purchasing to larger organizations that can afford the higher cost of piecemeal solutions. By removing pricing barriers and providing more functionality, Netreo is disrupting the market by enabling application developers, DevOps teams and companies of all sizes to benefit from the advantages of the integrated Retrace solution.

“All developers have the same goal of building high quality, high performing applications; and companies invest in multiple high-cost and often disconnected tools to ensure applications are defect free and optimized,” said Netreo APM Business Unit General Manager, Sanjeev Mittal. “Retrace is already unique in the market by offering application performance monitoring, errors and log management and more in our core APM solution. By removing price barriers, Netreo is enabling DevOps teams to collaborate more effectively and create higher quality products, while reducing expensive licenses and infrastructure costs.”

Starter consumption plans provide complete access to all Retrace functions without restrictive contracts. Customers pay monthly to get unlimited users, complete resource and server monitoring for an unlimited number of servers, plus seven-day data retention and technical support (during business hours).

About Retrace

The Retrace full lifecycle APM solution delivers robust APM capabilities combined with the top tools and capabilities that developers and IT teams need most to eliminate application bugs and performance issues before they impact users. Turning detailed application tracing, centralized logging, critical metrics monitoring and more into actionable insights, Retrace enhances troubleshooting and optimizes performance throughout the entire lifecycle of enterprise applications. Supporting a wide range of programming languages, frameworks and platforms, including .NET, Java, .NET Core, Node.js, PHP, Python and Ruby, Retrace enables developers and DevOps teams worldwide to improve their application management and provide a world-class experience to their customers.

About Netreo

Netreo’s full-stack IT infrastructure management (ITIM), application performance monitoring (APM) and digital experience monitoring (DEM) solutions empower enterprise ITOps, developers and IT leaders with AIOps-driven observability, actionable insights, process automation and accelerated issue resolution. By having real-time intelligence on all resources, devices and applications deployed in cloud, on-premises and hybrid networks, Netreo’s users have the confidence to deliver more reliable and innovative internal and external customer digital experiences. Netreo is available via subscription, and in on-premises and cloud deployment models. Netreo is one of Inc. 5000’s fastest-growing companies and is trusted worldwide by thousands of private and public entities, managing half a billion resources per day.

Try Retrace and Prefix for free or connect with Stackify by Netreo on Twitter, LinkedIn and Facebook.

Request a demo of Netreo or connect with Netreo on Twitter, LinkedIn, and Facebook.

Media Contact:

Kyle Biniasz

Vice President of Marketing

kbiniasz@netreo.com

(949) 769-5705

What Is SDLC? Understand the Software Development Life Cycle https://stackify.com/what-is-sdlc/ Fri, 10 Mar 2023 08:01:00 +0000 https://stackify.com/?p=10182 The Software Development Life Cycle (SDLC) refers to a methodology with clearly defined processes for creating high-quality software. In detail, the SDLC methodology focuses on the following phases of software development:

  • Requirement analysis
  • Planning
  • Software design such as architectural design
  • Software development
  • Testing
  • Deployment

This article will explain how SDLC works, dive deeper in each of the phases, and provide you with examples to get a better understanding of each phase.

What is the software development life cycle?

SDLC or the Software Development Life Cycle is a process that produces software with the highest quality and lowest cost in the shortest time possible. SDLC provides a well-structured flow of phases that help an organization to quickly produce high-quality software which is well-tested and ready for production use.

The SDLC involves six phases as explained in the introduction. Popular SDLC models include the waterfall model, spiral model, and Agile model.

So, how does the Software Development Life Cycle work?

How the SDLC Works

SDLC works by lowering the cost of software development while simultaneously improving quality and shortening production time. SDLC achieves these apparently divergent goals by following a plan that removes the typical pitfalls of software development projects. That plan starts by evaluating existing systems for deficiencies.

Next, it defines the requirements of the new system. It then creates the software through the stages of analysis, planning, design, development, testing, and deployment. By anticipating costly mistakes like failing to ask the end-user or client for feedback, SDLC can eliminate redundant rework and after-the-fact fixes.

It’s also important to know that there is a strong focus on the testing phase. As the SDLC is a repetitive methodology, you have to ensure code quality at every cycle. Many organizations tend to spend little effort on testing, while a stronger focus on testing can save them a lot of rework, time, and money. Be smart and write the right types of tests.

Next, let’s explore the different stages of the Software Development Life Cycle.

Stages and Best Practices

Following the best practices and/or stages of SDLC ensures the process works in a smooth, efficient, and productive way.

1. Identify the Current Problems 

“What are the current problems?” This stage of the SDLC means getting input from all stakeholders, including customers, salespeople, industry experts, and programmers. Learn the strengths and weaknesses of the current system with improvement as the goal.

2. Plan

“What do we want?” In this stage of the SDLC, the team determines the cost and resources required for implementing the analyzed requirements. It also details the risks involved and provides sub-plans for mitigating those risks.

In other words, the team should determine the feasibility of the project and how they can implement the project successfully with the lowest risk in mind.

3. Design

“How will we get what we want?” This phase of the SDLC starts by turning the software specifications into a design plan called the Design Specification. All stakeholders then review this plan and offer feedback and suggestions. It’s crucial to have a plan for collecting and incorporating stakeholder input into this document. Failure at this stage will almost certainly result in cost overruns at best and the total collapse of the project at worst.

4. Build

“Let’s create what we want.”

At this stage, the actual development starts. It’s important that every developer sticks to the agreed blueprint. Also, make sure you have proper guidelines in place about the code style and practices.

For example, define a nomenclature for files or a variable naming style such as camelCase. This will help your team produce organized and consistent code that is easier to understand and also to test during the next phase.
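
As a small illustrative sketch (the names are invented for this example, not taken from any particular style guide), such a guideline might look like this in JavaScript:

// File: user-profile.js (file names in kebab-case)
const maxRetryCount = 3; // variables and constants in camelCase

// Functions in camelCase
function formatUserName(firstName, lastName) {
  return `${lastName}, ${firstName}`;
}

// Classes in PascalCase
class UserProfile {}

Rules this concrete keep code reviews focused on logic instead of formatting.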

5. Code Test

“Did we get what we want?” In this stage, we test for defects and deficiencies. We fix those issues until the product meets the original specifications.

In short, we want to verify if the code meets the defined requirements.

Try Stackify’s free code profiler, Prefix, to write better code on your workstation. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python.

6. Software Deployment

“Let’s start using what we got.”

At this stage, the goal is to deploy the software to the production environment so users can start using the product. However, many organizations choose to move the product through different deployment environments such as a testing or staging environment.

This allows any stakeholders to safely play with the product before releasing it to the market. Besides, this allows any final mistakes to be caught before releasing the product.


Extra: Software Maintenance

“Let’s get this closer to what we want.” The plan almost never turns out perfect when it meets reality. Further, as conditions in the real world change, we need to update and advance the software to match.

The DevOps movement has changed the SDLC in some ways. Developers are now responsible for more and more steps of the entire development process. We also see the value of shifting left. When development and Ops teams use the same toolset to track performance and pin down defects from inception to the retirement of an application, this provides a common language and faster handoffs between teams.

Application performance monitoring (APM) tools can be used in a development, QA, and production environment. This keeps everyone using the same toolset across the entire development lifecycle.

Read More: 3 Reasons Why APM Usage is Shifting Left to Development & QA

How does SDLC address security?

Security is an essential aspect of any software development process. However, unlike traditional software development that addresses security as a separate stage, SDLC addresses security every step of the way through DevSecOps practices.

DevSecOps, an extension of DevOps, is a methodology that emphasizes the integration of security assessments throughout the entire SDLC. It ensures that the software is secure from initial design to final delivery and can withstand any potential threat. During DevSecOps, the team undergoes security assurance activities such as code review, architecture analysis, penetration testing, and automated detection, which are integrated into IDEs, code repositories, and build servers.

How can DevSecOps be integrated into SDLC?

By following some best practices, DevSecOps can be integrated into SDLC in various ways.

  • Planning and Requirement Analysis: Here, security requirements and appropriate security choices that can mitigate potential threats and vulnerabilities are identified. Which security design principles and best practices to use is also considered at this stage.
  • Architectural Design: The development team uses the security design principles and architecture to consider potential risks. This stage involves threat modeling, access control, encryption mechanisms, and architecture risk analysis.
  • Software Development and Testing: Code reviews are done to ensure the software follows coding standards and that security controls are implemented. Security vulnerability tests like penetration testing are also done to identify potential issues.
  • Deployment: Automated DevSecOps tools are used to improve application security. To ensure the software is deployed securely, firewalls, access controls, and security settings are configured.
  • Maintenance: Security continues after deployment. The team must continuously monitor the software for security vulnerabilities. The team would also update the software with security patches and updates as necessary.

Examples

The most common SDLC examples or SDLC models are listed below.

Waterfall Model

This SDLC model is the oldest and most straightforward. With this methodology, we finish one phase and then start the next. Each phase has its own mini-plan and each phase “waterfalls” into the next. The biggest drawback of this model is that small details left incomplete can hold up the entire process.

Agile Model

The Agile SDLC model separates the product into cycles and delivers a working product very quickly. This methodology produces a succession of releases. Testing of each release feeds back info that’s incorporated into the next version. According to Robert Half, the drawback of this model is that the heavy emphasis on customer interaction can lead the project in the wrong direction in some cases.

Iterative Model

This SDLC model emphasizes repetition. Developers create a version very quickly and for relatively little cost, then test and improve it through rapid and successive versions. One big disadvantage here is that it can eat up resources fast if left unchecked.

V-Shaped Model

An extension of the waterfall model, this SDLC methodology tests at each stage of development. As with waterfall, this process can run into roadblocks.

Big Bang Model

This high-risk SDLC model throws most of its resources at development and works best for small projects. It lacks the thorough requirements definition stage of the other methods.

Spiral Model

The most flexible of the SDLC models, the spiral model is similar to the iterative model in its emphasis on repetition. The spiral model goes through the planning, design, build and test phases over and over, with gradual improvements at each pass.

Which SDLC model is the best and most commonly used?

Each SDLC model suits different project challenges; the project’s specifications and intended results significantly influence which model to use. For example, the waterfall model works best for projects where your team has limited or no access to customers to provide constant feedback. The Agile model’s flexibility, on the other hand, is preferred for complex projects with constantly changing requirements.

Hence, the Agile SDLC model has become increasingly popular and in demand. That demand is primarily linked to the model’s flexibility and core principles: adaptability, customer involvement, lean development, teamwork, time, sustainability, and testing, with its two primary elements being teamwork and time (faster delivery). Rather than creating a single timeline for the project, agile breaks the project into individually deliverable, time-boxed pieces called sprints. This model prioritizes flexibility, adaptability, collaboration, communication, and quality while promoting early and continuous delivery. Ultimately, all this ensures that the final product meets customer needs and can respond quickly to market demands.

However, regardless of the model you pick, there are plenty of tools and solutions, like Stackify’s Retrace, to assist you every step of the way.

Benefits of the SDLC

SDLC done right can allow the highest level of management control and documentation. Developers understand what they should build and why. All parties agree on the goal upfront and see a clear plan for arriving at that goal. Everyone understands the costs and resources required.

Several pitfalls can turn an SDLC implementation into more of a roadblock to development than a tool that helps us. Failure to take into account the needs of customers and all users and stakeholders can result in a poor understanding of the system requirements at the outset. The benefits of SDLC only exist if the plan is followed faithfully.

Want to improve application quality and monitor application performance at every stage of the SDLC? Try Stackify’s Retrace tool for free and experience how it can help your organization produce higher-quality software.
