SOLID Design Principles Explained: Dependency Inversion Principle with Code Examples
https://stackify.com/dependency-inversion-principle/ (Mon, 31 Jul 2023)

The SOLID design principles were promoted by Robert C. Martin and are among the best-known design principles in object-oriented software development. SOLID is a mnemonic acronym for the following five principles:

  • Single Responsibility Principle
  • Open/Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

Each of these principles can stand on its own and aims to improve the robustness and maintainability of object-oriented applications and software components. They also build on each other, so applying all of them makes the implementation of each principle easier and more effective.

I explained the first four design principles in previous articles. In this one, I will focus on the Dependency Inversion Principle. It is based on the Open/Closed Principle and the Liskov Substitution Principle. You should therefore at least be familiar with these two principles before you read this article.

Definition of the Dependency Inversion Principle

The general idea of this principle is as simple as it is important: High-level modules, which provide complex logic, should be easily reusable and unaffected by changes in low-level modules, which provide utility features. To achieve that, you need to introduce an abstraction that decouples the high-level and low-level modules from each other.

Based on this idea, Robert C. Martin’s definition of the Dependency Inversion Principle consists of two parts:

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
  2. Abstractions should not depend on details. Details should depend on abstractions.

An important detail of this definition is that both the high-level and the low-level modules depend on the abstraction. The design principle does not just change the direction of the dependency, as you might have expected when you read its name for the first time. It splits the dependency between the high-level and low-level modules by introducing an abstraction between them. So in the end, you get two dependencies:

  1. the high-level module depends on the abstraction, and
  2. the low-level module depends on the same abstraction.
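These two dependencies can be sketched in a few lines. The Switchable, LightBulb, and ElectricSwitch names below are illustrative examples, not part of this article's project:

```java
// The abstraction both sides depend on
interface Switchable {
    void turnOn();
    boolean isOn();
}

// Low-level module: depends on the abstraction by implementing it
class LightBulb implements Switchable {
    private boolean on;
    @Override public void turnOn() { this.on = true; }
    @Override public boolean isOn() { return this.on; }
}

// High-level module: depends only on the abstraction, never on LightBulb
class ElectricSwitch {
    private final Switchable device;
    ElectricSwitch(Switchable device) { this.device = device; }
    void press() { device.turnOn(); }
}
```

Note that ElectricSwitch compiles without any knowledge of LightBulb; you could add a Fan or Motor implementation of Switchable without touching the high-level class.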

Based on other SOLID principles

This might sound more complex than it is in practice. If you consistently apply the Open/Closed Principle and the Liskov Substitution Principle to your code, it will also follow the Dependency Inversion Principle.

The Open/Closed Principle requires a software component to be open for extension, but closed for modification. You can achieve that by introducing interfaces for which you can provide different implementations. The interface itself is closed for modification, and you can easily extend it by providing a new interface implementation.

Your implementations should follow the Liskov Substitution Principle so that you can replace them with other implementations of the same interface without breaking your application.

Let’s take a look at the CoffeeMachine project in which I will apply all three of these design principles.

Brewing coffee with the Dependency Inversion Principle

You can buy lots of different coffee machines. Rather simple ones that use water and ground coffee to brew filter coffee, and premium ones that include a grinder to freshly grind the required amount of coffee beans and which you can use to brew different kinds of coffee.

If you build a coffee machine application that automatically brews you a fresh cup of coffee in the morning, you can model these machines as a BasicCoffeeMachine and a PremiumCoffeeMachine class.

Implementing the BasicCoffeeMachine

The implementation of the BasicCoffeeMachine is quite simple. It only implements a constructor and two public methods. You can call the addGroundCoffee method to refill ground coffee, and the brewFilterCoffee method to brew a cup of filter coffee.

import java.util.Map;

public class BasicCoffeeMachine {

    private Configuration config;
    private Map<CoffeeSelection, GroundCoffee> groundCoffee;
    private BrewingUnit brewingUnit;

    public BasicCoffeeMachine(Map<CoffeeSelection, GroundCoffee> coffee) {
        this.groundCoffee = coffee;
        this.brewingUnit = new BrewingUnit();
        this.config = new Configuration(30, 480);
    }

    public Coffee brewFilterCoffee() {
        // get the coffee
        GroundCoffee groundCoffee = this.groundCoffee.get(CoffeeSelection.FILTER_COFFEE);
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee, this.config.getQuantityWater());
    }

    public void addGroundCoffee(CoffeeSelection sel, GroundCoffee newCoffee) throws CoffeeException {
        GroundCoffee existingCoffee = this.groundCoffee.get(sel);
        if (existingCoffee != null) {
            if (existingCoffee.getName().equals(newCoffee.getName())) {
                existingCoffee.setQuantity(existingCoffee.getQuantity() + newCoffee.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
        } else {
            this.groundCoffee.put(sel, newCoffee);
        }
    }
}

Implementing the PremiumCoffeeMachine

The implementation of the PremiumCoffeeMachine class looks very similar. The main differences are:

  • It implements the addCoffeeBeans method instead of the addGroundCoffee method.
  • It implements the additional brewEspresso method.

The brewFilterCoffee method is identical to the one provided by the BasicCoffeeMachine.

import java.util.HashMap;
import java.util.Map;

public class PremiumCoffeeMachine {
    private Map<CoffeeSelection, Configuration> configMap;
    private Map<CoffeeSelection, CoffeeBean> beans;
    private Grinder grinder;
    private BrewingUnit brewingUnit;

    public PremiumCoffeeMachine(Map<CoffeeSelection, CoffeeBean> beans) {
        this.beans = beans;
        this.grinder = new Grinder();
        this.brewingUnit = new BrewingUnit();
        this.configMap = new HashMap<>();
        this.configMap.put(CoffeeSelection.FILTER_COFFEE, new Configuration(30, 480));
        this.configMap.put(CoffeeSelection.ESPRESSO, new Configuration(8, 28));
    }

    public Coffee brewEspresso() {
        Configuration config = configMap.get(CoffeeSelection.ESPRESSO);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
            this.beans.get(CoffeeSelection.ESPRESSO),
            config.getQuantityCoffee());
        // brew an espresso
        return this.brewingUnit.brew(CoffeeSelection.ESPRESSO, groundCoffee,
            config.getQuantityWater());
    }

    public Coffee brewFilterCoffee() {
        Configuration config = configMap.get(CoffeeSelection.FILTER_COFFEE);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
            this.beans.get(CoffeeSelection.FILTER_COFFEE),
            config.getQuantityCoffee());
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee,
            config.getQuantityWater());
    }

    public void addCoffeeBeans(CoffeeSelection sel, CoffeeBean newBeans) throws CoffeeException {
        CoffeeBean existingBeans = this.beans.get(sel);
        if (existingBeans != null) {
            if (existingBeans.getName().equals(newBeans.getName())) {
                existingBeans.setQuantity(existingBeans.getQuantity() + newBeans.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
        } else {
            this.beans.put(sel, newBeans);
        }
    }
}

To implement a class that follows the Dependency Inversion Principle and can use the BasicCoffeeMachine or the PremiumCoffeeMachine class to brew a cup of coffee, you need to apply the Open/Closed and the Liskov Substitution Principle. That requires a small refactoring during which you introduce interface abstractions for both classes.

Introducing abstractions

The main task of both coffee machine classes is to brew coffee. But they enable you to brew different kinds of coffee. If you use a BasicCoffeeMachine, you can only brew filter coffee, but with a PremiumCoffeeMachine, you can brew filter coffee or espresso. So, which interface abstraction would be a good fit for both classes?

As all coffee lovers will agree, there are huge differences between filter coffee and espresso. That’s why we use different machines to brew them, even though some machines can do both. I therefore suggest creating two independent abstractions:

  • The CoffeeMachine interface defines the Coffee brewFilterCoffee() method and gets implemented by all coffee machine classes that can brew a filter coffee.
  • All classes that you can use to brew an espresso, implement the EspressoMachine interface, which defines the Coffee brewEspresso() method.

As you can see in the following code snippets, the definitions of both interfaces are pretty simple.

public interface CoffeeMachine {
    Coffee brewFilterCoffee();
}

public interface EspressoMachine {
    Coffee brewEspresso();
}

In the next step, you need to refactor both coffee machine classes so that they implement one or both of these interfaces.

Refactoring the BasicCoffeeMachine class

Let’s start with the BasicCoffeeMachine class. You can use it to brew a filter coffee, so it should implement the CoffeeMachine interface. The class already implements the brewFilterCoffee() method. You only need to add implements CoffeeMachine to the class definition.

public class BasicCoffeeMachine implements CoffeeMachine {
    private Configuration config;
    private Map<CoffeeSelection, GroundCoffee> groundCoffee;
    private BrewingUnit brewingUnit;

    public BasicCoffeeMachine(Map<CoffeeSelection, GroundCoffee> coffee) {
        this.groundCoffee = coffee;
        this.brewingUnit = new BrewingUnit();
        this.config = new Configuration(30, 480);
    }

    @Override
    public Coffee brewFilterCoffee() {
        // get the coffee
        GroundCoffee groundCoffee = this.groundCoffee.get(CoffeeSelection.FILTER_COFFEE);
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee, this.config.getQuantityWater());
    }

    public void addGroundCoffee(CoffeeSelection sel, GroundCoffee newCoffee) throws CoffeeException {
        GroundCoffee existingCoffee = this.groundCoffee.get(sel);
        if (existingCoffee != null) {
            if (existingCoffee.getName().equals(newCoffee.getName())) {
                existingCoffee.setQuantity(existingCoffee.getQuantity() + newCoffee.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
        } else {
            this.groundCoffee.put(sel, newCoffee);
        }
    } 
}

Refactoring the PremiumCoffeeMachine class

The refactoring of the PremiumCoffeeMachine also doesn’t require a lot of work. You can use the coffee machine to brew filter coffee and espresso, so the PremiumCoffeeMachine class should implement the CoffeeMachine and the EspressoMachine interfaces. The class already implements the methods defined by both interfaces. You just need to declare that it implements the interfaces.

import java.util.HashMap;
import java.util.Map;

public class PremiumCoffeeMachine implements CoffeeMachine, EspressoMachine {
    private Map<CoffeeSelection, Configuration> configMap;
    private Map<CoffeeSelection, CoffeeBean> beans;
    private Grinder grinder;
    private BrewingUnit brewingUnit;

    public PremiumCoffeeMachine(Map<CoffeeSelection, CoffeeBean> beans) {
        this.beans = beans;
        this.grinder = new Grinder();
        this.brewingUnit = new BrewingUnit();
        this.configMap = new HashMap<>();
        this.configMap.put(CoffeeSelection.FILTER_COFFEE, new Configuration(30, 480));
        this.configMap.put(CoffeeSelection.ESPRESSO, new Configuration(8, 28)); 
    }

    @Override
    public Coffee brewEspresso() {
        Configuration config = configMap.get(CoffeeSelection.ESPRESSO);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
           this.beans.get(CoffeeSelection.ESPRESSO),
           config.getQuantityCoffee());
       // brew an espresso
       return this.brewingUnit.brew(CoffeeSelection.ESPRESSO, groundCoffee,
           config.getQuantityWater());
    }

    @Override
    public Coffee brewFilterCoffee() {
        Configuration config = configMap.get(CoffeeSelection.FILTER_COFFEE);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
            this.beans.get(CoffeeSelection.FILTER_COFFEE),
            config.getQuantityCoffee());
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee,
            config.getQuantityWater());
    }

    public void addCoffeeBeans(CoffeeSelection sel, CoffeeBean newBeans) throws CoffeeException {
        CoffeeBean existingBeans = this.beans.get(sel);
        if (existingBeans != null) {
            if (existingBeans.getName().equals(newBeans.getName())) {
                existingBeans.setQuantity(existingBeans.getQuantity() + newBeans.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
        } else {
            this.beans.put(sel, newBeans);
        }
    }
}

The BasicCoffeeMachine and the PremiumCoffeeMachine classes now follow the Open/Closed and the Liskov Substitution principles. The interfaces enable you to add new functionality without changing any existing code by adding new interface implementations. And by splitting the interfaces into CoffeeMachine and EspressoMachine, you separate the two kinds of coffee machines and ensure that all CoffeeMachine and EspressoMachine implementations are interchangeable.

Implementing the coffee machine application

You can now create additional, higher-level classes that use one or both of these interfaces to manage coffee machines without directly depending on any specific coffee machine implementation.

As you can see in the following code snippet, due to the abstraction of the CoffeeMachine interface and its provided functionality, the implementation of the CoffeeApp is very simple. It requires a CoffeeMachine object as a constructor parameter and uses it in the prepareCoffee method to brew a cup of filter coffee.

public class CoffeeApp {
    private CoffeeMachine coffeeMachine;

    public CoffeeApp(CoffeeMachine coffeeMachine) {
        this.coffeeMachine = coffeeMachine;
    }

    public Coffee prepareCoffee() throws CoffeeException {
        Coffee coffee = this.coffeeMachine.brewFilterCoffee();
        System.out.println("Coffee is ready!");
        return coffee;
    }  
}

The only code that directly depends on one of the implementation classes is the CoffeeAppStarter class, which instantiates a CoffeeApp object and provides an implementation of the CoffeeMachine interface. You could avoid this compile-time dependency entirely by using a dependency injection framework, like Spring or CDI, to resolve the dependency at runtime.

import java.util.HashMap;
import java.util.Map;

public class CoffeeAppStarter {
    public static void main(String[] args) {
        // create a Map of available coffee beans
        Map<CoffeeSelection, CoffeeBean> beans = new HashMap<CoffeeSelection, CoffeeBean>();
        beans.put(CoffeeSelection.ESPRESSO, new CoffeeBean(
            "My favorite espresso bean", 1000));
        beans.put(CoffeeSelection.FILTER_COFFEE, new CoffeeBean(
            "My favorite filter coffee bean", 1000));
        // get a new CoffeeMachine object
        PremiumCoffeeMachine machine = new PremiumCoffeeMachine(beans);
        // Instantiate CoffeeApp
        CoffeeApp app = new CoffeeApp(machine);
        // brew a fresh coffee
        try {
           app.prepareCoffee();
        } catch (CoffeeException e) {
            e.printStackTrace();
        }
    }
}
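Because CoffeeApp depends only on the CoffeeMachine interface, you can also hand it a test double instead of a real machine. The sketch below uses simplified stand-ins for the article's types (a bare Coffee class and a stripped-down CoffeeApp) just to make the block self-contained; the StubCoffeeMachine name is illustrative:

```java
// Simplified stand-ins for the article's types
class Coffee { }

interface CoffeeMachine {
    Coffee brewFilterCoffee();
}

// A test double: CoffeeApp never knows it isn't talking to a real machine
class StubCoffeeMachine implements CoffeeMachine {
    int brewCount = 0;
    @Override public Coffee brewFilterCoffee() {
        brewCount++;              // record the interaction for the test
        return new Coffee();
    }
}

// Stripped-down CoffeeApp, mirroring the class shown above
class CoffeeApp {
    private final CoffeeMachine coffeeMachine;
    CoffeeApp(CoffeeMachine coffeeMachine) { this.coffeeMachine = coffeeMachine; }
    Coffee prepareCoffee() { return this.coffeeMachine.brewFilterCoffee(); }
}
```

This is the practical payoff of the inverted dependency: unit tests for the high-level class need no brewing units, grinders, or configuration at all.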

Dependency Inversion vs Dependency Injection

Throughout this post, we have explored Dependency Inversion and how it helps when tackling tightly coupled software. Dependency Injection has the same objective, but the two employ different techniques. The crux of Dependency Inversion lies in reversing dependencies: higher-level modules establish abstractions, and lower-level modules implement them. Dependency Injection, by contrast, entails supplying dependent objects with their required dependencies from an external source, commonly via constructors or setters.

Why Should You Go for Dependency Inversion?

Embracing Dependency Inversion enables the construction of loosely coupled components—leading to simpler testing and replacement of modules without causing disruptions to the entire system. Furthermore, this approach fosters the utilization of SOLID principles, ultimately leading to a more refined and sustainable codebase.

Summary

The Dependency Inversion Principle is the fifth and final design principle that we discussed in this series. It introduces an interface abstraction between higher-level and lower-level software components to remove the dependencies between them.

As you have seen in the example project, you only need to consistently apply the Open/Closed and the Liskov Substitution principles to your code base. After you have done that, your classes also comply with the Dependency Inversion Principle. This enables you to change higher-level and lower-level components without affecting any other classes, as long as you don’t change any interface abstractions.

If you enjoyed this article, you should also read my other articles about the SOLID design principles.

Gradle vs. Maven: Performance, Compatibility, Builds, & More
https://stackify.com/gradle-vs-maven/ (Thu, 27 Jul 2023)

Gradle is one of several Java development tools featured in Stackify’s Comprehensive Java Developer’s Guide, but it’s not the only build automation tool to consider. Maven is an older and commonly used alternative, but which build system is best for your project? With other tools, such as Spring, allowing developers to choose between the two systems, coupled with an increasing number of integrations for both, the decision is largely up to you.

The size of your project, your need for customization, and a few other variables can help you choose. Let’s take a look.

What is Gradle?

Gradle is a fully open-source build automation system that builds on concepts from Apache Maven and Apache Ant. It uses a domain-specific language based on the programming language Groovy, differentiating it from Apache Maven, which uses XML for its project configuration. It determines the order in which tasks run using a directed acyclic graph.

Developers first introduced Gradle in 2007, and by 2013, Google had adopted it as the build system for Android projects. Designed to support substantial multi-project builds, Gradle also supports incremental builds: it identifies which parts of your project have changed and skips re-executing tasks that don't depend on them. At the time of writing, the most recent stable release, version 3.4, launched in February 2017, facilitates development and deployment using Java, Scala, and Groovy, with the promise of incorporating other project workflows and languages in the future.

What is Maven?

Developers use Maven for automating project builds using Java. Maven assists in outlining how to build a particular software and its different dependencies. It leverages an XML file to describe the project under construction, the software’s dependencies on third-party modules and parts, the build order, and the necessary plugins. Maven has pre-defined targets for tasks such as packaging and compiling.

Maven downloads libraries and plugins from the configured repositories and puts them all in a cache on your local machine. While predominantly used for Java projects, you can use it for Scala, Ruby, and C#, as well as a host of other languages.

Approaching Builds: Gradle vs. Maven

Gradle and Maven fundamentally differ in their approach to builds. Gradle operates based on a graph of task dependencies, with tasks performing the work. Conversely, Maven adopts a fixed, linear model of phases, assigning goals to project phases. These goals, like Gradle’s tasks, are the “workhorses.”

Performance: Speed and Efficiency

Both Gradle and Maven support parallel execution of multi-module builds. Gradle, however, stands out for its use of incremental builds. It achieves this by checking the status of tasks and skipping any that aren’t updated, resulting in shorter build times. Gradle enhances performance with the following features:

  • Incremental compilations for Java classes
  • Compile avoidance for Java
  • APIs for incremental subtasks
  • A compiler daemon for faster compiling

Dependency Management: Flexibility and Compatibility

Both Gradle and Maven excel at handling dynamic and transitive dependencies, using third-party dependency caches, and reading POM metadata format. They can also declare library versions through central versioning definition and enforce it. Each can download transitive dependencies from their artifact repositories, Maven from Maven Central, and Gradle from JCenter. Both support the definition of a private company repository. If a project requires multiple dependencies, Maven can download these concurrently.

Despite these similarities, Gradle outperforms Maven in areas like API and implementation dependencies, and concurrent safe caches. Gradle preserves repository metadata with cached dependencies, preventing overwrites when multiple projects use the same cache. It also features a checksum-based cache and synchronizes the cache with the repository. Moreover, Gradle supports IVY Metadata, allowing custom rules for dynamic dependencies and resolving version conflicts, unlike Maven.

Exclusive Gradle features include:

  • Substitution rules for compatible libraries
  • The application of ReplacedBy rules
  • Advanced metadata resolution
  • Capability to dynamically replace project dependencies with external ones and vice versa
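As a rough sketch of how a substitution rule looks in a build.gradle file (the module coordinates here are hypothetical, and the `using` keyword assumes a recent Gradle version; older releases used `with` instead):

```groovy
// build.gradle — sketch of a dependency substitution rule
configurations.all {
    resolutionStrategy.dependencySubstitution {
        // transparently swap one compatible library for another, build-wide
        substitute module('org.example:old-logging') using module('org.example:new-logging:2.0')
    }
}
```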

Composite Builds and Execution Models

Gradle simplifies working with composite builds and supports both ad-hoc and permanent composite builds. It allows for the combination of different builds and importing a composite build into Eclipse or IntelliJ IDEA.

Both Gradle and Maven offer task groups and descriptions and can build only the specified project and its dependencies. However, Gradle uses a fully configurable DAG, while Maven permits the attachment of a goal only to one other goal. Gradle also supports task exclusions, transitive exclusions, task dependency inference, advanced task ordering, and finalizers.

Infrastructure Administration

Gradle shines in administering build infrastructure through its use of wrappers that support auto-provisioning, unlike Maven, which requires an extension for self-provisioning builds. Gradle can configure version-based build environments automatically and allows for custom distributions.

Code Examples

In a comparison of Ant, Gradle, and Maven, Naresh Joshi compares the code required to create a build script that compiles, performs static analysis, runs unit tests, and creates JAR files at Programming Mitra.

Maven Code Example

Here’s the code required to achieve this with Maven:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <groupId>com.programming.mitra</groupId>
    <artifactId>java-build-tools</artifactId>
    <packaging>jar</packaging>
    <version>1.0</version>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
            </plugin>
        </plugins>
    </build>
</project>

To run the Maven goal that creates the JAR file, you would execute the following:

mvn package

Note that by using this code, you’re setting the parameters but not specifying the tasks that must be carried out. You can add plugins (such as Maven Checkstyle, FindBugs, and PMD) to execute the static analysis as a single target together with unit tests, but you’ll want to specify the path to the custom Checkstyle configuration to ensure that it fails on error, using code such as:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <version>2.12.1</version>
    <executions>
        <execution>
            <configuration>
                <configLocation>config/checkstyle/checkstyle.xml</configLocation>
                <consoleOutput>true</consoleOutput>
                <failsOnError>true</failsOnError>
            </configuration>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>findbugs-maven-plugin</artifactId>
    <version>2.5.4</version>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-pmd-plugin</artifactId>
    <version>3.1</version>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>

To run the goal to achieve this, execute the following:

mvn verify

It requires quite a bit of XML code to achieve some basic and common tasks, and for this reason, projects in Maven with a large number of tasks and dependencies can result in pom.xml files that consist of hundreds to thousands of lines of code.

Gradle Code Example

To compare, here’s an example of build.gradle code that achieves a similar outcome:

apply plugin: 'java'
apply plugin: 'checkstyle'
apply plugin: 'findbugs'
apply plugin: 'pmd'

version = '1.0'

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

This code is shorter and also introduces some useful tasks that aren’t covered with the Maven code above. Execute the following for a list of tasks that Gradle can run with the current configuration:

gradle tasks --all

How to Choose

Overall, both tools have their respective strengths and weaknesses.

  • Customized builds. With Maven, you can easily define your project’s metadata and dependencies, but creating a highly customized build can be a nightmare for Maven users. The POM file easily gets bloated as your project grows and can turn into an unreadable XML file later on.
  • Dependency management and directory structure. Still, Maven provides simple yet effective dependency management, and since it has a directory structure for your projects, you have some sort of standard layout for all your projects. It uses a declarative XML file for its POM file and has a host of plugins that you can use. Gradle uses the directory structure you see on Maven, but this can be customized.
  • Plugins and integrations. Maven also supports a wide variety of build life-cycle steps and integrates seamlessly with third-party tools such as CI servers, code coverage plugins, and artifact repository systems, among others. On the Gradle side, the number of available plugins is growing, and large vendors now offer Gradle-compatible plugins. However, Maven still has more plugins available than Gradle.
  • Flexibility. Gradle, on the other hand, is very flexible and is based on a script. Custom builds are easy to do in Gradle. However, the number of developers who know Gradle inside-out might be limited because Gradle is a relative newcomer.

In the end, what you choose will depend primarily on what you need. Gradle is more powerful. However, there are times when you really do not need most of the features and functionalities it offers. Maven might be best for small projects, while Gradle is best for bigger projects.

Additional Resources and Tutorials on Gradle and Maven

For further reading and more information, including helpful tutorials, visit the following resources.

What is Java Garbage Collection? How It Works, Best Practices, Tutorials, and More
https://stackify.com/what-is-java-garbage-collection/ (Wed, 03 May 2023)

At Stackify, we battle our fair share of code performance problems too, including issues surrounding Java garbage collection. In this post, we’ll take a look at Java garbage collection, how it works, and why it matters.

A Definition of Java Garbage Collection

Java garbage collection is the process by which Java programs perform automatic memory management. Java programs compile to bytecode that can be run on a Java Virtual Machine, or JVM for short. When Java programs run on the JVM, objects are created on the heap, which is a portion of memory dedicated to the program. Eventually, some objects will no longer be needed. The garbage collector finds these unused objects and deletes them to free up memory.
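The "no longer needed" condition is about reachability: once no live reference can reach an object, it becomes eligible for collection. A minimal sketch of this idea, using a WeakReference to observe the collector (the class name is illustrative, and note that System.gc() is only a hint the JVM may ignore, so the timing is never guaranteed):

```java
import java.lang.ref.WeakReference;

class GcEligibilityDemo {
    static boolean eventuallyCollected() {
        Object payload = new Object();
        // a weak reference does not keep its referent alive
        WeakReference<Object> ref = new WeakReference<>(payload);
        payload = null; // drop the only strong reference: the object is now unreachable
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc(); // politely request a collection; the JVM may decline
            try { Thread.sleep(10); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return ref.get() == null; // true once the collector has reclaimed the object
    }
}
```

On typical HotSpot builds the weakly referenced object is reclaimed within a few of these requested cycles, but no Java program should depend on when (or whether) any particular collection runs.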

How Java Garbage Collection Works

Java garbage collection is an automatic process. The programmer does not need to explicitly mark objects to be deleted. The garbage collection implementation lives in the JVM. Every JVM can implement garbage collection however it pleases. The only requirement is that it should meet the JVM specification. Although there are many JVMs, Oracle’s HotSpot is by far the most common. It offers a robust and mature set of garbage collection options.

What Are the Steps of Garbage Collection?

While HotSpot has multiple garbage collectors that are optimized for various use cases, all its garbage collectors follow the same basic process. In the first step, unreferenced objects are identified and marked as ready for garbage collection. In the second step, marked objects are deleted. Optionally, memory can be compacted after the garbage collector deletes objects, so remaining objects sit in a contiguous block at the start of the heap. Compaction makes it faster to allocate memory to new objects, because free memory remains in one contiguous region after the surviving objects.
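The mark step boils down to graph reachability: every object reachable from the GC roots survives, and everything else is garbage. The following toy model (not HotSpot's actual implementation, which operates on raw heap memory) sketches the idea with named objects:

```java
import java.util.*;

// Toy model of the mark step: objects reachable from the GC roots are "live";
// everything else on the heap is garbage.
class ToyMarkPhase {
    // heap maps each object name to the names of the objects it references
    static Set<String> mark(Map<String, List<String>> heap, Collection<String> roots) {
        Set<String> live = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            String obj = stack.pop();
            if (live.add(obj)) { // first visit: follow outgoing references
                stack.addAll(heap.getOrDefault(obj, List.of()));
            }
        }
        return live; // anything not in this set would be swept
    }
}
```

The sweep step then deletes every object absent from the live set, and the optional compact step slides the survivors together.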

How Generational Garbage Collection Strategy Works

All of HotSpot’s garbage collectors implement a generational garbage collection strategy that categorizes objects by age. The rationale behind generational garbage collection is that most objects are short-lived and will be ready for garbage collection soon after creation.

Java Garbage Collection Heaps

Image via Wikipedia

How Does the Garbage Collector Classify Objects?

We can divide the heap into three sections:

  • Young Generation: Newly created objects start in the Young Generation. The garbage collector further subdivides Young Generation into an Eden space, where all new objects start, and two Survivor spaces, where it moves objects from Eden after surviving one garbage collection cycle. When objects are garbage collected from the Young Generation, it is a minor garbage collection event.
  • Old Generation: Eventually, the garbage collector moves the long-lived objects from the Young Generation to the Old Generation. When objects are garbage collected from the Old Generation, it is a major garbage collection event.
  • Permanent Generation: The JVM stores metadata, such as classes and methods, in the Permanent Generation. The JVM garbage collects classes from the Permanent Generation when they are no longer in use. (Since Java 8, the Permanent Generation has been replaced by Metaspace, which is allocated out of native memory.)

During a full garbage collection event, unused objects from all generations are garbage collected.
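These generational spaces are visible at run time through the MemoryPoolMXBean API. The exact pool names depend on the active collector (e.g. “G1 Eden Space”, “G1 Survivor Space”, “G1 Old Gen” under G1), so this sketch simply lists whatever pools the running JVM reports:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPoolDemo {
    public static void main(String[] args) {
        // Each pool corresponds to a heap or non-heap region such as Eden,
        // Survivor or Old Gen; names vary by JVM version and collector.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + " (" + pool.getType() + ")");
        }
        System.out.println("pools found: "
                + !ManagementFactory.getMemoryPoolMXBeans().isEmpty());
    }
}
```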

What Are the Different Types of Garbage Collectors?

HotSpot has four garbage collectors:

  • Serial: All garbage collection events are conducted serially in one thread, and the JVM compacts the heap after each collection.
  • Parallel: The JVM uses multiple threads for minor garbage collection and a single thread for major garbage collection and Old Generation compaction. The Parallel Old variant instead uses multiple threads for major garbage collection and Old Generation compaction.
  • CMS (Concurrent Mark Sweep): Minor garbage collection is multi-threaded, using the same algorithm as Parallel, and major garbage collection is multi-threaded like Parallel Old. However, CMS runs concurrently alongside application threads to minimize “stop the world” events (pauses during which the garbage collector stops the application). The JVM does not compact memory under CMS.
  • G1 (Garbage First): The newest garbage collector, intended as a replacement for CMS. It is parallel and concurrent like CMS, but works quite differently under the hood.
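Each of these collectors can be selected with a JVM flag. The lines below are illustrative invocations (app.jar is a placeholder), and availability varies by JDK version — for example, CMS was deprecated in JDK 9 and removed in JDK 14:

```shell
java -XX:+UseSerialGC -jar app.jar           # Serial
java -XX:+UseParallelGC -jar app.jar         # Parallel
java -XX:+UseConcMarkSweepGC -jar app.jar    # CMS (older JDKs only)
java -XX:+UseG1GC -jar app.jar               # G1 (the default since JDK 9)
```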

Benefits of Java Garbage Collection

The biggest benefit of Java garbage collection is that it automatically handles the deletion of unused objects or objects that are out of reach to free up vital memory resources. Programmers working in languages without garbage collection (like C and C++) must implement manual memory management in their code.

Despite the extra work required, some programmers argue in favor of manual memory management over garbage collection, primarily for reasons of control and performance. While the debate over memory management approaches continues to rage on, garbage collection is now a standard component of many popular programming languages. For scenarios in which the garbage collector is negatively impacting performance, Java offers many options for tuning the garbage collector to improve its efficiency.

What Triggers Garbage Collection?

The Garbage Collection process is triggered by a variety of events that signal to the Garbage Collector that memory needs to be reclaimed.

Here are some common events that trigger Java Garbage Collection:

  1. Allocation Failure: When an object cannot be allocated in the heap because there is not enough contiguous free space available, the JVM triggers the Garbage Collection to free up memory.
  2. Heap Size: When the heap reaches a certain capacity threshold, the JVM triggers Garbage Collection to reclaim memory and prevent an OutOfMemoryError.
  3. System.gc(): Calling the System.gc()  method can trigger Garbage Collection, although it does not guarantee that Garbage Collection will occur.
  4. Time-Based: Some Garbage Collection algorithms, such as G1 Garbage Collection, use time-based triggers to initiate Garbage Collection.

Ways to Request That the JVM Run the Garbage Collector

There are several ways to request the JVM to run Garbage Collector in a Java application:

System.gc() method:

Calling this method is the most common way to request Garbage Collection in a Java application. However, it does not guarantee that Garbage Collection will occur as it is only a suggestion to the JVM.

Runtime.getRuntime().gc() method:

This method provides another way to request Garbage Collection in a Java application. This method is similar to the System.gc() method, and it also suggests that the JVM should run Garbage Collector, but again it does not guarantee that Garbage Collection will occur.
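A minimal sketch of both hint methods, using Runtime to read memory figures before and after (note that the hint may legitimately do nothing):

```java
public class GcHintDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long usedBefore = rt.totalMemory() - rt.freeMemory();

        System.gc();               // hint #1
        Runtime.getRuntime().gc(); // hint #2 -- an equivalent suggestion to the JVM

        long usedAfter = rt.totalMemory() - rt.freeMemory();
        System.out.println("used before hints: " + usedBefore + " bytes");
        System.out.println("used after hints:  " + usedAfter + " bytes");
        System.out.println("readings valid: " + (usedBefore >= 0 && usedAfter >= 0));
    }
}
```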

JConsole or VisualVM:

JConsole and VisualVM are profiling tools distributed with the Java Development Kit. These tools provide a graphical user interface that allows developers to monitor the memory usage of their Java application in real-time. They also provide a way to request Garbage Collection on-demand by clicking a button.

Command-Line Options:

The JVM can be configured with various command-line options to control Garbage Collection. For example, the -Xmx option can be used to specify the maximum heap size, which can affect the frequency and duration of Garbage Collection events. The -XX:+DisableExplicitGC option can be used to disable explicit calls to System.gc() or Runtime.getRuntime().gc().
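For example, a hypothetical launch combining these options (app.jar and the sizes are placeholders) might look like:

```shell
java -Xms512m -Xmx2g -XX:+DisableExplicitGC -verbose:gc -jar app.jar
```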

Heap Dumps:

Heap dumps are snapshots of the Java heap that can be taken at any time during the application’s execution. They can be analyzed to identify memory leaks or other memory-related issues. Heap dumps can be requested using command-line options or profiling tools.

It is worth noting that requesting Garbage Collection too frequently can negatively impact the performance of the application. It is important to monitor the memory usage of the application and only request Garbage Collection when it is necessary. By using profiling tools and selecting appropriate Garbage Collection algorithms, developers can ensure that Garbage Collection is triggered in a way that minimizes the impact on the application’s performance.

Why Does a Programmer need to Understand Garbage Collection?

For many simple applications, Java garbage collection is not something that a programmer needs to consciously consider. However, for programmers who want to advance their Java skills, it is important to understand how Java garbage collection works and the ways in which it can be tuned.

Besides the basic mechanisms of garbage collection, one of the most important points to understand about garbage collection in Java is that it is non-deterministic, and there is no way to predict when garbage collection will occur at run time. It is possible to include a hint in the code to run the garbage collector with the System.gc()  or Runtime.getRuntime().gc()  methods, but they provide no guarantee that the garbage collector will actually run.

The best approach to tuning Java garbage collection is setting flags on the JVM

Java Garbage Collection Best Practices

The best approach to tuning Java garbage collection is setting flags on the JVM. Flags can adjust the initial and maximum size of the heap, the sizes of the heap sections (e.g. Young Generation, Old Generation) and the garbage collector to be used (e.g. Serial, G1). The nature of the application being tuned is a good initial guide to settings. For example, the Parallel garbage collector is efficient but will frequently cause “stop the world” events, making it better suited for backend processing where long pauses for garbage collection are acceptable.

On the other hand, the CMS garbage collector is designed to minimize pauses, making it ideal for GUI applications where responsiveness is important. Additional fine-tuning can be accomplished by changing the size of the heap or its sections and measuring garbage collection efficiency using a tool like jstat.

Try Stackify’s free code profiler, Prefix, to write better code on your workstation. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python.

Additional Resources and Tutorials on Java Garbage Collection

Visit the following resources and tutorials for further reading on Java garbage collection:

]]>
What to Do About Java Memory Leaks: Tools, Fixes, and More https://stackify.com/java-memory-leaks-solutions/ Fri, 03 Sep 2021 10:26:48 +0000 https://stackify.com/?p=12647 Memory management is Java’s strongest suit and one of the many reasons developers choose Java over other platforms and programming languages. On paper, you create objects, and Java deploys its garbage collector to allocate and free up memory. But that’s not to say Java is flawless. As a matter of fact, memory leaks happen and they happen a lot in Java applications. 

We put together this guide to arm you with the know-how to detect, avoid and fix memory leaks in Java.

Should You Worry About Memory Leaks?

Memory leaks often involve small amounts of memory resources, which you might not expect to have problems with. But when your applications return a java.lang.OutOfMemoryError, then your first and most likely suspect will be a memory leak.

Memory leaks are often an indicator of poorly written programs. If you are the type of programmer who wants everything to be perfect, you should investigate every memory leak you encounter. As a Java programmer, there is no way to know when a Java virtual machine will run the garbage collector. This is true, even if you specify System.gc(). The garbage collector will probably run when memory runs low or when the available memory is less than what your program needs. If the garbage collector does not free up enough memory resources, your program will take memory from your operating system.

A Java memory leak is not always serious compared to memory leaks that happen in C++ and other programming languages. According to Jim Patrick of IBM developerWorks, there are two factors to consider when weighing the seriousness of a memory leak:

  1. the size of the leak
  2. the program’s lifetime.

A small Java application might have a memory leak, but it will not matter if the JVM has enough memory to run your program. However, if your Java application runs constantly, then memory leaks will be a problem. This is because a continuously running program will eventually run out of memory resources.

Another area where memory leaks might be a problem is when the program calls for a lot of temporary objects that use up large amounts of memory. When these memory-hogging objects are not de-referenced, the program will soon have less available memory than needed.

How to Avoid Java Memory Leaks

To avoid memory leaks, you need to pay attention to how you write your code. Here are specific methods to help you stamp out memory leaks.

1. Use reference objects to avoid memory leaks

Raimond Reichert at JavaWorld writes that you can use reference objects to get rid of memory leaks.

Using the java.lang.ref package, you can work with the garbage collector in your program. This allows you to avoid directly referencing objects and use special reference objects that the garbage collector easily clears. The special subclasses allow you to refer to objects indirectly. For instance, Reference has three subclasses: PhantomReference, SoftReference and WeakReference.

A referent, or an object referenced by these subclasses, can be accessed using that reference object’s get method. The advantage of this approach is that you can clear a reference easily by setting it to null, and the reference itself is pretty much immutable. How does the garbage collector act on each type of referent?

  • SoftReference object: garbage collector is required to clear all SoftReference objects when memory runs low.
  • WeakReference object: when the garbage collector senses a weakly referenced object, all references to it are cleared and ultimately taken out of memory.
  • PhantomReference object: garbage collector is unable to clean up PhantomReference objects automatically, leaving you to manually clean up all PhantomReference objects and references.

Using reference objects, you can work with the garbage collector to automate the task of removing listeners that are weakly reachable. WeakReference objects, especially with a cleanup thread, can help you avoid memory errors.
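A minimal sketch of the WeakReference behavior described above (the cached object is illustrative; note that whether the referent is actually cleared after the gc() hint is up to the JVM):

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object cached = new Object();
        WeakReference<Object> ref = new WeakReference<>(cached);

        // While a strong reference exists, the referent cannot be collected.
        System.out.println("reachable while strongly held: " + (ref.get() != null));

        cached = null; // drop the strong reference
        System.gc();   // hint only; the referent *may* now be cleared

        System.out.println("possibly cleared after gc: " + (ref.get() == null));
    }
}
```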

2. Avoid memory leaks related to a WebApp classloader

Using Jetty 7.6.6 or higher, you can prevent WebApp classloader pinning. When your code keeps referring to a WebApp classloader, memory leaks can easily happen. There are two types of leaks in this case: daemon threads and static fields.

  • Static fields are initialized with values that reference the classloader. Even as Jetty stops and then redeploys your web application, the static reference persists, so the object cannot be cleared from memory.
  • Daemon threads started outside the lifecycle of a web application are prone to memory leaks because they keep references to the classloader that started them.

With Jetty, you can use preventers to help address problems associated with WebApp classloaders. For instance, the app context leak preventer, which calls AppContext.getAppContext(), helps you keep the static references within the context classloader. Other preventers you can use include the following:

  • AWT leak preventer
  • DOM leak preventer
  • Driver manager leak preventer
  • GC thread leak preventer
  • Java2D leak preventer
  • LDAP leak preventer
  • Login configuration leak preventer
  • Security provider leak preventer

3. Other specific steps

BurnIgnorance also lists several ways to prevent memory leaks in Java, including:

  • Release the session when it is no longer needed. Use the HttpSession.invalidate() to do this.
  • Keep the time-out time low for each session.
  • Store only the necessary data in your HttpSession.
  • Avoid using string concatenation. Use StringBuffer’s append() method instead: because String is immutable, concatenation creates many unnecessary temporary objects, and a large number of temporary objects will slow down performance.
  • As much as possible, you should not create HttpSession on your jsp page. You can do this by using the page directive <%@page session="false"%>.
  • If you are writing a frequently executed query, use PreparedStatement object rather than using Statement object. Why? PreparedStatement is precompiled, while Statement is compiled every time your SQL statement is transmitted to the database.
  • When using JDBC code, avoid using “*” when you write your query. Try to use the corresponding column name instead.
  • If you are going to use stmt = con.prepareStatement(sql query) within a loop, then be sure to close it inside that particular loop.
  • Be sure to close the Statement and ResultSet when you need to reuse these.
  • Close the ResultSet, Connection, PreparedStatement and Statement in the final block.
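The string-concatenation advice above can be sketched as follows; StringBuilder is used here as the unsynchronized counterpart of the StringBuffer mentioned in the list:

```java
public class ConcatDemo {
    public static void main(String[] args) {
        // Each += creates a fresh String object, since String is immutable...
        String concatenated = "";
        for (int i = 0; i < 5; i++) {
            concatenated += i;
        }

        // ...while a builder appends into one reusable internal buffer.
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < 5; i++) {
            builder.append(i);
        }

        System.out.println("same result: " + concatenated.equals(builder.toString()));
    }
}
```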

What to Do When You Suspect Memory Leaks

If you find it takes longer to execute your application or notice a considerable slowdown, it is time to check for memory leaks.

How do you know your program has a memory leak? A prevalent sign is the java.lang.OutOfMemoryError error. This error has several detailed messages that would allow you to determine if there is a memory leak or not:

  • Java heap space: memory resources could not be allocated for a particular object in the Java heap. This can mean several things: a memory leak, a specified heap size lower than the application needs, or heavy use of finalizers in your program.
  • PermGen space: the permanent generation area is already full. This area is where the method and class objects are stored. You can easily correct this by increasing the space via –XX:MaxPermSize.
  • Requested array size exceeds VM limit: the program is trying to allocate an array larger than the maximum array size the VM supports.
  • Request <size> bytes for <reason>. Out of swap space?: an allocation using the local heap did not succeed, or the native heap is close to being used up.
  • <Reason> <stack trace> (Native method): a native method was not allocated the required memory.
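As a hedged illustration of the “Requested array size exceeds VM limit” case: requesting an array larger than the VM’s maximum array length fails immediately, before any memory is reserved. On HotSpot this typically carries exactly that message, though the text may differ between JVMs:

```java
public class ArraySizeLimitDemo {
    public static void main(String[] args) {
        try {
            // Integer.MAX_VALUE elements exceeds HotSpot's maximum array
            // length, regardless of how large the heap is.
            long[] tooBig = new long[Integer.MAX_VALUE];
            System.out.println("allocated " + tooBig.length); // not reached
        } catch (OutOfMemoryError e) {
            System.out.println("caught OutOfMemoryError: " + e.getMessage());
        }
    }
}
```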

Less Common Memory Leaks

There are times when your application crashes without returning an OutOfMemoryError message, making it more challenging to diagnose memory leaks as the problem and make corrections. The good news is that you can check the fatal log error or the crash dump to see what went wrong.

Moreover, there are many monitoring and diagnostic tools you can use to help identify and correct memory leaks. Stackify’s Darin Howard has identified Java profilers as an excellent way to track down memory leaks and run the garbage collector manually. You can use Java profilers to review how memory is being used, which will easily show you the processes and classes that are using too much memory. You can also use JVM Performance Metrics, which give you tons of data on garbage collection, thread counts and memory usage.

A quick word about Java profilers

Java profiling helps you monitor different JVM parameters, including object creation, thread execution, method execution and yes, garbage collection.

When you have ruled out memory leaks as the reason for your application’s slow down, use Java profiling tools to get a closer view of how your application is utilizing memory and other resources. Instead of going over your code to find the problems, simply use these tools, which will save you the time and effort needed to ensure that your code is up to par.

Java profilers give you a comprehensive set of statistics and other information you can use to trace your coding mistakes. Profilers also help you find what is causing performance slowdowns, multi-threading problems and memory leaks. In short, profilers give you a more stable and scalable application. And the best part is these Java profiling tools will give you a fine-grained analysis of every problem and how to solve them.

Java Profiling Metrics

If you use these tools early in your project and regularly – particularly in conjunction with other Java performance tools – you can create efficient, high-performing, fast and stable applications. Profiling tools will also help you identify critical issues before you deploy your app.

Some metrics you can find out using Java profiling tools include:

  • A method’s CPU time
  • Memory utilization
  • Information on method calls
  • What objects are created
  • What objects are removed from memory or garbage collected

The Java profiler Memory Analyzer (MAT) allows you to analyze the Java heap to search for memory leaks and lower memory use. You can easily analyze heap dumps even when they contain millions of objects, see the size of each object and learn why the garbage collector is not deleting specific objects from memory. MAT gives you a nifty report on these objects, helping you narrow down suspected memory leaks.

The Java Flight Recorder is a diagnostic and profiling tool that gives you more information about a running application, often with better data than other tools provide. Java Flight Recorder exposes APIs that third-party tools can build on and helps lower your total cost of ownership. Formerly a commercial feature of Oracle Java SE (it has been open source since JDK 11), Java Flight Recorder also gives you an easy way to detect memory leaks, find the classes responsible for these leaks and locate the leak to correct it.

Other tools you should know

  • NetBeans Profiler – supports Java SE, Java FX, EJB, mobile applications, and Web applications and could be used to monitor memory, threads and CPU resources.
  • JProfiler – a thread, memory and CPU profiling tool that can also be used to analyze memory leaks and other performance bottlenecks.
  • GC Viewer – an open-source tool that allows you to easily visualize information produced by JVM. You can use GC Viewer to see performance metrics related to garbage collection, including accumulated pauses, longest pauses and throughput. Aside from enabling you to run garbage collection, you can also use this tool to set up the preliminary heap size.
  • VisualVM – based on the NetBeans platform, VisualVM is an easily extensible tool using various plugins to give you detailed data on your applications for monitoring both remote and local apps. You can get memory profiling and manually run the garbage collector using this tool.
  • Patty in Action – another open-source profiling tool that gives you targeted, drilled-down profiling. You can use this tool to analyze heaps.
  • JRockit – a proprietary JVM from Oracle for Java SE applications, JRockit may be used to predict latency, visualize garbage collection and sort through memory-related issues.
  • GCeasy – GCeasy is a tool that analyzes logs related to garbage collection and is an easy way to detect memory leak problems when analyzing garbage collection logs. Another reason to use GCeasy is that it is available online; there is no need to install it on your machine to use it.

Java Memory Leaks: Solutions

Now that you know your program has memory leaks, you can use these tools to help fix leaks when they become a problem – preferably before leaks become an issue.

Using tools that can detect memory leaks

For our next example, we are going to use VisualVM.

Once you have downloaded and configured VisualVM, analyze your code by running your application with VisualVM attached to it. While the task that slows down your application is performed, watch the “Monitor” and “Memory pools” tabs. If you see spikes in memory usage in the Monitor tab, press the “Perform GC” button to trigger garbage collection, which should help decrease the amount of memory used.

If that does not work, switch to “memory pools” and look at the Old Gen section. If objects are leaking, you would see it here. Remember that active objects are placed in “Eden” and will then be moved to “Survivor.” Meanwhile, older objects are found in the ‘Old Gen’ pool.

At this point, you can go back to your code and comment out the irrelevant parts, up to the point where you notice that there is performance slow down or where it just stops. Repeat all these steps until you have eliminated all the leaks.

Re-enable parts of your code and check memory usage; if you find another leak, step into the method that caused it to help plug it. Keep narrowing it down until you have only a single class or method left. Validate that all file buffers are closed. Also, check all hashmaps to see if you are using them properly.

Using heap dumps

If you find the above-mentioned method too tedious, you might be able to reduce the time you spend on fixing memory leaks by using heap dumps. Heap dumps allow you to see the number of instances open and how much space these instances take up. If there is a specific instance that you want to investigate further, you can just double click on that particular instance and see more information. Heap dumps help you know just how many objects are generated by your application.
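Besides profiler buttons, a heap dump can also be requested programmatically through the JVM’s HotSpotDiagnostic MBean. This sketch writes a dump of live objects to a local .hprof file (the file name is arbitrary, and the API is HotSpot-specific):

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumpDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        File dumpFile = new File("demo-heap.hprof"); // must not already exist
        diagnostic.dumpHeap(dumpFile.getAbsolutePath(), true); // true = live objects only

        System.out.println("dump written: " + dumpFile.exists());
        dumpFile.delete(); // clean up the example file
    }
}
```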

Using Eclipse memory leak warnings

Another way to save time is to rely on Eclipse memory leak warnings. If your code is compliant with JDK 1.5 or higher, you can use Eclipse to warn you when a reference goes out of scope but the underlying resource persists and is never closed. Just be sure to enable leak detection in your project settings. Be aware that Eclipse is not a comprehensive solution: it does not detect all leaks and may miss some file closures, especially when your code is not JDK 1.5 (or higher) compliant, or when the file openings and closings are nested very deeply.

Additional Resources and Tutorials

Get more insights and information on how to avoid, detect and rectify memory leaks from the following resources and tutorials:

Summary

Memory leaks are certainly a concern for Java developers, but they’re not always the end of the world. Arm yourself with the know-how to prevent them before they occur and address them when they arise.

Stackify by Netreo is rapidly growing and so is the usage of our services. If your app is constantly changing or usage is increasing, it is critical that you have good tools in place for monitoring and finding the root cause of performance problems. Building better code can be easy in Java – Prefix provides an instant feedback loop to give you visibility into how your app is performing as you write code, and Retrace offers powerful application performance management (APM) for all your Java applications.

Related: 11 Simple Java Performance Tuning Tips

]]>
A Step By Step Guide to Tomcat Performance Monitoring https://stackify.com/tomcat-performance-monitoring/ Fri, 23 Jul 2021 11:00:00 +0000 https://stackify.com/?p=13971 Monitoring an application server’s metrics and runtime characteristics is essential for the health of the applications running on that server. Additionally, monitoring prevents or resolves potential issues in a timely manner. As far as Java applications go, Apache Tomcat is one of the most commonly used servers. Tomcat performance monitoring can be done with JMX beans or with a monitoring tool such as MoSKito or JavaMelody.

It’s important to know what is relevant to monitor and the acceptable values for the metrics being watched. In this article, you will take a look at:

  • How you can set up Tomcat memory monitoring
  • What metrics can be used to keep tabs on Tomcat performance

Tomcat Performance Metrics

When checking application performance, there are several areas that provide clues on whether everything is working within ideal parameters. Here are some of the key areas you’ll want to monitor:

Memory Usage

This reading is critical because running low on heap memory will cause your application to perform slower. It can even lead to OutOfMemory exceptions. In addition, using as little available memory as possible could decrease your memory needs and minimize costs.

Garbage Collection

You have to determine the right frequency for running garbage collection, since this is a resource-intensive process. Additionally, you need to see if a sufficient amount of memory has been freed up.

Thread Usage

Too many active threads at the same time can slow down the application or the whole server.

Request Throughput

Request throughput measures the number of requests the server can handle per unit of time and helps determine your hardware needs.

Number of Sessions

A similar measure to the request throughput, this metric identifies the number of sessions the server can support at a given time.

Response Time

Users are likely to quit if your system takes too long to respond to requests, therefore it is crucial to monitor the response time and investigate the potential causes of response delays.

Database Connection Pool

Monitoring the data connection pool can help determine the number of connections in a pool that your application needs.

Error Rates

This metric helps identify codebase issues.

Uptime

The uptime metric shows how long your server has been running or down.

Tomcat servers help you monitor performance by providing JMX beans for most of these metrics, which can be verified using a tool like Tomcat Manager or JavaMelody.

Next, we’re going to look at each area of Tomcat performance, any MBeans definitions that can help you monitor performance, and the means by which you can view metric values.

But first, let’s start with investigating a very simple application that we are going to use as an example to monitor.

Example Application to Monitor

For this example, we’re going to use a small web service application, built with Maven and Jersey, that uses an H2 database.

The application will manipulate a simple User entity:

public class User {
    private String email;
    private String name;

    // standard constructors, getters, setters
}

The REST web service defines two endpoints: one saves a new User to the database, and the other outputs the list of Users in JSON format:

@Path("/users")
public class UserService {
    private UserDAO userDao = new UserDAO();
    
    public UserService () {
        userDao.createTable();
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response addUser(User user) {
        userDao.add(user);
        return Response.ok()
            .build();
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<User> getUsers() {
        return userDao.findAll();
    }
}

Building a REST web service is outside the scope of this piece. For more information, check out our article on Java Web Services.

Also, note that the examples in this article are tested with Tomcat version 9.0.0.M26. For other versions, the names of beans or attributes may differ slightly.

Tomcat Performance Manager

One way of obtaining the values of the MBeans is through the Manager App that comes with Tomcat. This app is protected, so to access it, you need to first define a user and password by adding the following in the conf/tomcat-users.xml file:

<role rolename="manager-gui"/>
<role rolename="manager-jmx"/>
<user username="tomcat" password="s3cret" roles="manager-gui, manager-jmx"/>

The Manager App interface can be accessed at http://localhost:8080/manager/html and contains some minimal information on the server status and the deployed applications. Manager App also provides the capability of deploying a new application.

For the purpose of performance monitoring, one interesting feature of the Manager App is the ability to check for memory leaks.

The “Find Leaks” feature will look for memory leaks in all the deployed applications.

Information on the JMX beans can be found at http://localhost:8080/manager/jmxproxy. The information is in text format, as it is intended for tool processing.

To retrieve data about a specific bean, you can add parameters to the URL that represent the name of the bean and attribute you want:

http://localhost:8080/manager/jmxproxy/?get=java.lang:type=Memory&att=HeapMemoryUsage

Overall, this tool can be useful for a quick check, but it’s limited and unreliable, so not recommended for production instances.

Next, let’s move on to a tool that provides a friendlier user interface.

Enabling Tomcat Performance Monitoring with JavaMelody

If you’re using Maven, simply add the javamelody-core dependency to the pom.xml:

<dependency>
    <groupId>net.bull.javamelody</groupId>
    <artifactId>javamelody-core</artifactId>
    <version>1.69.0</version>
</dependency>

In this way, you can enable monitoring of your web application.

After deploying the application on Tomcat, you can access the monitoring screens at the /monitoring URL.

JavaMelody contains useful graphs for displaying information related to various performance measures, as well as a way to find the values of the Tomcat JMX beans.

Most of these beans are JVM-specific and not application-specific.

Let’s go through each of the most important metrics, see what MBeans are available and how to monitor them in other ways.

Memory Usage

Monitoring used and available memory is helpful for both ensuring proper functioning of the server and obtaining statistics. When the system can no longer create new objects due to a lack of memory, the JVM will throw an exception.

Note that a constant increase in memory usage without a corresponding rise in activity level is indicative of a memory leak.

Generally, it’s difficult to set a minimum absolute value for the available memory. You should instead base it on observing the trends of a particular application. Of course, the maximum value should not exceed the size of the available physical RAM.

The minimum and maximum heap size can be set in Tomcat by adding the parameters:

set CATALINA_OPTS=%CATALINA_OPTS% -Xms1024m -Xmx1024m

Oracle recommends setting the same value for the two arguments to minimize garbage collections.

To view the available memory, you can inspect the MBean java.lang:type=Memory with the attribute HeapMemoryUsage:

Mbean java

The MBeans page is accessible at the /monitoring?part=mbeans URL.

Also, the MBean java.lang:type=MemoryPool has attributes that show the memory usage for every type of heap memory.
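The same heap numbers exposed by these MBeans can also be read in-process through the platform MBean server. A minimal sketch (the percentage helper is just an illustration, not a Tomcat default):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapStats {
    // Pure helper so the threshold logic is easy to test.
    static double percentUsed(long used, long max) {
        return 100.0 * used / max;
    }

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        // Same data as the HeapMemoryUsage attribute of java.lang:type=Memory.
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Heap used: %d MB of %d MB (%.1f%%)%n",
                heap.getUsed() >> 20, heap.getMax() >> 20,
                percentUsed(heap.getUsed(), heap.getMax()));
    }
}
```

Run it inside the JVM you want to inspect; for a remote Tomcat, attach over JMX instead.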

Since this bean only shows the current status of the memory, you can check the “Used memory” graph of JavaMelody to see the evolution of memory usage over a period of time.

"Used memory" graph of JavaMelody

In the graph, you can see the highest memory-use reading was 292 MB, while the average is 202 MB of the allocated 1024 MB, which means more than enough memory is available for this process.

Note that JavaMelody runs on the same Tomcat server, which does have a small impact on the readings.

Garbage Collection

Garbage collection is the process through which unused objects are released to free up memory. If the system spends more than 98% of CPU time doing garbage collection and recovers less than 2% of the heap, the JVM will throw an OutOfMemoryError with the message “GC overhead limit exceeded.”

Such an error message usually indicates a memory leak, so it’s a good idea to watch for values approaching these limits and investigate the code.

To check these values, look at the java.lang:type=GarbageCollector MBean, particularly the LastGcInfo attribute, which shows information about the memory status, duration and thread count of the last execution of the GC.
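If you prefer to read the same counters programmatically, the java.lang.management API exposes one GarbageCollectorMXBean per collector. A small sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Each entry mirrors a java.lang:type=GarbageCollector MBean;
        // getCollectionCount/-Time match the CollectionCount and CollectionTime attributes.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```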

A full garbage collection cycle can be triggered from JavaMelody using the “Execute the garbage collection” link. Let’s look at the evolution of the memory usage before and after garbage collection:

Java Garbage collection graph

In the case of the example application, the GC is run at 23:30 and the graph shows that a large percentage of memory is reclaimed.

Thread Usage

To find the status of the in-use threads, Tomcat provides the ThreadPool MBean. The attributes currentThreadsBusy, currentThreadCount and maxThreads provide information on the number of threads currently busy, currently in the thread pool and the maximum number of threads that can be created.

By default, Tomcat uses a maxThreads number of 200.

If you expect a larger number of concurrent requests, you can increase the count by modifying the conf/server.xml file:

<Connector port="8080" protocol="HTTP/1.1"
  connectionTimeout="20000"
  redirectPort="8443" 
  maxThreads="400"/>

Alternatively, if the system performs poorly with a high thread count, you can adjust the value. What’s important here is a good battery of performance tests to put load on the system to see how the application and the server handle that load.
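To watch the pool from outside the server, you can read the same ThreadPool attributes over a remote JMX connection. This is only a sketch: the port (9010) and connector name (http-nio-8080) are assumptions, so substitute whatever your Tomcat instance actually exposes.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ThreadPoolCheck {
    // Pure helper: pool saturation as a percentage of maxThreads.
    static double busyPercent(int busy, int max) {
        return 100.0 * busy / max;
    }

    public static void main(String[] args) throws Exception {
        // Assumes Tomcat was started with JMX remote enabled on port 9010 (hypothetical).
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName pool = new ObjectName("Catalina:type=ThreadPool,name=\"http-nio-8080\"");
            int busy = (Integer) mbs.getAttribute(pool, "currentThreadsBusy");
            int max = (Integer) mbs.getAttribute(pool, "maxThreads");
            System.out.printf("Busy threads: %d/%d (%.1f%%)%n", busy, max, busyPercent(busy, max));
        }
    }
}
```

A value that stays close to 100% under normal load suggests the pool is undersized for your traffic.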

Request Throughput and Response Time

For determining the number of requests in a given period, you can use the MBean Catalina:type=GlobalRequestProcessor, which has attributes like requestCount and errorCount that represent the total number of requests performed and errors encountered.

The maxTime attribute shows the longest time to process a request, while processingTime represents the total time for processing all requests.
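Because processingTime is cumulative, a rough average response time can be derived from these two attributes. A tiny sketch with hypothetical readings:

```java
public class RequestStats {
    // processingTime is cumulative milliseconds; requestCount is total requests served.
    static long averageMs(long processingTime, long requestCount) {
        return requestCount == 0 ? 0 : processingTime / requestCount;
    }

    public static void main(String[] args) {
        // Hypothetical values read from the GlobalRequestProcessor MBean:
        System.out.println(averageMs(13_000, 200)); // 65 ms per request on average
    }
}
```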

Request Throughput and Response Time

The disadvantage of viewing this MBean directly is that it includes all the requests made to the server. To isolate the HTTP requests, you can check out the “HTTP hits per minute” graph of the JavaMelody interface.

Let’s send a request that retrieves the list of users, then a set of requests to add a user and display the list again:

To isolate the HTTP requests, you can check out the "HTTP hits per minute" graph of the JavaMelody interface.

You can see the number of requests sent around 17:00 displayed in the chart with an average execution time of 65 ms.

JavaMelody provides high-level information on all the requests and the average response time. However, if you want more detailed knowledge on each request, you can add another tool like Prefix for monitoring the performance of the application per individual web request.

Another advantage of Prefix is locating which requests belong to which application, in case you have multiple applications deployed on the same Tomcat server.

Using JavaMelody and Prefix

In order to use both JavaMelody and Prefix, you have to disable the gzip compression of the JavaMelody monitoring reports to avoid encoding everything twice. 

To disable the gzip compression, simply add the gzip-compression-disabled parameter to the MonitoringFilter class in the web.xml of the application:

<filter>
  <filter-name>javamelody</filter-name>
  <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
  <init-param>
    <param-name>gzip-compression-disabled</param-name>
    <param-value>true</param-value>
  </init-param>
</filter>

Next, download Prefix, then create a setenv.bat (setenv.sh for Unix systems) file in the bin directory of the Tomcat installation. In this file, add the -javaagent parameter to CATALINA_OPTS to enable Prefix profiling for the Tomcat server.

set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:"C:\Program Files (x86)\StackifyPrefix\java\lib\stackify-java-apm.jar"

Now you can access the Prefix reports at http://localhost:2012/, view the time at which each request was executed and how long it took:

access the Prefix reports at http://localhost:2012/ - and view the time at which each request was executed and how long it took

This is very useful for tracking down the cause of any lag in your application.

Database Connections

Connecting to a database is an intensive process, which is why it’s important to use a connection pool.

Tomcat provides a way to configure a JNDI data source that uses connection pooling by adding a Resource element in the conf/context.xml file:

<Resource
  name="jdbc/MyDataSource"
  auth="Container"
  type="javax.sql.DataSource"
  maxActive="100"
  maxIdle="30"
  maxWait="10000"
  driverClassName="org.h2.Driver"
  url="jdbc:h2:mem:myDb;DB_CLOSE_DELAY=-1"
  username="sa"
  password="sa"
/>

The MBean Catalina:type=DataSource can then display information regarding the JNDI data source, such as numActive and numIdle, representing the number of active or idle connections.

For the database connections to be displayed in the JavaMelody interface, you need to name the JNDI data source MyDataSource. Afterwards, you can consult graphs such as “SQL hits per minute,” “SQL mean times,” and “% of sql errors.”

For more detail on each SQL command sent to the database, you can consult Prefix for each HTTP request. A database icon marks requests that involve a database connection.

Prefix will display the SQL query that was generated by the application. Let’s see the data recorded by Prefix for a call to the addUser() endpoint method:

Prefix will display the SQL query that was generated by the application

The screenshot above shows the SQL code, as well as the result of the execution.

In case there is an SQL error, Prefix will show you this as well. For example, if someone attempts to add a user with an existing email address, this causes a primary key constraint violation:

Prefix will show you the SQL error

The tool shows the SQL error message, as well as the script that caused it.

Error Rates

Errors are a sign that your application is not performing as expected, so it’s important to monitor the rate at which they occur. Tomcat does not provide an MBean for this, but you can use other tools to find this information.

Let’s introduce an error in the example application by writing an incorrect name for the JNDI data source and see how the performance tools behave.

JavaMelody provides a “%of HTTP errors” chart which shows what percentage of requests at a given time resulted in an error:

JavaMelody provides a "%of HTTP errors" chart which shows what percentage of requests at a given time resulted in an error

The chart shows you that an error occurred, but it’s not very helpful in identifying the error. To do this, you can turn to Prefix, which highlights HTTP requests that ended with an error code:

If you select this request, Prefix will display details regarding the endpoint that was accessed and the error encountered:

Prefix will display details regarding the endpoint that was accessed and the error encountered

Using Prefix we see that the error happened when accessing the /users endpoint, and the cause is “MyyyDataSource is not bound in this context,” meaning the JNDI data source with the incorrect name was not found.

Conclusion

Tomcat performance monitoring is crucial in running your Java applications in production successfully. Tomcat memory monitoring ensures that your application responds to requests without significant delays and identifies any potential errors or memory leaks in your code. You need this data to keep track of production applications and proactively monitor any issues that may come up.

Tomcat anticipates this need by providing a series of performance-related JMX beans you can monitor. In addition, a production-grade APM tool such as Prefix can make the task easier and more scalable.

Prefix is a developer’s trusted sidekick that helps them write better code through web request tracing and other functions. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python.

See Prefix in action. Download for FREE today!

]]>
Ways to Ensure App Security With Java Features https://stackify.com/ways-to-ensure-app-security-with-java-features/ Fri, 12 Jun 2020 14:40:27 +0000 https://stackify.com/?p=29170 As important as adding new features, app developers need to start placing more emphasis on the security aspect of the applications they design. After all, more app features mean more data residing within an app. Without proper security controls in place, that data can be vulnerable to intruders. 

Java is one of the most secure and most popular programming languages in the world right now. It has consistently gained a positive reputation since the mid-1990s, especially after managing to eliminate the many security pitfalls and vulnerabilities of C and C++ languages. However, being the most secure coding language doesn’t exempt Java coding from possible cybersecurity threats. Developers still have to deliver secure codes and ensure that their apps are foolproof even when they are developed with Java features. These 10 tips will always come in handy to ensure app security with Java features:

  1. Use Java ME on Pi platforms

If you’re using the Raspberry Pi 4 as a platform to design a Java application, installing Java ME on your Pi will allow you to effortlessly embed, test, and tweak the app’s security features, even for devices with small memory space or disk footprint. Java ME is built with CLDC-based runtime, allowing  it to run on highly memory-constrained devices (as low as 1MB). You will need Java ME with CDC-based runtime if your device has a memory capacity of 10MB or more. Just ensure that the versions of Java ME you are using to develop your apps are built specifically for the Raspberry Pi.

  2. Handle serialization and deserialization with care

Serialization is useful in that it allows Java programmers to transform remote inputs/objects into transportable byte streams, which can then be saved to disk as fully endowed objects. The process can be reversed (through Java deserialization) to recreate the original object from the saved byte stream. 

However, Java deserialization can be vulnerable because it is impossible to tell, from a saved byte stream, what the original object was until after you decode it. That means if an attacker sends a serialized malicious object to your app, you have to decode it first, at which point you’ll already have instantiated it. The unknown data will already be running code in the JVM. 

These attacks could be prevented if it were possible to remove vulnerable classes from your classpath. The problem is, with the massive number of classes in the Java libraries and third-party libraries, plus the classes in your own code, it is almost impossible to guarantee the absence of vulnerable classes on your classpath.
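One mitigation available since Java 9 (JEP 290) is an ObjectInputFilter that whitelists the classes allowed to deserialize. A minimal sketch; the allow-list pattern below is only an example, so tighten it to your own model classes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.List;

public class SafeDeserialization {
    // Example allow-list: JDK collection/value classes pass; everything else is rejected (!*).
    static Object readWithFilter(byte[] bytes) throws IOException, ClassNotFoundException {
        ObjectInputFilter filter =
            ObjectInputFilter.Config.createFilter("java.util.*;java.lang.*;!*");
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            in.setObjectInputFilter(filter);
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new ArrayList<>(List.of("alice", "bob")));
        }
        // The list round-trips because java.util.* and java.lang.* are allowed.
        System.out.println(readWithFilter(bos.toByteArray()));
    }
}
```

Any serialized object whose class falls outside the pattern is rejected with an InvalidClassException before its code can run.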

  3. Use reliable third-party libraries

There are plenty of open source libraries consisting of class definitions (pre-written code) dedicated to Java development. They include logging libraries (e.g. Log4j, SLF4j, LogBack), parsing libraries (e.g. JSON), and general purpose libraries (e.g. Google Guava and the Apache Commons library), among others. 

But not all libraries are secure. To ensure that a library is reliable, consider:

  • Its documentation. If it is poorly documented, it probably isn’t secure.
  • Does it have an active support community behind it; maybe a developers’ forum where you can access help? 
  • How is the application programming interface (API) documentation? 
  • Is the library in active development and if yes, how stable/streamlined is it? 
  4. Use query parameterization 

Injection is one of the top app vulnerabilities today. Intruders use SQL injection in Java to chain SQL queries together, resulting in unsafe execution of the SQL. You can prevent it using query parameterization. The parameters keep intruders from altering the static part of a query, so they are unable to gain access to critical app information. 

To prevent injection in Java, a programmer prepares a statement through which the application accesses the database. If a query doesn’t go through this prepared statement, the app knows the SQL is unsafe to execute. Simply put, query parameterization means defining the full SQL code of a query up front and supplying the user input as parameters. It separates the SQL code from the parameter data so that the query can’t be hijacked.
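A sketch of such a prepared statement in JDBC; the users table and email column are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDao {
    // Parameterized query: the user-supplied email is bound, never concatenated into the SQL.
    static final String FIND_BY_EMAIL = "SELECT id, name FROM users WHERE email = ?";

    // Hypothetical lookup method; `conn` is any open JDBC connection.
    static String findUserName(Connection conn, String email) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(FIND_BY_EMAIL)) {
            ps.setString(1, email); // treated strictly as data, even if it contains SQL
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(FIND_BY_EMAIL);
    }
}
```

Because the driver sends the query text and the parameter values separately, input like `' OR '1'='1` can never change the query’s structure.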

  5. Use high-level authentication

Authentication mechanisms can make or break your application security. If the authentication is weak, your app will be vulnerable, and vice versa. As a developer and a user, you need to use strong passwords to safeguard app data. But because some users can be reckless with their passwords, it is your job as an app developer to come up with a password policy that forces users to be vigilant with their passwords. 

Another way of ensuring that user recklessness does not jeopardize the credibility of your app is to minimize storage of sensitive data within the app. You can even make it impossible for users to save their confidential data in your servers. 

Pro tip: High-level authentication also means minimizing how long login state persists. Make sure users can access public content without logging in all the time, and when they do log in, see that their login credentials are not retained afterwards. 

  6. Install tamper detection features

There are multiple Java features that will help you detect and thwart any tamper attempts early enough. Such tamper detection features will alert you in case someone is trying to modify or change your codes. Note that malicious programmers are always seeking to inject bad code into your application so that they can either ruin it for you or steal data. 

  7. Configure your XML-parsers 

This will help you prevent XML eXternal Entity (XXE) attacks. Sometimes intruders craft malicious XML documents and use them to read the content of selected files within your app. Note that XXE attacks are among the top vulnerabilities in Java programming. All an intruder needs is a naïvely configured implementation of your XML parsers, and they will easily get their malicious XML files parsed. 
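A common hardening recipe for the JDK’s DOM parser is to disallow DOCTYPE declarations and external entities entirely. A sketch; the feature URIs are those supported by the JDK’s built-in Xerces parser:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import org.w3c.dom.Document;

public class SafeXmlParser {
    static Document parse(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Forbid DOCTYPE declarations entirely: the most reliable XXE defense.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and suspenders: also disable external entity resolution.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        DocumentBuilder db = dbf.newDocumentBuilder();
        return db.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        Document doc = parse("<users><user>ann</user></users>");
        System.out.println(doc.getDocumentElement().getTagName()); // users
    }
}
```

With this configuration, any document containing a DOCTYPE (and thus any entity-based payload) is rejected at parse time.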

  8. Protect data using a VPN

A reputable VPN service encrypts your app’s data in transit, so intruders will not be able to steal, copy, or share your data. 

  9. Leverage the Java Security Manager

The Java Security Manager allows you to configure your own security policy. You can use it to create either:

  • A blacklist: This list contains the operations that your app cannot allow. Everything that is not on this list is allowed. You, therefore, need to understand all your app’s potential security threats and include them in the blacklist. 
  • A white list: This list contains only the operations that the app allows. All operations that are not in this list are, by default, disallowed.

Creating your own policy file and having the power to limit the necessary permissions makes it easy for you to run the application. The Java security manager basically puts you in charge of your app security and vulnerabilities. 
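As an illustration only, a whitelist-style policy might grant nothing beyond a data directory and one database host; the paths and hosts here are hypothetical:

```
// app.policy: hypothetical whitelist-style policy file
grant codeBase "file:/opt/myapp/-" {
    permission java.io.FilePermission "/opt/myapp/data/-", "read,write";
    permission java.net.SocketPermission "db.example.com:5432", "connect";
};
```

Start the JVM with -Djava.security.manager -Djava.security.policy==app.policy; the double equals sign replaces the default policy instead of appending to it.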

  10. A thorough quality assessment can help

Before launching your app, start by testing it against possible security vulnerabilities. It is better to discover security vulnerabilities yourself. Note that the success of your app is dependent on the end-user satisfaction, and users cannot be satisfied unless their data is safe. 

Conclusion 

The Java platform comes with tons of tested and proven built-in security features. The language is also frequently updated for new security vulnerabilities, and it includes a variety of tools for detecting and reporting security issues. That means that developing your app in Java will save you a lot of app security troubles. 

With that in mind, the reality today is that it is impossible to outthink all hackers in the world, even if you follow all app security tips during your coding process. Someone will eventually find a way around your codes no matter how secure you think they are. That is why it is important to constantly improve your app security features and reimagine possible vulnerabilities. It is also important to invest in security management solutions so that you can catch vulnerabilities and solve them in real-time.  

Stackify’s Application Performance Management tool, Retrace, provides support for your Java applications.  Try your free, 14 day trial of Retrace today. 

]]>
10 of the Most Popular Java Frameworks of 2020 https://stackify.com/10-of-the-most-popular-java-frameworks-of-2020/ Mon, 24 Feb 2020 18:32:03 +0000 https://stackify.com/?p=27799 There are plenty of reasons why Java, being one of the older software programming languages, is still widely used. For one, the immense power one wields when using Java is enough to make it their staple. Couple that with the possibilities that using good Java frameworks bring and you could lessen the turnaround time for big projects.

This post will show you some of the most popular Java frameworks of 2020. While there are more than just 10 such frameworks, the ones listed and discussed stick out. Features and ease of use are some of the rationales used for qualification.

What Are Java Frameworks?

For the sake of leveling the discussion before a load of information comes your way, let’s quickly settle the “what” question. Java frameworks are themselves software created to make programming with Java an easier endeavor. They come in sets of prewritten code that you can append to your own to create custom solutions to problems.

As you would expect, many iterations of such helpful frameworks would exist given how different every other programmer is from the rest. That said, let’s look at just 10 of the Java frameworks popular at the time of writing.

How one would pick out a single framework over the rest is purely a matter of preference. For the most part, that could be based on how much flesh the framework provides when you start new projects. The visual aspect comes into play too. How pretty can you make the UI using the framework? Depending on which you choose, tools within the framework can make it either easy or nearly impossible to create interfaces that final users will love.

Let’s crack open some of the Java frameworks and discover similarities or even differences, all in service of making coding with Java much easier.

1. Spring

Spring is a very lightweight Java framework, usable for pretty much any type of Java project. It’s a modular framework that you could use for any level or layer of a project. What makes it stick out is the fact that you can use it to work on not just one layer of a project but also the entire scope.

Spring Java Framework

If working in the MVC architecture is your thing, you’ll love Spring. The framework also has good security features that you can just call as already written functions. This makes processes such as authentication, verification, and validation so much easier to include (properly) into any project. Companies like Netflix and eBay use Spring.

Here are some advantages of using the Spring Java framework:

  • It’s lightweight and doesn’t require a web server besides the default container.
  • It supports backward compatibility.
  • It has annotation-style configuration compatibility.

2. Hibernate

Hibernate is an object-relational mapping (ORM) framework that makes common data handling mismatch cases a thing of the past. If you’re always working with relational databases, the Hibernate ORM framework could easily become your staple.

The framework comes stock with data handling muscle that bridges paradigm differences. Companies like IBM and Dell have used the Hibernate framework for their web applications.

Hibernate Java Framework

Advantages to using Hibernate include the following:

  • There’s a capability for strong data manipulation with little coding.
  • It’s perfect for OOP-type projects that require high productivity and portability at the same time.
  • Hibernate is open source. It won’t hurt your wallet to give it try on your next project.

3. JSF (JavaServer Faces)

It’s often a huge task for back end developers to get the front side of complex applications right. This is where JSF comes in handy.

The Oracle-built, stable framework comes with a component-based MVC environment to create beautiful “faces” for Java applications. It’s packed to the brim with libraries that allow developers to experiment with the front end—without introducing other frameworks for that part.

JavaServer Faces

Typical advantages of using JSF include but are not limited to the following:

  • JSF is a big chunk of what makes up Java EE. It’s here to stay and has massive support.
  • Back end developers have plenty of front end tools to use without too much coding.

4. GWT (Google Web Toolkit)

As can be expected from a Google product, GWT is open source. The main reason many developers’ work starts with GWT is that it’s easy to make beautiful UIs with little knowledge of front-end scripting languages. It basically turns Java code into browser-friendly packages.

Web apps such as Blogger, Google Analytics, and Google Adsense are all built using Java with the GWT framework. It’s fully featured and supported by a large group of developers dedicated to the framework, making it perfect for scale-sensitive application development.

Google Web Toolkit

Here are some advantages of using GWT:

  • It bridges the gap between back-end and front-end development.
  • The cross-browser compatibility comes in handy when deploying applications online.
  • Google APIs are easier to implement using GWT—and boy, are there plenty of them.

5. Struts (The Later Version)

Struts is an Apache-run enterprise-level framework perfect for web developers. It’s feature-rich and comes in two versions: Struts 1 and 2. The most widely used is Struts 2, which basically extends the first version with everything that comes with OpenSymphony web framework tools.

That means you get to apply new technologies such as Ruby and new JavaScript frameworks to extend your Java applications’ functionality.

Struts Java Framework

Interesting advantages of using the Struts Java framework include the following:

  • Struts fits into other frameworks seamlessly.
  • You can bring what you’re already working with and extend capabilities to those already in Struts.
  • You’ll enjoy drastically reduced development effort and time required, allowing you to make more applications rapidly.

6. Blade

The Blade framework is a very lightweight fork from the larger Let’s Blade project. If you’re predominantly a solo programmer (a freelancer, maybe) and speed is of the essence, Blade will have you making apps in no time.

Most of the work is already done for you when you start a Maven project. All you have to do is add the most current dependencies to your config file and you’re good to go. There’s no external server required, much like Node.js, from which a lot of inspiration was drawn when making the Blade framework.

Blade Framework for Java Software Development

Here’s why you’d use the Blade Java framework:

  • You can add extensions to make your coding faster.
  • The Jetty server comes handy in maintaining a lightweight environment.
  • It’s predominantly an MVC framework.

7. Play

The Play framework was created with the ease of web application development in mind. To use Play, one only needs a web browser, any text editor, and some inkling of how the command interface works on any OS. Because it’s so lightweight and because it has seamless NoSQL compatibility, it’s perfect for mobile development as well.

There are plenty of plugins and libraries from the communities around Java and web development in general, making it a good framework where resources are not exactly abundant.

Play Java Framework

Here’s why you might use Play for Java development:

  • Companies such as EA, LinkedIn, Verizon, and Samsung are using Play in their stacks.
  • The Play Java framework is RESTful by default.
  • Realtime development changes appear in the browser or a test device.
  • Cloud deployment options make it possible for teams spread across the world to participate in mission-critical projects.

8. Vaadin

There’s an idea out there that end users are petty, caring less about how an app was made (the code and sweat) than how it looks and feels when in use. If you agree with this notion, then the Vaadin Java framework will work just fine for you.

With Vaadin, a developer can focus on using pure Java to build apps, and the framework will handle the interface. That’s thanks to the built-in UI components that can be called as though they were functions. Like Cordova, a JavaScript framework for cross-platform development, Vaadin allows you to use a single codebase to deploy native mobile apps, as well as web or even desktop applications, after packaging.

Vaadin Java Framework

Here are some Vaadin Java framework advantages:

  • Responsive and good-looking CSS interfaces come as defaults for all instances.
  • You have built-in JavaBeans validation by annotation.
  • If data visualization is a major deliverable for a project, Vaadin will put your results on steroids.

9. Grails

Like most of Apache’s offerings, Grails is open source, and it comes bearing so much to ease a Java developer’s life.

To start with, it has markup views such that you can generate HTML code. The same applies for JSON and XML. An active community exists around Grails too. Working with the Groovy language, they continuously develop plugins you can use for free to enhance your own applications. To complete the front-side development ease, GORM (a data handling toolkit) allows developers to access and work with both relational and nonrelational datasets.

Grails Java Framework

Here’s why you should use Grails:

  • You won’t have to try out a new IDE; whatever you’re using now will do.
  • The gentle learning curve for Grails is good for time-sensitive projects.
  • The documentation is clear, and courses are often running to get you up and deploying in no time.

10. Dropwizard

Probably the least concerned with bells and whistles, Dropwizard is mostly made to get things done. Developers are able to deploy quicker due to less sophistication and an abundance of tools for making applications. It’s released under the Apache 2.0 license, making it open source, and it benefits from the large communities of users and contributors behind the proven Java libraries it builds on.

Dropwizard Java Framework

Here are some advantages of using Dropwizard:

  • It’s always getting better. Thousands of monthly pull requests make every glitch easy to navigate.
  • A step-by-step guide to Dropwizard can leave you with an app in less time than it takes to listen to most songs—five minutes!
  • Upon initiation, Jetty, a server, works from within the project. As a result, testing becomes easy.

Java Frameworks Similarities and Differences

At this point, you must have noticed the pattern common to almost all the Java frameworks covered above: they allow you to do so much more, regardless of how much you actually know and even if you have little coding experience. This way, one can spend time getting familiar with a framework rather than digging deeper into the language. Solid volumes of documentation are always available when working on new aspects of Java.

Use any of these frameworks alongside a dynamic code analysis tool like Prefix. This way, you not only get the inheritances from the framework; you also have a tool to profile your code as you write it. Prefix users identify errors and inefficiencies in their code before they push. You get all that along with other aspects that determine how good users will find your applications, regardless of the Java framework you fancy.

Depending on what makes life easier for you, you may find even the older Java frameworks more to your liking. At the end of the day, it’s not so much what you use to make an application but what it does to solve a problem that matters. Which framework will you integrate with Retrace for better performance and a close eye on what really matters when making software?

Explore Prefix
]]>
5 Best Security Practices for Tomcat Servers https://stackify.com/5-best-security-practices-for-tomcat-servers/ Mon, 13 Jan 2020 14:58:26 +0000 https://stackify.com/?p=27479 Tomcat servers are widely used application servers for today’s development architectures, popular for hosting Java-based applications. Below is a guide on best security practices for securing your Tomcat server environment.

1. Beware of Banner Grabbing

What is banner grabbing?

Banner grabbing is the process of gaining information from computer systems including services, open ports, version, etc.

How does banner grabbing affect Tomcat?

When you send a request to the server host via the telnet command, the response discloses the server name, port, and version. This makes it easy for an attacker to use the displayed information along with the web server’s error pages to discover vulnerabilities and attack.

2. Disable Weak Ciphers and Protocols

What is a cipher?

In cryptology, a cipher is an algorithm for encrypting and decrypting data. In other words, a cipher is a method of hiding words or text with encryption by replacing original letters with other letters, numbers and symbols through substitution or transposition.

Enabling strong cipher suites and protocols improves security and reduces the risk of cyber security attacks. For example, TLS 1.3 is faster and more secure than TLS 1.2; its advantages can improve both your server’s performance and security.

Steps to disable weak ciphers

Back up the server.xml file, then:

1. Open the file for editing.

2. Look for this line in the server.xml file:

  <!-- HTTPS Connector added by Automation API Installation -->

  <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" SSLEnabled="true"

  maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="conf/emweb_unsigned.keystore" keystorePass="empass" />

3. Add a ciphers attribute to the Connector to allow only the required ciphers:

  ciphers="<required cipher list>"

  For example, to disable the 3DES and RC4 ciphers, allow only the following suites:

ciphers="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA"

4. Restart the Tomcat server.

3. Enable Redirection and Fix Mixed Content

Redirecting HTTP traffic to HTTPS enhances security and ensures encryption, and your website is displayed with a padlock sign.

Below is the redirection code used in Tomcat:

  <security-constraint>
    <web-resource-collection>
      <web-resource-name>SECURE</web-resource-name>
      <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>

Mixed content issues occur when some of the content loaded on an HTTPS page is not served over HTTPS. Your website should not load any resources over plain HTTP. As a security best practice, always fix mixed content errors when you come across them.

Below is the image shown when there is a mixed content issue:
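One rough way to hunt for mixed content is to scan your pages for resources referenced over plain HTTP. The sketch below is only illustrative (a naive regex of ours that catches simple src/href attributes, not a complete scanner):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MixedContentCheck {
    private static final Pattern INSECURE =
            Pattern.compile("(?:src|href)=\"(http://[^\"]+)\"");

    // Returns every http:// resource URL referenced in the given HTML.
    public static List<String> insecureUrls(String html) {
        List<String> found = new ArrayList<>();
        Matcher m = INSECURE.matcher(html);
        while (m.find()) {
            found.add(m.group(1));
        }
        return found;
    }

    public static void main(String[] args) {
        String html = "<img src=\"http://example.com/logo.png\">"
                + "<script src=\"https://example.com/app.js\"></script>";
        System.out.println(insecureUrls(html)); // prints [http://example.com/logo.png]
    }
}
```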

4. Secure Tomcat Server

You may be using Tomcat servers in your production, backup, or test environments. Securing every environment is the best approach to follow. One way to secure your Tomcat server is to install an SSL certificate on it to protect all data in transit. Another is to remove unsecured connectors from $tomcat/server.xml.

5. Enable Security through Monitoring

Monitor the server’s availability, response time, and logs regularly in order to track performance, CPU utilization, disk utilization, memory utilization, and running services and processes.

Including an Application Performance Management tool, such as Stackify Retrace, allows users to accelerate application performance with centralized logging and error tracking.

To learn more about monitoring Tomcat, check out our Step by Step Guide to Tomcat Performance Monitoring. 

]]>
What Are Java Agents and How to Profile With Them https://stackify.com/what-are-java-agents-and-how-to-profile-with-them/ Tue, 03 Dec 2019 15:48:01 +0000 https://stackify.com/?p=27150 Java agents are a special type of class which, by using the Java Instrumentation API, can intercept applications running on the JVM, modifying their bytecode. Java agents aren’t a new piece of technology. On the contrary, they’ve existed since Java 5. But even after all of this time, many developers still have misconceptions about this feature—and others don’t even know about it.

In this post, we remedy this situation by giving you a quick guide on Java agents. You’ll understand what Java agents are, what the benefits of employing them are, and how you can use them to profile your Java applications. Let’s get started.

Defining Java Agents

Java agents are part of the Java Instrumentation API. So to understand agents, we need to understand what instrumentation is.

Instrumentation, in the context of software, is a technique used to change an existing application by adding code to it. You can perform instrumentation both manually and automatically. You can also do it both at compile time and at runtime.

So, what is instrumentation good for?  It’s meant to allow you to change code, altering its behavior, without actually having to edit its source code file. This can be extremely powerful and also dangerous. What you can do with that is left to you. The possibilities are endless. Aspect-Oriented Programming? Mutation testing? Profiling? You name it.

With that out of the way, let’s focus again on Java agents. What are these things, and how do they relate to instrumentation?

In short, a Java agent is nothing more than a normal Java class. The difference is that it has to follow some specific conventions. The first convention has to do with the entry point for the agent. The entry point consists of a method called “premain,” with the following signature:

 public static void premain(String agentArgs, Instrumentation inst) 

If the agent class doesn’t have the “premain” method with the signature above, it should have the following, alternative method:

 public static void premain(String agentArgs) 

As soon as the JVM initializes, it calls the premain method of every agent. After that, it calls the main method of the Java application as usual. Every premain method has to complete normally for the application to proceed to the startup phase.

The agent should have yet another method called “agentmain.” What follows are the two possible signatures for the method:

 public static void agentmain(String agentArgs, Instrumentation inst) 
 public static void agentmain(String agentArgs) 

Such methods are used when the agents are called not at JVM initialization, but after it.

How to Write a Java Agent

A Java agent, in practice, is a special type of .jar file. As we’ve already mentioned, to create such an agent, we’ll have to use the Java Instrumentation API. Such an API isn’t new, as we’ve also mentioned before.

The first ingredient we need to create our agent is the agent class. The agent class is just a plain Java class that implements the methods we’ve discussed in the previous section.

To create our Java agent, we’re going to need a sample project. So, we’re going to create a silly, simple app that does just one thing: print the first n numbers of the Fibonacci sequence, n being a number supplied by the user. As soon as the application is up and running, we’re going to use a little bit of Java instrumentation to perform some basic profiling.

Building Our Sample App

For this project, I’m going to use the free community edition of the IntelliJ IDEA, but feel free to use whatever IDE or code editor you feel most comfortable using. So, let’s begin.

Open the IDE and click on “Create New Project,” as you can see in the following picture:

In the “create new project” window, select “Java” as the type of the project and click on “Next:”

Then, on the next screen, mark the “Create project from template” box, select the “Command Line App” template for the application and click on “Next” again:

After that, the only thing that’s left is to configure the name and location for the project and click on “Finish:”

With our project created, let’s create the Fibonacci logic. Copy the following content and paste on your main class:

 package com.company;
 import java.util.Scanner;

 public class Main {

     public static void main(String[] args) {
         Scanner scanner = new Scanner(System.in);
         System.out.println("How many items do you want to print?");
         int items, previous, next;
         items = scanner.nextInt();
         previous = 0;
         next = 1;

         for (int i = 1; i <= items; ++i)
         {
             System.out.println(previous);
             int sum = previous + next;
             previous = next;
             next = sum;
         }
     }
 } 

The application is super simple. It starts asking the user for the number of items they wish to print. Then, it generates and prints the Fibonacci sequence with as many terms as the number the user informed.

Of course, the application is very naive. It doesn’t check for invalid items, for one. Another problem is that if the user enters a large enough value, it causes the program to overflow the upper limit of int. You could use long or even the BigInteger class to handle larger inputs. None of that matters for our example, though, so feel free to add those improvements as an exercise, if you wish to do so.
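As a hedged sketch of that suggested improvement (the class and method names are ours), the same loop can run on BigInteger, which grows as needed; for comparison, the int version overflows past fib(46) and a long version past fib(92):

```java
import java.math.BigInteger;

public class BigFibonacci {
    // Returns the n-th Fibonacci number, with fib(0) = 0 and fib(1) = 1.
    public static BigInteger fib(int n) {
        BigInteger previous = BigInteger.ZERO;
        BigInteger next = BigInteger.ONE;
        for (int i = 0; i < n; i++) {
            BigInteger sum = previous.add(next);
            previous = next;
            next = sum;
        }
        return previous;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // prints 12586269025
    }
}
```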

Starting Our Java Agent

Our sample application is up and running, so we’re ready to create our Java agent. Repeat the process of creating a new project. Call it “MyFirstAgentProject.”

Create a new class by going to File > New Java Class, like in the following image:

Then, name the class “MyFirstAgent” and press enter. After that, replace the content of the created file with what follows:

 package com.company;
 import java.lang.instrument.Instrumentation;

 public class MyFirstAgent {

     public static void premain(String agentArgs, Instrumentation inst) {
         System.out.println("Start!");
     }
 } 

Now we’ll have to create a custom manifest. Let’s start by adding Maven support to our project. Right-click on the “MyFirstAgentProject” module. Then, click on “Add Framework Support.”

On the “Add Frameworks Support” window, check “Maven” and click on OK. After that, IntelliJ will create a pom.xml file and open it so you can edit. Add the following content to the pom.xml file and save it:

 <build>
     <plugins>
         <plugin>
             <groupId>org.apache.maven.plugins</groupId>
             <artifactId>maven-jar-plugin</artifactId>
             <version>3.2.0</version>
             <configuration>
                 <archive>
                     <manifestFile>src/main/resources/META-INF/MANIFEST.MF</manifestFile>
                 </archive>
             </configuration>
         </plugin>
     </plugins>
 </build>
 <properties>
     <maven.compiler.source>1.6</maven.compiler.source>
     <maven.compiler.target>1.6</maven.compiler.target>
 </properties>

In the code above, we add the “maven-jar-plugin” plugin to our pom file and configure the location of our manifest file. Now we need to create it. To do that, copy the following content, paste it into a new file, and save it as “src/main/resources/META-INF/MANIFEST.MF.”

 Manifest-Version: 1.0 
 Premain-Class: com.company.MyFirstAgent
 Agent-Class: com.company.MyFirstAgent

We’re almost there! With the manifest creation out of the way, let’s now perform a maven install. On the “Maven” tool window, expand the “Lifecycle” folder, right-click on install and then check the “Execute After Build” option.

With that setting, the IDE will perform a maven install every time we build the application. So, let’s build it! Go to Build > Build Project, or use the CTRL + F9 shortcut. If everything went well, you should be able to find the resulting jar file, under “target.”

We’ve successfully finished creating the jar file for our first Java agent. Now, let’s test it!

Loading the Agent

We’re now going to use our agent, and to do that, we need to load it. There are two ways to load a Java agent, and they are called static and dynamic loading. Static loading happens before the application runs. It invokes the premain method, and it’s activated by using the -javaagent option when running the application. Dynamic loading, on the other hand, is activated with the application already running, which is done using the Java Attach API.
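For reference, outside an IDE static loading is just a flag on the java command line; a hypothetical invocation (your jar names and paths will differ) looks like this:

```
java -javaagent:/path/to/MyFirstAgentProject.jar -jar SampleApp.jar
```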

Here we’re going to use static loading. With the sample application open in IntelliJ IDEA, go to Run > Edit Configurations…, as you can see in the image below:

A new window will be shown. There, you can, as the name suggests, configure many different options regarding the running and debugging of the application. What you have to do now is to add the -javaagent option to the VM options field, passing the path to the agent’s jar file as an argument to it.

After configuring the path, you can click on OK and then run the project as usual. If everything went right, that’s the output you should see:

As you can see, the message “Start!” that we defined using the premain method was printed just before the main method of the application was run. That means our agent was successfully loaded.

 Start!
 How many items do you want to print?
 10
 0
 1
 1
 2
 3
 5
 8
 13
 21
 34

 Process finished with exit code 0 

What Comes Next?

You might wonder if all that we’ve seen is too much trouble for little result. The answer to that is a firm “no.” First, you must keep in mind that our example here is the equivalent of a “Hello world” for Java agents. Things can get—and they do get—a lot more complex than this. As we’ve already mentioned, there are very sophisticated tools that make use of the Java Instrumentation API.

Second, keep in mind that there are many additional tools you can use to really extend the power of Java instrumentation to new levels, allowing you to do things like bytecode manipulation, for instance. Also, consider that much of the heavy lifting has already been done for you regarding profiling. There are a lot of powerful tools out there, coming in different types that cater to virtually all profiling needs you might have.

]]>
Spring AOP Tutorial With Examples https://stackify.com/spring-aop-tutorial-with-examples/ Tue, 26 Nov 2019 17:12:09 +0000 https://stackify.com/?p=27129 You may have heard of aspect-oriented programming, or AOP, before. Or maybe you haven’t heard about it but have come across it through a Google-search rabbit hole. You probably do use Spring, however. So you’re probably curious how to apply this AOP to your Spring application.

In this article, I’ll show you what AOP is and break down its key concepts with some simple examples. We’ll touch on why it can be a powerful way of programming and then go into a contrived, but plausible, example of how to apply it in Spring.  All examples will be within a Spring application and written in JVM Kotlin, mainly because Kotlin is one of my favorite useful languages.

Quick Description of AOP

“Aspect-oriented programming” is a curious name. It comes from the fact that we’re adding new aspects to existing classes. It’s an evolution of the decorator design pattern. A decorator is something you hand-code before compiling, using interfaces or base classes to enhance an existing component. That’s all nice and good, but aspect-oriented programming takes this to another level. AOP lets you enhance classes with much greater flexibility than the traditional decorator pattern. You can even do it with third-party code.
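To see what AOP improves on, here is a minimal hand-coded decorator in plain Java (all names are illustrative): each wrapped component must be built explicitly, interface by interface, which is exactly the busywork AOP removes.

```java
interface Command {
    String execute(String input);
}

class PerformACommand implements Command {
    public String execute(String input) {
        return "this is a result for " + input;
    }
}

// The decorator wraps an existing Command and adds logging around it.
class LoggingCommand implements Command {
    private final Command inner;

    LoggingCommand(Command inner) {
        this.inner = inner;
    }

    public String execute(String input) {
        String output = inner.execute(input);
        System.out.println("called with '" + input + "', returned '" + output + "'");
        return output;
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        // Enhancing the component means explicitly wrapping it by hand.
        Command command = new LoggingCommand(new PerformACommand());
        command.execute("whatever");
    }
}
```

AOP achieves the same decoration declaratively and can apply it to whole swaths of classes at once.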

The Parts of Spring AOP

In AOP, you have a few key parts:

  • Core component. This is the class or function you want to alter. In Spring AOP, we’re always altering a function. For example, we may have the following command:

    @Component
    class PerformACommand {
        @Logged
        fun execute(input: String): String {
            return "this is a result for $input"
        }
    }

  • Aspect. This is the new logic you want to add to the existing class, method, sets of classes, or sets of methods. A simple example is adding log messages to the execution of certain functions:

    @Aspect
    @Component
    class LoggingAspect {

        @Around("@annotation(Logged)")
        fun logMethod(joinPoint: ProceedingJoinPoint): Any? {
            val output = joinPoint.proceed()
            println("method '${joinPoint.signature}' was called with input '${joinPoint.args.first()}' and output '$output'")
            return output
        }
    }

  • JoinPoint. OK, now the terms get weird. A JoinPoint is the place within the core component where we’ll be adding an aspect. I’m putting this term here mainly because you’ll see it a lot when researching AOP. But for Spring AOP, the JoinPoint is always a function execution. In this example, it will be any function with an “@Logged” annotation:

    @Target(AnnotationTarget.FUNCTION)
    annotation class Logged

  • Pointcut. The pointcut is the logic by which an aspect knows to intercept and decorate the JoinPoint. Spring has a few annotations to represent these, but by far the most popular and powerful one is “@Around.” In this example, the aspect is looking for the annotation “Logged” on any functions:

    @Around("@annotation(Logged)")

If you wire the example code up to a Spring application and run:

 command.execute("whatever")

You’ll see something like this in your console: “method 'String com.example.aop.PerformACommand.execute(String)' was called with input 'whatever' and output 'this is a result for whatever'”

Spring AOP can achieve this seeming magic by scanning the components in its ApplicationContext and dynamically generating code behind the scenes. In AOP terms, this is called “weaving.”

Why AOP Is Useful

With that explanation and examples providing understanding, let’s move on to the favorite part for any programmer. That’s the question “why?” We love this question as developers. We’re knowledge workers who want to solve problems, not take orders. So, what problems does AOP solve in Spring? What goals does it help one achieve?

Quick Code Reuse

For one thing, adding aspects lets me reuse code across many, many classes. I don’t even have to touch much of my existing code. With a simple annotation like “Logged,” I can enhance numerous classes without repeating that exact logging logic.

Although I could inject a logging method into all these classes, AOP lets me do this without significantly altering them. This means I can add aspects to my code in large swaths quickly and safely.

Dealing With Third-Party Code

Let’s say I normally want to inject shared behavior into a function that I then use in my core components. If my code is provided by a third-party library or framework, I can’t do that! I can’t alter the third-party code’s behavior. Even if it’s open source, it’ll still take time to understand and change the right places. With AOP, I just decorate the needed behavior without touching the third-party code at all. I’ll show you exactly how to do that in Spring with the blog translator example below.

Cross-Cutting Concerns

You’ll hear the term “cross-cutting concerns” a lot when researching AOP. This is where it shines. Applying AOP lets you stringently follow the single responsibility principle. You can surgically slice out the pieces of your core components that aren’t connected to their main behavior: authentication, logging, tracing, error handling, and the like. Your core components will be much more readable and changeable as a result.

Example: A Blog Translator

Although I showed snippets of a logging aspect earlier, I want to walk through how we might think through a more complex problem and how we can apply Spring AOP to solve it.

As a blog author, imagine if you had a tool that would automatically check your grammar for you and alter your text, even as you write! You download this library and it works like a charm. It checks grammar differently based on what part of the blog post you’re on: introduction, main body, or conclusion. It heavily encourages you to have all three sections in any blog post.

You’re humming along, cranking out some amazing blog posts, when a client commissions a request: can you start translating your blogs to German to reach our German audience better? So you scratch your head and do some research. You stumble upon a great library that lets you translate written text easily. You tell the client, “Yes, I can do that!” But now you have to figure out how to wire it into your grammar-checking library. You decide this will be a great case to try out Spring AOP to combine your grammar tool with this translation library.

Wiring It Up

First, we want to add the Spring AOP dependency to our Spring Boot project. We have a “build.gradle” file to put this into:

 dependencies {
     implementation("org.springframework.boot:spring-boot-starter")
     implementation("org.springframework.boot:spring-boot-starter-aop")
 }

Analyzing Our Core Components

Before we implement anything, we take a close look at our tool’s codebase. We see that we have three main components, one for each section of a blog post:

 class IntroductionGrammarChecker {
     fun check(input: BlogText): BlogText {
         ...
     }
 }

 class MainContentGrammarChecker {
     fun check(input: BlogText): BlogText {
         ...
     }
 }

 class ConclusionGrammarChecker {
     fun check(input: BlogText, author: Author): BlogText {
         ...
     }
 }

Hmm…it looks like each one produces the same output: a BlogText. We want to alter the output of each of these checkers to produce German text instead of English. Looking closer, we can see that they all share a similar signature built around BlogText. Let’s keep that in mind when we figure out our pointcut.

The Core Logic

Next, let’s bang out the core logic of our aspect. It’ll take the output of our core component, send it through our translator library, and return that translated text:

 @Aspect
 @Component
 class TranslatorAspect(val translator: Translator) {

     @Around("execution(BlogText check(BlogText))")
     fun around(joinPoint: ProceedingJoinPoint): BlogText {
         val preTranslatedText = joinPoint.proceed() as BlogText
         val translatedText = translator.translate(preTranslatedText.text, Language.GERMAN)
         return BlogText(translatedText)
     }
 }

Note a few things here. First, we annotate it with “@Aspect.” This cues Spring AOP in to treat it appropriately. The “@Component” annotation ensures Spring Boot will see it in the first place.

We also use the “@Around” pointcut, telling it to apply this aspect to all classes that have a method signature of “check(BlogText): BlogText.” There are numerous different expressions we can write here. See this Baeldung article for more. I could’ve used an annotation, like the “@Logged” above, but this way I don’t have to touch the existing code at all! That’s very useful if you’re dealing with third-party code that you can’t alter.

The method signature of our aspect always takes in a ProceedingJoinPoint, which has all the info we need to run our aspect. It also contains a “proceed()” method, which will execute the inner component’s function. Inside the function, we proceed with the core component, grabbing its output and running it through the translator, just as we planned. We return it from the aspect, with anything that uses it being none the wiser that we just translated our text to German.

A Trace of Something Familiar

Now that you’re familiar with Spring AOP, you may notice something about the “@Logged” annotation. If you’ve ever used custom instrumentation for Java in Retrace, you may notice it looks a lot like the “@Trace” annotation.

The similarity of “@Logged” to “@Trace” is not by coincidence. “@Trace” is a pointcut! Although Retrace does not use Spring AOP per se, it does apply many AOP principles to how it lets you configure instrumentation.

The Final Aspect

We’ve only touched the surface of AOP in Spring here, but I hope you can still see its power. Spring AOP gives us a nonintrusive way of altering our components, even if we don’t own the code for that component! With this, we can follow the principles of code reuse. We can also implement wide-sweeping, cross-cutting concerns with just a few lines of code. So, find a place in your Spring application where this can bring value. I highly recommend starting with something like “@Logged” or “@Trace” so you can easily measure and improve your system performance.

]]>