Enterprise Java

Spring Data Pivotal GemFire Tutorial

1. Spring Data Pivotal GemFire – Introduction

In this post, we feature a comprehensive tutorial on Spring Data for Pivotal GemFire. Pivotal GemFire is an in-memory data grid solution powered by Apache Geode. Applications built with Pivotal GemFire can scale easily across distributed server nodes, and GemFire ensures data consistency irrespective of the distribution architecture. This enables applications to serve real-time data to millions of users.
The Spring Framework, on the other hand, is a widely used framework that provides the foundation for building enterprise-scale applications. In this article, we discuss how Spring Data, one of the many modules of the Spring Framework, integrates with Pivotal GemFire to speed up development and bring the power of the Spring Framework into Pivotal GemFire applications.

2. Prerequisites for the tutorial

Before we jump into the tutorial, it is necessary to understand the assumptions made and the tools required to proceed. Here, I assume that you have:

  • A basic understanding of data access with Pivotal GemFire
  • A basic understanding of the Spring Framework
  • A basic understanding of the Pivotal GemFire APIs

Throughout the tutorial, we will be using the below tools and specifications:

  • JDK 1.8
  • Spring Tool Suite/ IntelliJ
  • Maven 3.2+

3. Getting started

To begin with the project, let us create a Maven project and add the below dependency.

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.springframework.samples</groupId>
  <artifactId>pivotal_tutorial</artifactId>
  <version>0.0.1-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.0.3.RELEASE</version>
    </parent>

    <properties>
        <spring-shell.version>1.2.0.RELEASE</spring-shell.version>
    </properties>

    <repositories>
        <repository>
            <id>spring-releases</id>
            <url>https://repo.spring.io/libs-release</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-gemfire</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.shell</groupId>
            <artifactId>spring-shell</artifactId>
            <version>${spring-shell.version}</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

Save the project with these dependencies and allow it to build. The above file contains the necessary Spring Boot dependencies and also the Spring Data dependency for Pivotal Gemfire. Once the project has downloaded the relevant dependencies, you can proceed with the coding part.

Spring Data for Pivotal GemFire helps us configure access to distributed data stores. This combination reduces hits to the disk and maintains better response times by using in-memory caching. The tutorial takes you through the complete setup and configuration process.

4. Creating an entity

To begin with, the primary requirement is to create an entity. Let us create a simple Person entity that holds the details of a person, such as name and age. To create such an entity, use the code below.

PersonEntity.java

package pivotal_tutorial;

import java.io.Serializable;

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.PersistenceConstructor;
import org.springframework.data.gemfire.mapping.annotation.Region;

@Region(value = "People")
public class PersonEntity implements Serializable {

    @Id
    private final String name;

    private final int age;

    @PersistenceConstructor
    public PersonEntity(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }

    @Override
    public String toString() {
        return String.format("%s is %d years old", getName(), getAge());
    }
}

As you can see, there are two attributes – name and age – along with a persistence constructor. Notice that the class has been annotated with @Region. This annotation tells the framework to store instances of this class in a region with a specific name. When it reads @Region("People"), it understands that instances of PersonEntity have to be stored in the region named People. The field annotated with @Id serves as the unique key for each instance.

Note that Pivotal GemFire does not have any automated key generation system in place. Hence, before actually proceeding with data persistence, you need to ensure that the id field is set.

5. Creating simple queries

Spring Data combined with the Pivotal GemFire framework is all about storing and persisting data, and it focuses on managing access to that data. Additionally, it inherits the powerful features of the Spring Data framework, such as derived queries. The advantage is that you no longer need to learn the Pivotal GemFire query language: all you need to do is write a few Java method signatures, and the framework builds the queries in the background.
Let us start by creating such methods for the entity shown above.

PersonRepo.java

package pivotal_tutorial;

import org.springframework.data.gemfire.repository.query.annotation.Trace;
import org.springframework.data.repository.CrudRepository;

public interface PersonRepo extends CrudRepository<PersonEntity, String> {

    @Trace
    PersonEntity findByName(String name);

    @Trace
    Iterable<PersonEntity> findByAgeGreaterThan(int age);

    @Trace
    Iterable<PersonEntity> findByAgeLessThan(int age);

    @Trace
    Iterable<PersonEntity> findByAgeGreaterThanAndAgeLessThan(int greaterThanAge, int lessThanAge);
}

In the above code, notice that the interface extends CrudRepository, an interface provided by the Spring Data framework. The annotation @Trace enables tracing of the queries that Pivotal GemFire generates for the annotated methods in the backend. The methods themselves are quite simple to understand; a brief explanation is provided below:

  • findByName: Finds the entity whose name matches the value provided as an argument.
  • findByAgeGreaterThan: Finds the entities with age greater than the provided value. Returns an iterable collection of PersonEntity instances.
  • findByAgeLessThan: Finds the entities with age less than the provided value. Returns an iterable collection of PersonEntity instances.
  • findByAgeGreaterThanAndAgeLessThan: Finds the entities with age greater than the first provided value and less than the second. Returns an iterable collection of PersonEntity instances.
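As a rough illustration (the exact OQL that Spring Data for Pivotal GemFire generates may differ), the derived method names map to queries along these lines:

```java
// Derived method name                        Approximate OQL (illustrative only)
// findByName("John Doe")                     SELECT * FROM /People x WHERE x.name = $1
// findByAgeGreaterThan(18)                   SELECT * FROM /People x WHERE x.age > $1
// findByAgeLessThan(30)                      SELECT * FROM /People x WHERE x.age < $1
// findByAgeGreaterThanAndAgeLessThan(12, 30) SELECT * FROM /People x WHERE x.age > $1 AND x.age < $2
```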

6. Creating an application

Now that we have our entity and queries ready, let us create the actual application that performs the data transactions. The application instantiates the entities and transacts the data. The code below creates the application with all the necessary components.

App.java

package pivotal_tutorial;
import static java.util.Arrays.asList;
import static java.util.stream.StreamSupport.stream;

import java.io.IOException;

import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;

@SpringBootApplication
@ClientCacheApplication(name = "AccessingDataGemFireApplication", logLevel = "error")
@EnableEntityDefinedRegions(basePackageClasses = PersonEntity.class,
  clientRegionShortcut = ClientRegionShortcut.LOCAL)
@EnableGemfireRepositories
public class App {

    public static void main(String[] args) throws IOException {
        SpringApplication.run(App.class, args);
    }

    @Bean
    ApplicationRunner run(PersonRepo personRepository) {

        return args -> {

            PersonEntity abk = new PersonEntity("Abhishek Kothari", 26);
            PersonEntity sumit = new PersonEntity("Sumit Punjabi", 25);
            PersonEntity john = new PersonEntity("John Doe", 34);

            System.out.println("Entering into accessing data from Pivotal GemFire framework");

            asList(abk, sumit, john).forEach(person -> System.out.println("\t" + person));

            System.out.println("Saving Abhishek, Sumit and John to Pivotal GemFire...");

            personRepository.save(abk);
            personRepository.save(sumit);
            personRepository.save(john);

            System.out.println("Lookup each person by name...");

            asList(abk.getName(), sumit.getName(), john.getName())
              .forEach(name -> System.out.println("\t" + personRepository.findByName(name)));

            System.out.println("Query adults (over 18):");

            stream(personRepository.findByAgeGreaterThan(18).spliterator(), false)
              .forEach(person -> System.out.println("\t" + person));

            System.out.println("Query teens (less than 30):");

            stream(personRepository.findByAgeLessThan(30).spliterator(), false)
              .forEach(person -> System.out.println("\t" + person));

            System.out.println("Query teens (between 12 and 30):");

            stream(personRepository.findByAgeGreaterThanAndAgeLessThan(12, 30).spliterator(), false)
              .forEach(person -> System.out.println("\t" + person));
        };
    }
}

The above class contains calls to all of the query methods defined for the entity. Notice that the class has been annotated with numerous annotations. A description of each annotation is provided below:

@SpringBootApplication: This annotation specifies that the class is to be treated as the starting point of a Spring Boot application.
@ClientCacheApplication: This annotation specifies that the application should run as a GemFire cache client, with client-side caching of data handled by Spring Data in the backend.
@EnableEntityDefinedRegions: This annotation specifies the entities whose regions need to be created and made available on the client. It basically does the task of exposing the regions of the corresponding entities to the application.
@EnableGemfireRepositories: This is the most important annotation, and its purpose is clear from the name itself. It is mandatory for enabling the GemFire repositories on the start of the Spring application. It scans the current package for interfaces extending one of the Spring Data repository interfaces, such as PersonRepo.

Occasionally, there might be a case where we do not wish to expose all the Spring Data repositories to the GemFire framework. This can be prevented by explicitly restricting the scan to specific repository interfaces using the annotation's basePackageClasses property, for example basePackageClasses = PersonRepo.class.
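A minimal sketch of such a restricted scan, assuming we only want the PersonRepo repository from this tutorial to be registered (the configuration class name RepositoryConfig is made up for illustration):

```java
package pivotal_tutorial;

import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;

// Limit the repository scan to the package containing PersonRepo,
// rather than scanning everything under the application class's package.
@Configuration
@EnableGemfireRepositories(basePackageClasses = PersonRepo.class)
public class RepositoryConfig {
}
```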
Note that in the region definition we have specified a LOCAL client region. This is important: in order to store data, Pivotal GemFire requires at least one region.


7. Cache Configuration

There are three different cache configurations possible in Pivotal GemFire. Depending on the topology we plan to use, we can apply one of the following annotations to configure caching and data persistence with the Pivotal GemFire backend through the Spring Data framework:

@ClientCacheApplication: Caches data on the client side in local storage
@PeerCacheApplication: Configures the application as a peer member that caches data among peers
@CacheServerApplication: Configures the application as a cache server that caches data on the server side and accepts client connections
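As an example, a peer member could be configured with a sketch like the following (the class name PeerApp is hypothetical; a peer joins the distributed system directly rather than connecting to a cache server as a client):

```java
package pivotal_tutorial;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.PeerCacheApplication;

// Runs this application as a peer member of the GemFire distributed
// system rather than as a client of a cache server.
@SpringBootApplication
@PeerCacheApplication(name = "PeerApp", logLevel = "error")
public class PeerApp {

    public static void main(String[] args) {
        SpringApplication.run(PeerApp.class, args);
    }
}
```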

Pivotal GemFire supports multiple caching topologies, such as client/server, peer-to-peer, and multi-site WAN arrangements. In the client/server topology, clients cache the data they query while the server caches all of it. In the peer-to-peer topology, every member in the cluster caches data and provides it to its peers. In a WAN topology, entire clusters at different sites replicate data between each other. In the above case, we have used client caching, and hence once a query has been executed, the caching is done entirely on the client side. We have specified the LOCAL region shortcut for the same reason.

We connected the entity to a region named People using the @Region annotation from Spring Data for Pivotal GemFire. In the application layer, the @EnableEntityDefinedRegions annotation then creates the matching client region for us; internally, Spring Data registers a ClientRegionFactoryBean<String, PersonEntity> bean definition for it. Thus, the bean definition is injected and the region People is wired automatically, which would otherwise have required manual setup without the Spring Data framework.
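If you prefer to define the region explicitly instead of relying on @EnableEntityDefinedRegions, a sketch of such a bean definition might look like this (the class name RegionConfig is made up for illustration; the bean name must match the region name People):

```java
package pivotal_tutorial;

import org.apache.geode.cache.GemFireCache;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.client.ClientRegionFactoryBean;

@Configuration
public class RegionConfig {

    // Defines the "People" client region that PersonEntity's @Region
    // annotation maps to, using a LOCAL (client-only) region.
    @Bean("People")
    public ClientRegionFactoryBean<String, PersonEntity> peopleRegion(GemFireCache gemfireCache) {
        ClientRegionFactoryBean<String, PersonEntity> region = new ClientRegionFactoryBean<>();
        region.setCache(gemfireCache);
        region.setShortcut(ClientRegionShortcut.LOCAL);
        return region;
    }
}
```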

8. Storing of objects

In this section, the application creates three local PersonEntity objects: Abhishek, Sumit, and John. Initially, they exist only in memory. After creating them, the application saves them to Pivotal GemFire.

It then runs several queries. The first looks up each person by name. After that, it executes a handful of queries to find people above, below, and between certain ages, all using the age attribute. With the logging turned up, you can see the queries that Spring Data for Pivotal GemFire writes on your behalf.

9. Executing the code and building a jar

Now that we have understood the code, it's time for the next step: actually executing it to see how it works. To execute the code, run the application as a Java application in Spring Tool Suite or IntelliJ. On executing the application, you see output similar to that shown below. It may vary slightly depending on the versions of the libraries that you are using.

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.0.3.RELEASE)

[info 2018/08/30 20:36:45.110 IST  tid=0x1] Starting App on MacBook-Air.local with PID 96473 (/Users/abhishekkothari/Documents/workspace-sts-3.9.5.RELEASE/pivotal_tutorial/target/classes started by abhishekkothari in /Users/abhishekkothari/Documents/workspace-sts-3.9.5.RELEASE/pivotal_tutorial)

[info 2018/08/30 20:36:45.118 IST  tid=0x1] No active profile set, falling back to default profiles: default

[info 2018/08/30 20:36:45.219 IST  tid=0x1] Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@6c1a5b54: startup date [Thu Aug 30 20:36:45 IST 2018]; root of context hierarchy

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Entering into accessing data from Pivotal GemFire framework
	Abhishek Kothari is 26 years old
	Sumit Punjabi is 25 years old
	John Doe is 34 years old
Saving Abhishek, Sumit and John to Pivotal GemFire...
Lookup each person by name...
	Abhishek Kothari is 26 years old
	Sumit Punjabi is 25 years old
	John Doe is 34 years old
Query adults (over 18):
	Sumit Punjabi is 25 years old
	John Doe is 34 years old
	Abhishek Kothari is 26 years old
Query teens (less than 30):
	Sumit Punjabi is 25 years old
	Abhishek Kothari is 26 years old
Query teens (between 12 and 30):
	Sumit Punjabi is 25 years old
	Abhishek Kothari is 26 years old

As can be seen, the application executed and the lambda functions fetched the data according to the specified filters. Notice that we created a complete entity, stored it, and retrieved it without really doing any setup for the Pivotal GemFire database. These queries returned the instances without any major effort. In this manner, Spring Data annotations simplify application development for Pivotal GemFire and reduce the whole effort of coding and setting everything up from scratch.

In order to build this application and export it elsewhere for remote use, all you need to do is use Maven to build the application jar. To do so, execute the command below.

./mvnw clean package

The above command builds a runnable jar under the target directory, which you can execute on any system with java -jar. Thus, you can build easily portable applications using Spring Data with Pivotal GemFire data distribution.

10. Download the Project

The STS project of what has been discussed above can be found at the below link.

Download
You can download the full source code of this example here : pivotal-tutorial.zip

Abhishek Kothari

Abhishek is a Web Developer with diverse skills across multiple Web development technologies. During his professional career, he has worked on numerous enterprise level applications and understood the technological architecture and complexities involved in making an exceptional project. His passion to share knowledge among the community through various mediums has led him towards being a Professional Online Trainer, Youtuber as well as Technical Content Writer.