Sergii Kostenko's blog

Firebase push notifications with Eclipse Microprofile Rest Client

05 March 2020

Nowadays, push notifications are a must-have feature for any trending application. Firebase Cloud Messaging (FCM) is a free (at least at the moment) cross-platform solution for messages and notifications on Android, iOS, and the Web.

firebase, push, microprofile, rest client

To enable push notifications on the client side, you should create a Firebase project and follow the manual or examples. From the server-side perspective, all you need to send a push notification is:

Firebase provides the https://fcm.googleapis.com/fcm/send endpoint and a very simple HTTP API, like:

    {
      "to": "<Instance ID token>",
      "notification": {
        "title": "THIS IS MP REST CLIENT!",
        "body": "The quick brown fox jumps over the lazy dog."
      }
    }
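As a sanity check, the payload above can be built with plain Java before wiring in any framework (the class and method names `FcmPayload`/`build` are just for illustration; in the REST client a JSON-B/Jackson provider produces this body automatically from the message object):

```java
public class FcmPayload {

    // Builds the FCM request body shown above by hand.
    // Illustrative only: a real application would rely on the
    // MP REST Client's JSON provider instead of string assembly.
    public static String build(String token, String title, String body) {
        return String.format(
            "{\"to\":\"%s\",\"notification\":{\"title\":\"%s\",\"body\":\"%s\"}}",
            token, title, body);
    }
}
```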

So, let's design a simple MicroProfile REST client to deal with the above:

@RegisterRestClient(configKey = "push-api")
public interface PushClientService {

    @POST
    @ClientHeaderParam(name = "Authorization", value = "{generateAuthHeader}")
    void send(PushMessage msg);

    default String generateAuthHeader() {
        return "key=" + ConfigProvider.getConfig().getValue("firebase.server_key", String.class);
    }
}
public class PushMessage {

    public String to;
    public PushNotification notification;

    public static class PushNotification {
        public String title;
        public String body;
    }
}

and application.properties:

# firebase server key
firebase.server_key=<your-firebase-server-key>
# rest client
push-api/mp-rest/url=https://fcm.googleapis.com/fcm/send

Actually, that is it! Now you are able to @Inject the PushClientService and enjoy push notifications:

@Inject
@RestClient
PushClientService pushService;

If you would like to test how it works from the client-side perspective, feel free to use the Test web application to generate an instance ID token and check notification delivery.

The source code of the described sample application, with a swagger-ui endpoint and firebase.server_key, is available on GitHub.


Well secured and documented REST API with Eclipse Microprofile and Quarkus

20 February 2020

The Eclipse MicroProfile specification provides many helpful sections about building well-designed microservice-oriented applications. OpenAPI, JWT Propagation, and JAX-RS are among them.
microprofile, jwt, openapi, jax-rs
To see how it works in practice, let's design two typical REST resources: an unsecured token resource to generate a JWT, and a secured user resource, based on the Quarkus MicroProfile implementation.

The easiest way to bootstrap a Quarkus application from scratch is to generate the project structure with the provided starter page, code.quarkus.io. Just select the build tool you like and the extensions you need. In our case it is:

I prefer Gradle, and my build.gradle looks pretty simple:

group 'org.kostenko'
version '1.0.0'

plugins {
    id 'java'
    id 'io.quarkus'
}

repositories {
    mavenCentral() // typical repository setup
}

dependencies {
    implementation enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}")
    implementation 'io.quarkus:quarkus-smallrye-jwt'
    implementation 'io.quarkus:quarkus-smallrye-openapi'
    implementation 'io.quarkus:quarkus-resteasy'
    implementation 'io.quarkus:quarkus-resteasy-jackson'
    testImplementation 'io.quarkus:quarkus-junit5'
    testImplementation 'io.rest-assured:rest-assured'
}

compileJava {
    options.compilerArgs << '-parameters'
}

Now we are ready to improve a standard JAX-RS service with OpenAPI and JWT stuff:

@Path("/user")
@RequestScoped
@Tags(value = @Tag(name = "user", description = "All the user methods"))
@SecurityScheme(securitySchemeName = "jwt", type = SecuritySchemeType.HTTP, scheme = "bearer", bearerFormat = "jwt")
public class UserResource {

    @Inject
    @Claim("user_name")
    Optional<JsonString> userName;

    @GET
    @Path("/token/{userName}")
    @PermitAll
    @APIResponses(value = {
        @APIResponse(responseCode = "400", description = "JWT generation error"),
        @APIResponse(responseCode = "200", description = "JWT successfully created.", content = @Content(schema = @Schema(implementation = User.class)))})
    @Operation(summary = "Create JWT token by provided user name")
    public User getToken(@PathParam("userName") String userName) {
        User user = new User();
        // generate the JWT here (see generateJWT below) and set it on the user
        return user;
    }

    @GET
    @Path("/current")
    @RolesAllowed("user")
    @SecurityRequirement(name = "jwt", scopes = {})
    @APIResponses(value = {
        @APIResponse(responseCode = "401", description = "Unauthorized Error"),
        @APIResponse(responseCode = "200", description = "Return user data", content = @Content(schema = @Schema(implementation = User.class)))})
    @Operation(summary = "Return user data by provided JWT token")
    public User getUser() {
        User user = new User();
        // populate the user from the injected "user_name" claim
        return user;
    }
}

First, let's take a brief review of the Open API annotations used:

For more details about the Open API annotations, please refer to the MicroProfile OpenAPI Specification.

After starting the application, you will be able to get your Open API description in .yaml format at /openapi, or even enjoy the Swagger UI at /swagger-ui:

Note: by default, swagger-ui is available in dev mode only. If you would like to keep Swagger on production, add the next property to your application.properties:

quarkus.swagger-ui.always-include=true
The second part of this post is about JWT role-based access control (RBAC) for microservice endpoints. JSON Web Tokens are an open, industry-standard RFC 7519 method for representing claims securely between two parties, and below we will see how easily they can be integrated into your application with Eclipse MicroProfile.

As JWT relies on cryptography, we need to generate a public/private key pair before we start coding:

# Generate a private key
openssl genpkey -algorithm RSA -out private_key.pem -pkeyopt rsa_keygen_bits:2048

# Derive the public key from the private key
openssl rsa -pubout -in private_key.pem -out public_key.pem
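The generateJWT method below relies on a readPrivateKey helper whose implementation is not shown. A minimal sketch (the class name `PemUtils` is hypothetical, and it assumes the PKCS#8 "BEGIN PRIVATE KEY" PEM format that `openssl genpkey` produces) could look like:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.Base64;

public class PemUtils {

    // Sketch of a readPrivateKey-style helper: reads a PKCS#8 PEM file
    // (as produced by "openssl genpkey" above) and turns it into an RSA
    // PrivateKey. The article presumably loads the key from the classpath
    // (META-INF) instead of a plain file path.
    public static PrivateKey readPrivateKey(String path) throws Exception {
        String pem = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
        String base64 = pem
                .replace("-----BEGIN PRIVATE KEY-----", "")
                .replace("-----END PRIVATE KEY-----", "")
                .replaceAll("\\s", "");
        byte[] der = Base64.getDecoder().decode(base64);
        return KeyFactory.getInstance("RSA").generatePrivate(new PKCS8EncodedKeySpec(der));
    }
}
```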

Now we are able to generate a JWT and sign data with our private key, for example, in the next way:

public static String generateJWT(String userName) throws Exception {

    long currentTimeInSecs = System.currentTimeMillis() / 1000;

    Map<String, Object> claimMap = new HashMap<>();
    claimMap.put("iss", "https://kostenko.org");
    claimMap.put("sub", "jwt-rbac");
    claimMap.put("exp", currentTimeInSecs + 300);
    claimMap.put("iat", currentTimeInSecs);
    claimMap.put("auth_time", currentTimeInSecs);
    claimMap.put("jti", UUID.randomUUID().toString());
    claimMap.put("upn", "UPN");
    claimMap.put("groups", Arrays.asList("user"));
    claimMap.put("raw_token", UUID.randomUUID().toString());
    claimMap.put("user_name", userName);

    return Jwt.claims(claimMap).jws().signatureKeyId("META-INF/private_key.pem").sign(readPrivateKey("META-INF/private_key.pem"));
}

For additional information about the JWT structure, please refer to https://jwt.io.
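To see what such a token carries, you can decode its segments yourself: a JWT is three base64url-encoded parts (header.payload.signature), so the claims are recoverable with the JDK alone (the class name `JwtInspect` is just for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtInspect {

    // Decodes the middle (claims) segment of a JWT into plain JSON text.
    // Note: this does NOT verify the signature; it only inspects content.
    public static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
    }
}
```

Pasting the Bearer token from the curl example below into such a helper shows claims like "sub":"jwt-rbac" and "groups":["user"].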

Time to review our application security stuff:
@RequestScoped - not a security annotation per se, but since the JWT is request scoped we need it for things to work correctly;
@PermitAll - specifies that all security roles are allowed to invoke the specified method;
@RolesAllowed("user") - specifies the list of roles permitted to access the method;
@Claim("user_name") - allows us to inject a field provided by the JWT;

To configure JWT in your application.properties, please add:

mp.jwt.verify.publickey.location=META-INF/public_key.pem
mp.jwt.verify.issuer=https://kostenko.org

# useful logging properties for debugging:
# quarkus.log.console.enable=true
# quarkus.log.category."io.quarkus.smallrye.jwt".level=TRACE
# quarkus.log.category."io.undertow.request.security".level=TRACE

And actually that is it: if you try to reach the /user/current service without a JWT token (or with a bad one) in the Authorization header, you will get an HTTP 401 Unauthorized error.

curl example:

curl -X GET "http://localhost:8080/user/current" -H "accept: application/json" -H "Authorization: Bearer eyJraWQiOiJNRVRBLUlORi9wcml2YXRlX2tleS5wZW0iLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJqd3QtcmJhYyIsInVwbiI6IlVQTiIsInJhd190b2tlbiI6IjQwOWY3MzVkLTQyMmItNDI2NC1iN2UyLTc1YTk0OGFjMTg3MyIsInVzZXJfbmFtZSI6InNlcmdpaSIsImF1dGhfdGltZSI6MTU4MjE5NzM5OSwiaXNzIjoiaHR0cHM6Ly9rb3N0ZW5rby5vcmciLCJncm91cHMiOlsidXNlciJdLCJleHAiOjkyMjMzNzIwMzY4NTQ3NzU4MDcsImlhdCI6MTU4MjE5NzM5OSwianRpIjoiMzNlMGMwZjItMmU0Yi00YTczLWJkNDItNDAzNWQ4NTYzODdlIn0.QteseKKwnYJWyj8ccbI1FuHBgWOk98PJuN0LU1vnYO69SYiuPF0d9VFbBada46N_kXIgzw7btIc4zvHKXVXL5Uh3IO2v1lnw0I_2Seov1hXnzvB89SAcFr61XCtE-w4hYWAOaWlkdTAmpMSUt9wHtjc0MwvI_qSBD3ol_VEoPv5l3_W2NJ2YBnqkY8w68c8txL1TnoJOMtJWB-Rpzy0XrtiO7HltFAz-Gm3spMlB3FEjnmj8-LvMmoZ3CKIybKO0U-bajWLPZ6JMJYtp3HdlpsiXNmv5QdIq1yY7uOPIKDNnPohWCgOhFVW-bVv9m-LErc_s45bIB9djwe13jFTbNg"

The source code of the described sample application is available on GitHub.


Simple note about using JPA relation mappings

14 February 2020

There are a lot of typical examples of how to build JPA @OneToMany and @ManyToOne relationships in your Jakarta EE application, and usually it looks like:

@Entity
@Table(name = "author")
public class Author {
    @OneToMany(mappedBy = "author")
    private List<Book> book;
}

@Entity
@Table(name = "book")
public class Book {
    @ManyToOne
    private Author author;
}

This code looks pretty clear, but in my opinion you should NOT USE this style in a real-world application. From years of experience with JPA, I can definitely say that sooner or later your project will get stuck with the well-known performance issues and holy-war questions around N+1 queries, LazyInitializationException, unidirectional @OneToMany, cascade types, LAZY vs EAGER, JOIN FETCH, entity graphs, fetching lots of unneeded data, extra queries (for example, selecting the Author by id before persisting a Book), etcetera. Even if you have an answer for each potential issue above, the proposed solution will usually add unreasonable complexity to the project.

To avoid the potential issues, I recommend following the next rules:

Unfortunately, the simple snippet below does not work as expected in the persist case:

@ManyToOne(targetEntity = Author.class)
private long authorId;

But we can use the next one instead:

@JoinColumn(name = "authorId", insertable = false, updatable = false)
@ManyToOne(targetEntity = Author.class)
private Author author;

private long authorId;

public long getAuthorId() {
    return authorId;
}

public void setAuthorId(long authorId) {
    this.authorId = authorId;
}

I hope these two simple rules help you enjoy all the power of JPA while keeping things simple (KISS) and decreasing complexity.


ISPN000299: Unable to acquire lock after 15 seconds for key

05 February 2020

A distributed cache is a widely used technology that provides a useful way to share state whenever necessary. Wildfly supports distributed caching through the infinispan subsystem, and it actually works well, but under high load and concurrent data access you may run into some issues like:

and others.

In my case I had a two-node cluster with the next infinispan configuration:


The distributed cache above means that a fixed number of copies of each entry is maintained, typically fewer than the number of nodes in the cluster. On the other hand, to provide redundancy and fault tolerance you should configure a sufficient number of owners, and obviously 2 is the necessary minimum here. So, in the case of a small cluster, and keeping the mentioned bug in mind, I recommend using a replicated-cache (all nodes in the cluster hold all keys).

Please compare Which cache mode should I use? with your needs.


/profile=full-ha/subsystem=infinispan/cache-container=myCache/replicated-cache=userdata/component=transaction:add(mode=NON_DURABLE_XA, locking=OPTIMISTIC)

Note: NON_DURABLE_XA does not keep any transaction recovery information, and if you are still getting Unable to acquire lock errors on application-critical data, you can try to resolve them with some retry policy and a fail-fast transaction:

/profile=full-ha/subsystem=infinispan/cache-container=myCache/distributed-cache=userdata/component=locking:write-attribute(name=acquire-timeout, value=0)
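The retry policy mentioned above can be as simple as re-running the cache write a bounded number of times. A hedged sketch (the names `Retry`/`withRetry` are illustrative, not an Infinispan API):

```java
import java.util.function.Supplier;

public class Retry {

    // Illustrative retry policy: with acquire-timeout=0 the cache write
    // fails fast, so we simply re-run the action a bounded number of
    // times and rethrow the last error if every attempt fails.
    public static <T> T withRetry(int attempts, Supplier<T> action) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
            }
        }
        throw last;
    }
}
```

In a real application the caught exception type should be narrowed to the lock-acquisition failure, and a small backoff between attempts is usually worthwhile.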


Logging for JPA SQL queries with Wildfly

10 January 2020

Logging the real SQL queries is very important when using any ORM solution, as you can never be sure which and how many requests the JPA provider will send to perform a find, merge, query, or some other operation.

Wildfly uses Hibernate as its JPA provider and offers a few standard ways to enable SQL logging:

1. Add the hibernate.show_sql property to your persistence.xml:

    <property name="hibernate.show_sql" value="true" />

The log output looks like:

INFO  [stdout] (default task-1) Hibernate: insert into blogentity (id, body, title) values (null, ?, ?)
INFO  [stdout] (default task-1) Hibernate: select blogentity0_.id as id1_0_, blogentity0_.body as body2_0_, blogentity0_.title as title3_0_ from blogentity blogentity0_ where blogentity0_.title=?

2. Enable the ALL log level for the org.hibernate category:

/subsystem=logging/periodic-rotating-file-handler=sql_handler:add(level=ALL, file={"path"=>"sql.log"}, append=true, autoflush=true, suffix=.yyyy-MM-dd, formatter="%d{yyyy-MM-dd HH:mm:ss,SSS}")
/subsystem=logging/logger=org.hibernate:add(level=ALL, handlers=[sql_handler])

The log output looks like:
DEBUG [o.h.SQL] insert into blogentity (id, body, title) values (null, ?, ?)
TRACE [o.h.t.d.s.BasicBinder] binding parameter [1] as [VARCHAR] - [this is body]
TRACE [o.h.t.d.s.BasicBinder] binding parameter [2] as [VARCHAR] - [title]
DEBUG [o.h.SQL] select blogentity0_.id as id1_0_, blogentity0_.body as body2_0_, blogentity0_.title as title3_0_ from blogentity blogentity0_ where blogentity0_.title=?
TRACE [o.h.t.d.s.BasicBinder] binding parameter [1] as [VARCHAR] - [title]

3. Enable spying of SQL statements by setting the spy attribute on the datasource and enabling the jboss.jdbc.spy log category:

/subsystem=datasources/data-source=ExampleDS:write-attribute(name=spy, value=true)
/subsystem=logging/logger=jboss.jdbc.spy:add(level=TRACE)

The log output looks like:

DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [DataSource] getConnection()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [Connection] prepareStatement(insert into blogentity (id, body, title) values (null, ?, ?), 1)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] setString(1, this is body)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] setString(2, title)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] executeUpdate()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] getGeneratedKeys()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] next()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getMetaData()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getLong(1)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] close()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [DataSource] getConnection()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [Connection] prepareStatement(select blogentity0_.id as id1_0_, blogentity0_.body as body2_0_, blogentity0_.title as title3_0_ from blogentity blogentity0_ where blogentity0_.title=?)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] setString(1, title)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] executeQuery()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] next()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getLong(id1_0_)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] wasNull()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getString(body2_0_)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] wasNull()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getString(title3_0_)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] wasNull()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] next()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] close()

So, from the above we can see that variants 2 and 3 are the most useful, as they allow logging queries together with their parameters. On the other hand, SQL logging can generate a lot of unneeded debug information in production. To avoid garbage data in your log files, feel free to use Filter Expressions for Logging.