Reverse Proxying Wildfly JMS Connections with NGINX

Published in Java on 29-November-2015

Wildfly 8 and above gives users the ability to take advantage of the http-upgrade feature of HTTP/1.1 (https://en.wikipedia.org/wiki/HTTP/1.1_Upgrade_header) to establish connections for a variety of protocols over standard HTTP ports. The initial communication is established on port 80 (or 443 if you are using SSL), and then, through a negotiation process, the connection is upgraded to the actual protocol, binary or otherwise, that you wish to use. This greatly simplifies the scenario where your Application Server is behind a firewall, as there are significantly fewer ports you must worry about opening up to allow the desired functionality.
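At the wire level, the upgrade handshake is an ordinary HTTP exchange. A simplified sketch of a WebSocket upgrade, the most familiar example (the path and header values here are illustrative only):

GET /chat HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: websocket

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket

Once the client receives the 101 response, both sides stop speaking HTTP and reuse the same TCP connection for the upgraded protocol.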

This works fantastically if your Application Server is also your web gateway, and in Wildfly, Undertow is a pretty decent web server, so it's not unreasonable for that to be the case. However, in many deployment scenarios you will have other servers sitting in front of your application servers, be they proxies, web servers, load balancers, or other similar systems. If you wish to take advantage of this http-upgrade functionality, you must ensure that these intermediary systems are capable of supporting http upgrading.

Recently, I was in a situation where I had NGINX acting as a web server, serving up a Single Page web application. It also acted as a reverse proxy to the REST API layer that backed the application. There were some external systems that needed to communicate with this back-end via JMS and I wanted to avoid having to open up additional ports or bind to additional IP addresses to accomplish this. Using the http-upgrade support in Wildfly seemed like an ideal way to accomplish this.

Fortunately, as of version 1.3.13, NGINX added support for reverse-proxying http-upgrade requests. The predominant use case for http-upgrade is WebSockets, a technology that has gained a great deal of popularity over the past few years by allowing browsers and servers to establish bi-directional, always-on communication. NGINX added support for the http-upgrade mechanism in order to support WebSockets, but it has the side benefit of allowing us to reverse-proxy other protocols that use http-upgrade, such as JNDI and JMS in Wildfly. While I was trying to figure out how to make this work, I came across this page on NGINX's website on configuring WebSocket proxying, which I drew on heavily for my solution.

Traditionally, when establishing a JMS connection to Wildfly, two things happen. First, you establish a connection to the JNDI server on Wildfly, which allows you to access objects residing there (like JMS connection factories). Second, you use the information retrieved from JNDI to actually establish communication with the JMS server. This process now involves two http-upgrade requests: one for the JNDI connection and one for the JMS connection.

Typically, your client code will look something like this:

import java.util.Hashtable;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

final String JNDI_FACTORY =
    "org.jboss.naming.remote.client.InitialContextFactory";
final String JNDI_URL = "http-remoting://localhost:8080";
final String CONNECTION_FACTORY_JNDI_NAME = "jms/RemoteConnectionFactory";

final String JMS_USERNAME = "myJmsUser";
final String JMS_PASSWORD = "myJmsPassword";

final Hashtable<String, String> jndiEnvironment = new Hashtable<>();

jndiEnvironment.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
jndiEnvironment.put(Context.PROVIDER_URL, JNDI_URL);
jndiEnvironment.put(Context.SECURITY_PRINCIPAL, JMS_USERNAME);
jndiEnvironment.put(Context.SECURITY_CREDENTIALS, JMS_PASSWORD);

//First http-upgrade request: connect to JNDI and look up the factory
final InitialContext context = new InitialContext(jndiEnvironment);

final ConnectionFactory connectionFactory =
    (ConnectionFactory)context.lookup(CONNECTION_FACTORY_JNDI_NAME);

//Second http-upgrade request: establish the actual JMS connection
final Connection connection =
    connectionFactory.createConnection(JMS_USERNAME, JMS_PASSWORD);

When making the JNDI request via http-upgrade, the Wildfly JNDI client includes three special headers in the HTTP request. The first is the Connection header, used to indicate that an upgrade is being requested; the second is the Upgrade header, used to indicate the protocol the client wants the connection upgraded to. The third is the Sec-JbossRemoting-Key header (a variation of the Sec-WebSocket-Key header used in WebSocket requests to prevent accidental upgrades by clients). JMS upgrade requests look the same, except that a Sec-HornetQRemoting-Key header is used instead.
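For example, a JNDI upgrade request from the Wildfly client will look roughly like this (the Upgrade token and key value shown are illustrative assumptions, not captured from a real client):

GET / HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: jboss-remoting
Sec-JbossRemoting-Key: EsrbJakbNTm6C2MZ4rg26A==

Wildfly answers with a 101 Switching Protocols response, after which the remoting protocol takes over the connection.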

So, if you know that all requests you receive will be forwarded on to the destination Wildfly server, you can simply configure NGINX to pass along the Connection, Upgrade, Sec-JbossRemoting-Key, and Sec-HornetQRemoting-Key headers in the appropriate location block of your NGINX configuration, and NGINX will happily reverse proxy your connections for you.

server {
  listen 80;
  server_name _;

  root /var/www;
  index index.html index.htm;

  location / {
    #Enable reverse proxying to Wildfly on internal port 8080
    proxy_pass http://localhost:8080;

    #http-upgrade requires HTTP/1.1 between NGINX and the upstream;
    #NGINX defaults to HTTP/1.0 for proxied connections
    proxy_http_version 1.1;

    #Tell NGINX to proxy these headers
    proxy_set_header Sec-JbossRemoting-Key $http_sec_jbossremoting_key;
    proxy_set_header Sec-HornetQRemoting-Key $http_sec_hornetqremoting_key;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
  }

  #Other NGINX config details omitted for brevity
}

However, if, as in my case, you are serving up a web application in addition to reverse proxying API requests to a specific endpoint, and you are also handling the reverse proxying of http-upgrade requests, your NGINX configuration needs a little more.

server {
  listen 80;
  server_name _;

  root /var/www;
  index index.html index.htm;

  location / {
    set $should_proxy "";
    #We will set this only if we detect one of the special headers
    #indicating a Wildfly client upgrade request
    set $upgrade_header "";

    #Test if the Sec-JbossRemoting-Key header is present in the request
    if ($http_sec_jbossremoting_key) {
      set $should_proxy "Y";
    }

    #Test if the Sec-HornetQRemoting-Key header is present in the request
    if ($http_sec_hornetqremoting_key) {
      set $should_proxy "Y";
    }

    #If either of the two headers above is present,
    #configure the proxy_pass directive
    if ($should_proxy = "Y") {
      proxy_pass http://localhost:8080;
      set $upgrade_header "upgrade";
    }

    #If we are not proxying, the Single Page webapp will be served up
    #from the configured root

    #http-upgrade requires HTTP/1.1 between NGINX and the upstream;
    #NGINX defaults to HTTP/1.0 for proxied connections
    proxy_http_version 1.1;

    #Ensure that the relevant headers are proxied to Wildfly.  If any of
    #these are empty, nothing will be passed along
    proxy_set_header Sec-JbossRemoting-Key $http_sec_jbossremoting_key;
    proxy_set_header Sec-HornetQRemoting-Key $http_sec_hornetqremoting_key;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $upgrade_header;
    proxy_set_header Host $http_host;
  }

  #Reverse proxy to the REST API served by Wildfly on local port 8080
  #when the url /api is requested
  location /api {
    proxy_pass http://localhost:8080/api;
    proxy_redirect off;
  }

  #Other NGINX config details omitted for brevity
}

It is worth noting that if directives in NGINX can have some strange and undesirable side-effects when used incorrectly (see this post for more details). I believe I am using them in an appropriate way above, but if anyone has information to the contrary, please let me know.
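One way to reduce the number of if directives is NGINX's map directive, which the official WebSocket proxying example relies on. A sketch, assuming the two remoting key headers are the only upgrade triggers (a map block must live in the http context, outside the server block):

#Concatenating the two key variables yields an empty string only when
#both headers are absent; anything else maps to "upgrade"
map "$http_sec_jbossremoting_key$http_sec_hornetqremoting_key" $remoting_connection {
    default "upgrade";
    ""      "";
}

The $remoting_connection variable can then be passed directly via proxy_set_header Connection $remoting_connection, replacing the $upgrade_header bookkeeping, though the if directives that select proxy_pass are still required.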

So, with a few lines of NGINX configuration, we can fairly easily serve up a single page web application and a REST API from NGINX, while also allowing more advanced protocols like JMS or WebSocket to proxy through to our backend Wildfly server.

About the Author

Daniel Morton is a Software Developer with Shopify Plus in Waterloo, Ontario and the co-owner of Switch Case Technologies, a software development and consulting company. Daniel specializes in Enterprise Java Development and has worked and consulted in a variety of fields including WAN Optimization, Healthcare, Telematics, Media Publishing, and the Payment Card Industry.