Run JGroups on Kubernetes
As a special wish, here is a short tutorial on how to set up JGroups on Kubernetes. There is a Maven dependency that allows JGroups to discover cluster members via the Kubernetes API. To do so, it uses the Pod's ServiceAccount to ask the Kubernetes API for the available Pods in a namespace, by label, or by a combination of namespace and label. If you use RBAC, you have to define a Role that grants the get and list verbs on Pods.
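A minimal sketch of such a Role and a matching RoleBinding might look like this (the Role name and the use of the default ServiceAccount in a jgroups namespace are assumptions for this example):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jgroups-kube-ping   # hypothetical name
  namespace: jgroups
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jgroups-kube-ping
  namespace: jgroups
subjects:
- kind: ServiceAccount
  name: default              # the ServiceAccount the Pods run under
  namespace: jgroups
roleRef:
  kind: Role
  name: jgroups-kube-ping
  apiGroup: rbac.authorization.k8s.io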
Create a short application
Here is a small application that generates UUIDs and sends them via JGroups to the other members in the network. First we need to add the Maven dependency:
<dependency>
    <groupId>org.jgroups.kubernetes</groupId>
    <artifactId>jgroups-kubernetes</artifactId>
    <version>1.0.3.Final</version>
</dependency>
This artifact already pulls in JGroups transitively, so we don't need to declare the jgroups dependency ourselves.
Next we write the Java code to send messages and receive them:
import java.io.InputStream;
import java.util.UUID;

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@SpringBootApplication
@EnableScheduling
public class JgroupsApplication implements CommandLineRunner {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    private JChannel channel;

    public static void main(String[] args) {
        SpringApplication.run(JgroupsApplication.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        // Build the channel from our protocol stack and join the cluster
        InputStream resource = getClass().getResourceAsStream("/config.xml");
        this.channel = new JChannel(resource);
        channel.setReceiver(new ReceiverAdapterImpl());
        channel.connect("ChatCluster");
    }

    @Scheduled(fixedRate = 5000)
    public void sendMessage() throws Exception {
        String message = UUID.randomUUID().toString();
        logger.info("Send message '{}'", message);
        // A null destination address broadcasts the message to all members
        Message msg = new Message(null, message);
        channel.send(msg);
    }
}
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class ReceiverAdapterImpl extends ReceiverAdapter {

    @Override
    public void receive(Message msg) {
        System.out.println("received message from: " + msg.getSrc() + " with content: " + msg.getObject());
    }
}
As you can see, we send a message in a scheduled task every 5 seconds and log the sent message. We also extend ReceiverAdapter and override the receive method, where we log the received message as well.
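One thing the sample skips is leaving the cluster on shutdown. A minimal sketch, assuming we stay in the Spring Boot application above, is a @PreDestroy hook (javax.annotation.PreDestroy) that closes the channel:

@PreDestroy
public void shutdown() {
    // Closing the channel disconnects from the cluster, so the remaining
    // members install a new view instead of waiting for failure detection
    channel.close();
}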
Next we need to define the config.xml file, in which we specify the KUBE_PING protocol and set which namespace, label, or combination of both will be used:
<!--
    TCP-based configuration for running on Kubernetes
    Author: Bela Ban
-->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
    <TCP
         external_addr="${JGROUPS_EXTERNAL_ADDR:match-interface:eth0}"
         bind_addr="site_local,match-interface:eth0"
         bind_port="${TCP_PORT:7800}"
         recv_buf_size="5M"
         send_buf_size="1M"
         max_bundle_size="64K"
         enable_diagnostics="true"
         thread_naming_pattern="cl"
         thread_pool.min_threads="0"
         thread_pool.max_threads="500"
         thread_pool.keep_alive_time="30000" />
    <org.jgroups.protocols.kubernetes.KUBE_PING
         port_range="1"
         namespace="${KUBE_NAMESPACE:jgroups-example}"
    />
    <MERGE3 max_interval="30000"
            min_interval="10000"/>
    <FD_SOCK external_addr="${JGROUPS_EXTERNAL_ADDR}"
             start_port="${FD_SOCK_PORT:9000}"/>
    <FD_ALL timeout="30000" interval="5000"/>
    <VERIFY_SUSPECT timeout="1500" />
    <BARRIER />
    <pbcast.NAKACK2 xmit_interval="500"
                    xmit_table_num_rows="100"
                    xmit_table_msgs_per_row="2000"
                    xmit_table_max_compaction_time="30000"
                    use_mcast_xmit="false"
                    discard_delivered_msgs="true" />
    <UNICAST3
         xmit_table_num_rows="100"
         xmit_table_msgs_per_row="1000"
         xmit_table_max_compaction_time="30000"/>
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                   max_bytes="8m"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000"
                view_bundling="true"/>
    <MFC max_credits="2M"
         min_threshold="0.4"/>
    <FRAG2 frag_size="60K" />
    <pbcast.STATE_TRANSFER />
    <CENTRAL_LOCK />
    <COUNTER/>
</config>
In this example we use the environment variable KUBE_NAMESPACE if it is available, and otherwise fall back to the namespace jgroups-example. As already described, we could also use labels to search for other JGroups members. Here is an example from the GitHub page of jgroups-kubernetes:
<org.jgroups.protocols.kubernetes.KUBE_PING
     port_range="1"
     namespace="${KUBE_NAMESPACE:production}"
     labels="${KUBE_LABEL:cluster=nyc}"
/>
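If you go the label route, the Pods themselves need a matching label, otherwise KUBE_PING will not find them. A sketch of the relevant part of a pod template, assuming the cluster=nyc selector from above:

template:
  metadata:
    labels:
      app: jgroups
      cluster: nyc   # must match the KUBE_LABEL selector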
Now we build a Docker image and create the deployment file.
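A minimal Dockerfile sketch for the Spring Boot fat JAR could look like this (the base image and JAR name are assumptions; the sh -c wrapper expands JAVA_OPTS, which the deployment below sets):

FROM openjdk:8-jre-alpine
COPY target/jgroups-example.jar /app.jar
# Expand JAVA_OPTS from the environment before starting the JVM
ENTRYPOINT ["sh", "-c", "exec java $JAVA_OPTS -jar /app.jar"]

With the image in place, here is the deployment file: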
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: jgroups
  namespace: jgroups
  labels:
    app: jgroups
spec:
  replicas: 2
  selector:
    matchLabels:
      app: jgroups
  template:
    metadata:
      labels:
        app: jgroups
    spec:
      containers:
      - name: jgroups
        image: jgroups-server:v1
        resources:
          limits:
            memory: "538Mi"
        env:
        - name: JAVA_OPTS
          value: "-Xmx512m -Xms512m -Djava.net.preferIPv4Stack=true"
        - name: KUBE_NAMESPACE
          value: "jgroups"
        ports:
        - containerPort: 8080
For me it was important to prefer IPv4, because otherwise JGroups tries to bind to an IPv6 address and fails with an exception when opening the port.
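To roll this out (assuming the manifest is saved as deployment.yaml and the jgroups namespace does not exist yet):

kubectl create namespace jgroups
kubectl apply -f deployment.yaml
kubectl get pods -n jgroups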
Now we see two replicas running in the cluster that can communicate with each other. First they ask the Kubernetes API for other existing members in the cluster:
# the first server only finds itself
2017-11-01 11:17:01.988 INFO 1 --- [ main] o.j.protocols.kubernetes.KUBE_PING : getPods(jgroups, null) = [Pod{name='jgroups-755dd4cbd9-s6br2', ip='10.244.1.9', parentDeployment='null'}]
# the second server finds itself and the first server
2017-11-01 11:18:02.710 INFO 1 --- [bd9-9xklr-54150] o.j.protocols.kubernetes.KUBE_PING : getPods(jgroups, null) = [Pod{name='jgroups-755dd4cbd9-9xklr', ip='10.244.1.10', parentDeployment='null'}, Pod{name='jgroups-755dd4cbd9-s6br2', ip='10.244.1.9', parentDeployment='null'}]
If we look at the output, we can see that they communicate with each other:
# first server sends a message and receives it
2017-11-01 11:18:25.295 INFO 1 --- [pool-1-thread-1] ication$$EnhancerBySpringCGLIB$$b6a568ed : Send message '9fbfe381-d9f0-48ed-97a0-d78a150ad0ae'
received message from: jgroups-755dd4cbd9-s6br2-20315 with content: 9fbfe381-d9f0-48ed-97a0-d78a150ad0ae
# the second server only receives this message
received message from: jgroups-755dd4cbd9-s6br2-20315 with content: 9fbfe381-d9f0-48ed-97a0-d78a150ad0ae
And for the person this was written for: that'll cost you a beer ;-)