23

AFAIK, memory in Java is based on the heap, from which memory is allotted to objects dynamically, and there is no concept of shared memory.

If there is no concept of shared memory, then communication between Java programs should be time consuming. In C, by contrast, inter-process communication via shared memory is quicker than other modes of communication.

Correct me if I'm wrong. Also, what is the quickest way for two Java programs to talk to each other?

asked Sep 29, 2009 at 9:34
  • You mean "in C where shared memory communication is quicker..."? Commented Sep 29, 2009 at 9:36

10 Answers

25

A few ways:

Details here and here with some performance measurements.

answered Sep 15, 2011 at 22:08


13

Since there is no official API to create a shared memory segment, you need to resort to a helper library/DLL and JNI to use shared memory to have two Java processes talk to each other.

In practice, this is rarely an issue, since Java supports threads: you can have two "programs" run in the same Java VM. They will share the same heap, so communication will be instantaneous, and you avoid errors caused by problems with a shared memory segment.
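For illustration, here is a minimal sketch (class and variable names are mine, not from the answer) of two such "programs" running as threads in one JVM and communicating through an ordinary heap object, in this case a BlockingQueue:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class TwoProgramsOneJvm {
        public static void main(String[] args) throws InterruptedException {
            // Both "programs" see the same queue because they share one heap.
            BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);

            Thread producer = new Thread(() -> {
                try {
                    channel.put("hello from program A");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    System.out.println("program B received: " + channel.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }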

answered Sep 29, 2009 at 9:39

3 Comments

This changed since Java 7. Java now has memory-mapped files.
@Arkadiy: Memory mapped files were added with Java 1.4 with NIO. Java 7 extended this with NIO 2. But OP wants to know about shared memory which is a different concept.
While shared memory is indeed a different concept, mem-mapped files can be used in somewhat similar way for IPC. And yes, you're right about Java 1.4 :)
9

Java Chronicle is worth looking at; both Chronicle-Queue and Chronicle-Map use shared memory.

These are some tests that I had done a while ago comparing various off-heap and on-heap options.
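As a rough illustration, a Chronicle Queue exchange between two processes usually looks something like the sketch below. It assumes the net.openhft Chronicle Queue artifact is on the classpath, and the exact builder and method names vary between versions, so treat it as a sketch rather than exact code.

    // Sketch only: assumes net.openhft Chronicle Queue is on the classpath;
    // method names may differ between versions.
    import net.openhft.chronicle.queue.ChronicleQueue;
    import net.openhft.chronicle.queue.ExcerptAppender;
    import net.openhft.chronicle.queue.ExcerptTailer;

    public class ChronicleIpcSketch {
        public static void main(String[] args) {
            // Both processes open the same directory; the queue files are
            // memory-mapped, so writes become visible to the other process.
            try (ChronicleQueue queue = ChronicleQueue.singleBuilder("shared-queue-dir").build()) {
                ExcerptAppender appender = queue.acquireAppender();
                appender.writeText("ping");                 // process A writes

                ExcerptTailer tailer = queue.createTailer();
                System.out.println(tailer.readText());      // process B (or here) reads
            }
        }
    }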

answered Mar 16, 2012 at 18:42


8

One thing to look at is using memory-mapped files, using Java NIO's FileChannel class or similar (see the map() method). We've used this very successfully to communicate (in our case one-way) between a Java process and a C native one on the same machine.

I'll admit I'm no filesystem expert (luckily we do have one on staff!), but the performance for us is absolutely blazingly fast: effectively you're treating a section of the page cache as a file and reading and writing to it directly, without the overhead of system calls. I'm not sure about the guarantees and coherency; there are methods in Java to force changes to be written to the file, which implies that they are otherwise written to the actual underlying file lazily (I'm not sure how lazily), meaning that some proportion of the time it's basically just a shared memory segment.

In theory, as I understand it, memory-mapped files CAN actually be backed by a shared memory segment (they're just file handles, I think) but I'm not aware of a way to do so in Java without JNI.
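For reference, a minimal JDK-only sketch of this approach (the file path, sizes, and offsets are placeholders, and it glosses over the ordering/visibility caveats discussed elsewhere in this thread): one process maps the file and writes a value plus a ready flag, the other maps the same file and polls for the flag.

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedFileIpcSketch {
        private static final int SIZE = 4096;                    // size of the shared region
        private static final String PATH = "/tmp/ipc-demo.dat";  // placeholder path

        // Run with argument "writer" in one JVM and no argument in another.
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile(PATH, "rw");
                 FileChannel channel = raf.getChannel()) {

                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, SIZE);

                if (args.length > 0 && args[0].equals("writer")) {
                    buf.putInt(4, 42);      // payload first...
                    buf.putInt(0, 1);       // ...then a "ready" flag at offset 0
                } else {
                    while (buf.getInt(0) == 0) {   // busy-wait until the writer sets the flag
                        Thread.onSpinWait();       // Java 9+; Thread.yield() on older JDKs
                    }
                    System.out.println("value = " + buf.getInt(4));
                }
            }
        }
    }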

answered Sep 29, 2009 at 13:24

3 Comments

pretty old thread, but I was wondering if you can comment on the speed (latency) performance of your method as opposed to regular TCP/IP sockets ? I've been researching a way to improve the latency between 2 apps (one java, one C/C++), currently communicating through sockets...
You're almost certainly not going to find anything faster than a memory-mapped file. Writes to the file, even to random positions, are basically just writes to main memory (which is about the fastest thing you can do). The question of Java's 'rules' about when to flush to the underlying filesystem is an open one for me, as I haven't seen it documented + haven't looked at the source. I can tell you we were doing many many hundreds, probably thousands, of small (1-4 byte) writes to the file per second, at random locations, and never noticed anything remotely like a performance problem.
This thread is even older now, but you can look at github.com/peter-lawrey/Java-Chronicle. This achieves 5-20 million messages per second, persisted, with a one-way latency of less than 200 ns between processes.
7

Shared memory is sometimes quick. Sometimes it's not: it hurts CPU caches, and synchronization is often a pain (and if it relies on mutexes and the like, it can be a major performance penalty).

Barrelfish is an operating system that demonstrates that IPC using message passing is actually faster than shared memory as the number of cores increases (on conventional x86 architectures as well as the more exotic NUMA/NUCA hardware you'd guess it was targeting).

So your assumption that shared memory is fast needs testing for your particular scenario and on your target hardware. It's not a generically sound assumption these days!

answered Sep 29, 2009 at 9:42

1 Comment

This is misleading. Contended writes hurt performance, locking hurts performance. Shared reads help performance, and shared memory beats IPC with a good implementation (indeed, the best IPC systems are implemented with shared memory.) Using kernel IPC is horrific for performance. So while it has its place and is often conceptually easier to work with, performance is definitely not a reason to use IPC!
3

There's a couple of comparable technologies I can think of:

  1. A few years back there was a technology called JavaSpaces, but it never really seemed to take hold, which is a shame if you ask me.
  2. Nowadays there are the distributed cache technologies, things like Coherence and Tangosol.

Unfortunately neither will have the outright speed of shared memory, but they do deal with the issues of concurrent access, etc.

answered Sep 29, 2009 at 9:54


2

The easiest way to do that is to have two processes instantiate the same memory-mapped file. In practice they will be sharing the same off-heap memory space. You can grab the address of this memory and use sun.misc.Unsafe to read and write primitives; it supports concurrency through the putXXXVolatile/getXXXVolatile methods. Take a look at CoralQueue, which offers easy IPC as well as inter-thread communication inside the same JVM.

Disclaimer: I am one of the developers of CoralQueue.
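For illustration only, here is a hedged sketch of the mapped-file-plus-Unsafe idea described above (not CoralQueue's actual code; sun.misc.Unsafe and sun.nio.ch.DirectBuffer are unsupported JDK internals and may need --add-exports flags or be unavailable on newer JDKs):

    import java.io.RandomAccessFile;
    import java.lang.reflect.Field;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import sun.misc.Unsafe;
    import sun.nio.ch.DirectBuffer;

    public class UnsafeMappedSketch {
        public static void main(String[] args) throws Exception {
            // Grab the Unsafe singleton reflectively (unsupported; may break on newer JDKs).
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe) f.get(null);

            try (RandomAccessFile raf = new RandomAccessFile("/tmp/unsafe-demo.dat", "rw");
                 FileChannel channel = raf.getChannel()) {

                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                long address = ((DirectBuffer) buf).address();   // base address of the mapping

                // Volatile write and read at a raw offset into the shared region.
                unsafe.putLongVolatile(null, address + 8, 12345L);
                System.out.println(unsafe.getLongVolatile(null, address + 8));
            }
        }
    }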

answered May 1, 2015 at 3:08


1

Similar to Peter Lawrey's Java Chronicle, you can try Jocket.

It also uses a MappedByteBuffer but does not persist any data and is meant to be used as a drop-in replacement for Socket / ServerSocket.

Round-trip latency for a 1 kB ping-pong is around half a microsecond.

answered Oct 29, 2013 at 12:47


1

MappedBus (http://github.com/caplogic/mappedbus) is a library I've published on GitHub which enables IPC between multiple (more than two) Java processes/JVMs by message passing.

The transport can be either a memory-mapped file or shared memory. To use it with shared memory, simply follow the examples on the GitHub page but point the readers/writers to a file under /dev/shm/.

It's open source and the implementation is fully explained on the GitHub page.

answered May 18, 2015 at 16:59


0

The information provided by Cowan is correct. However, even shared memory won't always appear identical to multiple threads (and/or processes) at the same time. The key underlying reason is the Java memory model (which is built on the hardware memory model). See "Can multiple threads see writes on a direct mapped ByteBuffer in Java?" for a quite useful discussion of the subject.

answered Oct 24, 2013 at 17:06

