Have you ever wished you had the convenience of Unix Domain Sockets even when transmitting data between cluster nodes? Where you yourself determine the addresses you want to bind to and use? Where you don't have to perform DNS lookups and worry about IP addresses? Where you don't have to start timers to monitor the continuous existence of peer sockets? And yet without the downsides of that socket type, such as the risk of lingering inodes?
Welcome to the Transparent Inter Process Communication service, TIPC for short, which gives you all of this, and a lot more.
A fundamental concept in TIPC is Service Addressing, which makes it possible for a programmer to choose an address, bind it to a server socket, and let client programs use only that address when sending messages.
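As a minimal sketch of what this looks like on Linux (assuming a kernel with the tipc module loaded and the <linux/tipc.h> header available; the service type 18888 and instance 17 are arbitrary example values chosen by the application):

    /* Minimal sketch of TIPC service addressing.  Service type 18888 and
     * instance 17 are example values; any application-chosen numbers work
     * the same way. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <linux/tipc.h>

    #define SERVICE_TYPE 18888
    #define SERVICE_INST 17

    static struct sockaddr_tipc service_addr(void)
    {
        struct sockaddr_tipc a;
        memset(&a, 0, sizeof(a));
        a.family = AF_TIPC;
        a.addrtype = TIPC_ADDR_NAME;        /* service address {type, instance} */
        a.scope = TIPC_CLUSTER_SCOPE;       /* visible on every cluster node    */
        a.addr.name.name.type = SERVICE_TYPE;
        a.addr.name.name.instance = SERVICE_INST;
        return a;
    }

    static void server(void)
    {
        int sd = socket(AF_TIPC, SOCK_RDM, 0); /* reliable datagram socket */
        struct sockaddr_tipc addr = service_addr();
        char buf[64];
        ssize_t n;

        if (bind(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return;
        }
        n = recv(sd, buf, sizeof(buf) - 1, 0);
        if (n >= 0) {
            buf[n] = '\0';
            printf("server received: %s\n", buf);
        }
        close(sd);
    }

    static void client(void)
    {
        int sd = socket(AF_TIPC, SOCK_RDM, 0);
        struct sockaddr_tipc addr = service_addr();
        const char *msg = "hello";

        /* No DNS lookup, no IP address: the client only needs the service
         * address the server chose, wherever in the cluster it is bound. */
        if (sendto(sd, msg, strlen(msg), 0,
                   (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("sendto");
        close(sd);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1 && strcmp(argv[1], "server") == 0)
            server();
        else
            client();
        return 0;
    }

The client neither knows nor cares on which node the server runs; the service address is translated to the right socket by TIPC at send time.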
Communication between any two nodes in a cluster is maintained by one or two Inter Node Links, which both guarantee data traffic integrity and monitor the peer node's availability.
By applying the Overlapping Ring Monitoring algorithm on the inter node links it is possible to scale TIPC clusters up to 1000 nodes while maintaining a neighbor failure discovery time of 1-2 seconds. For smaller clusters this time can be made much shorter.
Neighbor Node Discovery in the cluster is done by Ethernet broadcast or UDP multicast, when either of those services is available. If not, configured peer IP addresses can be used.
When running TIPC in single node mode no configuration whatsoever is needed. When running in cluster mode TIPC must, as a minimum, be given a node address (before Linux 4.17) and be told which interface to attach to. The "tipc" configuration tool makes it possible to add and maintain many more configuration parameters.
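As a sketch of that minimum cluster-mode setup with the "tipc" tool (the node address 1.1.1, the node identity "node1" and the interface name eth0 are placeholder examples, and exact subcommands may vary between tool versions):

    # Before Linux 4.17: assign a node address, then attach an interface
    tipc node set address 1.1.1
    tipc bearer enable media eth device eth0

    # From Linux 4.17 on: a node identity can be set instead (or derived
    # automatically), so enabling a bearer is normally all that is needed
    tipc node set identity node1
    tipc bearer enable media eth device eth0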
TIPC message transfer latency is better than that of any other known protocol. Maximal byte throughput for inter-node connections is still somewhat lower than for TCP, while TIPC is superior for intra-node and inter-container throughput on the same host.
The TIPC user API has support for C, Python, Perl, Ruby, D and Go.