
Sunday, May 25, 2014

Solaris: Loopback Optimization and TCP_FUSION

Abstract:
Since the early days of computing, the slowest interconnects have always been between platforms, through input and output channels. The movement from serial ports to higher-speed communication channels such as TCP/IP became the standard mechanism for applications to communicate not only between physical systems, but also on the same system! During Solaris 10 development, a capability called TCP_FUSION was introduced to increase the performance of the TCP/IP stack between applications on the same server. Some application vendors may be unaware of the safeguards built into Solaris 10 to prevent denial-of-service conditions or starvation of applications caused by the high performance of TCP writers on the loopback interface.
Functionality:
Authors Brendan Gregg and Jim Mauro describe the functionality of TCP_FUSION in their book, DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X, and FreeBSD:
Loopback TCP packets on Solaris may be processed by TCP fusion, a performance feature that bypasses the IP layer. These are packets over a fused connection, which will not be visible using the ip:::send and ip:::receive probes (but they can be seen using the tcp:::send and tcp:::receive probes). When TCP fusion is enabled (which it is by default), loopback connections become fused after the TCP handshake, and all subsequent data packets take a shorter code path that bypasses the IP layer.
A modern application hosted under Solaris can thus demonstrate a significant loopback performance benefit over the same application hosted under alternative operating systems.
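
One can verify whether fusion is occurring with DTrace, using the very probes the quote above names. A minimal sketch, assuming a Solaris 10 or later system with DTrace available: count TCP-layer against IP-layer events while loopback traffic is flowing; on a fused connection, the tcp::: probes fire while the matching ip::: probes stay silent.

dtrace -n '
tcp:::send,tcp:::receive,ip:::send,ip:::receive
{
        /* tally events per probe; fused traffic shows up only under tcp::: */
        @[probename] = count();
}
tick-10s { exit(0); }'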

Demonstrated Benefits:
TCP socket performance, under languages such as Java, may demonstrate a significant improvement, often shocking software developers!
While comparing Java TCP socket performance between RH Linux and Solaris, one of my tests is done using a Java client sending strings and reading the replies from a Java echo server. I measure the time spent to send and receive the data (i.e., the loopback round trip).
The test is run 100,000 times (more iterations give similar results). From my tests, Solaris is 25%-30% faster on average than RH Linux, on the same computer with default system and network settings, the same JVM arguments (if any), etc.
The answer seems clear: TCP_FUSION is the primary reason.
In Solaris this is called "TCP Fusion," which means two local TCP endpoints will be "fused." Thus they will bypass the TCP data path entirely.
Testing confirms this performance benefit of stock Solaris over Linux.
Nice! I've used the command
echo 'do_tcp_fusion/W 0' | mdb -kw

and managed to reproduce times close to what I've experienced on RH Linux. I switched back to re-enable it using
echo 'do_tcp_fusion/W 1' | mdb -kw

Thanks both for your help.
Once people understand the benefits of TCP_FUSION, they will seldom go back.

Old Issues:
Because TCP_FUSION is on by default, any application hosted under Solaris 10 or above receives this huge performance boost automatically. Some early unpatched releases of Solaris 10, however, could panic because of kernel memory usage. The situation, workaround, and resolution are described below:

Solaris 10 systems may panic in the tcp_fuse_rcv_drain() TCP/IP function when using TCP loopback connections, where both ends of the connection are on the same system. This may allow a local unprivileged user to cause a Denial of Service (DoS) condition on the affected host.
To work around the described issue until patches can be installed, disable TCP Fusion by adding the following line to the "/etc/system" file and rebooting the system: set ip:do_tcp_fusion = 0x0.
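As a sketch of that workaround (the set line is taken from the advisory above; the mdb check is the same one shown later in this post):
# append the workaround, then reboot for it to take effect
echo 'set ip:do_tcp_fusion = 0x0' >> /etc/system
# after the reboot, confirm fusion is disabled (0 = off)
echo 'do_tcp_fusion/D' | mdb -k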
This issue is addressed in the following releases: SPARC Platform Solaris 10 with patch 118833-23 or later and x86 Platform Solaris 10 with patch 118855-19 or later.
Disabling the TCP_FUSION feature is no longer needed for DoS protection.

Odd Application Behavior:
If an application running under Solaris does not experience a performance boost, but rather a performance degradation, it is possible your ISV does not completely understand TCP_FUSION, or that the symptoms stem from an odd code implementation. When developers expect the receiving application on a socket to respond slowly, the result can be bad behavior with TCP sockets accelerated by Solaris.

Instead of optimizing the behavior of their receiving application to take advantage of the 25%-30% potential performance benefit, some application vendors chose to suggest disabling TCP_FUSION with their applications, among them Riverbed's Stingray Traffic Manager and Veritas NetBackup (which saw a 4x slowdown). Those unoptimized TCP-reading applications, which perform reads 8x slower than their TCP-writing counterparts, perform extremely poorly in the TCP_FUSION environment.

Possible bad TCP_FUSION interaction?
There is a better way to debug this issue than shutting off the beneficial behavior. Blogger Steffen Weiberle at Oracle wrote pretty extensively on this.

First, one may want to understand whether it is being used. TCP_FUSION is usually in effect, but not always:
There are some exceptions to this, including when using IPsec, IPQoS, raw sockets, kernel SSL, or non-simple TCP/IP conditions, or when the two endpoints are on different squeues. A fused connection will revert to unfused if an IP Filter rule would drop a packet. However, TCP fusion is done in the general case.
When TCP_FUSION is enabled for an application, there is a risk that the TCP writer can provide data so fast that it starves the receiving application! Solaris OS developers anticipated this in their acceleration design.
With TCP fusion enabled (which it is by default in Solaris 10 6/06 and later, and in OpenSolaris), when a TCP connection is created between processes on a system, the necessary things are set up to transfer data from the sender to the receiver without sending it down and back up the stack. The typical flow control of filling a send buffer (defaults to 48K or the value of tcp_xmit_hiwat, unless changed via a socket operation) still applies. With TCP Fusion on, there is a second check, which is the number of writes to the socket without a read. The reason for the counter is to allow the receiver to get CPU cycles, since the sender and receiver are on the same system and may be sharing one or more CPUs. The default value of this counter is eight (8), as determined by tcp_fusion_rcv_unread_min.
Some ISV developers may have written their applications anticipating that TCP is slow, making the receiving application less efficient than the sending application. If the receiving application is 8x slower at servicing reads from the TCP socket, the OS will throttle the writer. Some vendors call this a "bug" in the OS.

When doing large writes, or when the receiver is actively reading, the buffer flow control dominates. However, when doing smaller writes, it is easy for the sender to end up with a condition where the number of consecutive writes without a read is exceeded, and the writer blocks, or if using non-blocking I/O, will get an EAGAIN error.
So now the symptoms are clear: TCP applications whose connections are on the same system experience slowdowns and may even receive EAGAIN errors.
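
One hedged way to confirm the symptom from outside the application is to trace the writer's system calls with truss(1) and watch for writes failing with EAGAIN (Err#11 in the output). The process ID below is hypothetical:

# trace write system calls of the suspect TCP writer (PID 1234 is an example)
truss -t write -p 1234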

Tuning Option: Increase Slow Reader Tolerance
If the TCP reading application is known to be 8x slower than the TCP writing application, one option is to raise the threshold at which the TCP writer becomes blocked, so that, for example, 32 consecutive writes can be issued [to a single read] before the OS blocks the writer as a safety measure. Steffen Weiberle also suggested:
To test this I suggested the customer change the tcp_fusion_rcv_unread_min on their running system using mdb(1). I suggested they increase the counter by a factor of four (4), just to be safe.
# echo "tcp_fusion_rcv_unread_min/W 32" | mdb -kw
tcp_fusion_rcv_unread_min: 0x8 = 0x20
Here is how you check what the current value is.
# echo "tcp_fusion_rcv_unread_min/D" | mdb -k
tcp_fusion_rcv_unread_min:
tcp_fusion_rcv_unread_min: 32
After running several hours of tests, the EAGAIN error did not return.
Tuning Option: Removing Slow Reader Protections
If the reading application is just poorly written and will never keep up with the writing application, another option is to remove the write-to-read protection entirely. Steffen Weiberle wrote:
Since then I have suggested they set tcp_fusion_rcv_unread_min to 0, to turn the check off completely. This will allow the buffer size and total outstanding write data volume to determine whether the sender is blocked, as it is for remote connections. Since the mdb change is only good until the next reboot, I suggested the customer change the setting in /etc/system.
* Set TCP fusion to allow unlimited outstanding writes up to the TCP send buffer set by default or by the application.
* The default value is 8.
set ip:tcp_fusion_rcv_unread_min=0
The send-buffer flow control remains in place: the writing application will still block if the kernel buffer fills, so turning this write-to-read ratio safety switch off will not crash Solaris.

Tuning Option: Disabling TCP_FUSION
This is the proverbial sledgehammer for inserting a tack into a cork board. Steffen Weiberle wrote:
To turn TCP Fusion off altogether, something I have not tested with, the variable do_tcp_fusion can be set from its default 1 to 0.
...
And I would like to note that in OpenSolaris only the do_tcp_fusion setting is available. With the delivery of CR 6826274, the consecutive write counting has been removed.
Network Management has not investigated what the changes were in the final releases of OpenSolaris, or in more recent Solaris 11 releases from Oracle, with regard to TCP_FUSION tuning.
Tuning Guidelines:
The assumption of Network Management is that the common systems administrator is working with well-designed applications under Solaris 10, where the application reader keeps up with the application writer. If there are ill-behaved applications under Solaris 10, but one is interested in maintaining the 25%-30% performance improvement, the tuning suggestions below will provide much better help than the typical ISV-suggested final step.

Check for TCP_FUSION - 0=off, 1=on (default)
SUN9999/root# echo "do_tcp_fusion/D" | mdb -k
do_tcp_fusion:
do_tcp_fusion: 1

Check the TCP_FUSION unread-to-written ratio - 0=off, 8=default
SUN9999/root# echo "tcp_fusion_rcv_unread_min/D" | mdb -k
tcp_fusion_rcv_unread_min:
tcp_fusion_rcv_unread_min: 8 
Quadruple the TCP_FUSION unread-to-written ratio and check the results:
SUN9999/root# echo "tcp_fusion_rcv_unread_min/W 32" | mdb -kw
tcp_fusion_rcv_unread_min: 0x8 = 0x20
SUN9999/root# echo "tcp_fusion_rcv_unread_min/D" | mdb -k
tcp_fusion_rcv_unread_min:
tcp_fusion_rcv_unread_min: 32
Disable the unread-to-written ratio and check the results:
SUN9999/root# echo "tcp_fusion_rcv_unread_min/W 0" | mdb -kw
SUN9999/root# echo "tcp_fusion_rcv_unread_min/D" | mdb -k
tcp_fusion_rcv_unread_min:
tcp_fusion_rcv_unread_min: 0
Finally, disable TCP_FUSION to lose the loopback performance benefits of Solaris, but keep your ISV happy.
SUN9999/root# echo "do_tcp_fusion/W 0" | mdb -kw
May this be helpful for Solaris 10 platform administrators, especially with Network Management platforms!

Thursday, February 16, 2012

Shut Down EMC Ionix (Voyence) NCM Port


Ever try to shut down EMC Ionix (formerly Voyence) NCM (Network Configuration Manager) related TCP port services by disabling /etc/init.d scripts, only to find that there are still sockets being listened on?

The Problem

It was noted, on an NCM or Voyence platform, that a port was still being listened on even though the startup scripts had been disabled.
sun9999/root# netstat -anf inet | grep 1029
*.1029 *.* 0 0 49152 0 LISTEN
Verify the Culprit

Was it really a part of EMC Ionix NCM or Voyence?
sun9999/root# telnet localhost 1029
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Welcome to EMC Proxy
Copyright (c) 2011 EMC Corporation

User Access Verification
Enter user name:
^]
telnet> quit
Connection to localhost closed.
Well, it appears that EMC is definitely at the root of it.

Not a Start/Stop Script?

Since all the start/stop scripts were disabled from starting up, what else could be the cause?

Under modern UNIX systems such as Solaris 10, there is a Service Management Facility (SMF).
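
As a quick sketch, one can also find the owning process directly by walking /proc with pfiles(1), using the port number from the netstat output above (note that pfiles briefly stops each process it inspects):

# find which process holds a socket on port 1029
for pid in /proc/[0-9]*; do
    pid=${pid##*/}
    pfiles $pid 2>/dev/null | grep 'port: 1029' > /dev/null && ps -o pid,args -p $pid
done

On a system like this one, the match is inetd, which points straight at the Service Management Facility.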

Track Down the Service

Check the port against the registered services file.
sun9999/root# grep telnetproxy /etc/services
telnetproxy 1029/tcp # telnetproxy
Check Against Service Management Facility

EMC appeared nice enough to name the service consistently across the infrastructure.
sun9999/root# inetadm | grep telnetproxy
enabled online svc:/network/telnetproxy/tcp:default

sun9999/root# svcs -a | grep telnetproxy
enabled 18:22:21 svc:/network/telnetproxy/tcp:default
Where is the Executable for the Service?

The inet service can be interrogated to reveal the executable being run.
sun9999/root# inetadm -l svc:/network/telnetproxy/tcp:default
SCOPE NAME=VALUE
name="telnetproxy"
endpoint_type="stream"
proto="tcp"
isrpc=FALSE
wait=FALSE
exec="/usr/sbin/in.telnetproxy"
user="root"
default bind_addr=""
default bind_fail_max=-1
default bind_fail_interval=-1
default max_con_rate=-1
default max_copies=-1
default con_rate_offline=-1
default failrate_cnt=40
default failrate_interval=60
default inherit_env=TRUE
default tcp_trace=FALSE
default tcp_wrappers=FALSE
default connection_backlog=10


sun9999/root# ls -al /usr/sbin/in.telnetproxy
-rwxr-xr-x 1 root voyence 1151 Feb 7 18:18 /usr/sbin/in.telnetproxy

EMC was kind enough to set the file's group to correctly identify the origin. It is safe to shut down this service.
sun9999/root# svcs svc:/network/telnetproxy/tcp:default
STATE STIME FMRI
online Feb_07 svc:/network/telnetproxy/tcp:default

sun9999/root# svcadm disable svc:/network/telnetproxy/tcp:default

sun9999/root# svcs svc:/network/telnetproxy/tcp:default
STATE STIME FMRI
disabled 18:22:21 svc:/network/telnetproxy/tcp:default

Verify the Telnet Proxy Disable

Check for the TCP port via netstat, to verify that disabling the service did the job.
sun9999/root# netstat -anf inet |grep 1029
sun9999/root#

Monday, February 13, 2012

Vonage and MSN Port Usage


Abstract:

Adding Voice over IP (VoIP) and Instant Messaging to a home is normally a simple process. The goal is often to increase communication while reducing telecommunications bills. Occasionally there are problems with access which require troubleshooting, or more advanced features are desired. A user may need to understand the protocols in order to better maintain security and limit the scope of attacks by viruses and worms.

Vonage Voice Adapters

Vonage is a low-cost VoIP phone provider service. Normally, not much needs to be done, except plug in a device. Here are the protocols which are required.
Service   TCP          UDP           Notes
DNS       -            53            Name Resolution
TFTP      21,69,2400   -             Firmware Upgrade
HTTP      80           -             Configuration
SIP       -            5061          pre-2005 Vonage devices
RTP       -            10000-20000   RTP (Voice) traffic

When a call is made, a random port between 10000 and 20000 is used for RTP (Voice) traffic. If any of these ports are blocked, you may experience one way or no audio.
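
If a Solaris box running IP Filter sits between the adapter and the Internet, a hedged sketch of the matching outbound allow rules might look like this (the interface name e1000g0 is an assumption, and the "9999 >< 20001" range notation means ports strictly between those values, i.e. 10000-20000):

# /etc/ipf/ipf.conf fragment for the Vonage ports above
pass out quick on e1000g0 proto udp from any to any port = 53 keep state
pass out quick on e1000g0 proto tcp from any to any port = 80 flags S keep state
pass out quick on e1000g0 proto udp from any to any port = 5061 keep state
pass out quick on e1000g0 proto udp from any to any port 9999 >< 20001 keep state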

Microsoft MSN and Windows Messenger

Microsoft provides various tools like MSN and Windows Messenger, but in order to get full functionality, users occasionally must forward ports through firewalls, expanding exposure to worms and viruses. Use these ports very carefully.
Service                          TCP         UDP                     Notes
Windows Messenger - voice        -           2001-2120, 6801, 6901   Computer to phone
MSN Messenger - file transfers   6891-6900   -                       Allows up to 10 simultaneous transfers
MSN Messenger - voice            6901        6901                    Voice communications computer to computer
MSN Messenger - text             1863        -                       Instant text messages

Knowing these ports may be helpful when you want to limit your environment's exposure to unfriendly viruses and worms.
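
Conversely, a minimal IP Filter sketch to shut that exposure off at a gateway (again assuming a hypothetical e1000g0 interface; "6890 >< 6902" covers ports 6891-6901):

# /etc/ipf/ipf.conf fragment blocking the Messenger ports above
block out quick on e1000g0 proto tcp from any to any port = 1863
block out quick on e1000g0 proto tcp from any to any port 6890 >< 6902
block out quick on e1000g0 proto udp from any to any port 6890 >< 6902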