I have a client whose Postgres database I need to connect to periodically. This database is currently hosted on their server, let's call it psql.example.com, with standard port 5432. They have it configured so that I can connect to it from my "primary" server, let's say that's on IP 123.45.67.89, using my_username and my_password as credentials.
The thing is, it's not my primary server that needs to connect to it, but rather my load-balanced AWS nodes. It seems that I should be able to run something on my primary server with ssh -L or iptables, which will listen for incoming connections (on a different port, say 42042, to avoid potential conflicts later on) from my load-balanced nodes (easy to secure that end of things with standard AWS functionality), and transparently forward them on to their database, passing any response back.
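For the iptables route, here is a sketch of what that could look like. The port and destination are from the question; DB_IP is a placeholder for psql.example.com's resolved address (DNAT wants an IP, and hostname resolution in iptables happens only once, at rule-insert time), and this assumes the primary server is a Linux box where you have root:

```shell
# On the primary server (123.45.67.89), run as root.
DB_IP="203.0.113.10"   # placeholder; substitute psql.example.com's real address

# Let the kernel forward packets at all.
sysctl -w net.ipv4.ip_forward=1

# Redirect anything arriving on 42042 to the client's Postgres.
iptables -t nat -A PREROUTING -p tcp --dport 42042 \
    -j DNAT --to-destination "${DB_IP}:5432"

# Rewrite the source so replies route back through this server, and so
# the client's DB sees connections coming from 123.45.67.89, the IP on
# their allow-list.
iptables -t nat -A POSTROUTING -p tcp -d "$DB_IP" --dport 5432 -j MASQUERADE
```

The MASQUERADE rule is what keeps their existing IP restriction working: without it, the forwarded packets would carry the AWS nodes' source addresses.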
I've tried things like ssh -L 42042:psql.example.com:5432 my_username@123.45.67.89, but this doesn't work; if I debug, I see that it's trying to connect to port 22, not 5432. It seems that I shouldn't be the only one wanting to do something like this, but most of what I've found is about using SSH to secure the connection to the DB server. It doesn't seem like any of the my_username@123.45.67.89 bit should be required on my primary server; that should be more for securing the connection from my end node to my server. Security of the connection to their server would be done through their existing IP limits and then PSQL credentials.
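Since the primary server is already allowed to hit their 5432 directly, a plain userspace TCP relay would also do this without involving SSH at all. socat is one such tool (its availability on the primary server is an assumption on my part), with the port and host taken from the question:

```shell
# On the primary server: accept connections on 42042 and relay each one
# to the client's Postgres. fork serves multiple clients concurrently;
# reuseaddr lets the listener restart promptly after being stopped.
socat TCP-LISTEN:42042,fork,reuseaddr TCP:psql.example.com:5432
```

Restricting who can reach 42042 would still be done with AWS security groups, as described above.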
They're running quite an old version of Postgres, 8.1, which I don't think would make any difference for a pure port-forwarding solution, if that exists, but might limit other options.
1 Answer
This works for me.
You say the ssh command is trying to connect to port 22. That is what it should be doing, in the absence of -p telling it to do otherwise. It connects to port 22 on host 123.45.67.89 to set up the tunnel. If 123.45.67.89 runs its sshd on a nonstandard port, then you need to specify it with -p.
Once that is done, connections to localhost:42042 on the same machine where you ran the ssh command (your AWS node) will get routed through 123.45.67.89 and on to psql.example.com on port 5432.
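Put together, the tunnel-on-the-node approach described here might look like the following (my_username and my_database are placeholder names carried over from the question; -N just means "forward ports, don't run a remote command"):

```shell
# On an AWS node: open the tunnel in the background.
ssh -N -L 42042:psql.example.com:5432 my_username@123.45.67.89 &

# Connect through it. The client's Postgres sees the connection
# arriving from 123.45.67.89, the allow-listed IP.
psql -h localhost -p 42042 -U my_username my_database
```

If you instead run the tunnel on the primary server and want the AWS nodes to reach it remotely, note that ssh binds the forwarded port to localhost by default; you would need -g, or an explicit bind address such as -L 0.0.0.0:42042:psql.example.com:5432, to make 42042 reachable from outside.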
- That seems like a slightly different use case? My primary server's IP is allowed to connect directly to their port 5432; I don't need to go through SSH, it seemed like that was just a convenient port-forwarding option. Maybe this is a fatal misunderstanding on my part, and a key reason why I couldn't get it to work. — Greg Schmidt, Aug 7, 2020 at 2:31
- My use case was just to test the command you showed, and the use case you described. So if the use cases differ, I don't understand how they differ. — jjanes, Aug 7, 2020 at 13:39
- The NAT gateway implementation has resolved this for me, so I have no remaining incentive to chase down whether some assumption in my question was faulty. It does seem likely, so I'm accepting your answer. — Greg Schmidt, Aug 13, 2020 at 1:08
The ssh command tries to connect to the wrong port. Trying to connect directly to Postgres from my actual nodes gives a timeout, of course, because those IPs aren't permitted. Not sure what error I'd get by trying to connect through the tunnel without the tunnel running; that didn't seem worth trying.