
Proxy connection is not reestablished and questions about proxies in general #1609

Open
@anyc

Description

Hello,

I am currently trying out libcoap's proxy functions, as you suggested in #1577. It looks like the proxy connection to the upstream server is not reestablished after it is disconnected, e.g., if the upstream server restarts for some reason:

Mar 21 15:55:15.861 DEBG ** [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: lg_xmit 0x72447080 released
Mar 21 15:55:17.662 DEBG * /tmp/coap-proxy-client <-> /tmp/my.socket TCP : netif: recv 18 bytes
v:Reliable c:4.04 {01} [ ]
Mar 21 15:55:17.662 DEBG ** /tmp/coap-proxy-client <-> /tmp/my.socket TCP : lg_crcv 0x72432a40 released
Mar 21 15:55:17.663 DEBG ** process upstream incoming 4.04 response:
Mar 21 15:55:17.663 DEBG * [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: netif: sent 34 bytes
Mar 21 15:55:17.664 DEBG * [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: dtls: sent 5 bytes
v:1 t:CON c:4.04 i:f65e {07} [ ]
Mar 21 15:55:17.664 DEBG ** [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: mid=0xf65e: added to retransmit queue (2906ms)
v:Reliable c:4.04 {0b} [ ]
Mar 21 15:55:17.664 DEBG ** /tmp/coap-proxy-client <-> /tmp/my.socket TCP : lg_crcv 0x72452d40 released
Mar 21 15:55:17.665 DEBG ** process upstream incoming 4.04 response:
Mar 21 15:55:17.665 DEBG ** [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: mid=0xf65f: delayed
v:Reliable c:4.04 {03} [ ]
Mar 21 15:55:17.665 DEBG ** /tmp/coap-proxy-client <-> /tmp/my.socket TCP : lg_crcv 0x724327c0 released
Mar 21 15:55:17.665 DEBG ** process upstream incoming 4.04 response:
Mar 21 15:55:17.666 DEBG ** [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: mid=0xf660: delayed
v:Reliable c:4.04 {09} [ ]
Mar 21 15:55:17.666 DEBG ** /tmp/coap-proxy-client <-> /tmp/my.socket TCP : lg_crcv 0x724499c0 released
Mar 21 15:55:17.666 DEBG ** process upstream incoming 4.04 response:
Mar 21 15:55:17.667 DEBG ** [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: mid=0xf661: delayed
v:Reliable c:4.04 {07} [ ]
Mar 21 15:55:17.667 DEBG ** /tmp/coap-proxy-client <-> /tmp/my.socket TCP : lg_crcv 0x72431c80 released
Mar 21 15:55:17.667 DEBG ** process upstream incoming 4.04 response:
Mar 21 15:55:17.668 DEBG ** [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: mid=0xf662: delayed
v:Reliable c:4.04 {05} [ ]
Mar 21 15:55:17.671 DEBG ** /tmp/coap-proxy-client <-> /tmp/my.socket TCP : lg_crcv 0x72432540 released
Mar 21 15:55:17.671 DEBG ** process upstream incoming 4.04 response:
Mar 21 15:55:17.671 DEBG ** [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: mid=0xf663: delayed
Mar 21 15:55:17.672 DEBG * /tmp/coap-proxy-client <-> /tmp/my.socket TCP : netif: failed to receive any bytes (Connection reset by peer) state 4
Mar 21 15:55:17.672 DEBG ***/tmp/coap-proxy-client <-> /tmp/my.socket TCP : session disconnected (COAP_NACK_NOT_DELIVERABLE)
Mar 21 15:55:17.672 DEBG ***EVENT: COAP_EVENT_TCP_CLOSED
Mar 21 15:55:17.673 DEBG ***EVENT: COAP_EVENT_SESSION_CLOSED
Mar 21 15:55:36.463 DEBG * [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: netif: recv 68 bytes
Mar 21 15:55:36.463 DEBG * [fe80::20f:69ff:feff:d6c3]:5684 <-> [fe80::c02b:fc8f:99e5:337c]:51688 (if7) DTLS: dtls: recv 39 bytes
v:1 t:CON c:GET i:86a9 {01c7} [ Uri-Path:my, Uri-Path:path, Request-Tag:0xfc5e6c15 ]
Mar 21 15:55:36.464 DEBG call custom handler for resource '- Rev Proxy -' (3)
Mar 21 15:55:36.464 DEBG coap_send: Socket closed
Mar 21 15:55:36.464 DEBG proxy: upstream PDU send error
error forward to /tmp/my.socket

After looking at the `coap_proxy_check_timeouts()` and `coap_proxy_get_ongoing_session()` functions, I assume it might already be sufficient to provide a public function that sets `proxy->ongoing` to NULL, which I could call from my event callback when the session is closed. However, I am not sure how the application can detect that a closed session is one of libcoap's internal proxy sessions.
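Something like the following is what I have in mind. This is only a sketch: `coap_proxy_ongoing_clear()` is the hypothetical public function proposed above and does not exist in libcoap today; the rest uses the existing event API.

```c
#include <coap3/coap.h>

/* Sketch only: coap_proxy_ongoing_clear() is the HYPOTHETICAL public
 * function proposed above. It would set proxy->ongoing to NULL for the
 * proxy entry that owns the given upstream session. */
static int
event_handler(coap_session_t *session, const coap_event_t event) {
  switch (event) {
  case COAP_EVENT_TCP_CLOSED:
  case COAP_EVENT_SESSION_CLOSED:
    /* Open question: how does the application detect that this session
     * is one of libcoap's internal proxy sessions? */
    coap_proxy_ongoing_clear(session); /* hypothetical */
    break;
  default:
    break;
  }
  return 0;
}

/* registered once on the context:
 *   coap_register_event_handler(ctx, event_handler);
 */
```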

As the proxy_list is registered in the context, would it be possible for libcoap to handle the reconnect without involving the application, and maybe even call `coap_proxy_forward_response()` itself? Or would it make more sense to create a separate (libcoap-internal) context for the proxy sessions?

In my specific case, another solution might be for my application to pass the session to the upstream server as part of the `coap_proxy_forward_request()` call, since I have to create a session anyway for sending my own requests to the upstream server. My application would then be aware of the upstream session and could also identify upstream responses and events more easily (see the sketch below). What do you think?
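To make that concrete, here is a minimal sketch of a reverse-proxy request handler. The `coap_proxy_forward_request()` call is the existing API (if I read it correctly); `coap_proxy_forward_request2()` is purely hypothetical, with the app-owned upstream session added as an extra parameter:

```c
#include <coap3/coap.h>

/* Assumed set up elsewhere: the session my application already created
 * for its own requests to the upstream server, and the server list. */
extern coap_session_t *upstream_session;
extern coap_proxy_server_list_t server_list;

static void
hnd_proxy(coap_resource_t *resource, coap_session_t *session,
          const coap_pdu_t *request, const coap_string_t *query,
          coap_pdu_t *response) {
  (void)query;

  /* Current API: libcoap creates and owns the upstream session
   * internally; on error, the response code is set by libcoap. */
  if (!coap_proxy_forward_request(session, request, response, resource,
                                  NULL /* cache_key */, &server_list)) {
    coap_log_debug("hnd_proxy: request not forwarded\n");
  }

  /* HYPOTHETICAL variant (the proposal above): the application passes
   * in the upstream session it already maintains, so it can match
   * upstream responses and events itself. Not a real libcoap function:
   *
   * coap_proxy_forward_request2(session, request, response, resource,
   *                             NULL, &server_list, upstream_session);
   */
}
```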

I also noticed that the local unix domain socket always has the same path, which might cause problems (e.g., collisions between multiple proxy instances). I think it would be best to create a safe temporary directory with mkdtemp() and create the socket file inside it. Would you agree? I can try to come up with a pull request for this.
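A minimal sketch of what I mean; the path prefix and the helper name are just illustrative:

```c
#include <stdio.h>
#include <stdlib.h>

/* Build a unique unix-domain socket path inside a private temporary
 * directory instead of reusing a fixed path. */
static int
make_socket_path(char *buf, size_t buflen) {
  char tmpl[] = "/tmp/coap-proxy-XXXXXX";

  if (mkdtemp(tmpl) == NULL)  /* creates a mode-0700 directory */
    return -1;
  if ((size_t)snprintf(buf, buflen, "%s/client", tmpl) >= buflen)
    return -1;                /* path would be truncated */
  return 0;
}
```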

Regarding the proxy documentation, it looks like observing resources on the upstream server only works if I set `idle_timeout_secs` to zero; otherwise the idle upstream session is torn down and the observations go with it. Maybe an additional note in the documentation would help others. If you agree, I can create a pull request.
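For reference, my working configuration looks roughly like this (a sketch, assuming the proxy structures of current libcoap; the enum value matches the '- Rev Proxy -' resource seen in the log above):

```c
#include <string.h>
#include <coap3/coap.h>

/* With idle_timeout_secs set to 0, the idle upstream session is never
 * torn down, so observe notifications keep arriving through the proxy. */
static void
setup_reverse_proxy(coap_proxy_server_list_t *server_list,
                    coap_proxy_server_t *upstream /* uri/DTLS filled elsewhere */) {
  memset(server_list, 0, sizeof(*server_list));
  server_list->entry = upstream;
  server_list->entry_count = 1;
  server_list->type = COAP_PROXY_REVERSE_STRIP;
  server_list->idle_timeout_secs = 0; /* 0 = no idle timeout */
}
```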

After writing all this, I realize that my application has to be aware of the observations my clients establish on the upstream server's resources. If the upstream server restarts, the clients still believe they will be notified, as their connection to my application persists, even though the upstream server has lost all knowledge of the observations from before its restart. My goal is that clients do not have to know that some resources are handled by a separate process (one that might have higher privileges). Maybe this goal is too specific to my use case and I will have to keep using my own proxy routines, copying the code from the coap_proxy_forward_* functions.

But this brings me back to the issue we discussed in the previous ticket: a resource created by the unknown handler has to inherit the observe "information" from the unknown handler, as a client should not have to know that the observe flag of its first request is "ignored". I could query the upstream server for all resources after startup and create them proactively, but a low boot time is a crucial requirement for our device, and this might also introduce race conditions.

Thank you for your help!
