Relays and Portals

Ockam Relays make it easy to traverse NATs and run end-to-end protocols between Ockam Nodes in far away private networks. Ockam Portals make existing protocols work over Ockam Routing.

In the previous section, we learned how Ockam Routing and Transports create a foundation for end-to-end application layer protocols. When discussing Transports, we put together a specific example communication topology – a transport bridge.

Bridges

Node n1 wants to access a service on node n3, but it cannot connect to n3 directly. This can happen for many reasons: n3 may be in a separate IP subnet, the communication from n1 to n2 may use UDP while the communication from n2 to n3 uses TCP, or other similar constraints may apply. In this topology, n2 acts as a bridge or gateway between the two separate networks, enabling end-to-end protocols between n1 and n3 even though they are not directly connected.
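
As a refresher, here is a rough sketch of that bridge topology, using only commands that appear in this guide. The listener ports are arbitrary, and the placeholders in angle brackets stand in for the hex worker addresses printed when each tcp-connection is created; substitute the values from your own run.

» ockam node create n3 --tcp-listener-address=127.0.0.1:8000
» ockam node create n2 --tcp-listener-address=127.0.0.1:7000
» ockam service start hop --at n2
» ockam tcp-connection create --from n2 --to 127.0.0.1:8000

» ockam node create n1
» ockam tcp-connection create --from n1 --to 127.0.0.1:7000
» ockam message send hello --from n1 \
    --to /worker/<n1 connection address>/service/hop/worker/<n2 connection address>/service/uppercase
HELLO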

Relays

It is common, however, to encounter communication topologies where the machine that provides a service is unwilling, or not allowed, to open a listening port or expose a bridge node to other networks. This is a common security best practice in enterprise environments, home networks, OT networks, and VPCs across clouds, and application developers usually have no control over these infrastructure and operations decisions. This is where relays are useful.

Relays make it possible to establish end-to-end protocols with services operating in a remote private network, without requiring a remote service to expose listening ports to an outside hostile network like the Internet.

Delete any existing nodes and then try this new example:

» ockam node create n2 --tcp-listener-address=127.0.0.1:7000

» ockam node create n3
» ockam service start hop --at n3
» ockam relay create n3 --at /node/n2 --to /node/n3
     ✔︎ Now relaying messages from /node/n2/service/25716d6f86340c3f594e99dede6232df → /node/n3/service/forward_to_n3

» ockam node create n1
» ockam tcp-connection create --from n1 --to 127.0.0.1:7000
» ockam message send hello --from n1 --to /worker/603b62d245c9119d584ba3d874eb8108/service/forward_to_n3/service/uppercase
HELLO

In this example, the direction of the second TCP connection is reversed in comparison to our first example that used a bridge. n2 is the only node that has to listen for TCP connections.

Node n2 is running a relay service. n3 makes an outgoing TCP connection to n2 and requests a forwarding address from the relay service. n3 then becomes reachable via n2 at the address /service/forward_to_n3.

Node n1 connects with n2 and routes messages to n3 via its forwarding relay.

The message in the above example took a route very similar to the one in our earlier bridge example, except that the direction of the second TCP connection is reversed. The relay worker on n2 remembers the route back to n3, so n1 only has to get the message to the forwarding relay and everything just works.

Using this simple topology rearrangement, Ockam Routing makes it possible to establish end-to-end protocols between applications that are running in completely private networks.

We can traverse NATs and pierce through network boundaries. And since this is all built on a very simple application layer routing protocol, we can chain any number of transport connection hops, using any transport protocol, and mix and match bridges with relays to create end-to-end protocols in any communication topology.
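
For example, continuing with the nodes from the relay example above, we can splice the hop service we started on n3 into the route and still reach the uppercase service. This is only a sketch of route composition; the worker address below is the one from n1's TCP connection in the example above, so substitute the address from your own run.

» ockam message send hello --from n1 \
    --to /worker/603b62d245c9119d584ba3d874eb8108/service/forward_to_n3/service/hop/service/uppercase
HELLO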

Portals

Portals make existing protocols work over Ockam Routing without changing any code in the existing applications.

Continuing from our Relays example, start a Python-based web server to represent a sample web service. This web service listens on 127.0.0.1:9000. Then create a TCP Portal Outlet and a TCP Portal Inlet, and test the portal with curl; each step is explained below.

» python3 -m http.server --bind 127.0.0.1 9000

» ockam tcp-outlet create --at n3 --from /service/outlet --to 127.0.0.1:9000
» ockam tcp-inlet create --at n1 --from 127.0.0.1:6000 \
    --to /worker/603b62d245c9119d584ba3d874eb8108/service/forward_to_n3/service/hop/service/outlet

» curl --head 127.0.0.1:6000
HTTP/1.0 200 OK
...

The tcp-outlet command creates a TCP Portal Outlet that makes 127.0.0.1:9000 available at worker address /service/outlet on n3. We already have a forwarding relay for n3 on n2 at /service/forward_to_n3.

The tcp-inlet command creates a TCP Portal Inlet on n1 that listens for TCP connections on 127.0.0.1:6000. For every new connection, the inlet creates a portal that follows the --to route all the way to the outlet. As the inlet receives TCP data, it chunks and wraps that data into Ockam Routing messages and sends them along the supplied route. The outlet receives these routing messages, unwraps them to extract the TCP data, and sends that data along to the target web service on 127.0.0.1:9000. It all just works, seamlessly.

The HTTP requests from curl enter the inlet on n1, travel to n2, and are relayed back to n3 via its forwarding relay to reach the outlet and onward to the Python-based web service. Responses take the same return route back to curl.
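
Since every inlet connection simply creates a new portal along its --to route, more than one inlet can point at the same outlet. As a sketch, assuming the same nodes and worker address as above, a second inlet on another local port should behave identically:

» ockam tcp-inlet create --at n1 --from 127.0.0.1:6001 \
    --to /worker/603b62d245c9119d584ba3d874eb8108/service/forward_to_n3/service/hop/service/outlet

» curl --head 127.0.0.1:6001
HTTP/1.0 200 OK
...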

TCP Inlets and Outlets work for a large number of TCP-based protocols, like HTTP, and it is simple to implement portals for other transport protocols. There is a growing base of Ockam Portal Add-Ons in our GitHub repository.

Recap

To clean up and delete all nodes, run: ockam node delete --all

Ockam Routing and Transports combined with the ability to model Bridges and Relays make it possible to create end-to-end, application layer protocols in any communication topology - across networks, clouds, and boundaries.

Portals take this powerful capability a huge step forward by making it possible to apply these end-to-end protocols and their guarantees to existing applications, without changing any code!

This lays the foundation to make both new and existing applications - end-to-end encrypted and secure-by-design.

If you're stuck or have questions at any point, please reach out to us.

Next, let's learn how to create cryptographic identities and store secret keys in safe vaults.
