Kubernetes
Let's connect a Node.js app in one Kubernetes cluster with a PostgreSQL database in another private Kubernetes cluster.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, Curl, Kind, and Kubectl. Please set up these tools for your operating system, then run the following commands:
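The commands below are a sketch of that setup; the repository path is an assumption based on where the other portal examples live in the build-trust/ockam repo:

```sh
# Install Ockam Command and add it to this shell's environment.
curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

# Get the code and run the example. Adjust the path if this example
# lives elsewhere in the repository.
git clone --depth 1 https://github.com/build-trust/ockam
cd ockam/examples/command/portals/databases/postgres/kubernetes
./run.sh
```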
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script that you ran above, and its accompanying files, are full of comments and are meant to be read. The example's setup takes only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function, which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of that project, and get a project membership credential.
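In outline, that first step is a single command; this is a sketch, and run.sh wraps it with error handling:

```sh
# Create a new identity, sign in to Ockam Orchestrator, set up a new
# project, make this identity its administrator, and receive a project
# membership credential.
ockam enroll
```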
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s Kubernetes cluster. The second ticket is for the Ockam node that will run in Analysis Corp.’s Kubernetes cluster.
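A sketch of how run.sh could mint those two tickets. The attribute names match the access control policies described below; the exact flags used by run.sh may differ slightly:

```sh
# Ticket for Bank Corp.'s node: valid for 10 minutes, redeemable once.
# It grants the attribute postgres-outlet=true and permission to create
# a relay at the relay address: postgres.
bank_corp_ticket="$(ockam project ticket \
  --expires-in 10m --usage-count 1 \
  --attribute postgres-outlet --relay postgres)"

# Ticket for Analysis Corp.'s node: valid for 10 minutes, redeemable
# once. It grants the attribute postgres-inlet=true.
analysis_corp_ticket="$(ockam project ticket \
  --expires-in 10m --usage-count 1 \
  --attribute postgres-inlet)"
```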
In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It uses Kubernetes secrets to give tickets to Ockam nodes that are being provisioned in Bank Corp.’s and Analysis Corp.’s Kubernetes clusters.
The run function takes the enrollment tickets, sets them as Kubernetes secrets, and uses kind with kubectl to create Bank Corp.’s and Analysis Corp.’s Kubernetes clusters.
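Roughly, assuming the hypothetical cluster names bank-corp and analysis-corp and a secret named ockam-ticket (run.sh may use different names):

```sh
# Create each company's isolated Kubernetes cluster with kind.
kind create cluster --name bank-corp
kind create cluster --name analysis-corp

# Hand each Ockam node its one-time enrollment ticket as a secret.
kubectl --context kind-bank-corp create secret generic ockam-ticket \
  --from-literal=ticket="$bank_corp_ticket"
kubectl --context kind-analysis-corp create secret generic ockam-ticket \
  --from-literal=ticket="$analysis_corp_ticket"
```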
Bank Corp.’s Kubernetes manifest defines a pod and containers to run in Bank Corp.’s isolated Kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images, and calls kubectl apply to start the pod and its containers.
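For Bank Corp.’s cluster, that boils down to something like the following; the image name, directory, and manifest file name are illustrative:

```sh
# Build the companion Ockam node image, make it visible inside the kind
# cluster, then start the pod and containers defined in the manifest.
docker build --tag ockam-postgres-outlet bank_corp/
kind load docker-image ockam-postgres-outlet --name bank-corp
kubectl --context kind-bank-corp apply -f bank_corp/pod.yaml
```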
One of the containers defined in Bank Corp.’s Kubernetes manifest runs a PostgreSQL database and makes it available at localhost:5432 inside its pod.
Another container defined inside that same pod runs an Ockam node as a companion to the postgres container. The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Bank Corp cluster, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-outlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay address: postgres. The run function gave the enrollment ticket permission to use this relay address.
Next, the entrypoint sets an access control policy that only allows project members that possess a credential with the attribute postgres-inlet="true" to connect to tcp portal outlets on this node. It then creates a tcp portal outlet to postgres at localhost:5432.
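Pieced together, the entrypoint's Ockam commands look roughly like this. It's a sketch: the ticket is read from the Kubernetes secret (here an environment variable named ENROLLMENT_TICKET, an assumption), and policy flags vary a little between Ockam Command versions:

```sh
# Create a new identity and redeem the one-time ticket to enroll it as
# a project member with the attribute postgres-outlet=true.
ockam identity create
ockam project enroll "$ENROLLMENT_TICKET"

# Start a node, create the relay at the relay address: postgres, allow
# only members with postgres-inlet="true" to reach tcp portal outlets,
# then expose the database through an outlet.
ockam node create
ockam relay create postgres
ockam policy create --resource tcp-outlet \
  --expression '(= subject.postgres-inlet "true")'
ockam tcp-outlet create --to localhost:5432
```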
Analysis Corp.’s Kubernetes manifest defines a pod and containers to run in Analysis Corp.’s isolated Kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images, and calls kubectl apply to start the pod and its containers. The manifest defines a pod with two containers: an Ockam node container and an app container.
The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Analysis Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-inlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential. It then sets an access control policy that only allows project members that possess a credential with the attribute postgres-outlet="true" to connect to tcp portal inlets on this node.
Next, the entrypoint creates a tcp portal inlet that listens on all of the pod's interfaces at 0.0.0.0:15432. This makes the remote postgres available at localhost:15432 within Analysis Corp.’s pod, which also runs the app container.
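The mirror-image sketch for Analysis Corp.'s entrypoint, under the same assumptions; the route in the final command follows Ockam's documented addressing convention for a relay named postgres, though the script may spell it differently:

```sh
# Create a new identity and redeem the one-time ticket to enroll it as
# a project member with the attribute postgres-inlet=true.
ockam identity create
ockam project enroll "$ENROLLMENT_TICKET"

# Start a node, allow only members with postgres-outlet="true" to reach
# tcp portal inlets, then open an inlet that forwards 0.0.0.0:15432,
# through the relay in the project, to the remote outlet.
ockam node create
ockam policy create --resource tcp-inlet \
  --expression '(= subject.postgres-outlet "true")'
ockam tcp-inlet create --from 0.0.0.0:15432 \
  --to /project/default/service/forward_to_postgres/secure/api/service/outlet
```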
The app container is created using this dockerfile, which runs this app.js file on startup. The app.js file is a Node.js app that connects to postgres at localhost:15432, creates a table in the database, inserts some data into the table, queries it back, and prints it.
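To poke at the portal by hand from inside Analysis Corp.'s pod, the same round trip can be driven with psql, assuming the postgres client and the example's default credentials are available; the table and column names here are made up:

```sh
# Talk to the remote database through the local inlet, as app.js does.
psql --host localhost --port 15432 --username postgres <<'SQL'
CREATE TABLE IF NOT EXISTS users (name TEXT);
INSERT INTO users (name) VALUES ('Alice');
SELECT name FROM users;
SQL
```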
We connected a Node.js app in one Kubernetes cluster with a postgres database in another Kubernetes cluster over an end-to-end encrypted portal.
Sensitive business data in the postgres database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with postgres can be easily revoked.
Analysis Corp. does not get unfettered access to Bank Corp.’s cluster. It gets access only to run queries on the postgres server. Bank Corp. does not get unfettered access to Analysis Corp.’s cluster. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members with valid credentials can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their Kubernetes clusters are completely closed and protected from any attacks from the Internet.
To delete all containers and images:
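If your copy of the example follows the usual run.sh convention, a single cleanup command does it; deleting the kind clusters by hand removes the cluster containers as well (cluster names as assumed above):

```sh
# Tear down everything the example created.
./run.sh cleanup

# Or delete the kind clusters directly.
kind delete cluster --name bank-corp
kind delete cluster --name analysis-corp
```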