Create an Ockam Portal between any application, to any database, in any environment.
In each example, we connect a nodejs app in one private network with a database in another private network.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
Please select an example to dig in:
Use one of the Ockam Snowflake connectors to build private connections to Snowflake in minutes.
We connect a nodejs app in one private network with a PostgreSQL database in another private network.
We connect a nodejs app in one private network with a MongoDB database in another private network.
We connect a nodejs app in one private network with an InfluxDB database in another private network.
Let’s build a simple example together. We will create an encrypted Ockam Portal from a psql microservice in Azure to a Postgres Database in AWS.
By the end of this page, you will understand:
the basic building blocks of Ockam,
the first steps you should take in your architecture, and
how to build an end-to-end encrypted portal between two private services.
Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io. After you complete this step you will have a Project in Ockam Orchestrator. A Project offers two services: a Membership Authority and a Relay service. More on both of those later.
Run the following commands to install Ockam Command on your dev machine.
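The install commands themselves are not shown here. As a sketch (verify the install URL and package name against the current Ockam docs, as they may change):

```shell
# Install Ockam Command (sketch; confirm the URL in the current Ockam docs)
curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash

# Or, on macOS with Homebrew:
# brew install build-trust/ockam/ockam
```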
The `enroll` command does a lot! All at once it...
creates an Ockam Node on your machine.
generates a private key and an Identifier that serves as your local Node’s cryptographic Identity.
creates a local Vault to store keys.
guides you to sign in to your new Ockam Orchestrator Project.
asks your Project’s Membership Authority to issue and sign a membership Credential for this Node.
makes you the administrator of your Project.
creates a Secure Channel between your local Ockam Node and your Project in Orchestrator.
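All of the steps above are triggered by a single command; a minimal sketch:

```shell
# Create a local node, identity, and vault, sign in to Orchestrator,
# and get a membership Credential, all in one step
ockam enroll
```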
Congrats! Your dev machine Node has a secure, encrypted Ockam Portal connection to your Project Node inside of Ockam Orchestrator over a Secure Channel!
The process is repeated in AWS through the same set of commands.
You now have an Ockam Node running in your VPC. As before, this Node will have
a set of private key Identifiers, stored in a local Vault
a Membership Credential that will allow this Ockam Node to join your Project in Orchestrator.
An Outlet is created in the Ockam Node, and a raw TCP connection is made to the postgres server on localhost port 5432.
This command
initiates an outgoing tcp connection from the Ockam Node in AWS to your Project in Ockam Orchestrator.
creates a Secure Channel over the tcp connection.
creates a Relay in your Project at the address: postgres
Notice that we didn’t have to change anything in the AWS network settings. This is possible because Bank Corp’s network allows outgoing tcp connections to the Internet. We use this connection to create the Secure Channel.
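As a sketch, the Outlet and Relay described above can be created with commands like these (run on the Ockam Node in Bank Corp’s VPC; exact flags may differ between Ockam Command versions):

```shell
# Create a TCP Outlet that forwards to the postgres server on localhost
ockam tcp-outlet create --to 127.0.0.1:5432

# Create a Relay in your Orchestrator Project at the address: postgres
ockam relay create postgres
```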
This command
creates a tcp Portal Inlet.
creates a tcp listener on localhost port 15432.
creates an outgoing tcp connection to your Project.
creates a Secure Channel to your Project over this tcp connection.
creates an end-to-end Secure Channel from the Inlet to the Outlet in Bank Corp’s VPC via the Relay in your Project at address: postgres
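The Inlet side can be sketched with a single command (run on the Ockam Node in Analysis Corp’s network; the `--via` flag name may differ between versions):

```shell
# Listen on localhost:15432 and route traffic through the end-to-end
# Secure Channel, via the Relay named "postgres" in your Project
ockam tcp-inlet create --from 127.0.0.1:15432 --via postgres
```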
Congrats! The psql microservice at Analysis Corp and the Postgres database at Bank Corp are connected with an Ockam Portal.
The psql service now has an end-to-end encrypted, mutually authenticated, secure channel connection with the postgres database on localhost:15432
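From the psql microservice’s point of view, the database is now local; for example (the username here is an assumption):

```shell
# Connect to the remote postgres through the local Inlet
psql --host 127.0.0.1 --port 15432 --username postgres
```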
All of the data-in-motion is end-to-end encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and Credentials are automatically rotated. Access to connect with postgres can be easily revoked.
This is just one simple example. Ockam’s stack of protocols work together to ensure security, privacy, and trust in data. They can be combined and composed in all sorts of ways.
In the next section we will dive into all sorts of ways to build portals across different infrastructures, networks, and applications.
Try one of these demos yourself, or get a video walk through.
Ockam empowers you to build secure-by-design apps that can trust data-in-motion.
With Ockam:
Impossible connections become possible. Establish secure channels between systems in private networks that previously could not be connected because doing so was either too difficult or insecure.
All public endpoints become private. Connect your applications and databases without exposing anything publicly.
At its core, Ockam is a toolkit for developers to build applications that can create end-to-end encrypted, mutually authenticated, secure communication channels:
From anywhere to anywhere: Ockam works across any network, cloud, or on-prem infrastructure.
Over any transport topology: Ockam is compatible with every transport layer including TCP, UDP, Kafka, or even Bluetooth.
With no infrastructure, network, or application changes: Ockam works at the application layer, so you don’t need to make complex changes.
While ensuring the risky things are impossible to get wrong: Ockam’s protocols do the heavy lifting to establish end-to-end encrypted, mutually authenticated secure channels.
Traditionally, connections made over TCP are secured with TLS. However, the security guarantees of a TLS secure channel apply only for the length of the underlying TCP connection. It is not possible to connect two systems in different private networks over a single TCP connection. Connecting them therefore requires exposing one of them to the Internet, which breaks the security guarantees of TLS.
Ockam works differently. Our secure channel protocol sits on top of an application layer routing protocol. This routing protocol can hand over messages from one transport layer connection to another. This can be done over any transport protocol, with any number of transport layer hops: TCP to TCP to TCP, TCP to UDP to TCP, UDP to Bluetooth to TCP to Kafka, etc.
Over these transport layer connections, Ockam sets up an end-to-end encrypted, mutually authenticated connection. This unlocks the ability to create secure channels between systems that live in entirely private networks, without exposing either end to the Internet.
Since Ockam’s routing protocol is at the application layer, complex network and infrastructure changes are not required to make these connections. Rather than a months-long infrastructure project, you can connect private systems in minutes while ensuring the risky things are impossible to get wrong. NATs are traversed; Keys are stored in vaults; Credentials are short-lived; Messages are authenticated; Data-integrity is guaranteed; Senders are protected from key compromise impersonation; Encryption keys are ratcheted; Nonces are never reused; Strong forward secrecy is ensured; Sessions recover from network failures; and a lot more.
The magic of Ockam is its simplicity. All you need to do is subscribe to Ockam Orchestrator, and then deploy one of the following distributions next to the applications you'd like to connect:
Ockam Programming Libraries (Rust …)
Ockam Command
Ockam Docker Images
RedPanda Connect
Managed Ockam Nodes from the AWS Marketplace
Snowflake Native Apps
Lambda/Serverless Functions
Ockam empowers you to build secure-by-design apps that can trust data-in-motion.
You can use Ockam to create end-to-end encrypted and mutually authenticated channels. Ockam secure channels authenticate using cryptographic identities and credentials. They give your apps granular control over all trust and access decisions. This control makes it easy to enforce fine-grained, attribute-based authorization policies – at scale.
Similarly, using another simple command a kafka producer can publish end-to-end encrypted messages for a specific kafka consumer. Kafka brokers in the middle can’t see, manipulate, or accidentally leak sensitive enterprise data. This minimizes risk to sensitive business data and makes it easy to comply with data governance policies.
Portals carry various application protocols over end-to-end encrypted Ockam secure channels.
For example: a TCP Portal carries TCP over Ockam, a Kafka Portal carries Kafka Protocol over Ockam, etc. Since portals work with existing application protocols you can use them through companion Ockam Nodes, that run adjacent to your application, without changing any of your application’s code.
A tcp portal makes a remote tcp server virtually adjacent to the server’s clients. It has two parts: an inlet and an outlet. The outlet runs adjacent to the tcp server and inlets run adjacent to tcp clients. An inlet and the outlet work together to create a portal that makes the remote tcp server appear on localhost adjacent to a client. This client can then interact with this localhost server exactly like it would with the remote server. All communication between inlets and outlets is end-to-end encrypted.
NATs are traversed
Keys are stored in vaults
Credentials are short-lived
Messages are authenticated
Data-integrity is guaranteed
Senders are protected from key compromise impersonation
Encryption keys are ratcheted
Nonces are never reused
Sessions recover from network failures
...and lots more!
In each example, we will connect a nodejs app in one private network with a postgres database in another private network.
Please select an example to dig in:
These core capabilities are composed to enable private and secure communication in a wide variety of application architectures. For example, with one simple command, an app in your cloud can create an end-to-end encrypted portal to a micro-service in another cloud. The service doesn’t need to be exposed to the Internet. You don’t have to change anything about networks or firewalls.
You can use Ockam Command to start nodes with one or more inlets or outlets. The underlying protocols handle the hard parts:
Strong forward secrecy is ensured and has been validated by an audit
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
We connect a nodejs app in one virtual private network with a postgres database in another virtual private network. The example uses docker and docker compose to create these virtual networks.
We connect a nodejs app in one private kubernetes cluster with a postgres database in another private kubernetes cluster. The example uses docker and kind to create these kubernetes clusters.
We connect a nodejs app in one Amazon VPC with an Amazon Aurora managed Postgres database in another Amazon VPC. The example uses AWS CLI to create these VPCs.
We connect a nodejs app in one Amazon VPC with an Amazon RDS managed Postgres database in another Amazon VPC. The example uses AWS CLI to create these VPCs.
Let's connect a nodejs app in one private network with a postgres database in another private network.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script that you ran above, and its accompanying files, are full of comments and are meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.
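Ticket generation can be sketched as follows (attribute names mirror this example; flag names may vary by Ockam Command version):

```shell
# Ticket for the Ockam node in Bank Corp.'s network
ockam project ticket --usage-count 1 --expires-in 10m \
  --attribute postgres-outlet=true --relay postgres

# Ticket for the Ockam node in Analysis Corp.'s network
ockam project ticket --usage-count 1 --expires-in 10m \
  --attribute postgres-inlet=true
```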
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It uses environment variables to give tickets to and provision Ockam nodes in Bank Corp.’s and Analysis Corp.’s network.
The run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create Bank Corp.’s and Analysis Corp.’s networks.
Bank Corp.’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Bank Corp.
In this network, docker compose starts a container with a PostgreSQL database. This container becomes available at postgres:5432 in the Bank Corp network.
Once the postgres container is ready, docker compose starts an Ockam node in a container as a companion to the postgres container. The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Bank Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-outlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay address: postgres. The run function gave the enrollment ticket permission to use this relay address.
Next, the entrypoint sets an access control policy that only allows project members that possess a credential with attribute postgres-inlet="true" to connect to tcp portal outlets on this node. It then creates a tcp portal outlet to postgres at postgres:5432.
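The outlet-side entrypoint steps can be sketched as follows (a simplification of the real entrypoint script; the policy syntax may differ by version):

```shell
# Create an identity and redeem the enrollment ticket
ockam identity create
ockam project enroll "$ENROLLMENT_TICKET"

# Create a node, a relay back to it, and a policy-guarded outlet
ockam node create
ockam relay create postgres
ockam tcp-outlet create --to postgres:5432 \
  --allow '(= subject.postgres-inlet "true")'
```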
Analysis Corp.’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Analysis Corp. In this network, docker compose starts an Ockam node container and an app container.
The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Analysis Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-inlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential. It then sets an access control policy that only allows project members that possess a credential with attribute postgres-outlet="true" to connect to tcp portal inlets on this node.
Next, the entrypoint creates a tcp portal inlet that makes the remote postgres available on all local interfaces at 0.0.0.0:15432. This makes postgres available at ockam:15432 within Analysis Corp’s virtual private network.
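The inlet-side entrypoint steps can be sketched as follows (again a simplification; the policy syntax may differ by version):

```shell
ockam identity create
ockam project enroll "$ENROLLMENT_TICKET"
ockam node create

# Listen on all local interfaces; only trust outlet-side members
ockam tcp-inlet create --from 0.0.0.0:15432 --via postgres \
  --allow '(= subject.postgres-outlet "true")'
```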
Once the Ockam node container is ready, docker compose starts an app container. The app container is created using this dockerfile which runs this app.js file on startup.
The app.js file is a nodejs app; it connects with postgres on ockam:15432, creates a table in the database, inserts some data into the table, queries it back, and prints it.
We connected a nodejs app in one virtual private network with a postgres database in another virtual private network over an end-to-end encrypted portal.
Sensitive business data in the postgres database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with postgres can be easily revoked.
Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to run queries on the postgres server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from attacks from the Internet.
To delete all containers and images:
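A sketch, assuming the compose project names used by this example's scripts (check run.sh for the actual names):

```shell
docker compose --project-name bank_corp down --rmi all --volumes
docker compose --project-name analysis_corp down --rmi all --volumes
```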
In each example, we connect a nodejs app in one private network with a MongoDB database in another private network.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
Please select an example to dig in:
Let's connect a nodejs app in one private kubernetes cluster with a postgres database in another private kubernetes cluster.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, Curl, Kind, and Kubectl. Please set up these tools for your operating system, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script that you ran above, and its accompanying files, are full of comments and are meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s kubernetes cluster. The second ticket is for the Ockam node that will run in Analysis Corp.’s kubernetes cluster.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It uses kubernetes secrets to give tickets to Ockam nodes that are being provisioned in Bank Corp.’s and Analysis Corp.’s kubernetes clusters.
The run function takes the enrollment tickets, sets them as kubernetes secrets, and uses kind with kubectl to create Bank Corp.’s and Analysis Corp.’s kubernetes clusters.
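Passing a ticket to a cluster as a kubernetes secret can be sketched like this (the secret, key, and context names here are illustrative, not taken from the example):

```shell
kubectl --context kind-bank-corp create secret generic ockam-enrollment \
  --from-literal=ticket="$BANK_CORP_TICKET"
```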
Bank Corp.’s kubernetes manifest defines a pod and containers to run in Bank Corp’s isolated kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images and calls kubectl apply to start the pod and its containers.
One of the containers defined in Bank Corp.’s kubernetes manifest runs a PostgreSQL database and makes it available on localhost:5432 inside its pod.
Another container defined inside that same pod runs an Ockam node as a companion to the postgres container. The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Bank Corp cluster, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-outlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay address: postgres. The run function gave the enrollment ticket permission to use this relay address.
Next, the entrypoint sets an access control policy that only allows project members that possess a credential with attribute postgres-inlet="true" to connect to tcp portal outlets on this node. It then creates a tcp portal outlet to postgres at localhost:5432.
Analysis Corp.’s kubernetes manifest defines a pod and containers to run in Analysis Corp.’s isolated kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images and calls kubectl apply to start the pod and its containers. The manifest defines a pod with two containers: an Ockam node container and an app container.
The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Analysis Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-inlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential. It then sets an access control policy that only allows project members that possess a credential with attribute postgres-outlet="true" to connect to tcp portal inlets on this node.
Next, the entrypoint creates a tcp portal inlet that makes the remote postgres available on all local interfaces at 0.0.0.0:15432. This makes postgres available at localhost:15432 within the Analysis Corp pod that also runs the app container.
The app container is created using this dockerfile which runs this app.js file on startup. The app.js file is a nodejs app; it connects with postgres on localhost:15432, creates a table in the database, inserts some data into the table, queries it back, and prints it.
We connected a nodejs app in one kubernetes cluster with a postgres database in another kubernetes cluster over an end-to-end encrypted portal.
Sensitive business data in the postgres database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with postgres can be easily revoked.
Analysis Corp. does not get unfettered access to Bank Corp.’s cluster. It gets access only to run queries on the postgres server. Bank Corp. does not get unfettered access to Analysis Corp.’s cluster. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their kubernetes clusters are completely closed and protected from attacks from the Internet.
To delete all containers and images:
Let's connect a nodejs app in one Amazon VPC with an Amazon RDS managed Postgres database in another Amazon VPC.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, and AWS CLI. Please set up these tools for your operating system. In particular, you need to log in to your AWS account with aws sso login. Then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script that you ran above, and its accompanying files, are full of comments and are meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.
The run function passes the enrollment tickets as variables of the run scripts provisioning Bank Corp.'s network and Analysis Corp.'s network.
First, the bank_corp/run.sh script creates a network to host the database:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and a route to the Internet via the gateway.
We create two subnets, located in two distinct availability zones, and associate them with the route table.
We finally create a security group with an ingress rule that allows Postgres connections from within our two subnets.
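The network setup steps above can be sketched with AWS CLI (the CIDR block here is illustrative; the script's actual values may differ):

```shell
# Create a VPC and capture its id
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# Create an Internet gateway and attach it to the VPC
GW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$VPC_ID" --internet-gateway-id "$GW_ID"

# Create a route table with a route to the Internet via the gateway
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$GW_ID"
```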
Then, the bank_corp/run.sh script creates an Aurora database:
This requires a subnet group.
Once the subnet group is created, we create a database cluster and a database instance.
We are now ready to create an EC2 instance where the Ockam outlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh, where:
ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.
POSTGRES_ADDRESS is replaced by the database address that we previously saved.
When the instance is started, the run_ockam.sh script is executed:
We then create an Ockam node:
With a TCP outlet.
With a policy associated with the outlet. The policy authorizes identities with a credential containing the attribute postgres-inlet="true".
With a relay capable of forwarding the TCP traffic to the TCP outlet.
First, the analysis_corp/run.sh script creates a network to host the nodejs application:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet and associate it with the route table.
We finally create a security group with an SSH ingress rule, used to download and install the nodejs application.
We are now ready to create an EC2 instance where the Ockam inlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh, where:
ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.
The instance is started and the run_ockam.sh script is executed:
We then create an Ockam node:
With a TCP inlet.
With a policy associated with the inlet. The policy authorizes identities with a credential containing the attribute postgres-outlet="true".
We finally wait for the instance to be ready and install the nodejs application:
The app.js file is copied to the instance (this uses the previously created key.pem file to authenticate).
Once the nodejs application is started:
It creates a database table and runs some SQL queries to check that the connection with the Postgres database works.
We connected a nodejs app in one virtual private network with a postgres database in another virtual private network over an end-to-end encrypted portal.
Sensitive business data in the postgres database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with postgres can be easily revoked.
Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to run queries on the postgres server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from attacks from the Internet.
To delete all AWS resources:
Let's connect a nodejs app in one virtual private network with a MongoDB database in another virtual private network. The example uses docker and docker compose to create these virtual networks.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script that you ran above, and its accompanying files, are full of comments and are meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It uses environment variables to give tickets to and provision Ockam nodes in Bank Corp.’s and Analysis Corp.’s network.
The run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create Bank Corp.’s and Analysis Corp.’s networks.
Bank Corp.’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Bank Corp.
In this network, docker compose starts a container with a MongoDB database. This container becomes available at mongodb:27017 in the Bank Corp network.
Once the mongodb container is ready, docker compose starts an Ockam node in a container as a companion to the mongodb container. The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Bank Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute mongodb-outlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay address: mongodb. The run function gave the enrollment ticket permission to use this relay address.
Next, the entrypoint sets an access control policy that only allows project members that possess a credential with the attribute mongodb-inlet="true" to connect to tcp portal outlets on this node. It then creates a tcp portal outlet to mongodb at mongodb:27017.
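The outlet-side steps above can be sketched with the Ockam CLI. This is a sketch, not the exact entrypoint script: the policy expression syntax and flags vary across Ockam CLI versions.

```sh
# Create an identity, then redeem the enrollment ticket to get a
# membership credential attesting to mongodb-outlet=true.
ockam identity create
ockam project enroll "$ENROLLMENT_TICKET"

# Create a node, a relay at the permitted address, and a tcp outlet
# guarded by a policy on the peer's credential attributes.
ockam node create
ockam relay create mongodb
ockam tcp-outlet create --to mongodb:27017 \
  --allow '(= subject.mongodb-inlet "true")'
```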
Analysis Corp.’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Analysis Corp. In this network, docker compose starts an Ockam node container and an app container.
The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Analysis Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute mongodb-inlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential. It then sets an access control policy that only allows project members that possess a credential with the attribute mongodb-outlet="true" to connect to tcp portal inlets on this node.
Next, the entrypoint creates a tcp portal inlet that makes the remote mongodb available on all local interfaces at 0.0.0.0:17017. This makes mongodb available at ockam:17017 within Analysis Corp.’s virtual private network.
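The inlet-side steps can be sketched similarly. Again a sketch under assumptions: the --via flag and policy syntax are from a recent Ockam CLI and may differ in your version.

```sh
# Create an identity and redeem the ticket carrying mongodb-inlet=true.
ockam identity create
ockam project enroll "$ENROLLMENT_TICKET"

# Create a node and a tcp inlet that reaches the outlet via the
# "mongodb" relay in the project, guarded by a policy.
ockam node create
ockam tcp-inlet create --from 0.0.0.0:17017 --via mongodb \
  --allow '(= subject.mongodb-outlet "true")'
```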
Once the Ockam node container is ready, docker compose starts an app container. The app container is created using this dockerfile which runs this app.js file on startup.
The app.js file is a nodejs app. It connects with mongodb at ockam:17017, inserts some data, queries it back, and prints it.
We connected a nodejs app in one virtual private network with a MongoDB database in another virtual private network over an end-to-end encrypted portal.
Sensitive business data in the MongoDB database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with MongoDB can be easily revoked.
Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to run queries on the MongoDB server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.
To delete all containers and images:
Let's connect a nodejs app in one private kubernetes cluster with a mongodb database in another private kubernetes cluster. The example uses kind to create these clusters.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, Curl, Kind, and Kubectl. Please set up these tools for your operating system, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script, which you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s kubernetes cluster. The second ticket is for the Ockam node that will run in Analysis Corp.’s kubernetes cluster.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It uses kubernetes secrets to give tickets to Ockam nodes that are being provisioned in Bank Corp.’s and Analysis Corp.’s kubernetes clusters.
The run function takes the enrollment tickets, sets them as kubernetes secrets, and uses kind with kubectl to create Bank Corp.’s and Analysis Corp.’s kubernetes clusters.
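Provisioning one cluster might look like the following sketch; the cluster name, secret name, environment variable, and manifest path are hypothetical stand-ins for what the actual scripts use.

```sh
# Create an isolated cluster for Bank Corp.
kind create cluster --name bank-corp

# Pass the enrollment ticket to the pod as a kubernetes secret.
kubectl create secret generic ockam-ticket \
  --from-literal=ENROLLMENT_TICKET="$BANK_CORP_TICKET"

# Start the pod and its containers from the manifest.
kubectl apply -f bank_corp/pod.yaml
```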
Bank Corp.’s kubernetes manifest defines a pod and containers to run in Bank Corp’s isolated kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images and calls kubectl apply to start the pod and its containers.
One of the containers defined in Bank Corp.’s kubernetes manifest runs a MongoDB database and makes it available on localhost:5432 inside its pod.
Another container defined inside that same pod runs an Ockam node as a companion to the mongodb container. The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Bank Corp cluster, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute mongodb-outlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay address: mongodb. The run function gave the enrollment ticket permission to use this relay address.
Next, the entrypoint sets an access control policy that only allows project members that possess a credential with the attribute mongodb-inlet="true" to connect to tcp portal outlets on this node. It then creates a tcp portal outlet to mongodb at localhost:5432.
Analysis Corp.’s kubernetes manifest defines a pod and containers to run in Analysis Corp.’s isolated kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images and calls kubectl apply to start the pod and its containers. The manifest defines a pod with two containers: an Ockam node container and an app container.
The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.
When the Ockam node container starts in the Analysis Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute mongodb-inlet=true. The run function assigned this attribute to the enrollment ticket.
The entrypoint script then creates a node that uses this identity and membership credential. It then sets an access control policy that only allows project members that possess a credential with the attribute mongodb-outlet="true" to connect to tcp portal inlets on this node.
Next, the entrypoint creates a tcp portal inlet that makes the remote mongodb available on all local interfaces at 0.0.0.0:15432. This makes mongodb available at localhost:15432 within Analysis Corp.’s pod that also has the app container.
The app container is created using this dockerfile which runs this app.js file on startup. The app.js file is a nodejs app. It connects with mongodb at localhost:15432, inserts some data, queries it back, and prints it.
We connected a nodejs app in one kubernetes cluster with a mongodb database in another kubernetes cluster over an end-to-end encrypted portal.
Sensitive business data in the mongodb database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with mongodb can be easily revoked.
Analysis Corp. does not get unfettered access to Bank Corp.’s cluster. It gets access only to run queries on the mongodb server. Bank Corp. does not get unfettered access to Analysis Corp.’s cluster. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their kubernetes clusters are completely closed and protected from any attacks from the Internet.
To delete all containers and images:
Let's connect a nodejs app in one Amazon VPC with an Amazon Timestream managed InfluxDB database in another Amazon VPC. We’ll create an end-to-end encrypted Ockam Portal to InfluxDB.
To understand the details of how end-to-end trust is established, and how the portal works even though the two networks are isolated with no exposed ports, please read: “How does Ockam work?”
This example requires Bash, Git, and AWS CLI. Please set up these tools for your operating system. In particular, you need to log in to your AWS account.
Then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script, which you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Metrics Corp.’s network. The second ticket is meant for the Ockam node that will run in Datastream Corp.’s network.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.
The run function passes the enrollment tickets, as variables, to the run scripts that provision Metrics Corp.'s network and Datastream Corp.'s network.
First, the metrics_corp/run.sh script creates a network to host the database:
It creates a VPC and tags it.
It creates an Internet gateway and attaches it to the VPC.
It creates a route table and a route to the Internet via the gateway.
It creates a subnet and associates it with the route table.
It creates a security group which allows:
TCP egress to the Internet.
Ingress to InfluxDB from within the subnet.
SSH ingress to provision EC2 instances.
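The network steps above can be sketched with the AWS CLI. The commands and --query paths are standard AWS CLI; the CIDR ranges and group names are illustrative.

```sh
# Create the VPC and an Internet gateway, and attach them.
vpc_id=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)
gw_id=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$vpc_id" --internet-gateway-id "$gw_id"

# Route table with a default route to the Internet via the gateway.
rtb_id=$(aws ec2 create-route-table --vpc-id "$vpc_id" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$rtb_id" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$gw_id"

# Subnet associated with the route table.
sn_id=$(aws ec2 create-subnet --vpc-id "$vpc_id" --cidr-block 10.0.0.0/24 \
  --query 'Subnet.SubnetId' --output text)
aws ec2 associate-route-table --route-table-id "$rtb_id" --subnet-id "$sn_id"

# Security group with SSH ingress for provisioning.
sg_id=$(aws ec2 create-security-group --vpc-id "$vpc_id" \
  --group-name metrics-corp --description "Metrics Corp" \
  --query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
```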
Then, the metrics_corp/run.sh script creates an InfluxDB database using Timestream. Next, the script creates an EC2 instance. This instance runs an Ockam TCP Outlet.
It selects an AMI.
It then starts an instance using this AMI and a start script based on run_ockam.sh, where:
ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.
INFLUXDB_ADDRESS is replaced by the database address that we previously saved.
When EC2 starts the instance, it executes the run_ockam.sh script:
It installs the InfluxDB client and configures it.
It generates an InfluxDB auth token to send to Datastream Corp. and saves it to a file.
It installs the ockam command.
It then creates an Ockam node with:
A TCP outlet.
An access control policy associated with the outlet. The policy authorizes only identities with a credential attesting to the attribute influxdb-inlet="true".
A relay that can forward TCP traffic to the TCP outlet.
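Those node-creation steps might look like the sketch below. The relay name "influxdb", the influx auth flags, and the ockam policy syntax are assumptions based on recent versions of the two CLIs; the real script may differ.

```sh
# Generate an auth token for Datastream Corp. and save it to a file.
influx auth create --all-access --json > token.txt

# Enroll and create the node, relay, and policy-guarded tcp outlet.
ockam identity create
ockam project enroll "$ENROLLMENT_TICKET"
ockam node create
ockam relay create influxdb
ockam tcp-outlet create --to "$INFLUXDB_ADDRESS" \
  --allow '(= subject.influxdb-inlet "true")'
```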
First, the datastream_corp/run.sh script creates a network to host the nodejs application:
It creates a VPC and tags it.
It creates an Internet gateway and attaches it to the VPC.
It creates a route table and a route to the Internet via the gateway.
It creates a subnet and associates it with the route table.
It creates a security group that allows:
TCP egress to the Internet,
SSH ingress to provision EC2 instances.
Next, the script creates an EC2 instance. This instance runs an Ockam TCP Inlet.
It selects an AMI.
It then starts an instance using that AMI and a start script based on run_ockam.sh, in which the variable ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.
When EC2 starts the instance, it executes the run_ockam.sh script:
It installs the ockam command.
It then creates an Ockam node with:
A TCP inlet.
An access control policy associated with the inlet. The policy authorizes identities with a credential attesting to the attribute influxdb-outlet="true".
Next, datastream_corp/run.sh waits for the instance to be ready and provisions it using SSH:
It copies app.js and token.txt into the instance using SCP.
Finally, the nodejs application is started:
It inserts a few system metrics into a bucket and retrieves them back to show that the connection with the InfluxDB database is working.
We connected a nodejs app in one virtual private network with an InfluxDB database in another virtual private network over an end-to-end encrypted portal.
Sensitive business data in the InfluxDB database is only accessible to Metrics Corp. and Datastream Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with InfluxDB can be easily revoked.
Datastream Corp. does not get unfettered access to Metrics Corp.’s network. It gets access only to query InfluxDB. Metrics Corp. does not get unfettered access to Datastream Corp.’s network. It gets access only to respond to queries over a tcp connection. Metrics Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Metrics Corp. nor Datastream Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.
To delete all AWS resources:
This section contains hands-on examples that use Ockam to create encrypted portals to InfluxDB databases running in various environments.
In each example, we connect a nodejs app in one private network with an InfluxDB database in another private network. To understand how end-to-end trust is established, and how the portal works even though the two networks are isolated with no exposed ports, please read: “How does Ockam work?”
Please select an example to dig in:
Create an Ockam Portal from any application, to any API, in any environment.
In each example, we connect a client app in one private network with an API service in another private network.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
Please select an example to dig in:
Let's connect a nodejs app in one AWS VPC with a nodejs API in another AWS VPC. The example uses AWS CLI to create these VPCs.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, and AWS CLI. Please set up these tools for your operating system. In particular, you need to log in to your AWS account with aws sso login.
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script, which you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 60 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Monitoring Corp.’s network. The second ticket is meant for the Ockam node that will run in Travel App Corp.’s network.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.
The run function passes the enrollment tickets, as variables, to the run scripts that provision Monitoring Corp.’s network and Travel App Corp.’s network.
First, the monitoring_corp/run.sh script creates a network to host the database:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We finally create a security group that allows:
One SSH ingress to install nodejs and run the API service.
Then, the monitoring_corp/run.sh script creates an EC2 instance where the Ockam outlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh, where the ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.
Next, the instance is started and the run_ockam.sh script is executed:
We then create an Ockam node:
With a TCP outlet.
With a policy associated with the outlet. The policy authorizes identities with a credential containing the attribute monitoring-api-inlet="true".
With a relay capable of forwarding the TCP traffic to the TCP outlet.
Finally, we wait for the instance to be ready and run the nodejs api application:
The api.js file is copied to the instance (this uses the previously created key.pem file to authenticate).
First, the travel_app_corp/run.sh script creates a network to host the nodejs application:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We finally create a security group that allows:
One SSH ingress to download and install the nodejs application.
Then, we create an EC2 instance where the Ockam inlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh, where the ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.
Next, the instance is started and the run_ockam.sh script is executed:
We then create an Ockam node:
With a TCP inlet.
With a policy associated with the inlet. The policy authorizes identities with a credential containing the attribute monitoring-api-outlet="true".
Finally, we wait for the instance to be ready and run the nodejs client application:
The client.js file is copied to the instance (this uses the previously created key.pem file to authenticate).
We can then SSH to the instance and run the client application.
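That final step might look like the sketch below; the ec2-user login, file locations, and the INSTANCE_IP variable are hypothetical stand-ins, while key.pem is the key the scripts created earlier.

```sh
# Copy the client app to the inlet-side instance and run it over SSH.
scp -i key.pem client.js "ec2-user@$INSTANCE_IP:client.js"
ssh -i key.pem "ec2-user@$INSTANCE_IP" 'node client.js'
```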
We connected a nodejs app in one AWS VPC with a nodejs API service in another AWS VPC over an end-to-end encrypted portal.
Private API endpoints are only accessible to enrolled members of the project with the appropriate attributes. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to the API can be easily revoked.
Travel App Corp. does not get unfettered access to Monitoring Corp.’s network. It only gets access to the API service. Monitoring Corp. does not get unfettered access to Travel App Corp.’s network. It gets access only to respond to requests over a tcp connection. Monitoring Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Monitoring Corp. nor Travel App Corp. exposes any listening endpoints to the Internet. Their networks are completely closed and protected from any attacks from the Internet.
To delete all AWS resources created by this example:
Let's connect a nodejs app in one Amazon VPC with an Amazon RDS managed Postgres database in another Amazon VPC.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, and AWS CLI. Please set up these tools for your operating system. In particular, you need to log in to your AWS account with aws sso login.
Then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script, which you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.
The run function passes the enrollment tickets, as variables, to the run scripts that provision Bank Corp.'s network and Analysis Corp.'s network.
First, the bank_corp/run.sh script creates a network to host the database:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create two subnets, located in two distinct availability zones, and associate them with the route table.
We finally create a security group that allows:
Ingress to Postgres from within our two subnets.
Then, the bank_corp/run.sh script creates an RDS database:
This requires a subnet group.
Once the subnet group is created, we create a database cluster and a database instance.
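These steps can be sketched with the AWS CLI. The commands are standard; the engine, instance class, identifiers, and credentials shown are illustrative, not what the actual script uses.

```sh
# Subnet group spanning the two subnets created earlier.
aws rds create-db-subnet-group \
  --db-subnet-group-name bank-corp \
  --db-subnet-group-description "Bank Corp subnets" \
  --subnet-ids "$subnet1_id" "$subnet2_id"

# Database cluster, then an instance inside it.
aws rds create-db-cluster \
  --db-cluster-identifier bank-corp \
  --engine aurora-postgresql \
  --master-username postgres --master-user-password "$db_password" \
  --db-subnet-group-name bank-corp \
  --vpc-security-group-ids "$sg_id"
aws rds create-db-instance \
  --db-instance-identifier bank-corp-1 \
  --db-cluster-identifier bank-corp \
  --engine aurora-postgresql --db-instance-class db.t4g.medium
```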
We are now ready to create an EC2 instance where the Ockam outlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh, where:
ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.
POSTGRES_ADDRESS is replaced by the database address that we previously saved.
When the instance is started, the run_ockam.sh script is executed:
We then create an Ockam node:
With a TCP outlet.
With a policy associated with the outlet. The policy authorizes identities with a credential containing the attribute postgres-inlet="true".
With a relay capable of forwarding the TCP traffic to the TCP outlet.
First, the analysis_corp/run.sh script creates a network to host the nodejs application:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet and associate it with the route table.
We finally create a security group that allows:
One SSH ingress to download and install the nodejs application.
We are now ready to create an EC2 instance where the Ockam inlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh, where ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.
The instance is started and the run_ockam.sh script is executed:
We then create an Ockam node:
With a TCP inlet.
With a policy associated with the inlet. The policy authorizes identities with a credential containing the attribute postgres-outlet="true".
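The inlet-node setup can be sketched as follows. The relay name "postgres", the inlet port, and the policy syntax are assumptions based on recent Ockam CLI versions, not the literal script.

```sh
# Enroll with the ticket carrying postgres-inlet=true, then create
# a node and a policy-guarded tcp inlet that reaches the outlet
# via the project relay.
ockam identity create
ockam project enroll "$ENROLLMENT_TICKET"
ockam node create
ockam tcp-inlet create --from 0.0.0.0:15432 --via postgres \
  --allow '(= subject.postgres-outlet "true")'
```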
We finally wait for the instance to be ready and install the nodejs application:
The app.js file is copied to the instance (this uses the previously created key.pem file to authenticate).
Once the nodejs application is started:
It creates a database table and runs some SQL queries to check that the connection with the Postgres database works.
We connected a nodejs app in one virtual private network with a Postgres database in another virtual private network over an end-to-end encrypted portal.
Sensitive business data in the Postgres database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with Postgres can be easily revoked.
Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to run queries on the Postgres server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.
To delete all AWS resources:
We connect a nodejs app in one virtual private network with a MongoDB database in another virtual private network. The example uses docker and docker compose to create these virtual networks.
We connect a nodejs app in one Amazon VPC with an InfluxDB database in another Amazon VPC. The example uses AWS CLI to create these VPCs.
We connect a nodejs app in one AWS VPC with a nodejs API service in another AWS VPC.
We connect a python app in one AWS VPC with a python API service in another AWS VPC.
Create an Ockam Portal from any application, to any code repo, in any environment.
In each example, we connect a nodejs app in one company's private network with a git repository managed by GitLab in another company's private network.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
Please select an example to dig in:
Let's connect a python app in one AWS VPC with a python API in another AWS VPC. The example uses AWS CLI to create these VPCs.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, and AWS CLI. Please set up these tools for your operating system. In particular, you need to log in to your AWS account with aws sso login.
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script, which you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 60 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Monitoring Corp.’s network. The second ticket is meant for the Ockam node that will run in Travel App Corp.’s network.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.
The run function passes the enrollment tickets, as variables, to the run scripts that provision Monitoring Corp.’s network and Travel App Corp.’s network.
First, the monitoring_corp/run.sh script creates a network to host the database:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We finally create a security group that allows:
One SSH ingress to install python and run the API service.
Then, the monitoring_corp/run.sh script creates an EC2 instance where the Ockam outlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh, where the ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.
Next, the instance is started and the run_ockam.sh script is executed:
We then create an Ockam node:
With a TCP outlet.
With a policy associated with the outlet. The policy authorizes identities with a credential containing the attribute monitoring-api-inlet="true".
With a relay capable of forwarding the TCP traffic to the TCP outlet.
Finally, we wait for the instance to be ready and run the python api application:
The app.py file is copied to the instance (this uses the previously created key.pem file to authenticate).
First, the travel_app_corp/run.sh script creates a network to host the python application:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We finally create a security group so that there is:
And one SSH ingress to download and install the python application.
Then, we create an EC2 instance where the Ockam inlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh
where the
ENROLLMENT_TICKET
is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh
.
Next, the instance is started and the run_ockam.sh
script is executed:
We then create an Ockam node:
With a TCP inlet.
A policy associated with the inlet. The policy authorizes identities with a credential containing the attribute monitoring-api-outlet="true".
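The inlet side can be sketched symmetrically. The listening address 127.0.0.1:6000 and the relay name are hypothetical, and exact flags vary across Ockam CLI versions; again the commands are written to a file, not executed:

```shell
# Illustrative sketch of the inlet node setup (address, names, and exact
# flags are assumptions). Written to a script rather than executed.
cat > inlet_setup.sh <<'EOF'
#!/bin/bash
# Redeem the single-use ticket to become a project member.
ockam project enroll "$ENROLLMENT_TICKET"
# Create a node with a TCP inlet: local connections to 127.0.0.1:6000
# are forwarded through the monitoring-api relay to the remote outlet.
ockam node create inlet_node
ockam tcp-inlet create --at inlet_node --from 127.0.0.1:6000 \
  --via monitoring-api
# Policy: only identities whose credential carries
# monitoring-api-outlet="true" are trusted on the other end.
ockam policy create --at inlet_node --resource tcp-inlet \
  --expression '(= subject.monitoring-api-outlet "true")'
EOF
```

The application then talks to 127.0.0.1:6000 as if the remote API were local.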
Finally, we wait for the instance to be ready and run the python client application:
The client.py file is copied to the instance (this uses the previously created key.pem file to authenticate).
We connected a python app in one AWS VPC with a python API service in another AWS VPC over an end-to-end encrypted portal.
The private API endpoint is accessible only to enrolled members of the project with the appropriate attributes. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to the API can be easily revoked.
Travel App Corp. does not get unfettered access to Monitoring Corp.’s network. It only gets access to the API service. Monitoring Corp. does not get unfettered access to Travel App Corp.’s network. It gets access only to respond to requests over a TCP connection. Monitoring Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither Monitoring Corp. nor Travel App Corp. exposes any listening endpoints to the Internet. Their networks are completely closed and protected from attacks from the Internet.
To delete all AWS resources created by this example:
Let's connect a nodejs app in one virtual private network with an application serving a self-hosted model in another virtual private network. The example uses the AWS CLI to create these virtual networks.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, Curl, and the AWS CLI. Please set up these tools for your operating system. In particular, you need to log in to your AWS account with aws sso login
.
Then run the following commands:
If everything runs as expected, you'll see the answer to the question: "What is Ockham's Razor?".
The run.sh script that you ran above and its accompanying files are full of comments and are meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in AI Corp.’s network. The second ticket is meant for the Ockam node that will run in Health Corp.’s network.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.
The run function passes the enrollment tickets as variables of the run scripts provisioning AI Corp.'s network and Health Corp.'s network.
First, the ai_corp/run.sh
script creates a network to host the application exposing the LLaMA model API:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We finally create a security group so that there is:
And one SSH ingress to install the model and its application.
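The network steps above can be sketched as AWS CLI calls. CIDR blocks and names are illustrative, and the commands are only written to a script here (they need AWS credentials to run):

```shell
# Illustrative sketch of the VPC setup (CIDRs and names are
# assumptions). Written to a script rather than executed.
cat > create_network.sh <<'EOF'
#!/bin/bash
set -e
# VPC, tagged with a name.
vpc_id=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)
aws ec2 create-tags --resources "$vpc_id" --tags Key=Name,Value=ai_corp
# Internet gateway, attached to the VPC.
gw_id=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$vpc_id" \
  --internet-gateway-id "$gw_id"
# Route table with a default route to the Internet via the gateway.
rtb_id=$(aws ec2 create-route-table --vpc-id "$vpc_id" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$rtb_id" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$gw_id"
# Subnet, associated with the route table.
subnet_id=$(aws ec2 create-subnet --vpc-id "$vpc_id" \
  --cidr-block 10.0.0.0/24 --query 'Subnet.SubnetId' --output text)
aws ec2 associate-route-table --subnet-id "$subnet_id" \
  --route-table-id "$rtb_id"
# Security group with a single SSH ingress.
sg_id=$(aws ec2 create-security-group --group-name ai-corp-sg \
  --description "ai corp security group" --vpc-id "$vpc_id" \
  --query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
EOF
```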
We are now ready to create an EC2 instance where the Ockam outlet node will run:
We select an AMI.
We create a key pair in order to access the EC2 instance via SSH.
Before creating the EC2 instance, we check that the AWS region we are using offers this instance type. We need a properly sized instance to run a large language model, and such instances are not available in all regions. If the instance type is not available in the current region, we list all the regions where it is available.
We start an instance using the selected AMI and the selected instance type. Starting the instance executes a start script based on ai_corp/run_ockam.sh
where:
ENROLLMENT_TICKET
is replaced by the enrollment ticket created by the administrator and given as a parameter to ai_corp/run.sh
.
When the instance is started, the run_ockam.sh
script is executed:
We then create an Ockam node:
With a TCP outlet.
A policy associated with the outlet. The policy authorizes identities with a credential containing the attribute ai-inlet="true".
With a relay capable of forwarding the TCP traffic to the TCP outlet.
First, the health_corp/run.sh
script creates a network to host the client.js
application which will connect to the LLaMA model:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We finally create a security group so that there is:
And one SSH ingress to download and install the nodejs client application.
We are now ready to create an EC2 instance where the Ockam inlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh
where:
ENROLLMENT_TICKET
is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh
.
The instance is started and the run_ockam.sh
script is executed:
We then create an Ockam node:
With a TCP inlet.
Connected to the ai
relay.
A policy associated with the inlet. The policy authorizes identities with a credential containing the attribute ai-outlet="true".
We finally wait for the instance to be ready and install the client.js
application:
The client.js file is copied to the instance (this uses the previously created key.pem file to authenticate).
We can then SSH to the instance and:
Once the client.js
application is started:
It sends a query and waits for a response from the model.
The response is then printed on the console.
We connected a nodejs application in one virtual private network with an application serving a LLaMA model in another virtual private network over an end-to-end encrypted portal.
Sensitive business data coming from the model is only accessible to AI Corp. and Health Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with the model API can be easily revoked.
Health Corp. does not get unfettered access to AI Corp.’s network. It gets access only to run API queries. AI Corp. does not get unfettered access to Health Corp.’s network. It gets access only to respond to queries over a TCP connection. AI Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither AI Corp. nor Health Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from attacks from the Internet.
To delete all AWS resources:
Create an Ockam Portal from any application, to any AI model, in any environment.
In each example, we connect a nodejs app in one private network with an AI service in another private network.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
Please select an example to dig in:
Let's connect a python app in one virtual private network with an Azure OpenAI model, configured with a private endpoint, in another virtual private network. You will use the Azure CLI to create these virtual networks and resources.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.
Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator. This step creates a Project in Ockam Orchestrator.
This example requires Bash, Git, Curl, and the Azure CLI. Please set up these tools for your operating system. In particular, you need to log in to your Azure account with az login
.
Then run the following commands:
If everything runs as expected, you'll see the answer to the question: "What is Ockham's Razor?".
The run.sh script that you ran above and its accompanying files are full of comments and are meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 60 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in AI Corp.’s network. The second ticket is meant for the Ockam node that will run in Health Corp.’s network.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.
The run function passes the enrollment tickets as variables of the run scripts provisioning AI Corp.'s network and Health Corp.'s network.
First, the ai_corp/run.sh
script creates a network to host the application exposing the Azure OpenAI Service endpoint:
Network Infrastructure:
We create an Azure Resource Group to contain all resources.
We create a Virtual Network (VNet) with a subnet to host the services.
Azure OpenAI Service Configuration:
We deploy an Azure OpenAI Service instance.
OpenAI Model Deployment:
We retrieve the API key for authentication.
We create an environment file (.env.azure) containing:
The Azure OpenAI endpoint URL.
The API key for authentication.
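The environment file can be sketched like this. The variable names are the ones commonly used by the OpenAI SDK for Azure, and the endpoint and key values below are hypothetical placeholders, not real credentials:

```shell
# Sketch of producing .env.azure (variable names assumed from common
# OpenAI SDK usage; values are hypothetical placeholders).
AZURE_OPENAI_ENDPOINT="https://example-openai.openai.azure.com/"
AZURE_OPENAI_API_KEY="hypothetical-api-key"

cat > .env.azure <<EOF
AZURE_OPENAI_ENDPOINT=${AZURE_OPENAI_ENDPOINT}
AZURE_OPENAI_API_KEY=${AZURE_OPENAI_API_KEY}
EOF
cat .env.azure
```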
Virtual Machine Deployment:
We process the Ockam setup script (run_ockam.sh) by replacing variables:
Replaces SERVICE_NAME and TICKET placeholders.
We create a Red Hat Enterprise Linux VM:
Place it in the configured VNet/subnet.
Generate SSH keys for access.
Inject the processed Ockam setup script as custom data.
The default Network Security Group (NSG) is configured with basic rules: inbound SSH access (port 22), internal virtual network communication, Azure Load Balancer access, and a final deny rule for all other inbound traffic. For outbound, it allows virtual network and internet traffic, with a final deny rule for all other outbound traffic.
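The VM deployment above can be sketched with the Azure CLI. Resource names and the image URN are illustrative, and the command is only written to a script here (it needs an Azure subscription to run):

```shell
# Illustrative sketch of the VM deployment (resource names and image
# URN are assumptions). Written to a script rather than executed.
cat > create_vm.sh <<'EOF'
#!/bin/bash
set -e
# Substitute the enrollment ticket into the Ockam setup script first.
sed "s/\${TICKET}/${ENROLLMENT_TICKET}/" run_ockam.sh > user_data.sh
# RHEL VM in the existing VNet/subnet; the processed Ockam setup script
# is injected as custom data and runs on first boot.
az vm create \
  --resource-group ai-corp-rg \
  --name ai-corp-vm \
  --image RedHat:RHEL:8-lvm-gen2:latest \
  --vnet-name ai-corp-vnet --subnet ai-corp-subnet \
  --generate-ssh-keys \
  --custom-data user_data.sh
EOF
```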
Ensure your Azure Subscription has access to deploy the "gpt-4o-mini" model (version: 2024-07-18). You may need to request quota/access for this model through the Azure Portal if not already enabled for your subscription.
First, the health_corp/run.sh
script creates a network to host the client.py
application which will connect to the Azure OpenAI model:
Network Infrastructure Setup:
We create an Azure Resource Group to contain all resources.
We create a Virtual Network (VNet) with a subnet to host the services.
VM Deployment and Ockam Setup:
We process the run_ockam.sh script by replacing:
${SERVICE_NAME} with the configured service name.
${TICKET} with the provided enrollment ticket.
We create a Red Hat Enterprise Linux 8 VM where the Ockam inlet node will run:
Use latest RHEL 8 LVM Gen2 image.
Generate SSH keys automatically.
Inject the processed Ockam setup script as custom data.
Client Application Deployment:
We wait for the VM to be accessible.
We copy required files to the VM:
Transfers client.py to the VM.
Copies .env.azure configuration file containing OpenAI credentials.
We set up the Python environment:
Install Python 3.9 and pip.
Install the OpenAI SDK.
Client Application Operation:
The client.py application:
Connects to the Azure OpenAI service using credentials from .env.azure.
Sends queries to the model.
We connected a Python application in one virtual network with an application serving an Azure OpenAI model in another virtual network over an end-to-end encrypted portal.
Sensitive business data coming from the Azure OpenAI model is only accessible to AI Corp. and Health Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with the model API can be easily revoked.
Health Corp. does not get unfettered access to AI Corp.'s network. It gets access only to run API queries to the Azure OpenAI service. AI Corp. does not get unfettered access to Health Corp.'s network. It gets access only to respond to queries over a TCP connection. AI Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither AI Corp. nor Health Corp. exposes any listening endpoints on the Internet. Their Azure virtual networks are completely closed and protected from attacks from the Internet through Network Security Groups (NSGs) that only allow essential communications.
To delete all Azure resources:
Let's connect a nodejs app in one virtual private network with an application serving an Amazon Bedrock model in another virtual private network. The example uses the AWS CLI to create these virtual networks.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, Curl, and the AWS CLI. Please set up these tools for your operating system. In particular you need to login to your AWS account with aws sso login
.
Then run the following commands:
If everything runs as expected, you'll see the answer to the question: "What is Ockham's Razor?".
The run.sh script that you ran above and its accompanying files are full of comments and are meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in AI Corp.’s network. The second ticket is meant for the Ockam node that will run in Health Corp.’s network.
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.
The run function passes the enrollment tickets as variables of the run scripts provisioning AI Corp.'s network and Health Corp.'s network.
First, the ai_corp/run.sh
script creates a network to host the application exposing the Bedrock model API:
We create a VPC and tag it.
We enable DNS attributes and hostnames for the VPC. This will be used to create the private link below.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We create a security group so that there is:
And one SSH ingress to install the server application accessing the model.
We finally create a private link to the Amazon Bedrock service to allow the Bedrock client inside the server application to access the Bedrock model.
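The private link step can be sketched as an interface VPC endpoint for the Bedrock runtime service. The region and resource IDs are placeholders, and the command is only written to a script here:

```shell
# Illustrative sketch of the Bedrock private link (region and IDs are
# placeholders). Written to a script rather than executed.
cat > create_private_link.sh <<'EOF'
#!/bin/bash
# Interface VPC endpoint so the Bedrock client in the server application
# can reach the Bedrock runtime API without traversing the public
# Internet. Private DNS makes the standard endpoint name resolve to it.
aws ec2 create-vpc-endpoint \
  --vpc-id "$VPC_ID" \
  --vpc-endpoint-type Interface \
  --service-name "com.amazonaws.us-east-1.bedrock-runtime" \
  --subnet-ids "$SUBNET_ID" \
  --security-group-ids "$SG_ID" \
  --private-dns-enabled
EOF
```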
We are now ready to create an EC2 instance where the Ockam outlet node will run:
We select an AMI.
We create a key pair in order to access the EC2 instance via SSH.
We start an instance using the selected AMI. Starting the instance executes a start script based on ai_corp/run_ockam.sh
where:
ENROLLMENT_TICKET
is replaced by the enrollment ticket created by the administrator and given as a parameter to ai_corp/run.sh
.
When the instance is started, the run_ockam.sh
script is executed:
We then create an Ockam node:
With a TCP outlet.
A policy associated with the outlet. The policy authorizes identities with a credential containing the attribute ai-inlet="true".
With a relay capable of forwarding the TCP traffic to the TCP outlet.
The model used in this example is the "Titan Text G1 - Lite" model. In order to use it, you will need to request access to this model.
First, the health_corp/run.sh
script creates a network to host the client.js
application which will connect to the Bedrock model:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We finally create a security group so that there is:
And one SSH ingress to download and install the nodejs client application.
We are now ready to create an EC2 instance where the Ockam inlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh
where:
ENROLLMENT_TICKET
is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh
.
The instance is started and the run_ockam.sh
script is executed:
We then create an Ockam node:
With a TCP inlet.
Forwarding messages to the ai
relay
A policy associated with the inlet. The policy authorizes identities with a credential containing the attribute ai-outlet="true".
We finally wait for the instance to be ready and install the client.js
application:
The client.js file is copied to the instance (this uses the previously created key.pem file to authenticate).
We can then SSH to the instance and:
Once the client.js
application is started:
It sends a query and waits for a response from the model.
The response is then printed on the console.
We connected a nodejs application in one virtual private network with an application serving an Amazon Bedrock model in another virtual private network over an end-to-end encrypted portal.
Sensitive business data coming from the model is only accessible to AI Corp. and Health Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with the model API can be easily revoked.
Health Corp. does not get unfettered access to AI Corp.’s network. It gets access only to run API queries. AI Corp. does not get unfettered access to Health Corp.’s network. It gets access only to respond to queries over a TCP connection. AI Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither AI Corp. nor Health Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from attacks from the Internet.
To delete all AWS resources:
Let's connect a nodejs app in one AWS VPC with a MongoDB database that resides in another AWS VPC. The example uses the AWS CLI to create these virtual networks.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, Curl, and the AWS CLI. Please set up these tools for your operating system, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.
In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It passes one enrollment ticket as a function argument to provision the Ockam node in Bank Corp.’s network, and the other to provision the Ockam node in Analysis Corp.’s network.
For Bank Corp., the run function calls a run.sh script that creates an Amazon VPC, on a closed network, to host the MongoDB instance.
For Analysis Corp., the run function also calls a run.sh script, which runs a nodejs app that writes to the MongoDB database hosted by Bank Corp.
First, the bank_corp/run.sh
script creates a network to host the database:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We finally create a security group so that there is:
No outbound connections are allowed for this VPC.
Then, the bank_corp/run.sh
script creates an EC2 instance where the Ockam outlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh
where the
ENROLLMENT_TICKET
is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh
.
Next, the instance is started and the run_ockam.sh
script is executed:
We then create an Ockam node:
With a TCP outlet.
A policy associated with the outlet. The policy authorizes identities with a credential containing the attribute monitoring-api-inlet="true".
With a relay capable of forwarding the TCP traffic to the TCP outlet.
First, the analysis_corp/run.sh
script creates a network to host the nodejs application:
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and create a route to the Internet via the gateway.
We create a subnet, and associate it with the route table.
We finally create a security group so that there is:
And one SSH ingress to download and install the nodejs application.
Then, we create an EC2 instance where the Ockam inlet node will run:
We select an AMI.
We start an instance using the AMI above and a start script based on run_ockam.sh
where the
ENROLLMENT_TICKET
is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh
.
Next, the instance is started and the run_ockam.sh
script is executed:
We then create an Ockam node:
With a TCP inlet.
A policy associated with the inlet. The policy authorizes identities with a credential containing the attribute monitoring-api-outlet="true".
Finally, we wait for the instance to be ready and run the nodejs application:
The app.js file is copied to the instance (this uses the previously created key.pem file to authenticate).
We can then SSH to the instance and:
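The copy-and-run step can be sketched with scp and ssh. The instance IP is a placeholder, and the commands are only written to a script here since they need the real instance and key pair:

```shell
# Illustrative sketch of deploying and running app.js (the IP is a
# placeholder; key.pem is the key pair created earlier). Written to a
# script rather than executed.
cat > deploy_app.sh <<'EOF'
#!/bin/bash
INSTANCE_IP="203.0.113.10"   # hypothetical public IP
# Copy the app to the instance, authenticating with key.pem.
scp -o StrictHostKeyChecking=no -i key.pem \
  app.js "ec2-user@${INSTANCE_IP}:app.js"
# SSH in and run the app; it connects to MongoDB through the local inlet.
ssh -o StrictHostKeyChecking=no -i key.pem \
  "ec2-user@${INSTANCE_IP}" 'node app.js'
EOF
```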
We connected a nodejs app in one AWS VPC with a MongoDB database in another AWS VPC over an end-to-end encrypted portal.
The private MongoDB database is accessible only to enrolled members of the project with the appropriate attributes. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to the database can be easily revoked.
Analysis Corp. does not get unfettered access to Bank Corp.’s network. It only gets access to run queries on the MongoDB server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to requests over a TCP connection. Bank Corp. cannot initiate connections.
All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints to the Internet. Their networks are completely closed and protected from attacks from the Internet.
To delete all AWS resources created by this example:
Create an Ockam Portal to send end-to-end encrypted messages through Apache Kafka.
Please select an example to dig in:
Create an Ockam Portal to send end-to-end encrypted messages through Redpanda.
Please select an example to dig in:
Create an Ockam Portal to send end-to-end encrypted messages through Kafka - from any producer, to any consumer, through any Kafka API compatible data streaming platform.
Please select an example to dig in:
In this hands-on example we send end-to-end encrypted messages through Apache Kafka.
This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
We sent end-to-end encrypted messages through Apache Kafka.
Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Kafka brokers and other Consumers can only see encrypted messages.
All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.
To delete all containers and images:
In this hands-on example we send end-to-end encrypted messages through Redpanda.
This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
You can view the Redpanda console, available at http://127.0.0.1:8080, to see the encrypted messages.
We sent end-to-end encrypted messages through Redpanda.
Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Redpanda and other Consumers can only see encrypted messages.
All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.
To delete all containers and images:
Let's connect a nodejs app in one company's Amazon VPC with a code repository hosted on a GitLab server in another company's Amazon VPC. The example uses the AWS CLI to create these VPCs.
Then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.
First, the bank_corp/run.sh
script creates a network to host the GitLab server:
We are now ready to create an EC2 instance where the Gitlab server and Ockam outlet node will run:
When the instance is started, the run_gitlab.sh
script is executed:
The password can be used to access the GitLab console from your local machine.
When the instance is started, the run_ockam.sh
script is executed:
We then create an Ockam node:
First, the analysis_corp/run.sh
script creates a network to host the nodejs application:
We are now ready to create an EC2 instance where the Ockam inlet node will run:
The instance is started and the run_repoaccess.sh
script is executed:
The instance is started and the run_ockam.sh
script is executed:
We then create an Ockam node:
We finally wait for the instance to be ready and install the nodejs application:
Once the nodejs application is started:
We connected a nodejs app in one virtual private network with a GitLab code repository in another virtual private network over an end-to-end encrypted portal.
Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to the codebase hosted on the GitLab server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a TCP connection. Bank Corp. cannot initiate connections.
To delete all AWS resources:
Create an Ockam Portal to send end-to-end encrypted messages through Warpstream Cloud.
Please select an example to dig in:
Create an Ockam Portal to send end-to-end encrypted messages through Confluent Cloud.
Please select an example to dig in:
In this hands-on example we send end-to-end encrypted messages through Instaclustr.
This example requires Bash, Git, jq, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
Once you are logged in to the Instaclustr console, account API keys can be created by going to the gear icon at the top right > Account Settings > API Keys. Create a Provisioning API key and note it down.
As an alternative to entering the username and API key, you can export them as the environment variables INSTACLUSTR_USER_NAME and INSTACLUSTR_API_KEY.
We sent end-to-end encrypted messages through Instaclustr.
Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Instaclustr and other Consumers can only see encrypted messages.
All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.
To delete all containers, images, and the Instaclustr cluster:
In this hands-on example we send end-to-end encrypted messages through Warpstream Cloud.
This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system. It's also necessary to include your Warpstream application key as an environment variable when running the example. The example can be run as follows:
If everything runs as expected, you'll see the message: The example run was successful 🥳
We sent end-to-end encrypted messages through Warpstream Cloud.
Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Warpstream Cloud and other Consumers can only see encrypted messages.
All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.
To delete all containers and images:
Create an Ockam Portal to send end-to-end encrypted messages through Instaclustr.
Please select an example to dig into:
In this hands-on example we send end-to-end encrypted messages through Confluent Cloud.
This example requires Bash, Confluent CLI, jq, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, log in to Confluent using your Confluent CLI so that clusters can be created and deleted, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
You can view the Confluent website to see the encrypted messages as they are being sent by the producer.
We sent end-to-end encrypted messages through Confluent Cloud.
Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Confluent Cloud and other Consumers can only see encrypted messages.
All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.
To delete all containers and images:
Ockam encrypts messages from a Producer to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Kafka. Operators of the Kafka cluster only see end-to-end encrypted data. Any compromise of an operator's infrastructure cannot compromise your business data.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Redpanda or the network where it is hosted. The operators of Redpanda can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
Ockam encrypts messages from a Producer to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Kafka. Operators of the Kafka cluster only see end-to-end encrypted data. Any compromise of an operator's infrastructure cannot compromise your business data.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
Ockam encrypts messages from a Producer to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Kafka. Operators of the Kafka cluster only see end-to-end encrypted data. Any compromise of an operator's infrastructure cannot compromise your business data.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates new enrollment tickets, each valid for 10 minutes and redeemable only once. One ticket is meant for the Ockam node that will run in Kafka Operator's network. The others are meant for the Ockam node that will run in Application Team’s network.
In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It provisions Ockam nodes in Kafka Operator's network and Application Team's network, passing them their tickets using environment variables.
The run function invokes docker-compose for both Kafka Operator's network and Application Team's network.
Kafka Operator’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Kafka Operator.
In this network, docker compose starts a Kafka container. This container becomes available at kafka:9092 in the Kafka Operator's network.
Once the Kafka container is ready, docker compose starts an Ockam node container as a companion to the Kafka container, described by ockam.yaml. The node will automatically create an identity, enroll with your project using the ticket, and set up a Kafka outlet.
The Ockam node then uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay: kafka. The run function configures the Application Team's nodes to use this relay address.
Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.
The Kafka consumer container is created using a dockerfile and an entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via an environment variable.
When the Kafka consumer node container starts in the Application Team's network, it runs its entrypoint, creating the Ockam node described by ockam.yaml. The node will automatically create an identity, enroll with your project, and set up the Kafka inlet.
Next, the entrypoint at the end executes the command present in the docker-compose configuration, which launches a Kafka consumer waiting for messages in the demo topic. Once the messages are received, they are printed out.
In the producer container, the process is analogous. Once the Ockam node is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Redpanda or the network where it is hosted. The operators of Redpanda can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates three new enrollment tickets, each valid for 10 minutes and redeemable only once. One ticket is meant for the Ockam node that will run in Redpanda Operator’s network. The other two are meant for the Consumer and Producer, in the Ockam node that will run in Application Team’s network.
In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It provisions Ockam nodes in Redpanda Operator's network and Application Team's network, passing them their tickets using environment variables.
The run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create Redpanda Operator’s and Application Team’s networks.
Redpanda Operator’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Redpanda Operator.
In this network, docker compose starts a Redpanda container. This container becomes available at redpanda:9092 in the Redpanda Operator network.
In the same network, docker compose also starts a Redpanda Console container, connecting directly to redpanda:9092. The console will be reachable throughout the example at http://127.0.0.1:8080.
Once the Redpanda container is ready, docker compose starts an Ockam node container as a companion to the Redpanda container, described by ockam.yaml. The node will automatically create an identity, enroll with your project using the ticket, and set up a Kafka outlet.
The Ockam node then uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay: redpanda. The run function configures the Application Team's nodes to use this relay address.
Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.
The Kafka consumer node container is created using a dockerfile and an entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via an environment variable.
When the Kafka consumer node container starts in the Application Team's network, it runs its entrypoint. The entrypoint creates the Ockam node described by ockam.yaml. The node will automatically create an identity, enroll with your project, and set up the Kafka inlet.
Next, the entrypoint at the end executes the command present in the docker-compose configuration, which launches a Kafka consumer waiting for messages in the demo topic. Once the messages are received, they are printed out.
In the producer container, the process is analogous. Once the Ockam node is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.
Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, Git, AWS CLI, Influx CLI, and jq. Please set up these tools for your operating system. In particular, you need to log in to AWS with aws sso login.
The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. One ticket is meant for the Ockam node that will run in Bank Corp.’s network. The other is meant for the Ockam node that will run in Analysis Corp.’s network.
The run function passes the enrollment tickets as variables to the run scripts provisioning Bank Corp.'s and Analysis Corp.'s networks.
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and route traffic to the Internet via the gateway.
We create a subnet, and associate it to the route table.
We finally create a security group so that there is:
Allowed inbound access, from the local machine running the example, to Gitlab on ports 22 and 80.
An SSH keypair to access the Gitlab repository is created.
We .
We access the EC2 instance to obtain the Gitlab password, so we can log in to the Gitlab console.
We launch an EC2 instance in the subnet above, with a start script based on run_ockam.sh and run_gitlab.sh, where:
the enrollment ticket is created by the administrator and given as a parameter to run.sh.
in the run_gitlab.sh script.
We and .
We wait for 3 minutes for Gitlab to be set up.
.
.
.
.
.
The .
The .
With .
A policy is attached to the TCP outlet. The policy authorizes identities with a credential containing the attribute gitlab-inlet="true".
With a relay capable of forwarding the TCP traffic to the TCP outlet.
We create a VPC and tag it.
We create an Internet gateway and attach it to the VPC.
We create a route table and route traffic to the Internet via the gateway.
We create a subnet, and associate it to the route table.
We finally create a security group so that there is:
Allowed inbound access to download and install the nodejs application from the local machine running the script.
We .
We launch an EC2 instance in the subnet above, with a start script based on run_ockam.sh, where:
the enrollment ticket is created by the administrator and given as a parameter to run.sh.
The SSH configuration is created on the EC2 instance with details of the private SSH key, and permissions are updated.
The .
The .
With .
A policy is attached to the TCP inlet. The policy authorizes identities with a credential containing the attribute gitlab-outlet="true".
The nodejs application has code to access the code repository on the configured port 1222.
We can then and:
.
.
.
.
It will .
It runs a script that clones the repository, makes sure the README.md file exists, inserts a line into README.md, makes a commit, and pushes the commit to the remote Gitlab server.
Sensitive business data in the Gitlab codebase is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with Gitlab can be easily revoked.
All connections are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Warpstream Cloud or the network where it is hosted. The operators of Warpstream Cloud can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Confluent Cloud or the network where it is hosted. The operators of Confluent Cloud can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Instaclustr or the network where it is hosted. The operators of Instaclustr can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then generates three new enrollment tickets, each valid for 10 minutes and redeemable only once. One ticket is meant for the Ockam node that will run in Instaclustr Operator’s network. The other two are meant for the Consumer and Producer, in the Ockam node that will run in Application Team’s network.
The run function logs in to Instaclustr using the username and API key, and a setup script gets invoked which:
Creates a Kafka cluster.
Creates a Kafka user for the consumer and producer to use.
Adds firewall rules to access the cluster from the machine running the script.
.
In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It provisions Ockam nodes in Instaclustr Operator's network and Application Team's network, passing them their tickets using environment variables.
The run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create Instaclustr Operator’s and Application Team’s networks.
Instaclustr Operator’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Instaclustr Operator.
In the same network, docker compose starts a Kafka UI container, connecting directly to ${BOOTSTRAPSERVER}:9092. The console will be reachable throughout the example at http://127.0.0.1:8080.
Docker compose starts an Ockam node container described by ockam.yaml. The node will automatically create an identity, enroll with your project using the ticket, and set up a Kafka outlet with the bootstrap server address passed to the container.
The Ockam node then uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay: instaclustr. The run function configures the Application Team's nodes to use this relay address.
Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.
The Kafka consumer node container is created using a dockerfile and an entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via an environment variable.
When the Kafka consumer node container starts in the Application Team's network, it runs its entrypoint. The entrypoint creates the Ockam node described by ockam.yaml. The node will automatically create an identity, enroll with your project, and set up the Kafka inlet.
Next, the entrypoint executes the command present in the docker-compose configuration, which launches a Kafka consumer waiting for messages in the demo topic. Once the messages are received, they are printed out.
In the producer container, the process is analogous. Once the Ockam node is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.
Both the consumer and producer use a client configuration that has the credentials of the Kafka user created when setting up the cluster.
You can view the Kafka UI, available at http://127.0.0.1:8080, to see the encrypted messages.
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Warpstream Cloud or the network where it is hosted. The operators of Warpstream Cloud can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then creates a new Kafka cluster by using your Warpstream application key.
An Ockam relay is then started which creates an encrypted relay that transmits Kafka messages over a secure portal.
We then generate two new enrollment tickets, each valid for 10 minutes and redeemable only once. The two tickets are meant for the Consumer and Producer, in the Ockam node that will run in Application Team’s network.
In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It creates a Kafka relay using a pre-baked Ockam Kafka addon which will host the Warpstream Kafka server and Application Team’s network, passing them their tickets using environment variables.
For the Application team, the run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create the Application Team’s network.
Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.
The Kafka consumer node container is created using a dockerfile and an entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via an environment variable.
When the Kafka consumer node container starts in the Application Team's network, it runs its entrypoint. The entrypoint enrolls with your project and then calls the Ockam kafka-consumer command, which starts the Kafka inlet and listens for traffic on localhost port 9092 through the Ockam relay.
Next, the entrypoint at the end executes the command present in the docker-compose configuration, which launches a Kafka consumer waiting for messages in the demo topic. Once the messages are received, they are printed out.
In the producer container, the process is analogous. Once the Ockam kafka-producer inlet is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Instaclustr or the network where it is hosted. The operators of Instaclustr can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Confluent Cloud or the network where it is hosted. The operators of Confluent Cloud can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then creates a new Kafka cluster using the Confluent CLI.
An Ockam relay is then started which creates an encrypted relay that transmits Kafka messages over a secure portal.
We then generate two new enrollment tickets, each valid for 10 minutes and redeemable only once. The two tickets are meant for the Consumer and Producer, in the Ockam node that will run in Application Team’s network.
In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It creates a Kafka relay using a pre-baked Ockam Confluent addon which will host the Confluent Kafka server and Application Team’s network, passing them their tickets using environment variables.
For the Application team, the run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create the Application Team’s network.
Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.
The Kafka consumer node container is created using a dockerfile and an entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via an environment variable.
When the Kafka consumer node container starts in the Application Team's network, it runs its entrypoint. The entrypoint enrolls with your project and then calls the Ockam kafka-consumer command, which starts the Kafka inlet and listens for traffic on localhost port 9092 through the Ockam relay.
Next, the entrypoint at the end executes the command present in the docker-compose configuration, which launches a Kafka consumer waiting for messages in the demo topic. Once the messages are received, they are printed out.
In the producer container, the process is analogous. Once the Ockam kafka-producer inlet is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.
We connect a nodejs app in an AWS virtual private network with a Gitlab-hosted code repository in another AWS virtual private network. The example uses the AWS CLI to instantiate AWS resources.
Amazon EC2
We connect a nodejs app in an AWS virtual private network with a LLaMA model provisioned on an EC2 instance in another AWS virtual private network. The example uses the AWS CLI to instantiate AWS resources.
Amazon Bedrock
We connect a nodejs app in an AWS virtual private network with an Amazon Bedrock API in another AWS virtual private network. The example uses the AWS CLI to instantiate AWS resources.
Scale mutual trust using lightweight, short-lived, revocable, attribute-based credentials.
An Ockam Credential is a signed attestation by an Issuer about the Attributes of a Subject. The Issuer and Subject are both Ockam Identities. Attributes are a list of name and value pairs.
Any Ockam Identity can issue credentials about another Ockam Identity.
The Issuer can include specific attributes in the attestation:
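A hedged sketch of issuing such a credential with the Ockam command line — the identity name, the subject identifier placeholder, and the attribute names below are all illustrative, not prescribed by this document:

```shell
# Issue a credential signed by the identity "authority", attesting
# attributes about a subject Ockam Identifier (placeholder shown).
ockam credential issue \
  --as authority \
  --for <SUBJECT_IDENTIFIER> \
  --attribute city=tokyo --attribute role=admin
```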
Trust and authorization decisions must be anchored in some pre-existing knowledge.
In the previous section about Ockam Secure Channels we ran an example of mutual authorization using pre-existing knowledge of Ockam Identifiers. In this example, n1 knows i2 and n2 knows i1:
Ockam Identities are unique, cryptographically verifiable digital identities. These identities authenticate by proving possession of secret keys. Ockam Vaults safely store these secret keys.
In order to make decisions about trust, we must authenticate senders of messages.
Ockam Identities authenticate by cryptographically proving possession of specific secret keys. Ockam Vaults safely store these secret keys in cryptographic hardware and cloud key management systems.
You can create a vault as follows:
This command will, by default, create a file system based vault, where your secret keys are stored at a specific file path.
Vaults are designed to be used in a way that secret keys never have to leave a vault. There is a growing base of Ockam Vault implementations in the Ockam GitHub Repository that safely store secret keys in specific KMSs, HSMs, Secure Enclaves etc.
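For instance, assuming the ockam command-line tool is installed, a named vault can be created like this (the vault name v1 is just an example):

```shell
# Create a vault named v1; by default its secret keys are stored
# in a file on the local file system.
ockam vault create v1
```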
Ockam Identities are unique, cryptographically verifiable digital identities.
You can create a new identity by typing:
The secret keys belonging to this identity are stored in the specified vault. This can be any type of vault - File Vault, AWS KMS, Azure KeyVault, YubiKey etc. If no vault is specified, the default vault is used. If a default vault doesn't exist yet, a new file-system-based vault is created, set as default, and then used to generate secret keys.
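As a sketch, assuming the ockam CLI is installed (the names alice and v1 are illustrative):

```shell
# Create a vault, then an identity whose secret keys live in that vault.
ockam vault create v1
ockam identity create alice --vault v1
```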
To ensure privacy and eliminate the possibility of correlation of behavior across trust contexts, we've made it easy to generate and use different identities and identifiers for separate trust contexts.
Each Ockam Identity starts its life by generating a secret key and its corresponding public key. Secret keys must remain secret, while public keys can be shared with the world.
Ockam Identities support two types of Elliptic Curve secret keys that live in vaults - Curve25519 or NIST P-256.
Each Ockam Identity has a unique public identifier, called the Ockam Identifier of this identity:
This Identifier is generated by hashing the first public key of the Identity.
Ockam Identities can periodically rotate their keys to indicate that the latest public key is the one that should be used for authentication. Each Ockam Identity maintains a self-signed change history of key rotation events. You can see this full history by running:
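A sketch of what that command could look like, assuming an identity named alice exists (the --full flag is an assumption — check your CLI version's help output):

```shell
# Print the short, unique Ockam Identifier of the identity.
ockam identity show alice

# Print the full self-signed change history of key rotation events.
ockam identity show alice --full
```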
Authentication, within Ockam, starts by proving control of a specific Ockam Identifier. To prove control of a specific Identifier, the prover must present the identifier, the full signed change history of the identifier, and a signature on a challenge using the secret key corresponding to the latest public key in the identifier's change history.
Next, let's combine everything we've learnt so far to create mutually authenticated and end-to-end encrypted secure channels that guarantee data authenticity, integrity, and confidentiality.
We send end-to-end encrypted messages through Apache Kafka.
Send end-to-end encrypted messages through Redpanda.
Send end-to-end encrypted messages through Warpstream Cloud.
Send end-to-end encrypted messages through Confluent Cloud.
Send end-to-end encrypted messages through Instaclustr.
In this hands-on example we send end-to-end encrypted messages through Aiven Cloud.
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Aiven Cloud or the network where it is hosted. The operators of Aiven Cloud can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Aiven CLI, Bash, jq, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.
The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
The run function then creates a new Kafka cluster using the Aiven CLI.
An Ockam relay is then started using the Ockam Kafka addon which creates an encrypted Kafka relay that transmits Kafka messages over a secure portal.
We then generate two new enrollment tickets, each valid for 10 minutes and redeemable only once. The two tickets are meant for the Consumer and Producer, in the Ockam node that will run in Application Team’s network.
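Such tickets can be generated with the Ockam CLI. A hedged sketch — the attribute name is illustrative, and the exact flags may vary across CLI versions:

```shell
# Generate a one-time-use enrollment ticket that expires in 10 minutes,
# tagging whoever redeems it with an illustrative attribute.
ockam project ticket --usage-count 1 --expires-in 10m --attribute role=consumer
```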
In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It creates a Kafka relay using a pre-baked Ockam Kafka addon which will host the Aiven Kafka server and Application Team’s network, passing them their tickets using environment variables.
For the Application team, the run function takes the enrollment tickets, sets them as the value of an environment variable, passes the Aiven authentication variables, and invokes docker-compose to create the Application Team’s network.
Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.
The Kafka consumer node container is created using this dockerfile and this entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via environment variable.
When the Kafka consumer node container starts in the Application Team's network, it runs its entrypoint. The entrypoint enrolls with your project and then calls the Ockam kafka-consumer command, which starts the Kafka inlet and listens for traffic on localhost port 9092 through the Ockam relay.
Next, the entrypoint at the end executes the command present in the docker-compose configuration, which launches a Kafka consumer waiting for messages in the demo topic. Once the messages are received, they are printed out.
In the producer container, the process is analogous. Once the Ockam kafka-producer inlet is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.
You can view the Aiven website to see the encrypted messages as they are being sent by the producer.
We sent end-to-end encrypted messages through Aiven Cloud.
Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Aiven Cloud and other Consumers can only see encrypted messages.
All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.
To delete all containers and images:
Create an Ockam Portal to send end-to-end encrypted messages through Aiven.
Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Aiven or the network where it is hosted. The operators of Aiven can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
Please select an example to dig in:
Ockam Routing and Transports enable protocols that provide end-to-end guarantees to messages traveling across many network connection hops and protocol boundaries.
Data, within modern applications, routinely flows over complex, multi-hop, multi-protocol routes before reaching its end destination. It's common for application layer requests and data to move across network boundaries, beyond data centers, via shared or public networks, through queues and caches, from gateways and brokers to reach remote services and other distributed parts of an application.
Ockam is designed to enable end-to-end application layer guarantees in any communication topology.
For example, Ockam Secure Channels provide end-to-end guarantees of data authenticity, integrity, and privacy in any of the above communication topologies. In contrast, traditional secure communication implementations are typically tightly coupled with transport protocols in a way that all their security is limited to the length and duration of one underlying transport connection.
For example, most TLS implementations are tightly coupled with the underlying TCP connection. If your application's data and requests travel over two TCP connection hops (TCP -> TCP), then all TLS guarantees break at the bridge between the two networks. This bridge, gateway, or load balancer then becomes a point of weakness for application data.
To make matters worse, if you don't set up another mutually authenticated TLS connection on the second hop between the gateway and your destination server, then the entire second hop network – which may have thousands of applications and machines within it – becomes an attack vector to your application and its data. If any of these neighboring applications or machines are compromised, then your application and its data can also be easily compromised.
Traditional secure communication protocols are also unable to protect your application's data if it travels over multiple different transport protocols. They can't guarantee data authenticity or data integrity if your application's communication path is UDP -> TCP or BLE -> TCP.
Ockam Routing is a simple and lightweight message-based protocol that makes it possible to bidirectionally exchange messages over a large variety of communication topologies: TCP -> TCP, TCP -> TCP -> TCP, BLE -> UDP -> TCP, BLE -> TCP -> TCP, TCP -> Kafka -> TCP, or any other topology you can imagine.
Ockam Transports adapt Ockam Routing to various transport protocols. By layering Ockam Secure Channels and other protocols over Ockam Routing, we can provide end-to-end guarantees over arbitrary transport topologies that span many networks and clouds.
Let's start by creating a node and sending a message to a service on that node.
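A minimal sketch with Ockam Command (the exact output format may vary by version):

```shell
# Create a node named n1, then send a message to its built-in echo service.
ockam node create n1
ockam message send 'Hello Ockam!' --to /node/n1/service/echo
```

The echo service replies with the same message, which ockam prints to standard output.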
We get a reply back, and the message flow looks like this.
To achieve this, Ockam Routing Protocol messages carry two metadata fields: onward_route and return_route. A route is an ordered list of addresses describing a message's path of travel. All of this information is carried in a compact binary format.
Pay very close attention to the Sender, Hop, and Replier rules in the sequence diagrams below. Note how onward_route and return_route are handled as the message travels.
The above was just one message hop. We can extend this to two hops:
This very simple protocol can extend to any number of hops. Try the following command:
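A hedged sketch of a multi-hop route, assuming the node exposes a hop service as in the guide's diagrams (service names and route syntax may vary by version):

```shell
# Each /service/hop segment forwards the message one more local hop
# before it finally reaches the echo service.
ockam message send hello --to /node/n1/service/hop/service/hop/service/hop/service/echo
```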
So far, we've routed messages between Workers on one Node. Next, let's see how we can route messages across nodes and machines using Ockam Routing adapters called Transports.
Ockam Transports adapt Ockam Routing to specific transport protocols, like TCP, UDP, WebSockets, Bluetooth, etc. There is a growing base of Ockam Transport implementations in the Ockam GitHub Repository.
Let's start by exploring the TCP transport. Create two new nodes, n2 and n3, and explicitly specify that they should listen on the local TCP addresses 127.0.0.1:7000 and 127.0.0.1:8000, respectively:
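For example (the --tcp-listener-address flag follows the current CLI; check ockam node create --help on your version):

```shell
ockam node create n2 --tcp-listener-address 127.0.0.1:7000
ockam node create n3 --tcp-listener-address 127.0.0.1:8000
```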
Next, let's create two TCP connections, one from n1 to n2 and the other from n2 to n3. Let's also add a hop for routing purposes:
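A sketch of those two connections (exact sub-commands may differ across releases; the routing hop mentioned above is elided here):

```shell
# n1 dials n2's listener; n2 dials n3's listener.
# Each command prints the worker address of the new TCP connection.
ockam tcp-connection create --from n1 --to 127.0.0.1:7000
ockam tcp-connection create --from n2 --to 127.0.0.1:8000
```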
Note, from the output, that the TCP connection from n1 to n2 on n1 has the worker address ac40f7edbf7aca346b5d44acf82d43ba, and the TCP connection from n2 to n3 on n2 has the worker address 7d2f9587d725311311668075598e291e. We can combine this information to send a message over two TCP hops.
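Using the worker addresses from the output above, the two-hop send looks roughly like this (route syntax per the routing guide; check your version):

```shell
# Route via n1's TCP worker to n2, then n2's TCP worker to n3,
# and finally to the uppercase service on n3.
ockam message send hello --from n1 \
  --to /worker/ac40f7edbf7aca346b5d44acf82d43ba/worker/7d2f9587d725311311668075598e291e/service/uppercase
```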
The message in the above command took the following route:
In this example, we ran a simple uppercase request and response protocol between n1 and n3, two nodes that weren't directly connected to each other. This simple combination of Ockam Routing and Transports is the foundation of end-to-end protocols in Ockam.
We can have any number of TCP hops along the route to the uppercase service. We can also easily have some hops that use a completely different transport protocol, like UDP or Bluetooth. Transport protocols are pluggable, and there is a growing base of Ockam Transport Add-Ons in our GitHub Repository.
Ockam Routing is a simple and lightweight message-based protocol that makes it possible to bidirectionally exchange messages over a large variety of communication topologies: TCP -> TCP, TCP -> TCP -> TCP, BLE -> UDP -> TCP, BLE -> TCP -> TCP, TCP -> Kafka -> TCP, or any other topology you can imagine. Ockam Transports adapt Ockam Routing to various transport protocols.
Together they give us a simple yet extremely flexible foundation to describe end-to-end, application layer protocols that can operate in any communication topology.
Next, let's explore how Ockam Relays and Portals make it simple to connect existing applications across networks.
Command line tools to build and orchestrate secure by design applications.
Ockam Command is our command line interface to build secure by design applications that can trust all data in motion. It makes it easy to orchestrate end-to-end encryption, mutual authentication, key management, credential management, and authorization policy enforcement – at a massive scale.
No more having to design error-prone ad-hoc ways to distribute sensitive credentials and roots of trust. Ockam's integrated approach takes away this complexity and gives you simple tools for:
Create end-to-end encrypted, authenticated secure channels over any transport topology.
Create secure channels over multi-hop, multi-protocol routes over TCP, UDP, WebSockets, BLE, etc.
Provision encrypted relays for applications distributed across many edge, cloud and data-center private networks.
Make any protocol secure by tunneling it through mutually authenticated and encrypted portals.
Bring end-to-end encryption to enterprise messaging, pub/sub and event streams - Kafka, Kinesis, RabbitMQ, etc.
Generate cryptographically provable unique identities.
Store private keys in safe vaults - hardware secure enclaves and cloud key management systems.
Operate scalable credential authorities to issue lightweight, short-lived, revocable, attribute-based credentials.
Onboard fleets of self-sovereign application identities using secure enrollment protocols.
Rotate and revoke keys and credentials – at scale, across fleets.
Define and enforce project-wide attribute-based access control policies. Choose ABAC, RBAC or ACLs.
Integrate with enterprise identity providers and policy providers for seamless employee access.
Ockam Command provides the above collection of composable building blocks, accessible through various sub-commands. In this step-by-step guide, let's walk through the various Ockam sub-commands to understand how you can use them to build end-to-end trustful communication for any application, in any communication topology.
If you haven't already, the first step is to install Ockam Command:
If you use Homebrew, you can install Ockam using brew.
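A sketch of the Homebrew install (at the time of writing, the tap is build-trust/ockam; check the install docs for your platform):

```shell
brew install build-trust/ockam/ockam
```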
This will download a precompiled binary and add it to your path. If you don't use Homebrew, you can also install on Linux and macOS systems using curl. See instructions for other systems in the next tab.
On Linux and macOS, you can download precompiled binaries for your architecture using curl.
This will download a precompiled binary and add it to your path. If the above instructions don't work on your machine, please post a question, we'd love to help.
Check that everything was installed correctly by enrolling with Ockam Orchestrator. This will create a Space and Project for you in Ockam Orchestrator.
Next, let's dive in and learn how to use Nodes and Workers.
Ockam Relays make it easy to traverse NATs and run end-to-end protocols between Ockam Nodes in far away private networks. Ockam Portals make existing protocols work over Ockam Routing.
In the previous section, we learned how Ockam Routing and Transports create a foundation for end-to-end application layer protocols. When discussing Transports, we put together a specific example communication topology – a transport bridge.
Node n1 wishes to access a service on node n3, but it can't directly connect to n3. This can happen for many reasons: maybe n3 is in a separate IP subnet, or the communication from n1 to n2 uses UDP while the communication from n2 to n3 uses TCP, or other similar constraints. The topology makes n2 a bridge or gateway between these two separate networks to enable end-to-end protocols between n1 and n3 even though they are not directly connected.
It is common, however, to encounter communication topologies where the machine that provides a service is unwilling or is not allowed to open a listening port or expose a bridge node to other networks. This is a common security best practice in enterprise environments, home networks, OT networks, and VPCs across clouds. Application developers may not have control over these choices from the infrastructure / operations layer. This is where relays are useful.
Relays make it possible to establish end-to-end protocols with services operating in a remote private network, without requiring a remote service to expose listening ports to an outside hostile network like the Internet.
Delete any existing nodes and then try this new example:
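A sketch of the relay topology (older releases spell the last command ockam forwarder create; check your version's help):

```shell
ockam node delete --all

ockam node create n1
ockam node create n2 --tcp-listener-address 127.0.0.1:7000
ockam node create n3

# n3 makes an outgoing TCP connection to n2 and asks its relay
# service for a forwarding address.
ockam relay create n3 --at /node/n2 --to /node/n3
```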
In this example, the direction of the second TCP connection is reversed in comparison to our first example that used a bridge. n2 is the only node that has to listen for TCP connections.
Node n2 is running a relay service. n3 makes an outgoing TCP connection to n2 and requests a forwarding address from the relay service. n3 then becomes reachable via n2 at the address /service/forward_to_n3.
Node n1 connects with n2 and routes messages to n3 via its forwarding relay.
The message in the above example took the following route. This is very similar to our earlier example except for the direction of the second TCP connection. The relay worker remembers the route back to n3. n1 just has to get the message to the forwarding relay and everything just works.
Using this simple topology rearrangement, Ockam Routing makes it possible to establish end-to-end protocols between applications that are running in completely private networks.
We can traverse NATs and pierce through network boundaries. And since this is all built using a very simple application layer routing protocol, we can have any number of transport connection hops in any transport protocol, and we can mix-match bridges with relays to create end-to-end protocols in any communication topology.
Portals make existing protocols work over Ockam Routing without changing any code in the existing applications.
Continuing from our Relays example, create a Python-based web server to represent a sample web service. This web service is listening on 127.0.0.1:9000.
Then create a TCP Portal Outlet that makes 127.0.0.1:9000 available at the worker address /service/outlet on n3. We already have a forwarding relay for n3 on n2 at /service/forward_to_n3.
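A sketch of the web service and the outlet (tcp-outlet is a current CLI sub-command; flags may vary by version):

```shell
# A stand-in web service listening on 127.0.0.1:9000.
python3 -m http.server 9000 &

# Make 127.0.0.1:9000 reachable at /service/outlet on n3.
ockam tcp-outlet create --at /node/n3 --from /service/outlet --to 127.0.0.1:9000
```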
We then create a TCP Portal Inlet on n1 that will listen for TCP connections to 127.0.0.1:6000. For every new connection, the inlet creates a portal following the --to route all the way to the outlet. As it receives TCP data, it chunks and wraps it into Ockam Routing messages and sends them along the supplied route. The outlet receives Ockam Routing messages, unwraps them to extract the TCP data, and sends that data along to the target web service on 127.0.0.1:9000. It all just seamlessly works.
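A sketch of the inlet and a test request (flags may vary by version):

```shell
# Listen on 127.0.0.1:6000 and portal traffic to the outlet via the relay.
ockam tcp-inlet create --at /node/n1 --from 127.0.0.1:6000 \
  --to /node/n2/service/forward_to_n3/service/outlet

# Requests to the inlet reach the web service behind the outlet.
curl --head 127.0.0.1:6000
```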
The HTTP requests from curl enter the inlet on n1, travel to n2, and are relayed back to n3 via its forwarding relay to reach the outlet and onward to the Python-based web service. Responses take the same return route back to curl.
The TCP Inlet/Outlet work for a large number of TCP-based protocols like HTTP. It is also simple to implement portals for other transport protocols. There is a growing base of Ockam Portal Add-Ons in our GitHub Repository.
Ockam Routing and Transports combined with the ability to model Bridges and Relays make it possible to create end-to-end, application layer protocols in any communication topology - across networks, clouds, and boundaries.
Portals take this powerful capability a huge step forward by making it possible to apply these end-to-end protocols and their guarantees to existing applications, without changing any code!
This lays the foundation to make both new and existing applications - end-to-end encrypted and secure-by-design.
Next, let's learn how to create cryptographic identities and store secret keys in safe vaults.
Ockam Nodes and Workers decouple applications from the host environment and enable simple interfaces for stateful and asynchronous message-based protocols.
At Ockam's core is a collection of cryptographic and messaging protocols. These protocols make it possible to create private and secure by design applications that provide end-to-end application layer trusted data.
Ockam is designed to make these powerful protocols easy and safe to use in any application environment – from highly scalable cloud services to tiny battery operated microcontroller based devices.
However, many of these protocols require multiple steps and have complicated internal state that must be managed with care. It can be quite challenging to make them simple to use, secure, and platform independent.
Ockam Nodes, Workers, and Services help hide this complexity and decouple from the host environment - to provide simple interfaces for stateful and asynchronous message-based protocols.
An Ockam Node is any program that can interact with other Ockam Nodes using various Ockam protocols like Ockam Routing and Ockam Secure Channels.
You can create a standalone node using Ockam Command or embed one directly into your application using various Ockam programming libraries. Nodes are built to leverage the strengths of their operating environment. Our Rust implementation, for example, makes it easy to adapt to various architectures and processors. It can run efficiently on tiny microcontrollers or scale horizontally in cloud environments.
A typical Ockam Node is implemented as an asynchronous execution environment that can run very lightweight, concurrent, stateful actors called Ockam Workers. Using Ockam Routing, a node can deliver messages from one worker to another local worker. Using Ockam Transports, nodes can also route messages to workers on other remote nodes.
Ockam Command makes it super easy to create and manage local or remote nodes. If you run ockam node create, it will create and start a node in the background and give it a random name:
Similarly, you can also create a node with a name of your choice:
You could also start a node in the foreground and optionally tell it to display verbose logs:
To stop the foreground node, you can press Ctrl-C. This will stop the node but won't delete its state.
You can see all running nodes with ockam node list. You can stop a running node with ockam node stop. You can start a stopped node with ockam node start.
You can permanently delete a node by running:
You can also delete all nodes with:
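Collected together, the node lifecycle commands described above look like this (sub-command names per current docs; check ockam node --help on your version):

```shell
ockam node create          # create a node with a random name
ockam node create n1       # create a node named n1
ockam node list            # see all running nodes
ockam node stop n1         # stop a running node
ockam node start n1        # start a stopped node
ockam node delete n1       # permanently delete one node
ockam node delete --all    # delete all nodes
```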
Ockam Nodes run very lightweight, concurrent, and stateful actors called Ockam Workers. They are like processes on your operating system, except that they all live inside one node and are very lightweight so a node can have hundreds of thousands of them, depending on the capabilities of the machine hosting the node.
When a worker is started on a node, it is given one or more addresses. The node maintains a mailbox for each address and whenever a message arrives for a specific address it delivers that message to the corresponding worker. In response to a message, a worker can: make local decisions, change internal state, create more workers, or send more messages.
You can see the list of workers in a node by running:
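For example (the --at flag selects the node; check your version's help):

```shell
ockam worker list --at n1
```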
Note the workers in node n1 with the addresses echo and uppercase. We'll send them some messages below as we look at services. A node can also deliver messages to workers on a different node using the Ockam Routing Protocol and its Transports. Later in this guide, when we dig into routing, we'll send some messages across nodes.
From the ockam command, we don't usually create workers directly but instead start predefined services like Transports and Secure Channels that in turn start one or more workers. Using our libraries you can also develop your own workers.
Workers are stateful and can asynchronously send and receive messages. This makes them a potent abstraction that can take over the responsibility of running multistep, stateful, and asynchronous message-based protocols. This enables the ockam command and Ockam Programming Libraries to expose very simple and safe interfaces for powerful protocols.
One or more Ockam Workers can work as a team to offer a Service. Services can also be attached to a trust context and authorization policies to enforce attribute based access control rules.
For example, nodes that are created with Ockam Command come with some predefined services, including an example service /service/uppercase that responds with an uppercased version of whatever message you send it:
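For example:

```shell
# The uppercase service replies with an uppercased copy of the message.
ockam message send hello --to /node/n1/service/uppercase
```

The reply printed to standard output is HELLO.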
Services have addresses represented by /service/{ADDRESS}. You can see a list of all services on a node by running:
Later in this guide, we'll explore other commands that interact with predefined services. For example, every node created with the ockam command starts a secure channel listener at the address /service/api, which allows other nodes to create mutually authenticated secure channels with it.
Ockam Spaces are infinitely scalable Ockam Nodes in the cloud. Ockam Orchestrator can create, manage, and scale spaces for you. Like other nodes, Spaces offer services. For example, you can create projects within a space, invite teammates to it, or attach payment subscriptions.
When you run ockam enroll for the first time, we create a space for you to host your projects.
Ockam Projects are also infinitely scalable Ockam Nodes in the cloud. Ockam Orchestrator can create, manage, and scale projects for you. Projects are created within a Space and can inherit permissions and subscriptions from their parent space. There can be many projects within one space.
When you run ockam enroll for the first time, we create a default project for you, within your default space.
Like other nodes, Projects offer services. For example, the default project has an echo service just like the local nodes we created above. We can send messages to it and get replies. The echo service replies with the same message we send it.
Ockam Nodes are programs that interact with other nodes using one or more Ockam protocols like Routing and Secure Channels. Nodes run very lightweight, concurrent, and stateful actors called Workers. Nodes and Workers hide complexities of environment and state to enable simple interfaces for stateful, asynchronous, message-based protocols.
One or more Workers can work as a team to offer a Service. Services can be attached to trust contexts and authorization policies to enforce attribute based access control rules. Ockam Orchestrator can create and manage infinitely scalable nodes in the cloud called Spaces and Projects that offer managed services that are designed for scale and reliability.
Next, let's learn about Ockam's Application Layer Routing and how it enables protocols that provide end-to-end guarantees to messages traveling across many network connection hops and protocol boundaries.
Ockam Secure Channels are mutually authenticated and end-to-end encrypted messaging channels that guarantee data authenticity, integrity, and confidentiality.
To trust data-in-motion, applications need end-to-end guarantees of data authenticity, integrity, and confidentiality.
In previous sections, we saw how Ockam Routing and Transports, when combined with the ability to model Bridges and Relays, make it possible to create end-to-end, application layer protocols in any communication topology - across networks, clouds, and protocols over many transport layer hops.
Ockam Secure Channels is an end-to-end protocol built on top of Ockam Routing. This cryptographic protocol guarantees data authenticity, integrity, and confidentiality over any communication topology that can be traversed with Ockam Routing.
Distributed applications that are connected in this way can communicate without the risk of spoofing, tampering, or eavesdropping attacks, irrespective of transport protocols, communication topologies, and network configuration. As application data flows across data centers, through queues and caches, via gateways and brokers - these intermediaries, like the relay in the above picture, can facilitate communication but cannot eavesdrop or tamper data.
In contrast, traditional secure communication implementations are typically tightly coupled with transport protocols in a way that all their security is limited to the length and duration of one underlying transport connection.
For example, most TLS implementations are tightly coupled with the underlying TCP connection. If your application's data and requests travel over two TCP connection hops (TCP -> TCP), then all TLS guarantees break at the bridge between the two networks. This bridge, gateway, or load balancer then becomes a point of weakness for application data.
To make matters worse, if you don't set up another mutually authenticated TLS connection on the second hop between the gateway and your destination server, then the entire second hop network – which may have thousands of applications and machines within it – becomes an attack vector to your application and its data. If any of these neighboring applications or machines are compromised, then your application and its data can also be easily compromised.
Traditional secure communication protocols are also unable to protect your application's data if it travels over multiple different transport protocols. They can't guarantee data authenticity or data integrity if your application's communication path is UDP -> TCP or BLE -> TCP.
Ockam Routing and Transports, when combined with the ability to model Bridges and Relays, make it possible to bidirectionally exchange messages over a large variety of communication topologies: TCP -> TCP, TCP -> TCP -> TCP, BLE -> UDP -> TCP, BLE -> TCP -> TCP, TCP -> Kafka -> TCP, etc.
By layering Ockam Secure Channels over Ockam Routing, it becomes simple to provide end-to-end, application layer guarantees of data authenticity, integrity, and confidentiality in any communication topology.
Ockam Secure Channels provides the following end-to-end guarantees:
Authenticity: Each end of the channel knows that messages received on the channel must have been sent by someone who possesses the secret keys of a specific Ockam Identifier.
Integrity: Each end of the channel knows that the messages received on the channel could not have been tampered en route and are exactly what was sent by the authenticated sender at the other end of the channel.
Confidentiality: Each end of the channel knows that the contents of messages received on the channel could not have been observed en route between the sender and the receiver.
To establish the secure channel, the two ends run an authenticated key establishment protocol and then authenticate each other's Ockam Identifier by signing the transcript hash of the key establishment protocol. The cryptographic key establishment safely derives shared secrets without transporting these secrets on the wire.
Once the shared secrets are established, they are used for authenticated encryption that ensures data integrity and confidentiality of application data.
Our secure channel protocol is based on a handshake design pattern described in the Noise Protocol Framework. Designs based on this framework are widely deployed and the described patterns have formal security proofs. The specific pattern that we use in Ockam Secure Channels provides sender and receiver authentication and is resistant to key compromise impersonation attacks. It also ensures the integrity and secrecy of application data and provides strong forward secrecy.
Now that you're familiar with the basics, let's create some secure channels. If you haven't already, install Ockam Command, run ockam enroll, and delete any nodes from previous examples.
In this example, we'll create a secure channel from node a to node b. Every node created with Ockam Command starts a secure channel listener at the address /service/api.
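A sketch (command shape per the secure channels guide; flags may vary by version):

```shell
ockam node create a
ockam node create b

# Create a mutually authenticated, end-to-end encrypted channel
# from a to b's secure channel listener.
ockam secure-channel create --from /node/a --to /node/b/service/api
```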
In the above example, a and b mutually authenticate using the default Ockam Identity that is generated when we create the first node. Both nodes, in this case, are using the same identity.
Once the channel is created, note above how we used the service address of the channel on a to send messages through the channel. This can be shortened to the one-liner:
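A sketch of that one-liner (the - placeholder is replaced with the channel address read from standard input):

```shell
# Create the channel and immediately send a message through it.
ockam secure-channel create --from /node/a --to /node/b/service/api |
  ockam message send hello --from /node/a --to -/service/uppercase
```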
The first command writes /service/d92ef0aea946ec01cdbccc5b9d3f2e16, the address of a new secure channel on a, to standard output, and the second command replaces the - in the --to argument with the value from standard input. Everything else works the same.
In a previous section, we learned that Bridges enable end-to-end protocols between applications in separate networks in cases where we have a bridge node that is connected to both networks. Since Ockam Secure Channels are built on top of Ockam Routing, we can establish end-to-end secure channels over a route that may include one or more bridges.
Delete any existing nodes and then try this example:
In a previous section, we also saw how Relays make it possible to establish end-to-end protocols with services operating in a remote private network without requiring a remote service to expose listening ports on an outside hostile network like the Internet.
Since Ockam Secure Channels are built on top of Ockam Routing, we can establish end-to-end secure channels over a route that may include one or more relays.
Delete any existing nodes and then try this example:
Ockam Secure Channels are built on top of Ockam Routing. But they also carry Ockam Routing messages.
Any protocol that is implemented in this way melds with and becomes a seamless part of Ockam Routing. This means that we can run any Ockam Routing based protocol through Secure Channels. This also means that we can create Secure Channels that pass through other Secure Channels.
The on-the-wire overhead of a new secure channel is only 20 bytes per message. This makes passing secure channels through other secure channels a powerful tool in many real world topologies.
Ockam Orchestrator can create and manage Elastic Encrypted Relays in the cloud within your Orchestrator project. These managed relays are designed for high availability, high throughput, and low latency.
Let's create an end-to-end secure channel through an elastic relay in your Orchestrator project.
The Project that was created when you ran ockam enroll offers an Elastic Relay Service. Delete any existing nodes and then try this new example:
Nodes a and b (the two ends) are mutually authenticated and are cryptographically guaranteed data authenticity, integrity, and confidentiality - even though their messages are traveling over the public Internet over two different TCP connections.
In a previous section, we saw how Portals make existing application protocols work over Ockam Routing without changing any code in the existing applications.
We can combine Secure Channels with Portals to create Secure Portals.
Continuing from the above example on Elastic Encrypted Relays, create a Python-based web server to represent a sample web service. This web service is listening on 127.0.0.1:9000.
Then create a TCP Portal Outlet that makes 127.0.0.1:9000 available at the worker address /service/outlet on b. We already have a forwarding relay for b on the orchestrator /project/default at /service/forward_to_b.
We then create a TCP Portal Inlet on a that will listen for TCP connections to 127.0.0.1:6000. For every new connection, the inlet creates a portal following the --to route all the way to the outlet. As it receives TCP data, it chunks and wraps it into Ockam Routing messages and sends them along the supplied route. The outlet receives Ockam Routing messages, unwraps them to extract the TCP data, and sends that data along to the target web service on 127.0.0.1:9000. It all just seamlessly works.
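A sketch of the inlet and a test request through the Orchestrator relay (flags may vary by version):

```shell
# Listen locally and portal traffic through the project relay to the outlet.
ockam tcp-inlet create --at /node/a --from 127.0.0.1:6000 \
  --to /project/default/service/forward_to_b/service/outlet

curl --head 127.0.0.1:6000
```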
The HTTP requests from curl enter the inlet on a, travel to the orchestrator project node, and are relayed back to b via its forwarding relay to reach the outlet and onward to the Python-based web service. Responses take the same return route back to curl.
The TCP Inlet/Outlet works for a large number of TCP-based protocols like HTTP. It is also simple to implement portals for other transport protocols. There is a growing base of Ockam Portal Add-Ons in our GitHub Repository.
Trust and authorization decisions must be anchored in some pre-existing knowledge.
Delete any existing nodes and then try this new example:
Ockam Secure Channels is an end-to-end protocol built on top of Ockam Routing. This cryptographic protocol guarantees data authenticity, integrity, and confidentiality over any communication topology that can be traversed with Ockam Routing.
Next, let's explore how we can scale mutual authentication with Ockam Credentials.
Create an Ockam node using a CloudFormation template
This guide contains instructions to launch, within an AWS environment:
an Ockam Outlet Node
an Ockam Inlet Node
The walkthrough demonstrates running both outlet and inlet nodes and verifying communication between them.
Read: “How does Ockam work?” to learn about end-to-end trust establishment.
Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.
Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.
Completing this step creates a Project in Ockam Orchestrator.
Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.
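A sketch of generating the two tickets (flag names like --usage-count and --expires-in follow current docs; check ockam project ticket --help on your version):

```shell
ockam project ticket --usage-count 1 --expires-in 1h > outlet.ticket
ockam project ticket --usage-count 1 --expires-in 1h > inlet.ticket
```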
Log in to the AWS account you would like to use.
Subscribe to "Ockam - Node" in AWS Marketplace.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions. Select Actions -> Launch CloudFormation stack.
Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch CloudFormation.
Create the stack with the details below:
Stack name: example-outlet, or any name you prefer.
Network Configuration: select suitable values for VPC ID and Subnet ID.
The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust the instance type if you need to.
Ockam Configuration:
Enrollment ticket: copy and paste the content of the outlet.ticket generated above.
JSON Node Configuration: copy and paste the configuration below.
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run configures the Ockam outlet node on an EC2 machine.
The EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored on the EFS volume.
Connect to the EC2 machine via AWS Session Manager. To view the log file, run sudo cat /var/log/cloud-init-output.log.
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log Groups and select example-outlet-ockam-status-logs. Select the Log stream for the EC2 instance.
The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm, example-outlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.
An Auto Scaling group ensures at least one EC2 instance is running at all times.
Run python3 /opt/webhook_receiver.py to start the webhook listener on port 7777. We will send traffic to this webhook after the inlet is set up, so keep the terminal window open.
Log in to the AWS Account you would like to use.
Subscribe to "Ockam - Node" in AWS Marketplace.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions. Select Actions -> Launch Cloudformation stack.
Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch Cloudformation.
Create the stack with the following details:
Stack name: example-inlet or any name you prefer.
Network Configuration
Select suitable values for VPC ID and Subnet ID.
The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust the instance type if you need to.
Ockam Configuration
Enrollment ticket: Copy and paste the content of the inlet.ticket generated above.
JSON Node Configuration: Copy and paste the below configuration.
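The inlet configuration block is likewise not reproduced in this extract. As an illustrative sketch (the listening port, relay name, and allow attribute are assumptions; use the exact values from the marketplace listing), an inlet node configuration has this general shape:

```json
{
  "tcp-inlet": {
    "from": "0.0.0.0:17777",
    "via": "webhook",
    "allow": "outlet"
  }
}
```

Traffic sent to the inlet's local port is forwarded through the relay to the outlet, and on to the webhook.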
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run configures the Ockam inlet node on an EC2 machine.
The EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.
Connect to the EC2 machine via AWS Session Manager. To view the log file, run sudo cat /var/log/cloud-init-output.log.
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log Groups and select example-inlet-ockam-status-logs. Select the Log stream for the EC2 instance.
The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm, example-inlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.
An Auto Scaling group ensures at least one EC2 instance is running at all times.
Connect to the EC2 machine via AWS Session Manager.
Run the command below to post a request to the Inlet address. You should receive a response. Verify that the request reaches the webhook running on the Outlet machine.
A successful setup receives a response back.
You will also see the request received on the Outlet EC2 machine.
You have now successfully created an Ockam Portal and verified secure communication 🎉.
Delete the example-outlet CloudFormation stack from the AWS Account.
Delete the example-inlet CloudFormation stack from the AWS Account.
Delete Ockam configuration files from the machine that the administrator used to generate enrollment tickets.
Create an Ockam Kafka outlet node using a CloudFormation template
This guide contains instructions to launch:
An Ockam Kafka Outlet Node within an AWS environment
An Ockam Kafka Inlet Node:
Within an AWS environment, or
Using Docker in any environment
The walkthrough demonstrates:
Running an Ockam Kafka outlet node in your AWS environment that contains an Amazon MSK instance.
Setting up Ockam Kafka inlet nodes using either AWS or Docker, from any location.
Verifying secure communication between Kafka clients and the Amazon MSK cluster.
Read: “How does Ockam work?” to learn about end-to-end trust establishment.
Amazon MSK Cluster Configuration: Ensure that your Amazon MSK cluster is configured with the following settings:
Access Control Methods: Unauthenticated access should be enabled.
Encryption between Clients and Brokers: PLAINTEXT should be enabled
Network Access to Amazon MSK Cluster: Verify that the Security Group associated with the Amazon MSK cluster allows inbound traffic on the required port(s) (e.g., 9092) from the subnet where the EC2 instance will reside.
Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.
Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.
Completing this step creates a Project in Ockam Orchestrator.
Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.
Log in to the AWS Account you would like to use.
Subscribe to "Ockam - Node for Amazon MSK" in AWS Marketplace.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon MSK from the list of subscriptions. Select Actions -> Launch Cloudformation stack.
Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch Cloudformation.
Create the stack with the following details:
Stack name: msk-ockam-outlet or any name you prefer.
Network Configuration
VPC ID: Choose a VPC ID where the EC2 instance will be deployed.
Subnet ID: Select a suitable Subnet ID within the chosen VPC that has access to Amazon MSK cluster.
EC2 Instance Type: The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust to a smaller instance type depending on your use case.
Ockam Configuration
Enrollment ticket: Copy and paste the content of the outlet.ticket generated above.
Amazon MSK Bootstrap Server with Port: To configure the Ockam Kafka Outlet Node, you'll need to specify the bootstrap servers for your Amazon MSK cluster. This configuration allows the Ockam Kafka Outlet Node to connect to the Kafka brokers.
Go to the MSK cluster in the AWS Management Console and select the cluster name.
In the Connectivity Summary section, select View Client information and copy the Bootstrap servers (plaintext) string with port 9092.
JSON Node Configuration: Copy and paste the below configuration. Note that the configuration values match the enrollment tickets created in the previous step.
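The configuration block is not reproduced in this extract. As an illustrative sketch (the key names and attribute values are assumptions; the bootstrap server placeholder stands for the string copied from the MSK console), a Kafka outlet configuration has this general shape:

```json
{
  "relay": "kafka",
  "kafka-outlet": {
    "bootstrap-server": "$BOOTSTRAP_SERVER_WITH_PORT",
    "allow": "kafka-inlet"
  }
}
```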
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run configures the Ockam Kafka outlet node on an EC2 machine.
EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.
A security group with egress access to the internet will be attached to the EC2 machine.
Connect to the EC2 machine via AWS Session Manager. To view the log file, run sudo cat /var/log/cloud-init-output.log.
A successful run will show Ockam node setup completed successfully in the logs.
To view the status of the Ockam node, run curl http://localhost:23345/show | jq.
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log Groups and select msk-outlet-ockam-status-logs. Select the Log stream for the EC2 instance.
The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm, msk-ockam-outlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.
An Auto Scaling group ensures at least one EC2 instance is running at all times.
Ockam Kafka outlet node setup is complete. You can now create Ockam Kafka inlet nodes in any network to establish secure communication.
You can set up an Ockam Kafka Inlet Node either in AWS or locally using Docker. Here are both options:
Option 1: Setup Inlet Node in AWS
To set up an Inlet Node in AWS, follow similar steps as the Outlet Node setup, with these modifications:
Use the same CloudFormation template as before.
When configuring the stack:
Use the inlet.ticket instead of the outlet.ticket.
VPC and Subnet: You can choose any VPC and subnet for the Inlet Node. It doesn't need to be in the same network as the MSK cluster or the Outlet Node.
For the JSON Node Configuration, use the following:
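The inlet configuration is not reproduced in this extract. As an illustrative sketch (the relay name and attribute values are assumptions; use the exact configuration from the marketplace listing), a Kafka inlet configuration has this general shape, exposing a local bootstrap server that forwards to the outlet's relay:

```json
{
  "kafka-inlet": {
    "from": "127.0.0.1:9092",
    "via": "kafka",
    "allow": "kafka-outlet"
  }
}
```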
Use any Kafka client and connect to 127.0.0.1:9092 as the bootstrap server, from the same machine running the Ockam Kafka Inlet node.
Option 2: Setup Inlet Node Locally with Docker Compose
To set up an Inlet Node locally and interact with it outside of AWS, use Docker Compose.
Create a file named docker-compose.yml with the following content:
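The compose file content is not reproduced in this extract. As an illustrative sketch (the image, command flags, and service layout are assumptions; use the exact file from the marketplace listing), it pairs an Ockam node container, enrolled with the inlet.ticket, with a Kafka client tools container that shares its network namespace:

```yaml
services:
  ockam:
    image: ghcr.io/build-trust/ockam
    # Enrolls with the one-time-use inlet ticket and runs the Kafka
    # inlet, listening on 127.0.0.1:9092.
    command:
      - node
      - create
      - --foreground
      - --enrollment-ticket
      - ${ENROLLMENT_TICKET}
    network_mode: service:kafka-tools

  kafka-tools:
    image: apache/kafka:latest
    # Client tools container; kept alive so you can exec into it and
    # run kafka-console-producer.sh / kafka-console-consumer.sh.
    entrypoint: ["sleep", "infinity"]
```

Start it with something like ENROLLMENT_TICKET=$(cat inlet.ticket) docker compose up -d.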
Run the following command from the same location as the docker-compose.yml and the inlet.ticket to create an Ockam Kafka inlet that can connect to the outlet running in AWS, along with a Kafka client tools container.
Exec into the kafka-tools container and run commands to produce and consume Kafka messages.
This setup allows you to run an Ockam Kafka Inlet Node locally and communicate securely with the Outlet Node running in AWS.
Create an Ockam Bedrock outlet node using a CloudFormation template
By default, you can access Amazon Bedrock over the public internet, which means:
Your API calls to Bedrock travel across the public internet.
Your client must have public internet connectivity.
You must implement additional security measures to protect your data in transit.
When you build AI applications with sensitive or proprietary data, exposing them to the public internet creates several risks:
Your data may travel through unknown network paths
Attackers gain more potential entry points
Your compliance requirements may prohibit public internet usage
You must maintain extra security controls and monitoring
Understanding VPC Endpoints for Amazon Bedrock
How VPC Endpoints Work
AWS PrivateLink powers VPC endpoints, which let you access Amazon Bedrock privately without exposing data to the public internet. When you create a private connection between your VPC and Bedrock:
Your traffic stays within AWS network infrastructure
You eliminate the need for public endpoints
Your data remains on private AWS networks
However, organizations often need additional capabilities:
Access to Bedrock from outside AWS
Secure connections from other cloud providers
Private access from on-premises environments
This is where Ockam helps.
You have permission to subscribe to and launch a CloudFormation stack from AWS Marketplace on the AWS Account running Amazon Bedrock.
Make sure Amazon Bedrock is available in the region where you are deploying the CloudFormation template.
Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.
Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.
Log in to the AWS Account you would like to use.
Subscribe to "Ockam - Node for Amazon Bedrock" in AWS Marketplace.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon Bedrock from the list of subscriptions. Select Actions -> Launch Cloudformation stack.
Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch Cloudformation.
Create the stack with the following details:
Stack name: bedrock-ockam-outlet or any name you prefer.
Network Configuration
VPC ID: Choose a VPC ID where the VPC Endpoint for Bedrock and EC2 instance will be deployed.
Subnet ID: Select a suitable Subnet ID within the chosen VPC.
EC2 Instance Type: The default instance type is m6a.large. Use a different instance type based on your use case.
Ockam Node Configuration
Enrollment ticket: Copy and paste the content of the outlet.ticket generated above.
JSON Node Configuration: Copy and paste the below configuration. Note that the configuration values (relay, allow attribute) match the enrollment tickets created in the previous step. $BEDROCK_RUNTIME_ENDPOINT will be replaced at runtime.
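The configuration block is not reproduced in this extract. As an illustrative sketch (the relay name and allow attribute are assumptions; $BEDROCK_RUNTIME_ENDPOINT is substituted by the template at runtime, as noted above), the outlet configuration has this general shape:

```json
{
  "relay": "bedrock",
  "tcp-outlet": {
    "to": "$BEDROCK_RUNTIME_ENDPOINT:443",
    "allow": "bedrock-inlet"
  }
}
```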
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run:
Creates a VPC Endpoint for the Bedrock Runtime API.
Configures an Ockam Bedrock Outlet node on an EC2 machine.
EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.
A security group with ingress access within the security group and egress access to the internet will be attached to the EC2 machine and VPC Endpoint.
Connect to the EC2 machine via AWS Session Manager. To view the log file, run sudo cat /var/log/cloud-init-output.log.
Note: DNS resolution for the EFS drive may take up to 10 minutes. The script will retry.
A successful run will show Ockam node setup completed successfully in the log.
To view the status of the Ockam node, run curl http://localhost:23345/show | jq.
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log Groups and select bedrock-ockam-outlet-status-logs. Select the Log stream for the EC2 instance.
The CloudFormation template creates a subscription filter which sends data to a CloudWatch alarm, bedrock-ockam-outlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.
An Auto Scaling group keeps at least one EC2 instance running.
The Ockam Bedrock outlet node setup is complete. You can now create Ockam Bedrock inlet nodes in any network to establish secure communication.
You can set up an Ockam Bedrock Inlet Node locally using Docker. You can then use any client (AWS CLI, Python, JavaScript, etc.) to access Amazon Bedrock via the Ockam inlet.
Create a file named docker-compose.yml with the following content:
Run the following command from the same location as the docker-compose.yml and the inlet.ticket to create an Ockam Bedrock inlet that can connect to the outlet running in AWS.
Check the status of the Ockam inlet node. You will see The node is UP when Ockam is configured successfully and ready to accept connections.
Find your Ockam project ID and use it to construct the Bedrock endpoint URL.
An example Bedrock endpoint URL will look like the one below.
Run the AWS CLI command below.
The command should produce a similar result.
Cleanup
This guide walked you through:
Understanding the security challenges of accessing Amazon Bedrock over the public internet
How VPC endpoints secure your Bedrock communications within AWS
Setting up Ockam to extend this security beyond AWS boundaries
Deploying and configuring both Outlet and Inlet nodes
Testing your secure connection with a simple Bedrock API call
Rust crates to build secure by design applications for any environment – from highly scalable cloud infrastructure to tiny battery operated microcontroller based devices.
Ockam Rust crates are a library of tools to build secure by design applications for any environment – from highly scalable cloud infrastructure to tiny battery operated microcontroller based devices. They make it easy to orchestrate end-to-end encryption, mutual authentication, key management, credential management, and authorization policy enforcement – at massive scale.
No more having to think about creating unique cryptographic keys and issuing credentials to your fleet of application entities. No more designing ways to safely store secrets in hardware and securely distribute roots of trust.
Create end-to-end encrypted, authenticated secure channels over any transport topology.
Create secure channels over multi-hop, multi-protocol routes over TCP, UDP, WebSockets, BLE, etc.
Provision encrypted relays for applications distributed across many edge, cloud and data-center private networks.
Make any protocol secure by tunneling it through mutually authenticated and encrypted portals.
Bring end-to-end encryption to enterprise messaging, pub/sub and event streams - Kafka, Kinesis, RabbitMQ etc.
Generate cryptographically provable unique identities.
Store private keys in safe vaults - hardware secure enclaves and cloud key management systems.
Operate scalable credential authorities to issue lightweight, short-lived, revokable, attribute-based credentials.
Onboard fleets of self-sovereign application identities using secure enrollment protocols.
Rotate and revoke keys and credentials – at scale, across fleets.
Define and enforce project-wide attribute-based access control policies. Choose ABAC, RBAC, or ACLs.
Integrate with enterprise identity providers and policy providers for seamless employee access.
Ockam Rust crates provide the above collection of composable building blocks. In a step-by-step hands-on guide let’s walk through each building block to understand how you can use them to build end-to-end trustful communication for any application in any communication topology.
The first step is to install Rust and create a cargo project called hello_ockam. We'll use this project to try out various examples.
Next, create a new cargo project to get started:
AWS Marketplace listings guides
Please select a specific marketplace listing to view.
Create an Ockam Postgres outlet node using a CloudFormation template
This guide contains instructions to launch:
An Ockam Postgres Outlet Node within an AWS environment
An Ockam Postgres Inlet Node:
Within an AWS environment, or
Using Docker in any environment
The walkthrough demonstrates:
Running an Ockam Postgres Outlet node in your AWS environment that contains a private Amazon RDS for PostgreSQL Database.
Setting up Ockam Postgres inlet nodes using either AWS or Docker, from any location.
Verifying secure communication between Postgres clients and the Amazon RDS for PostgreSQL Database.
A private Amazon RDS Postgres Database is created and accessible from the VPC and Subnet where the Ockam Node will be launched.
The Security Group associated with the Amazon RDS Postgres Database allows inbound traffic on the required port (5432) from the subnet where the Ockam Outlet Node will reside.
You have permission to subscribe to and launch a CloudFormation stack from AWS Marketplace on the AWS Account running the RDS Postgres Database.
Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.
Completing this step creates a Project in Ockam Orchestrator
Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.
Log in to the AWS Account you would like to use.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon RDS Postgres from the list of subscriptions. Select Actions -> Launch Cloudformation stack.
Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch Cloudformation.
Create the stack with the following details:
Stack name: postgres-ockam-outlet or any name you prefer.
Network Configuration
VPC ID: Choose a VPC ID where the EC2 instance will be deployed.
Subnet ID: Select a suitable Subnet ID within the chosen VPC that has access to Amazon RDS PostgreSQL Database.
EC2 Instance Type: The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust to a smaller instance type depending on your use case, e.g., m6a.large.
Ockam Node Configuration
Enrollment ticket: Copy and paste the content of the outlet.ticket generated above.
RDS Postgres Database Endpoint: To configure the Ockam Postgres Outlet Node, you'll need to specify the Amazon RDS Postgres Endpoint. This configuration allows the Ockam Postgres Outlet Node to connect to the database.
JSON Node Configuration: Copy and paste the below configuration. Note that the configuration values match the enrollment tickets created in the previous step. $POSTGRES_ENDPOINT will be replaced at runtime.
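The configuration block is not reproduced in this extract. As an illustrative sketch (the relay name and allow attribute are assumptions that must match your enrollment tickets; $POSTGRES_ENDPOINT is substituted at runtime, as noted above), the outlet configuration has this general shape:

```json
{
  "relay": "postgres",
  "tcp-outlet": {
    "to": "$POSTGRES_ENDPOINT:5432",
    "allow": "postgres-inlet"
  }
}
```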
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run configures the Ockam Postgres Outlet node on an EC2 machine.
EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.
A security group with egress access to the internet will be attached to the EC2 machine.
Connect to the EC2 machine via AWS Session Manager. To view the log file, run sudo cat /var/log/cloud-init-output.log.
A successful run will show Ockam node setup completed successfully in the logs.
To view the status of the Ockam node, run curl http://localhost:23345/show | jq.
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log Groups and select postgres-ockam-outlet-status-logs. Select the Log stream for the EC2 instance.
The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm, postgres-ockam-outlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.
An Auto Scaling group ensures at least one EC2 instance is running at all times.
Ockam postgres outlet node setup is complete. You can now create Ockam postgres inlet nodes in any network to establish secure communication.
You can set up an Ockam Postgres Inlet Node either in AWS or locally using Docker. Here are both options:
Option 1: Setup Inlet Node in AWS
Log in to the AWS Account you would like to use.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions. Select Actions -> Launch Cloudformation stack.
Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch Cloudformation.
Create the stack with the following details:
Stack name: postgres-ockam-inlet or any name you prefer.
Network Configuration
Select suitable values for VPC ID and Subnet ID.
EC2 Instance Type: The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust to a smaller instance type depending on your use case, e.g., m6a.large.
Ockam Configuration
Enrollment ticket: Copy and paste the content of the inlet.ticket generated above.
JSON Node Configuration: Copy and paste the below configuration.
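The inlet configuration block is not reproduced in this extract. As an illustrative sketch (the relay name and allow attribute are assumptions; the listen port matches the localhost:15432 that clients use later in this guide), the inlet configuration has this general shape:

```json
{
  "tcp-inlet": {
    "from": "0.0.0.0:15432",
    "via": "postgres",
    "allow": "postgres-outlet"
  }
}
```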
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run configures the Ockam inlet node on an EC2 machine.
EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.
Connect to the EC2 machine via AWS Session Manager. To view the log file, run sudo cat /var/log/cloud-init-output.log.
A successful run will show Ockam node setup completed successfully in the logs.
To view the status of the Ockam node, run curl http://localhost:23345/show | jq.
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log Groups and select postgres-ockam-inlet-status-logs. Select the Log stream for the EC2 instance.
The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm, postgres-ockam-inlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.
An Auto Scaling group ensures at least one EC2 instance is running at all times.
Use any PostgreSQL client and connect to localhost:15432 (PGHOST=localhost, PGPORT=15432) from the machine running the Ockam Postgres Inlet node.
Option 2: Setup Inlet Node Locally with Docker Compose
To set up an Inlet Node locally and interact with it outside of AWS, use Docker Compose.
Create a file named docker-compose.yml with the following content:
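The compose file content is not reproduced in this extract. As an illustrative sketch (the image names, command flags, and service layout are assumptions; use the exact file from the marketplace listing), it pairs an Ockam node container, enrolled with the inlet.ticket, with a psql client container that shares its network namespace:

```yaml
services:
  ockam:
    image: ghcr.io/build-trust/ockam
    # Enrolls with the one-time-use inlet ticket and runs the
    # tcp-inlet, listening on port 15432.
    command:
      - node
      - create
      - --foreground
      - --enrollment-ticket
      - ${ENROLLMENT_TICKET}
    network_mode: service:psql-client

  psql-client:
    image: postgres:16
    # Kept alive so you can exec in and run, for example:
    #   psql --host 127.0.0.1 --port 15432 --username <your-user>
    entrypoint: ["sleep", "infinity"]
```

Start it with something like ENROLLMENT_TICKET=$(cat inlet.ticket) docker compose up -d.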
Run the following command from the same location as the docker-compose.yml and the inlet.ticket to create an Ockam Postgres inlet that can connect to the outlet running in AWS, along with a psql client container.
Check the status of the Ockam inlet node. You will see The node is UP when Ockam is configured successfully and ready to accept connections.
Connect to the psql-client container and run commands.
This setup allows you to run an Ockam Postgres Inlet Node locally and communicate securely with a private Amazon RDS Postgres database running in AWS.
Cleanup
Ockam Nodes and Workers decouple applications from the host environment and enable simple interfaces for stateful and asynchronous message-based protocols.
At Ockam’s core is a collection of cryptographic and messaging protocols. These protocols make it possible to create private, secure-by-design applications that provide end-to-end, application-layer trust in data.
Ockam is designed to make these powerful protocols easy and safe to use in any application environment, from highly scalable cloud services to tiny battery operated microcontroller based devices.
However, many of these protocols require multiple steps and have complicated internal state that must be managed with care. It can be quite challenging to make them simple to use, secure, and platform independent.
Using the Ockam Rust crates, you can easily turn any application into a lightweight Ockam Node. This flexible approach allows you to build secure-by-design applications that can run efficiently on tiny microcontrollers or scale horizontally in cloud environments.
A node requires an asynchronous runtime to concurrently execute workers. The default Ockam Node implementation in Rust uses tokio, a popular asynchronous runtime in the Rust ecosystem. We also support Ockam Node implementations for various no_std embedded targets.
The first thing any Ockam Rust program must do is initialize and start an Ockam node. This setup can be done manually, but the most convenient way is to use the #[ockam::node] attribute that injects the initialization code. It creates the asynchronous environment, initializes worker management, sets up routing, and initializes the node context.
Add the following code to this file:
Here we add the #[ockam::node] attribute to an async main function that receives the node execution context as a parameter and returns ockam::Result, which helps make our error reporting better.
As soon as the main function starts, we use ctx.stop()
to immediately stop the node that was just started. If we don't add this line, the node will run forever.
To run the node program:
This will download various dependencies, compile and then run our code. When it runs, you'll see colorized output showing that the node starts up and then shuts down immediately 🎉.
When a worker is started on a node, it is given one or more addresses. The node maintains a mailbox for each address and whenever a message arrives for a specific address it delivers that message to the corresponding registered worker.
Workers can handle messages from other workers running on the same or a different node. In response to a message, a worker can: make local decisions, change its internal state, create more workers, or send more messages to other workers running on the same or a different node.
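The mailbox model described above can be sketched with plain Rust standard-library channels. This is a conceptual analogy only, not the Ockam API: the Node type, the "logger" address, and the run_demo helper below are hypothetical names used to illustrate address-based message delivery.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Sender};
use std::thread;

// A toy "node": a registry mapping each worker address to the sending
// half of that worker's mailbox channel.
struct Node {
    mailboxes: HashMap<String, Sender<String>>,
}

impl Node {
    fn new() -> Self {
        Node { mailboxes: HashMap::new() }
    }

    // Start a worker: register a mailbox under an address and spawn a
    // thread that handles every message delivered to that address.
    fn start_worker<F>(&mut self, address: &str, mut handle: F)
    where
        F: FnMut(String) + Send + 'static,
    {
        let (tx, rx) = channel();
        self.mailboxes.insert(address.to_string(), tx);
        thread::spawn(move || {
            for msg in rx {
                handle(msg);
            }
        });
    }

    // Deliver a message to the mailbox registered at `address`.
    fn send(&self, address: &str, msg: &str) {
        if let Some(mailbox) = self.mailboxes.get(address) {
            mailbox.send(msg.to_string()).unwrap();
        }
    }
}

// Demo: one worker at address "logger" receives one message and
// reports back on a side channel.
fn run_demo() -> String {
    let mut node = Node::new();
    let (done_tx, done_rx) = channel();
    node.start_worker("logger", move |msg| {
        done_tx.send(format!("logger received: {msg}")).unwrap();
    });
    node.send("logger", "Hello Ockam!");
    done_rx.recv().unwrap()
}
```

The real Ockam runtime does much more (async execution, routing, secure channels), but the address-to-mailbox dispatch is the core idea.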
To create a worker, we create a struct that can optionally have some fields to store the worker's internal state. If the worker is stateless, it can be defined as a field-less unit struct.
This struct:
Must implement the ockam::Worker trait.
Must have the #[ockam::worker] attribute on the Worker trait implementation.
Must define two associated types, Context and Message.
The Context type is usually set to ockam::Context, which is provided by the node implementation.
The Message type must be set to the type of message the worker wishes to handle.
Add the following code to this file:
Note that we define the Message associated type of the worker as String, which specifies that this worker expects to handle String messages. We then go on to define a handle_message(..) function that will be called whenever a new message arrives for this worker.
In the Echoer's handle_message(..), we print any incoming message, along with the address of the Echoer. We then take the body of the incoming message and echo it back on its return route (more about routes soon).
To make this Echoer type accessible to our main program, export it from the src/lib.rs file by adding the following to it:
When a new node starts and calls an async main function, it turns that function into a worker with the address "app". This makes it easy to send and receive messages from the main function (i.e., the "app" worker).
In the code below, we start a new Echoer worker at address "echoer", send this "echoer" a message "Hello Ockam!", and then wait to receive a String reply back from the "echoer".
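This request-reply round trip can be sketched with standard-library channels. It is an analogy for return-route routing, not the ockam crate's API: Envelope, start_echoer, and app_round_trip are hypothetical names used for illustration.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// A message that carries its own "return route": a channel the
// receiving worker uses to send a reply back to the sender.
struct Envelope {
    body: String,
    return_route: Sender<String>,
}

// A toy echoer worker: logs each message, then echoes the body back
// along the message's return route.
fn start_echoer() -> Sender<Envelope> {
    let (tx, rx) = channel::<Envelope>();
    thread::spawn(move || {
        for msg in rx {
            println!("Address: echoer, Received: {}", msg.body);
            msg.return_route.send(msg.body).unwrap();
        }
    });
    tx
}

// The "app" side: send a message to the echoer, then block until the
// echoed reply arrives on the return route.
fn app_round_trip(text: &str) -> String {
    let echoer = start_echoer();
    let (reply_tx, reply_rx) = channel();
    let envelope = Envelope {
        body: text.to_string(),
        return_route: reply_tx,
    };
    echoer.send(envelope).unwrap();
    reply_rx.recv().unwrap()
}
```

In Ockam, the return route is part of the message's routing metadata rather than a raw channel, which is what lets replies traverse multiple hops and transports.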
Create a new file at:
Add the following code to this file:
To run this new node program:
You'll see console output that shows "Hello Ockam!" received by the "echoer" and then an echo of it received by the "app".
The message flow looked like this:
Create an Ockam Timestream InfluxDB outlet node using a CloudFormation template
This guide contains instructions to launch:
An Ockam Timestream InfluxDB Outlet Node within an AWS environment
An Ockam Timestream InfluxDB Inlet Node:
Within an AWS environment, or
Using Docker in any environment
The walkthrough demonstrates:
Running an Ockam Timestream InfluxDB Outlet node in your AWS environment that contains a private Amazon Timestream InfluxDB Database.
Setting up Ockam Timestream InfluxDB inlet nodes using either AWS or Docker, from any location.
Verifying secure communication between InfluxDB clients and the Amazon Timestream InfluxDB Database.
A private Amazon Timestream InfluxDB Database is created and accessible from the VPC and Subnet where the Ockam Node will be launched. You have the details of the Organization, Username, and Password.
The Security Group associated with the Amazon Timestream InfluxDB Database allows inbound traffic on the required port (TCP 8086) from the subnet where the Ockam Outlet Node will reside.
You have permission to subscribe to and launch a CloudFormation stack from AWS Marketplace on the AWS Account running the Timestream InfluxDB Database.
You have permission to create an "All Access" InfluxDB token to be used by the Ockam Node and to store it in AWS Secrets Manager.
Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.
Completing this step creates a Project in Ockam Orchestrator
Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.
Configure your CLI to use --username-password to be able to create the operator.
Find the Org ID to use as an input to the CloudFormation template.
Create your new token.
Create the InfluxDB token as a secret within AWS Secrets Manager. Note the ARN of the secret.
Log in to the AWS Account you would like to use.
Subscribe to "Ockam - Node for Amazon Timestream InfluxDB" in AWS Marketplace.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon Timestream InfluxDB from the list of subscriptions. Select Actions -> Launch Cloudformation stack.
Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch Cloudformation.
Create the stack with the following details:
Stack name: influxdb-ockam-outlet or any name you prefer.
Network Configuration
VPC ID: Choose a VPC ID where the EC2 instance will be deployed.
Subnet ID: Select a suitable Subnet ID within the chosen VPC that has access to Amazon Timestream InfluxDB Database.
EC2 Instance Type: The default instance type is m6a.large. Adjust the instance type depending on your use case. If you would like predictable network bandwidth of 12.5 Gbps, use m6a.8xlarge. Make sure the instance type is available in the subnet you are launching in.
Ockam Node Configuration
Enrollment ticket: Copy and paste the content of the outlet.ticket generated above.
InfluxDBEndpoint: To configure the Ockam Timestream InfluxDB Outlet Node, you'll need to specify the Amazon Timestream InfluxDB Endpoint. This configuration allows the Ockam Timestream InfluxDB Outlet Node to connect to the database. In the AWS Console, go to Timestream -> InfluxDB databases, select your InfluxDB database, and copy the "Endpoint" details.
InfluxDBOrgID: Enter the Organization ID of the InfluxDB instance.
InfluxDBTokenSecretArn: Enter the ARN of the Secret that contains the all-access token.
InfluxDBLeasedTokenPermissions: JSON array of permission objects for the InfluxDB leased token, in the format below. Update as needed. Leave the variable INFLUX_ORG_ID as-is; it will be replaced at runtime.
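The permission array is not reproduced in this extract. As an illustrative sketch, it follows the standard InfluxDB v2 authorization permission format; granting read and write on buckets is an assumption, so adjust the actions and resource types to your needs (INFLUX_ORG_ID is substituted at runtime, as noted above):

```json
[
  { "action": "read",  "resource": { "type": "buckets", "orgID": "INFLUX_ORG_ID" } },
  { "action": "write", "resource": { "type": "buckets", "orgID": "INFLUX_ORG_ID" } }
]
```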
NodeConfig: Copy and paste the below configuration. Note that the configuration values match the enrollment tickets created in the previous step. INFLUX_ENDPOINT, INFLUX_ORG_ID, and INFLUX_TOKEN will be replaced at runtime.
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run configures the Ockam Timestream InfluxDB Outlet node on an EC2 machine.
The EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.
A security group with egress access to the internet will be attached to the EC2 machine.
Connect to the EC2 machine via AWS Session Manager.
To view the log file, run sudo cat /var/log/cloud-init-output.log. A successful run will show Ockam node setup completed successfully in the logs.
To view the status of the Ockam node, run curl http://localhost:23345/show | jq
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log groups and select influxdb-ockam-outlet-status-logs. Select the log stream for the EC2 instance.
The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm named influxdb-ockam-outlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.
An Auto Scaling group ensures at least one EC2 instance is running at all times.
The Ockam Timestream InfluxDB Outlet node setup is complete. You can now create Ockam Timestream InfluxDB Inlet nodes in any network to establish secure communication.
You can set up an Ockam Timestream InfluxDB Inlet node either in AWS or locally using Docker. Here are both options:
Option 1: Set up an Inlet node locally with Docker Compose
To set up an Inlet node locally and interact with it outside of AWS, use Docker Compose.
Find your Ockam project ID by running the command where you created the enrollment tickets, and use it to construct the endpoint value for REPLACE_WITH_YOUR_PROJECT_ID.
Create a file named docker-compose.yml with the following content:
Create a file named app.mjs and a package.json. Update the REPLACE_WITH_* variables. The value of the token doesn't matter, as it will be replaced with a temporary token by Ockam.
Run the following command from the same location as the docker-compose.yml and the inlet.ticket to create an Ockam Timestream InfluxDB inlet that can connect to the outlet running in AWS, along with a node client container.
Check the status of the Ockam inlet node. You will see The node is UP when Ockam is configured successfully and ready to accept connections.
Connect to the influxdb-client container and run commands.
Option 2: Set up an Inlet node in AWS
Log in to the AWS account you would like to use.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions, then select Actions -> Launch CloudFormation stack. Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch CloudFormation.
Create the stack with the following details:
Stack name: influxdb-ockam-inlet, or any name you prefer.
Network Configuration
Select suitable values for VPC ID and Subnet ID.
EC2 Instance Type: The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust to a smaller instance type, e.g. m6a.large, depending on your use case.
Ockam Configuration
Enrollment ticket: Copy and paste the content of the inlet.ticket generated above.
JSON Node Configuration: Copy and paste the configuration below.
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run configures the Ockam inlet node on an EC2 machine.
The EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.
Connect to the EC2 machine via AWS Session Manager.
To view the log file, run sudo cat /var/log/cloud-init-output.log. A successful run will show Ockam node setup completed successfully in the logs.
To view the status of the Ockam node, run curl http://localhost:23345/show | jq
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log groups and select influxdb-ockam-inlet-status-logs. Select the log stream for the EC2 instance.
The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm named influxdb-ockam-inlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.
An Auto Scaling group ensures at least one EC2 instance is running at all times.
Find your Ockam project ID and use it to construct the endpoint value for INFLUXDB_ENDPOINT.
Follow the testing steps in the Docker example above for Node.js, or use the InfluxDB CLI client with the details below.
Generate cryptographically provable unique identities and store their secret keys in safe vaults.
Create end-to-end encrypted and mutually authenticated secure channels over any transport topology.
Now that we understand the basics of Nodes, Workers, and Routing ... let's create our first encrypted secure channel.
Establishing a secure channel requires establishing a shared secret key between the two entities that wish to communicate securely. This is usually achieved using a cryptographic key agreement protocol to safely derive a shared secret without transporting it over the network.
Running such protocols requires a stateful exchange of multiple messages and having a worker and routing system allows Ockam to hide the complexity of creating and maintaining a secure channel behind two simple functions:
create_secure_channel_listener(...), which waits for requests to create a secure channel.
create_secure_channel(...), which initiates the protocol to create a secure channel with a listener.
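To build intuition for what a key agreement protocol achieves, here is a toy Diffie-Hellman exchange over a small prime field, using only the Rust standard library. This is purely illustrative: it is not Ockam's handshake (Ockam uses the Ockam_XX pattern with X25519, as described later in this guide), and the constants below are assumptions chosen for readability, far too small for real security.

```rust
// Toy Diffie-Hellman key agreement, illustrative only.
// Ockam's real secure channels use X25519 and the Ockam_XX handshake.

const P: u64 = 4_294_967_291; // 2^32 - 5, a small prime modulus (toy-sized)
const G: u64 = 5;             // generator (assumption for this sketch)

// Modular exponentiation: (base ^ exp) mod modulus, via square-and-multiply.
fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
    let mut result = 1u64;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = (result as u128 * base as u128 % modulus as u128) as u64;
        }
        base = (base as u128 * base as u128 % modulus as u128) as u64;
        exp >>= 1;
    }
    result
}

fn main() {
    // Each party picks a secret and sends only g^secret mod p over the wire.
    let (a_secret, b_secret) = (123_456_789u64, 987_654_321u64);
    let a_public = mod_pow(G, a_secret, P);
    let b_public = mod_pow(G, b_secret, P);

    // Both sides derive the same shared secret without ever transporting it.
    let a_shared = mod_pow(b_public, a_secret, P);
    let b_shared = mod_pow(a_public, b_secret, P);
    assert_eq!(a_shared, b_shared);
    println!("shared secret derived on both sides: {a_shared}");
}
```

Note that even this toy version requires an exchange of messages before the secret can be derived, which is why a stateful worker and routing system is a natural foundation for the real protocol.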
Create a new file at:
Add the following code to this file:
Create a new file at:
Add the following code to this file:
Create a new file at:
Add the following code to this file:
Run the responder in a separate terminal tab and keep it running:
Run the middle node in a separate terminal tab and keep it running:
Run the initiator:
Note the message flow.
Create an Ockam Redshift Outlet node using a CloudFormation template
This guide contains instructions to launch:
An Ockam Redshift Outlet Node within an AWS environment
An Ockam Redshift Inlet Node, either within an AWS environment or using Docker in any environment
The walkthrough demonstrates:
Running an Ockam Redshift Outlet node in your AWS environment that contains a private Amazon Redshift Serverless or Amazon Redshift Provisioned Cluster
Setting up Ockam Redshift inlet nodes using either AWS or Docker from any location.
Verifying secure communication between Redshift clients and Amazon Redshift Database.
A private Amazon Redshift Database (Serverless or Provisioned) is created and accessible from the VPC and Subnet where the Ockam Node will be launched.
Security Group associated with the Amazon Redshift Database allows inbound traffic on the required default port (5439) from the subnet where the Ockam Outlet Node will reside.
You have permission to subscribe and launch a CloudFormation stack from AWS Marketplace on the AWS account running Amazon Redshift.
Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.
Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.
Log in to the AWS account you would like to use.
Subscribe to "Ockam - Node for Amazon Redshift" in AWS Marketplace.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon Redshift from the list of subscriptions, then select Actions -> Launch CloudFormation stack. Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch CloudFormation.
Create the stack with the following details:
Stack name: redshift-ockam-outlet, or any name you prefer.
Network Configuration
VPC ID: Choose the VPC where the EC2 instance will be deployed.
Subnet ID: Select a subnet within the chosen VPC that has access to Amazon Redshift. Note: the security group associated with Amazon Redshift should allow inbound traffic on the default port (5439) from the IP range of the subnet or VPC.
EC2 Instance Type: The default instance type is m6a.large. If you would like predictable network bandwidth of 12.5 Gbps, use m6a.8xlarge, or a smaller instance type like t3.medium, depending on your use case.
Ockam Node Configuration
Enrollment ticket: Copy and paste the content of the outlet.ticket generated above.
Redshift Database Endpoint: Specify the Amazon Redshift endpoint so the Ockam Redshift Outlet Node can connect to the database. Example: cluster-name.xxxx.region.redshift.amazonaws.com:5439 or workgroup.account.region.redshift-serverless.amazonaws.com:5439. Note: if you copy the Redshift endpoint value from the AWS Console, make sure to remove the /DATABASE_NAME at the end, as it is not needed.
JSON Node Configuration: Copy and paste the configuration below. Note that the configuration values match the enrollment tickets created in the previous step. $REDSHIFT_ENDPOINT will be replaced at runtime.
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run configures the Ockam Redshift Outlet node on an EC2 machine.
The EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.
A security group with egress access to the internet will be attached to the EC2 machine.
Connect to the EC2 machine via AWS Session Manager.
To view the log file, run sudo cat /var/log/cloud-init-output.log. Note: DNS resolution for the EFS drive may take up to 10 minutes; you will see the script retrying every 30 seconds until it resolves. A successful run will show Ockam node setup completed successfully in the logs.
To view the status of the Ockam node, run curl http://localhost:23345/show | jq
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log groups and select redshift-ockam-outlet-status-logs. Select the log stream for the EC2 instance.
The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm named redshift-ockam-outlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.
An Auto Scaling group ensures at least one EC2 instance is running at all times.
The Ockam Redshift Outlet node setup is complete. You can now create Ockam Redshift Inlet nodes in any network to establish secure communication.
You can set up an Ockam Redshift Inlet node either in AWS or locally using Docker. Here are both options:
Option 1: Set up an Inlet node in AWS
Log in to the AWS account you would like to use.
Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions, then select Actions -> Launch CloudFormation stack. Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch CloudFormation.
Create the stack with the following details:
Stack name: redshift-ockam-inlet, or any name you prefer.
Network Configuration
Select suitable values for VPC ID and Subnet ID.
EC2 Instance Type: The default instance type is m6a.large. If you would like predictable network bandwidth of 12.5 Gbps, use m6a.8xlarge, or a smaller instance type like t3.medium, depending on your use case.
Ockam Configuration
Enrollment ticket: Copy and paste the content of the inlet.ticket generated above.
JSON Node Configuration: Copy and paste the configuration below.
Click Next to launch the CloudFormation run.
A successful CloudFormation stack run configures the Ockam inlet node on an EC2 machine.
The EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.
Connect to the EC2 machine via AWS Session Manager.
To view the log file, run sudo cat /var/log/cloud-init-output.log. A successful run will show Ockam node setup completed successfully in the logs.
To view the status of the Ockam node, run curl http://localhost:23345/show | jq
View the Ockam node status in CloudWatch. Navigate to CloudWatch -> Log groups and select redshift-ockam-inlet-status-logs. Select the log stream for the EC2 instance.
The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm named redshift-ockam-inlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.
An Auto Scaling group ensures at least one EC2 instance is running at all times.
Use any PostgreSQL client and connect to localhost:15439 (PGHOST=localhost, PGPORT=15439) from the machine running the Ockam Redshift Inlet node.
Option 2: Set up an Inlet node locally with Docker Compose
To set up an Inlet node locally and interact with it outside of AWS, use Docker Compose.
Create a file named docker-compose.yml with the following content:
Run the following command from the same location as the docker-compose.yml and the inlet.ticket to create an Ockam Postgres inlet that can connect to the outlet running in AWS, along with a psql client container.
Check the status of the Ockam inlet node. You will see The node is UP when Ockam is configured successfully and ready to accept connections.
Connect to the psql-client container and run commands.
This setup allows you to run an Ockam Redshift Inlet node locally and communicate securely with a private Amazon Redshift database running in AWS.
Cleanup
Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI companies and Amazon available for your use through a unified API. Organizations building innovative generative AI applications with Amazon Bedrock often need to ensure their proprietary data remains secure and private while accessing these powerful models.
Read: “How does Ockam work?” to learn about end-to-end trust establishment.
Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.
If you don't have it, please install the latest version of Rust.
If the above instructions don't work on your machine, please reach out to us, we’d love to help.
Read: “How does Ockam work?” to learn about end-to-end trust establishment.
Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.
Subscribe to "" in AWS Marketplace
Subscribe to " in AWS Marketplace
Ockam Nodes and Workers help hide this complexity and decouple applications from the host environment - to provide simple interfaces for stateful and asynchronous message-based protocols.
An Ockam Node is any program that can interact with other Ockam Nodes using various Ockam Protocols like Ockam Routing and Ockam Secure Channels.
Rust-based Ockam Nodes run very lightweight, concurrent, stateful actors called Ockam Workers. Using Ockam Routing, a node can deliver messages from one worker to another local worker. Using Ockam Transports, nodes can also route messages to workers on other remote nodes.
For your new node, create a new file at examples/01-node.rs
in your project:
Ockam Nodes run very lightweight, concurrent, and stateful actors called Ockam Workers.
Above we've created a node; now let's create a new worker, send it a message, and receive a reply.
For a new Echoer
worker, create a new file at src/echoer.rs
in your project. We're creating this inside the src
directory so we can easily reuse the Echoer
in other examples that we'll write later in this guide:
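The actual src/echoer.rs contents aren't reproduced in this excerpt. As a rough, stand-alone sketch of the behavior (not the real ockam::Worker implementation, which is async and replies via its context), assume we model a worker as anything that maps an incoming message to a reply:

```rust
// Toy sketch of the Echoer's behavior. The real worker implements
// ockam::Worker and replies with ctx.send(); here a "worker" is just
// something that turns an incoming message into a reply.

trait ToyWorker {
    fn handle_message(&mut self, msg: String) -> String;
}

struct Echoer;

impl ToyWorker for Echoer {
    // The Echoer simply replies with the body of the message it received.
    fn handle_message(&mut self, msg: String) -> String {
        println!("Address: echoer, Received: {msg}");
        msg
    }
}

fn main() {
    let mut echoer = Echoer;
    let reply = echoer.handle_message("Hello Ockam!".to_string());
    assert_eq!(reply, "Hello Ockam!");
}
```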
Next, let’s explore how Ockam Routing enables us to create protocols that provide end-to-end security and privacy guarantees.
Read: “How does Ockam work?” to learn about end-to-end trust establishment.
Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.
Use the Influx CLI to create a token. For instructions, please see the Influx CLI documentation.
Subscribe to "Ockam - Node for Amazon Timestream InfluxDB" in AWS Marketplace
Read: “How does Ockam work?” to learn about end-to-end trust establishment.
Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.
Subscribe to " in AWS Marketplace
Send end-to-end encrypted messages through Aiven.
Scale mutual trust using lightweight, short-lived, revokable, attribute-based credentials.
Ockam Secure Channels enable you to set up mutually authenticated and end-to-end encrypted communication. Once a channel is established, it has the following guarantees:
Authenticity: Each end of the channel knows that messages received on the channel must have been sent by someone who possesses the secret keys of a specific Ockam Cryptographic Identifier.
Integrity: Each end of the channel knows that the messages received on the channel could not have been tampered with en route and are exactly what was sent by the authenticated sender at the other end of the channel.
Confidentiality: Each end of the channel knows that the contents of messages received on the channel could not have been observed en-route between the sender and the receiver.
These guarantees however don't automatically imply trust. They don't tell us if a particular sender is trusted to inform us about a particular topic or if the sender is authorized to get a response to a particular request.
One way to create trust and authorize requests would be to use Access Control Lists (ACLs), where every receiver of messages would have a preconfigured list of identifiers that are trusted to inform about a certain topic or trigger certain requests. This approach works but doesn't scale very well. It becomes very cumbersome to manage mutual trust if you have more than a few nodes communicating with each other.
Another, and significantly more scalable, approach is to use Ockam Credentials combined with Attribute Based Access Control (ABAC). In this setup every participant starts off by trusting a single Credential Issuer to be the authority on the attributes of an Identifier. This authority issues cryptographically signed credentials to attest to these attributes. Participants can then exchange and authenticate each other's credentials to collect authenticated attributes about an identifier. Every participant uses these authenticated attributes to make authorization decisions based on attribute-based access control policies.
Let’s walk through an example of setting up ABAC using cryptographically verifiable credentials.
To get started, please create the initial hello_ockam project and define an echoer worker. We'll also need the hex crate for this example, so add it to your Cargo.toml using cargo add:
Any Ockam Identity can issue Credentials. As a first step we’ll create a credential issuer that will act as an authority for our example application:
This issuer knows a predefined list of identifiers that are members of an application’s production cluster.
In a later guide, we'll explore how Ockam enables you to define various pluggable Enrollment Protocols to decide who should be issued credentials. For this example we'll assume that this list is known in advance.
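As a rough illustration of the issuer's role, here is a toy, crypto-free sketch: a real Ockam Authority signs credentials with its identity key, while this stand-in only models the membership check and the attributes it would attest to. All names below are hypothetical.

```rust
use std::collections::{HashMap, HashSet};

// Toy credential issuer sketch. A real Ockam Authority cryptographically
// signs credentials; here we only model "issue attributes to known members".

struct ToyIssuer {
    // Predefined list of member identifiers, known in advance.
    trusted_members: HashSet<&'static str>,
}

impl ToyIssuer {
    // Issue attributes only to identifiers on the predefined member list.
    fn issue(&self, identifier: &str) -> Option<HashMap<&'static str, &'static str>> {
        if self.trusted_members.contains(identifier) {
            let mut attributes = HashMap::new();
            attributes.insert("cluster", "production");
            Some(attributes)
        } else {
            None
        }
    }
}

fn main() {
    let issuer = ToyIssuer {
        trusted_members: ["alice-identifier", "bob-identifier"].into_iter().collect(),
    };
    // Known members get an attested attribute; unknown identifiers get nothing.
    assert!(issuer.issue("alice-identifier").is_some());
    assert!(issuer.issue("unknown-identifier").is_none());
}
```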
Ockam Vaults store secret cryptographic keys in hardware and cloud key management systems. These keys remain behind a stricter security boundary and can be used without being revealed.
Ockam Identities, Credentials, and Secure Channels rely on cryptographic proofs of possession of specific secret keys. Ockam Vaults safely store these secret keys in cryptographic hardware and cloud key management systems.
Vaults can cryptographically sign data. We support two types of Signatures: EdDSA signatures using Curve 25519 and ECDSA signatures using SHA256 + Curve P-256.
Our preferred signature scheme is EdDSA signatures using Curve 25519, which are also called Ed25519 signatures. ECDSA is only supported because, as of this writing, Cloud KMS services don't support Ed25519.
In addition to VerifyingPublicKeys for the above two signature schemes we also support X25519PublicKeys for ECDH in Ockam Secure Channels using X25519.
Three Rust traits - VaultForVerifyingSignatures, VaultForSigning, and VaultForSecureChannels - define abstract functions that an Ockam Vault implementation can implement to support Ockam Identities, Credentials, and Secure Channels.
Identities and Credentials require VaultForVerifyingSignatures and VaultForSigning, while Secure Channels require VaultForSecureChannels.
Implementations of VaultForVerifyingSignatures provide two simple and stateless functions that don't require any secrets, so they can usually be provided in software.
Implementations of VaultForSigning enable using a secret signing key to sign Credentials, PurposeKeyAttestations, and Identity Change events. The signing key remains inside the tighter security boundary of a KMS or an HSM.
Implementations of VaultForSecureChannels enable using a secret X25519 key for ECDH within Ockam Secure Channels. They rely on compile-time feature flags to choose between three possible combinations of primitives:
OCKAM_XX_25519_AES256_GCM_SHA256 enables the Ockam_XX secure channel handshake with AEAD_AES_256_GCM and SHA256. This is our current default.
OCKAM_XX_25519_AES128_GCM_SHA256 enables the Ockam_XX secure channel handshake with AEAD_AES_128_GCM and SHA256.
OCKAM_XX_25519_ChaChaPolyBLAKE2s enables the Ockam_XX secure channel handshake with AEAD_CHACHA20_POLY1305 and BLAKE2s.
Attribute names can be used to define policies and policies can be used to define access controls:
Policies are expressions involving attribute names, which can be evaluated to true or false given an environment containing attribute values.
Access controls were discussed earlier. They restrict the messages which can be received or sent by a worker.
Policies are boolean expressions constructed using attribute names. For example:
In the expression above:
and, =, and member? are operators.
resource.version, subject.name, and resource.admins are identifiers.
1 and "John" are values.
Values can have the following 5 types:
String
Int
Float
Bool
Seq: a sequence of values
This table lists all the available operators:
and (arity >= 2): produces the logical conjunction of n expressions
or (arity >= 2): produces the logical disjunction of n expressions
not (arity 1): produces the negation of an expression
if (arity 3): evaluates the first expression to select either the second expression or the third one
< (arity 2): returns true if the first value is less than the second one
> (arity 2): returns true if the second value is less than the first one
= (arity 2): returns true if the two values are equal
!= (arity 2): returns true if the two values are different
member? (arity 2): returns true if the first value is present in the second expression, which must be a sequence (Seq) of values
exists? (arity >= 1): returns true if all the expressions are identifiers with values present in the environment
Here are a few more examples of policies.
The subject must have a component attribute with a value that is either web or database:
Note that attribute names can contain dots, so you could also write:
You can also declare more complex logical expressions by nesting and and or operators:
The subject must either be the "Smart Factory" application or be a member of the "Field Engineering" department in San Francisco:
Since many policies just need to test for the presence of an attribute, we provide simpler ways to write them.
For example we can write:
Simply as (note that logical operators can now be written as infix operators):
String comparisons are still supported, so you could also have a component attribute and write:
More complex expressions require parentheses:
Since identities are frequently used in policies, we provide a shortcut for them. For example, this is a valid boolean policy:
It translates to:
This table summarizes the elements you can use in a simple boolean policy:
name: equivalent to (= subject.name "true")
name="string value": equivalent to (= subject.name "string value")
and: conjunction of 2 expressions
or: disjunction of 2 expressions
not: negation of an expression
identifier: equivalent to (= subject.identifier "identifier")
(): parentheses, used to group expressions. The precedence rules are: not > and > or
We evaluate a policy by doing the following:
Each attribute attribute_name/attribute_value is added to the environment as an identifier subject.attribute_name associated to the value attribute_value (always as a String). In the example policy given above, the identifier subject.name means that we are expecting an attribute name associated to the identity which sent a message.
The top-level expression of the policy is recursively evaluated by evaluating each operator and taking values from the environment when an expression is referencing an identifier.
The end result of a policy evaluation is simply a boolean saying if the policy succeeded or not.
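The evaluation procedure above can be sketched in a few lines of stdlib-only Rust. This toy evaluator is an assumption-laden stand-in, not Ockam's implementation: it supports only a small subset of the operators (and, or, =, member?) and treats every value as a String, mirroring how attributes enter the environment.

```rust
use std::collections::HashMap;

// Toy policy-expression evaluator, illustrative only. The real Ockam policy
// language supports more operators and value types.

enum Expr {
    Ident(&'static str),                  // e.g. subject.component
    Str(&'static str),                    // a string literal
    Eq(Box<Expr>, Box<Expr>),             // (= a b)
    And(Vec<Expr>),                       // (and a b ...)
    Or(Vec<Expr>),                        // (or a b ...)
    Member(Box<Expr>, Vec<&'static str>), // (member? a seq)
}

// Resolve a leaf expression to a String using the attribute environment.
fn resolve(e: &Expr, env: &HashMap<&str, &str>) -> Option<String> {
    match e {
        Expr::Ident(name) => env.get(name).map(|v| v.to_string()),
        Expr::Str(s) => Some(s.to_string()),
        _ => None,
    }
}

// Recursively evaluate the policy to a boolean.
fn eval(e: &Expr, env: &HashMap<&str, &str>) -> bool {
    match e {
        Expr::Eq(a, b) => resolve(a, env).is_some() && resolve(a, env) == resolve(b, env),
        Expr::And(es) => es.iter().all(|x| eval(x, env)),
        Expr::Or(es) => es.iter().any(|x| eval(x, env)),
        Expr::Member(a, seq) => {
            resolve(a, env).map_or(false, |v| seq.iter().any(|s| v == *s))
        }
        _ => false,
    }
}

fn main() {
    // Authenticated attributes become identifiers like subject.component.
    let env: HashMap<&str, &str> = [("subject.component", "web")].into_iter().collect();

    // (or (= subject.component "web") (= subject.component "database"))
    let policy = Expr::Or(vec![
        Expr::Eq(Box::new(Expr::Ident("subject.component")), Box::new(Expr::Str("web"))),
        Expr::Eq(Box::new(Expr::Ident("subject.component")), Box::new(Expr::Str("database"))),
    ]);
    assert!(eval(&policy, &env));
}
```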
The library offers two types of access controls using policies:
AbacAccessControl
PolicyAccessControl
AbacAccessControl
This access control type is used as an IncomingAccessControl (so it restricts incoming messages).
We define an AbacAccessControl with the following:
A Policy, which specifies which attributes are required for a given identity.
An IdentityRepository, which stores the known authenticated attributes for a given identity.
When a LocalMessage arrives at a worker using such an incoming access control, we do the following:
If no identity is associated with the message (as LocalInfo), the message is rejected.
Otherwise, the attributes for this identity are retrieved from the repository.
The attributes are used to populate the policy environment.
The policy expression is evaluated. If it returns true, the message is accepted.
PolicyAccessControl
This access control type is used as an IncomingAccessControl (so it restricts incoming messages).
We define a PolicyAccessControl with the following:
A PolicyRepository, which stores a list of policies.
A Resource and an Action, which represent the access we want to restrict.
An IdentityRepository, which stores the known authenticated attributes for a given identity.
When a LocalMessage arrives at a worker using this type of incoming access control, we do the following:
If no identity is associated with the message (as LocalInfo), the message is rejected.
Otherwise, the attributes for this identity are retrieved from the repository.
The most recent policy for the resource and the action is retrieved from the policy repository.
The attributes are used to populate the policy environment.
The policy expression is evaluated. If it returns true, the message is accepted.
The two major differences between this access control and the previous one are:
The PolicyAccessControl models a Resource/Action pair.
Policies for that resource and action can be modified even after the worker they are attached to has started.
Ockam Routing and Transports enable higher level protocols that provide end-to-end guarantees to messages traveling across many network connection hops and protocol boundaries.
Ockam Routing is a simple and lightweight message-based protocol that makes it possible to bidirectionally exchange messages over a large variety of communication topologies.
Ockam Transports adapt Ockam Routing to various transport protocols like TCP, UDP, WebSockets, Bluetooth etc.
By layering Ockam Secure Channels and other higher level protocols over Ockam Routing, it is possible to build systems that provide end-to-end guarantees over arbitrary transport topologies that span many networks, connections, gateways, queues, and clouds.
Let's dive into how the routing protocol works. So far, in the section on Nodes and Workers, we've come across this simple message exchange:
Ockam Routing Protocol messages carry with them two metadata fields: an onward_route and a return_route. A route is an ordered list of addresses describing the path a message should travel. This information is carried with the message in compact binary form.
Pay close attention to the Sender, Hop, and Replier rules in the sequence diagrams below. Note how onward_route and return_route are handled as the message travels.
The above was one message hop. We may extend this to two hops:
This very simple protocol extends to any number of hops:
So far, we've created an "echoer" worker in our node, sent it a message, and received a reply. This worker was a simple one hop away from our "app" worker.
To achieve this, messages carry with them two metadata fields: onward_route and return_route, where a route is a list of addresses.
To get a sense of how that works, let's route a message over two hops.
For demonstration, we'll create a simple worker, called Hop, that takes every incoming message and forwards it to the next address in the onward_route of that message.
Just before forwarding the message, Hop's handle message function will:
Print the message
Remove its own address (the first address) from the onward_route, by calling step()
Insert its own address as the first address in the return_route, by calling prepend()
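The three steps above can be sketched with plain Vec<String> routes. This is a stdlib-only toy, assuming routes are simple lists of address strings; in the ockam crate, step() and prepend() are methods provided by the routing types.

```rust
// Toy model of what a Hop does to a message's routes before forwarding it.

#[derive(Debug, Clone, PartialEq)]
struct ToyMessage {
    onward_route: Vec<String>,
    return_route: Vec<String>,
    payload: String,
}

// What a Hop worker at `own_address` does when handling a message.
fn hop(own_address: &str, mut msg: ToyMessage) -> ToyMessage {
    // 1. Print the message.
    println!("Address: {own_address}, Received: {}", msg.payload);
    // 2. step(): remove our own address from the front of the onward_route.
    assert_eq!(msg.onward_route.first().map(String::as_str), Some(own_address));
    msg.onward_route.remove(0);
    // 3. prepend(): insert our own address at the front of the return_route.
    msg.return_route.insert(0, own_address.to_string());
    msg
}

fn main() {
    // "app" sends a message along the route h1 => echoer.
    let msg = ToyMessage {
        onward_route: vec!["h1".into(), "echoer".into()],
        return_route: vec!["app".into()],
        payload: "Hello Ockam!".into(),
    };
    let after_h1 = hop("h1", msg);
    // After the hop, the echoer is next, and replies will travel h1 => app.
    assert_eq!(after_h1.onward_route, vec!["echoer".to_string()]);
    assert_eq!(after_h1.return_route, vec!["h1".to_string(), "app".to_string()]);
}
```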
Next, let's create our main "app" worker.
In the code below we start an Echoer worker at address "echoer" and a Hop worker at address "h1". Then, we send a message along the h1 => echoer route by passing route!["h1", "echoer"] to send(..).
To run this new node program:
Similarly, we can also route the message via many hop workers:
To run this new node program:
An Ockam Transport is a plugin for Ockam Routing. It moves Ockam Routing messages using a specific transport protocol like TCP, UDP, WebSockets, Bluetooth etc.
In previous examples, we routed messages locally within one node. Routing messages over transport layer connections looks very similar.
Let's try the TcpTransport. We'll need to create two nodes: a responder and an initiator.
Run the responder in a separate terminal tab and keep it running:
Run the initiator:
A common real world topology is a transport bridge.
Node n1 wishes to access a service on node n3, but it can't directly connect to n3. This can happen for many reasons: maybe n3 is in a separate IP subnet, or the communication from n1 to n2 uses UDP while the communication from n2 to n3 uses TCP, or other similar constraints. The topology makes n2 a bridge or gateway between these two separate networks.
We can setup this topology with Ockam Routing as follows:
Relay worker
We'll create a worker, called Relay, that takes every incoming message and forwards it to a predefined address.
Run the responder in a separate terminal tab and keep it running:
Run the middle node in a separate terminal tab and keep it running:
Run the initiator:
It is common, however, to encounter communication topologies where the machine that provides a service is unwilling or not allowed to open a listening port or expose a bridge node to other networks. This is a common security best practice in enterprise environments, home networks, OT networks, and VPCs across clouds. Application developers may not have control over these choices from the infrastructure / operations layer. This is where relays are useful.
Relays make it possible to establish end-to-end protocols with services operating in a remote private network, without requiring a remote service to expose listening ports to an outside hostile network like the Internet.
Ockam Routing messages, when transported over the wire, have the following structure. TransportMessage is serialized using BARE encoding. We intend to transition to CBOR in the near future, since we already use CBOR for other protocols built on top of Ockam Routing.
Each transport type has a conventional value: TCP has transport type 1, UDP has transport type 2, and so on. Node-local messages have transport type 0.
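As a rough sketch of this shape, here is a stdlib-only Rust struct with the conventional transport type values. The field layout is an assumption for illustration; the actual TransportMessage is defined in the ockam crate and BARE-encoded on the wire.

```rust
// Illustrative sketch of an on-the-wire routing message, not the real
// TransportMessage definition from the ockam crate.

const TRANSPORT_TYPE_LOCAL: u8 = 0; // node-local messages
const TRANSPORT_TYPE_TCP: u8 = 1;
const TRANSPORT_TYPE_UDP: u8 = 2;

#[derive(Debug)]
struct ToyAddress {
    transport_type: u8, // which transport this address belongs to
    address: String,    // e.g. a TCP endpoint or a local worker address
}

#[derive(Debug)]
struct ToyTransportMessage {
    version: u8,
    onward_route: Vec<ToyAddress>,
    return_route: Vec<ToyAddress>,
    payload: Vec<u8>,
}

fn main() {
    // A message headed to a remote node over TCP, then to a local worker.
    let msg = ToyTransportMessage {
        version: 1,
        onward_route: vec![
            ToyAddress { transport_type: TRANSPORT_TYPE_TCP, address: "127.0.0.1:4000".into() },
            ToyAddress { transport_type: TRANSPORT_TYPE_LOCAL, address: "echoer".into() },
        ],
        return_route: vec![
            ToyAddress { transport_type: TRANSPORT_TYPE_LOCAL, address: "app".into() },
        ],
        payload: b"Hello Ockam!".to_vec(),
    };
    assert_eq!(msg.onward_route[0].transport_type, TRANSPORT_TYPE_TCP);
    println!("{msg:?}");
}
```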
As a message moves within a node, it gathers additional metadata in structures like LocalMessage and RelayMessage that are used for a node's internal operation.
Each Worker has one or more addresses that it uses to send and receive messages. We assign each Address an Incoming Access Control and an Outgoing Access Control.
Concrete instances of these traits inspect a message's onward_route, return_route, metadata, etc., along with other node-local state, to decide whether a message should be allowed to be sent or received. Incoming Access Control filters which messages reach an address, while Outgoing Access Control decides which messages can be sent.
In our threat model, we assume that Workers within a Node are not malicious against each other. If programmed correctly they intend no harm.
However, there are certain types of Workers that forward messages that were created on other nodes. We don't implicitly trust other Ockam Nodes so messages from them can be dangerous. Such workers that can receive messages from another node are implemented with an Outgoing Access Control that denies all messages by default.
For example, a TCP Transport Listener spawns TCP Receivers for every new TCP connection. These receivers are implemented with an Outgoing Access Control that denies all messages, by default, from entering the node that is running the receiver. We can then explicitly allow messages to flow to specific addresses.
In the middle node example above, we do this by explicitly allowing flow of messages from the TCP Receivers (spawned by TCP Transport Listener) to the forward_to_responder
worker.
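The deny-by-default pattern described above can be sketched in plain Rust (a simplified, synchronous model; the real ockam crate traits are async and operate on richer message types):

```rust
// Simplified model of Ockam-style outgoing access controls.
struct Message {
    onward_route: Vec<String>,
    return_route: Vec<String>,
}

trait OutgoingAccessControl {
    fn is_authorized(&self, msg: &Message) -> bool;
}

// Deny every message: the safe default for workers (like TCP Receivers)
// that forward messages created on other, untrusted nodes.
struct DenyAll;
impl OutgoingAccessControl for DenyAll {
    fn is_authorized(&self, _msg: &Message) -> bool {
        false
    }
}

// Allow a message only if its next hop is one explicitly allowed address.
struct AllowOnwardAddress(String);
impl OutgoingAccessControl for AllowOnwardAddress {
    fn is_authorized(&self, msg: &Message) -> bool {
        msg.onward_route.first() == Some(&self.0)
    }
}

fn main() {
    let msg = Message {
        onward_route: vec!["forward_to_responder".into()],
        return_route: vec!["tcp_receiver_1".into()],
    };
    // Deny-by-default rejects the message ...
    assert!(!DenyAll.is_authorized(&msg));
    // ... until we explicitly allow flow to forward_to_responder.
    assert!(AllowOnwardAddress("forward_to_responder".into()).is_authorized(&msg));
}
```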
Cryptographic and Messaging Protocols that provide the foundation for end-to-end application layer trust in data.
Ockam is composed of a collection of cryptographic and messaging protocols. These protocols make it possible to create private and secure by design applications that provide end-to-end application layer trust in data. The following pages explain, in detail, how each of the protocols works:
In October of 2023, a team of security and cryptography experts, from Trail of Bits, conducted an extensive review of Ockam’s protocols. Trail of Bits is renowned for their comprehensive third-party audits of the security of many other critical projects, including Kubernetes and the Linux kernel.
The auditors from Trail of Bits conducted in-depth, manual analysis, and formal modeling of the security properties of Ockam’s protocols. After this review was complete, they highlighted:
Ockam’s protocols use robust cryptographic primitives according to industry best practices. None of the identified issues pose an immediate risk to the confidentiality and integrity of data handled by the system in the context of the two in-scope use cases. The majority of identified issues relate to information that should be added to the design documentation, such as threat model details and increased specification for certain aspects.
— Trail of Bits
Ockam Nodes and Workers decouple applications from the host environment and enable simple interfaces for stateful, asynchronous, and bi-directional message-based protocols.
At Ockam's core is a collection of cryptographic and messaging protocols. These protocols enable private and secure by design applications that provide end-to-end application layer trust in data.
Ockam is designed to make these protocols easy and safe to use in any application environment – from highly scalable cloud services to tiny battery operated microcontroller based devices.
Many included protocols require multiple steps and have complicated internal state that must be managed with care. Protocol steps can often be initiated by any participant so it can be quite challenging to make these protocols simple to use, secure, and platform independent.
Ockam Nodes, Workers, and Services help hide this complexity to provide simple interfaces for stateful and asynchronous message-based protocols.
An Ockam Node is any program that can interact with other Ockam Nodes using various Ockam protocols like Ockam Routing and Ockam Secure Channels.
A typical Ockam Node is implemented as an asynchronous execution environment that can run very lightweight, concurrent, stateful actors called Ockam Workers. Using Ockam Routing, a node can deliver messages from one worker to another local worker. Using Ockam Transports, nodes can also route messages to workers on other remote nodes.
In the following code snippet we create a node in Rust and then immediately stop it:
A node requires an asynchronous runtime to concurrently execute workers. The default Ockam Node implementation in Rust uses tokio, a popular asynchronous runtime in the Rust ecosystem. There are also Ockam Node implementations that support various no_std embedded targets.
Nodes can be implemented in any language. The only requirement is that they understand the various Ockam protocols like Routing, Secure Channels, Identities etc.
Ockam Nodes run very lightweight, concurrent, and stateful actors called Ockam Workers. They are like processes on your operating system, except that they all live inside one node and are very lightweight so a node can have hundreds of thousands of them, depending on the capabilities of the machine hosting the node.
When a worker is started on a node, it is given one or more addresses. The node maintains a mailbox for each address and whenever a message arrives for a specific address it delivers that message to the corresponding worker. In response to a message, a worker can: make local decisions, change internal state, create more workers, or send more messages.
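The mailbox-per-address delivery described above can be modeled with standard library channels (a toy, synchronous model; real Ockam nodes are asynchronous and workers react to messages rather than polling):

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

// Toy model of a node that owns one mailbox per worker address and
// delivers each message to the mailbox of its destination address.
struct Node {
    mailboxes: HashMap<String, (Sender<String>, Receiver<String>)>,
}

impl Node {
    fn new() -> Self {
        Node { mailboxes: HashMap::new() }
    }

    // Starting a worker registers a mailbox for its address.
    fn start_worker(&mut self, address: &str) {
        self.mailboxes.insert(address.to_string(), channel());
    }

    // Sending delivers the message to the mailbox of the given address.
    fn send(&self, address: &str, msg: &str) {
        if let Some((tx, _)) = self.mailboxes.get(address) {
            tx.send(msg.to_string()).unwrap();
        }
    }

    // A worker receives by draining its own mailbox.
    fn receive(&self, address: &str) -> Option<String> {
        self.mailboxes.get(address).and_then(|(_, rx)| rx.try_recv().ok())
    }
}

fn main() {
    let mut node = Node::new();
    node.start_worker("echoer");
    node.send("echoer", "Hello Ockam!");
    assert_eq!(node.receive("echoer"), Some("Hello Ockam!".to_string()));
    assert_eq!(node.receive("echoer"), None); // mailbox is now empty
}
```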
To create a worker, we create a struct that can optionally have some fields to store the worker's internal state. If the worker is stateless, it can be defined as a field-less unit struct.
This struct:
Must implement the ockam::Worker trait.
Must have the #[ockam::worker] attribute on the Worker trait implementation.
Must define two associated types: Context and Message.
The Context type is set to ockam::Context.
The Message type must be set to the type of messages the worker wishes to handle.
When a new node starts and calls an async main function, it turns that function into a worker with the address "app". This makes it easy to send and receive messages from the main function (i.e. the "app" worker).
In the code below, we start a new Echoer worker at address "echoer", send this "echoer" a message "Hello Ockam!", and then wait to receive a String reply back from the "echoer".
The message flow looked like this:
Next, let’s explore how Ockam’s Application Layer Routing enables us to create protocols that provide end-to-end guarantees.
Ockam Routing and Transports enable other Ockam protocols to provide end-to-end guarantees like trust, security, privacy, reliable delivery, and ordering at the application layer.
Data, within modern applications, routinely flows over complex, multi-hop, multi-protocol routes before reaching its end destination. It’s common for application layer requests and data to move across network boundaries, beyond data centers, via shared or public networks, through queues and caches, from gateways and brokers to reach remote services and other distributed parts of an application.
Our goal is to enable end-to-end application layer guarantees in any communication topology. For example Ockam Secure Channels can provide end-to-end guarantees of data authenticity, integrity, and confidentiality in any of the above communication topologies.
In contrast, traditional secure communication protocol implementations are typically tightly coupled with transport protocols in a way that all their security is limited to the length and duration of the underlying transport connections.
For example, most implementations are coupled to the underlying TCP connection. If your application’s data and requests travel over two TCP connection hops (TCP -> TCP), then all TLS guarantees break at the bridge between the two networks. This bridge, gateway, or load balancer then becomes a point of weakness for application data. To make matters worse, if you don't set up another mutually authenticated TLS connection on the second hop between the gateway and your destination server, then the entire second hop network – all applications and machines within it – becomes an attack vector to your application and its data.
Traditional secure communication protocols are also unable to protect your application’s data if it travels over multiple different transport protocols. They can’t guarantee data authenticity or data integrity if your application’s communication path is UDP -> TCP or BLE -> TCP.
Ockam Routing is a simple and lightweight message-based protocol that makes it possible to bidirectionally exchange messages over a large variety of communication topologies: TCP -> TCP, TCP -> TCP -> TCP, BLE -> UDP -> TCP, BLE -> TCP -> TCP, TCP -> Kafka -> TCP, and more.
By layering Ockam Secure Channels and other protocols over Ockam Routing, we can provide end-to-end guarantees over arbitrary transport topologies.
So far, we've created an "echoer" worker in our node, sent it a message, and received a reply. This worker was a single hop away from our "app" worker.
To achieve this, messages carry with them two metadata fields: onward_route and return_route, where a route is a list of addresses.
To get a sense of how that works, let's route a message over two hops.
Sender:
Needs to know the route to a destination, makes that route the onward_route of a new message
Makes its own address the return_route of the new message
Hop:
Removes its own address from beginning of onward_route
Adds its own address to beginning of return_route
Replier:
Makes the return_route of the incoming message the onward_route of the outgoing message
Makes its own address the return_route of the outgoing message
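The three roles above can be sketched as plain functions over a simplified message type (this models the routing rules only, not the actual ockam crate API):

```rust
// Toy model of the routing rules: each message carries an onward_route
// and a return_route, both lists of addresses.
#[derive(Debug, Clone)]
struct Message {
    onward_route: Vec<String>,
    return_route: Vec<String>,
    payload: String,
}

// A hop removes itself from the front of the onward_route and
// prepends itself to the return_route.
fn hop(own_address: &str, mut msg: Message) -> Message {
    assert_eq!(msg.onward_route.first().map(|a| a.as_str()), Some(own_address));
    msg.onward_route.remove(0);                     // step()
    msg.return_route.insert(0, own_address.into()); // prepend()
    msg
}

// A replier makes the incoming return_route the onward_route of its reply.
fn reply(own_address: &str, msg: &Message, payload: &str) -> Message {
    Message {
        onward_route: msg.return_route.clone(),
        return_route: vec![own_address.into()],
        payload: payload.into(),
    }
}

fn main() {
    // Sender "app" sends along the route h1 => echoer.
    let msg = Message {
        onward_route: vec!["h1".into(), "echoer".into()],
        return_route: vec!["app".into()],
        payload: "Hello Ockam!".into(),
    };
    let msg = hop("h1", msg);
    assert_eq!(msg.onward_route, vec!["echoer".to_string()]);
    assert_eq!(msg.return_route, vec!["h1".to_string(), "app".to_string()]);
    // The reply retraces the gathered return_route back to "app".
    let response = reply("echoer", &msg, "Hello Ockam!");
    assert_eq!(response.onward_route, vec!["h1".to_string(), "app".to_string()]);
}
```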
For demonstration, we'll create a simple worker, called Hop, that takes every incoming message and forwards it to the next address in the onward_route of that message.
Just before forwarding the message, Hop's handle message function will:
Print the message
Remove its own address (the first address) from the onward_route, by calling step()
Insert its own address as the first address in the return_route, by calling prepend()
Create a new file at:
Add the following code to this file:
To make this Hop type accessible to our main program, export it from src/lib.rs by adding the following to it:
We'll also use the Echoer worker that we created in the previous example, so make sure that it stays exported from src/lib.rs.
Next, let's create our main "app" worker.
In the code below we start an Echoer worker at address "echoer" and a Hop worker at address "h1". Then, we send a message along the h1 => echoer route by passing route!["h1", "echoer"] to send(..).
Create a new file at:
Add the following code to this file:
To run this new node program:
Note the message flow and how routing information is manipulated as the message travels.
Routing is not limited to one or two hops, we can easily create routes with many hops. Let's try that in a quick example:
This time we'll create multiple hop workers between the "app" and the "echoer" and route our message through them.
Create a new file at:
Add the following code to this file:
To run this new node program:
Note the message flow.
An Ockam Transport is a plugin for Ockam Routing. It moves Ockam Routing messages using a specific transport protocol like TCP, UDP, WebSockets, Bluetooth etc.
In previous examples, we routed messages locally within one node. Routing messages over transport layer connections looks very similar.
Let's try the TcpTransport. We'll need to create two nodes: a responder and an initiator.
Create a new file at:
Add the following code to this file:
Create a new file at:
Add the following code to this file:
Run the responder in a separate terminal tab and keep it running:
Run the initiator:
Note the message flow.
For demonstration, we'll create another worker, called Relay, that takes every incoming message and forwards it to a predefined address.
Just before forwarding the message, Relay's handle message function will:
Print the message
Remove its own address (the first address) from the onward_route, by calling step()
Insert the predefined address as the first address in the onward_route, by calling prepend()
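The Relay logic above can be modeled the same way as the earlier Hop sketch (a simplified model, not the ockam crate API; "responder_route" is a hypothetical forwarding address):

```rust
// Toy model of the Relay logic: pop our own address off the
// onward_route, then prepend the predefined forwarding address.
#[derive(Debug)]
struct Message {
    onward_route: Vec<String>,
    return_route: Vec<String>,
}

fn relay(own_address: &str, forward_to: &str, mut msg: Message) -> Message {
    assert_eq!(msg.onward_route.first().map(|a| a.as_str()), Some(own_address));
    msg.onward_route.remove(0);                    // step()
    msg.onward_route.insert(0, forward_to.into()); // prepend()
    msg
}

fn main() {
    let msg = Message {
        onward_route: vec!["forward_to_responder".into(), "echoer".into()],
        return_route: vec!["app".into()],
    };
    // "responder_route" stands in for wherever this relay forwards to.
    let msg = relay("forward_to_responder", "responder_route", msg);
    assert_eq!(msg.onward_route[0], "responder_route");
    assert_eq!(msg.onward_route[1], "echoer");
    assert_eq!(msg.return_route[0], "app");
}
```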
Create a new file at:
Add the following code to this file:
To make this Relay type accessible to our main program, export it from src/lib.rs by adding the following to it:
Create a new file at:
Add the following code to this file:
Create a new file at:
Add the following code to this file:
Create a new file at:
Add the following code to this file:
Run the responder in a separate terminal tab and keep it running:
Run the middle node in a separate terminal tab and keep it running:
Run the initiator:
Note how the message is routed.
Ockam Identities are cryptographically verifiable digital identities. Each Identity has a unique Identifier. An Ockam Credential is a signed attestation by an Issuer about the Attributes of a Subject.
Ockam Identities are cryptographically verifiable digital identities. Each Identity maintains one or more secret keys and has a unique Ockam Identifier.
When an Ockam Identity is first created, it generates a random primary secret key inside an Ockam Vault. This secret key must be capable of performing a ChangeSignature. We support two types of change signatures - EdDSACurve25519Signature and ECDSASHA256CurveP256Signature. When both options are supported by a vault implementation, EdDSACurve25519Signature is our preferred option.
The public part of the primary secret key is then written into a Change (see data structure below) and this Change includes a signature using the primary secret key. The SHA256 hash of this first Change, truncated to its first 20 bytes, becomes the permanent Ockam Identifier of this Identity. Each change includes a created_at timestamp to indicate when the change was created and an expires_at timestamp to indicate when the primary_public_key included in the change should stop being relied on as the primary public key of this identity.
Whenever the identity wishes to rotate to a new primary public key and revoke all previous primary public keys, it can create a new Change. This new change includes two signatures - one by the previous primary secret key and another by a newly generated primary secret key. Over time, this creates a signed ChangeHistory; the latest Change in this history indicates the self-attested latest primary public key of this Identity.
An Ockam Identity can use its primary secret key to sign PurposeKeyAttestations (see data structure below). These attestations indicate which public keys (and corresponding secret keys) the identity wishes to use for issuing credentials and authenticating itself within secure channels.
Each attestation includes an expires_at timestamp to indicate when the included public key should no longer be relied on for its indicated purpose. The Identity's ChangeHistory can include a Change which has revoke_all_purpose_keys set to true. All purpose key attestations created before the created_at timestamp of this change are also considered expired.
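The expiry and revocation rules above can be sketched as a single predicate (timestamps are unix seconds; this is an illustration, not the ockam crate API):

```rust
// Sketch of the purpose key validity rules described above.
struct PurposeKeyAttestation {
    created_at: u64,
    expires_at: u64,
}

// `revoked_before` is the created_at timestamp of a Change with
// revoke_all_purpose_keys set to true, if such a Change exists.
fn is_relied_upon(a: &PurposeKeyAttestation, now: u64, revoked_before: Option<u64>) -> bool {
    if a.expires_at <= now {
        return false; // attestation has expired
    }
    match revoked_before {
        Some(t) if a.created_at < t => false, // revoked by a later Change
        _ => true,
    }
}

fn main() {
    let a = PurposeKeyAttestation { created_at: 100, expires_at: 200 };
    assert!(is_relied_upon(&a, 150, None));       // within its lifetime
    assert!(!is_relied_upon(&a, 250, None));      // past expires_at
    assert!(!is_relied_upon(&a, 150, Some(120))); // created before revocation
    assert!(is_relied_upon(&a, 150, Some(90)));   // created after revocation
}
```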
An Ockam Credential is a signed attestation by an Issuer about the Attributes of a Subject. The Issuer and Subject are both Ockam Identities. Attributes is a map of name and value pairs.
Any Identity can issue credentials attesting to attributes of another Ockam Identity. This does not imply that these attestations should be considered authoritative about the subject's attributes. Who is an authority on which attributes of which subjects is defined using Ockam Trust Contexts.
Each signed credential includes an expires_at field to indicate a timestamp beyond which the attestation made in the credential should no longer be relied on.
The Attributes type above includes a schema identifier that refers to a schema that defines the meaning of each attribute. For example, Project Membership Authorities within an Ockam Orchestrator Project use a specific schema identifier and define attributes like enroller, which indicates that an Identity that possesses a credential with the enroller attribute set to true can request one-time user enrollment tokens to invite new members to the project.
Ockam Secure Channels are mutually authenticated and end-to-end encrypted messaging channels that guarantee data authenticity, integrity, and confidentiality.
Ockam Routing and Transports, combined with the ability to model Bridges and Relays, make it possible to run end-to-end, application layer protocols in a variety of communication topologies - across many network connection hops and protocol boundaries.
Ockam Secure Channels is an end-to-end protocol built on top of Ockam Routing. This cryptographic protocol guarantees data authenticity, integrity, and confidentiality over any communication topology that can be traversed with Ockam Routing.
A secure channel has two participants (ends). One participant starts a Listener, which creates a dedicated Responder whenever a new protocol session is initiated. The other participant, called the Initiator, initiates the protocol with a Listener.
Running this protocol requires a stateful exchange of multiple messages and having a worker and routing system allows Ockam to hide the complexity of creating and maintaining a secure channel behind two simple functions:
Let's see this in action before we dive into the protocol. The following example is similar to the earlier multi-hop routing example, but this time the echoer is accessed through an end-to-end secure channel.
Run the responder in a separate terminal tab and keep it running:
Run the middle node in a separate terminal tab and keep it running:
Run the initiator:
Using SecureChannelListenerOptions and SecureChannelOptions, each participant is initialized with the following initial state:
An Ockam Identifier that will be used as the Ockam Identity of this secure channel participant. Access to a Vault that contains the primary secret key for this Identifier is not required during the creation of the secure channel. We assume that a PurposeKeyAttestation for a SecureChannelStatic has already been created.
The SecureChannelStatic purpose key and access to its secret inside a Vault. This vault should be an implementation of the VaultForSecureChannels and VaultForVerifyingSignatures traits described earlier.
A Trust Context and Access Controls, that are used for authorization.
The following IdentityAndCredentials data structure, which contains:
The complete ChangeHistory of the Identity of this participant.
A purpose key attestation, issued by the Identity of this participant, attesting to a SecureChannelStatic purpose key. This must be the same SecureChannelStatic that the participant can access the secret for inside a vault.
Zero or more Credentials and corresponding PurposeKeyAttestations that can be used to verify the signature on the credential and tie a CredentialSigning verification key to the Ockam Identifier of the Credential Issuer.
The Listener runs on the specified Worker address and the Initiator knows a Route to reach the Listener. The Listener starts new Responder workers dedicated to each protocol session that is started by any Initiator.
The Initiator uses the above described initial state to begin a handshake with the Listener. The Listener initializes and starts a Responder in response to the first message from an initiator.
Each participant maintains the following variables:
s, e: The local participant's static and ephemeral key pairs.
rs, re: The remote participant's static and ephemeral public keys (which may be empty).
h: A handshake transcript hash that hashes all the data that's been sent and received.
ck: A chaining key that hashes all previous DH outputs. Once the handshake completes, the chaining key will be used to derive the encryption keys for transport messages.
k, n: An encryption key k (which may be empty) and a counter-based nonce n. Whenever a new DH output causes a new ck to be calculated, a new k is also calculated. The key k and nonce n are used to encrypt static public keys and handshake payloads. Encryption with k uses an AEAD cipher mode with the current h value as associated data, which is covered by the AEAD authentication. Encryption of static public keys and payloads provides some confidentiality and key confirmation during the handshake phase.
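Collected into one place, the participant state looks roughly like this (a sketch modeled on Noise-style handshake state; fixed-size byte arrays stand in for real key and hash types):

```rust
// Sketch of the per-participant handshake state described above.
struct KeyPair {
    public: [u8; 32],
    secret: [u8; 32],
}

struct HandshakeState {
    s: KeyPair,           // local static key pair (SecureChannelStatic)
    e: KeyPair,           // local ephemeral key pair
    rs: Option<[u8; 32]>, // remote static public key, empty until received
    re: Option<[u8; 32]>, // remote ephemeral public key
    h: [u8; 32],          // transcript hash over all sent/received data
    ck: [u8; 32],         // chaining key, mixes in all previous DH outputs
    k: Option<[u8; 32]>,  // current encryption key, empty until first DH
    n: u64,               // counter-based nonce, reset when k changes
}

impl HandshakeState {
    // Fresh state at the start of a handshake: no remote keys, no
    // encryption key, nonce at zero. In Noise-style protocols, h and ck
    // both start from a hash of the protocol name.
    fn new(s: KeyPair, e: KeyPair, protocol_name_hash: [u8; 32]) -> Self {
        HandshakeState {
            s,
            e,
            rs: None,
            re: None,
            h: protocol_name_hash,
            ck: protocol_name_hash,
            k: None,
            n: 0,
        }
    }
}

fn main() {
    let s = KeyPair { public: [0; 32], secret: [0; 32] };
    let e = KeyPair { public: [0; 32], secret: [0; 32] };
    let st = HandshakeState::new(s, e, [0; 32]);
    assert_eq!(st.n, 0);
    assert!(st.k.is_none());
    assert!(st.rs.is_none());
}
```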
As described in the section on VaultForSecureChannels, we rely on compile time feature flags to choose between three possible combinations of primitives:
This is a completely compile time choice for the purpose of studying the performance of the various options in different runtime environments. We intentionally have no negotiation of primitives in the handshake. All participants in a live system are deployed with the same compile time choice of secure channel primitives.
The s variable is initialized with the SecureChannelStatic of this participant, and the functions described in VaultForSecureChannels and VaultForVerifyingSignatures are used to run the handshake as follows:
At any point, if there is an error in decrypting the incoming data, the participant simply exits the protocol without signaling any failure to the other participant.
It verifies the chain of signatures on the change history. It checks that the expires_at timestamp on the latest change is greater than now.
It checks that the public_key in the PurposeKeyAttestation is the same as the rs that has been authenticated. It checks that the PurposeKeyAttestation subject is the Identifier whose change history was presented. It verifies that the primary public key in the latest change has correctly signed the PurposeKeyAttestation for the SecureChannelStatic. It checks that the expires_at timestamp on the PurposeKeyAttestation is greater than now.
For each included credential it verifies:
That the subject of the credential is the Identifier whose change history was presented.
That the expires_at timestamp of the Credential is greater than now.
That the credential is correctly signed by the purpose key in the PurposeKeyAttestation included with the Credential as part of the corresponding CredentialAndPurposeKeyAttestation.
That the expires_at timestamp of the PurposeKeyAttestation is greater than now.
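The checklist above can be sketched as one function (signature verification is stubbed out with a boolean; a real implementation would call into the vault, and the identifier "I1234" is purely hypothetical):

```rust
// Sketch of the credential verification checklist.
struct Credential {
    subject: String,      // the Identifier the credential is about
    expires_at: u64,
    signature_valid: bool, // stand-in for a real purpose-key signature check
}

struct PurposeKeyAttestation {
    expires_at: u64,
}

fn verify_credential(
    presented_identifier: &str,
    credential: &Credential,
    attestation: &PurposeKeyAttestation,
    now: u64,
) -> bool {
    credential.subject == presented_identifier // subject matches presenter
        && credential.expires_at > now         // credential not expired
        && credential.signature_valid          // signed by the purpose key
        && attestation.expires_at > now        // attestation not expired
}

fn main() {
    let c = Credential { subject: "I1234".into(), expires_at: 200, signature_valid: true };
    let a = PurposeKeyAttestation { expires_at: 300 };
    assert!(verify_credential("I1234", &c, &a, 100));
    assert!(!verify_credential("I9999", &c, &a, 100)); // wrong subject
    assert!(!verify_credential("I1234", &c, &a, 250)); // credential expired
}
```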
At this point, both sides have mutually authenticated each other's rs, Ockam Identifier, and Credentials issued by one or more Issuers about this Identifier.
Each participant in a Secure Channel is initialized with a Trust Context and Access Controls.
The simple form of mutual authorization is achieved by defining an Access Control that only allows the SecureChannel handshake to complete if the remote participant authenticates with a specific Ockam Identifier. Both participants have pre-existing knowledge of each other's Ockam Identifier.
A more scalable form of mutual authorization is achieved by specifying a Trust Context where each participant must present a specific type of credential issued by a specific Credential Issuer. Both participants have pre-existing knowledge of the Ockam Identifier of this Credential Issuer (Authority).
After performing the XX handshake, peers have agreed on a pair of symmetric encryption keys they will use to encrypt data on the channel, one for each direction.
With each direction of the secure channel, we associate a nonce variable. It holds a 64 bit unsigned integer. That integer is prepended to each ciphertext and the nonce variable is increased by 1 when the message is sent.
This nonce allows us to count the number of sent messages and define a series of contiguous buckets of messages where each bucket is of size N. N is a constant value known by both the initiator and the responder. We can then associate an encryption key to each bucket, and decide to create a new symmetric key once we need to send a message corresponding to the next bucket.
This approach implies that we don't need to communicate a "Rekey" operation between the secure channel parties. They both know that they need to perform rekeying every N messages.
In the previous figure:
Messages 0 to N-1 are encrypted with k0 (the initial key agreed during the handshake).
Messages N to 2N-1 with k1, etc.
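The bucket arithmetic is simple enough to state as code: the key index for a nonce is just integer division by the rekeying interval, so both sides can rekey in lockstep without exchanging any message:

```rust
// With a rekeying interval of N messages, the key used for a given
// nonce is determined purely by integer division: bucket 0 uses k0,
// bucket 1 uses k1, and so on.
fn key_index(nonce: u64, rekey_interval: u64) -> u64 {
    nonce / rekey_interval
}

fn main() {
    const N: u64 = 32; // the current rekeying interval in Ockam
    assert_eq!(key_index(0, N), 0);         // first message, k0
    assert_eq!(key_index(N - 1, N), 0);     // last message under k0
    assert_eq!(key_index(N, N), 1);         // first message under k1
    assert_eq!(key_index(2 * N - 1, N), 1); // last message under k1
}
```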
In the simplest scenario, the encryptor keeps track of the last nonce it generated and increments it by one each time it generates a new message, while the decryptor keeps track of the nonce it expects to receive next and increments it every time it receives a valid message:
However, this simple approach doesn't work at the level of Ockam Secure Channels, since no message delivery guarantees are offered. For example, when using a transport protocol like UDP:
Packets can be completely lost.
Packets can be delayed/reordered.
Packets can be repeated.
This introduces a complication to the rekeying operation since the encryptor and the decryptor must agree on the nonce to use for every message on the channel.
In order to allow for out-of-order delivery each secure channel message includes the nonce that was used to encrypt it. The encryptor side keeps incrementing the nonce by 1 each time it generates a new message and prepends this nonce to the message.
Then the decryptor extracts this nonce from the message and uses it as part of the decryption operation.
With the nonce being part of the transmitted message, the synchronization problem is solved. Even if messages are lost or arrive out-of-order, the decryptor can still process them.
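The framing can be sketched in a few lines (the 8-byte big-endian layout here is illustrative; the actual wire encoding may differ):

```rust
// Sketch of nonce framing: an 8-byte nonce is prepended to each
// ciphertext, and the decryptor splits it back off before decrypting.
fn frame(nonce: u64, ciphertext: &[u8]) -> Vec<u8> {
    let mut out = nonce.to_be_bytes().to_vec();
    out.extend_from_slice(ciphertext);
    out
}

fn deframe(message: &[u8]) -> Option<(u64, &[u8])> {
    if message.len() < 8 {
        return None; // too short to contain a nonce
    }
    let (nonce_bytes, ciphertext) = message.split_at(8);
    let nonce = u64::from_be_bytes(nonce_bytes.try_into().ok()?);
    Some((nonce, ciphertext))
}

fn main() {
    let framed = frame(14, b"ciphertext");
    let (nonce, ct) = deframe(&framed).unwrap();
    assert_eq!(nonce, 14);
    assert_eq!(ct, b"ciphertext");
    assert!(deframe(b"short").is_none());
}
```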
But other important difficulties arise:
Since the nonce is part of the message and transmitted in plaintext, how can the decryptor protect itself against duplicate packets / replay attacks? Even if the decryptor keeps track of every nonce ever received (and accepted) during the channel's lifetime, this is a problem for long-lived channels since it would require a prohibitive amount of memory to keep track of all the nonces used.
Even keeping track of all the nonces would be problematic, since this would mean being able to decrypt old messages with old keys. This defeats Forward Secrecy, the protection against decryption of previous messages that the rekeying process is meant to provide.
Moreover, since each K is derived from the previous one, suppose an attacker sends a forged message with a nonce far in the future of the one the decryptor is currently expecting. This would force the decryptor to perform a time-consuming series of rekey() operations to reach the K needed to attempt to decrypt the message. This is an easy target for denial-of-service attacks.
Both of these problems are solved by the introduction of a sliding valid window of nonces that the decryptor will accept.
The decryptor keeps track of the largest accepted nonce received so far on the channel.
It defines an interval around it for nonces that it will accept.
Messages with nonces outside of this window are discarded.
In the following example:
The decryptor uses a valid window of size 10.
Given that the largest nonce it has accepted so far is 13, the decryptor can accept packets with nonces between 8 and 18.
Nonces outside of that interval will be discarded without any further processing.
When the decryptor receives a message with nonce = 14 (an allowed value), it tries to decrypt the message. If the decryption succeeds, it accepts the nonce and advances the window:
Note that the set of already-seen nonces is bounded in size. This size is (at most) half the valid window size.
Since the valid window is always centered on the highest received nonce, the nonces we track will always fall between the lower part of the window and that nonce. If we receive a nonce greater than the nonce at the window center, the whole window will have a new center and will move further along.
On the flip side, if at this point the missing message with nonce 8 were received, it would be rejected, even though it was a valid message emitted by the sender that was merely delayed in the network. That message is effectively lost; it is too out-of-order to be handled.
Now suppose the next message received has nonce 12. It will be accepted, but the window won't move forward as it is less than the current maximum nonce accepted:
Here's another caveat. What happens if, let's say, messages 15 to 20 were lost? Then the channel is effectively stuck: no matter if it receives the next messages (21, 22, ...), the decryptor will reject them all because they will also be outside the valid window. At this point, the secure channel will need to be re-established.
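The sliding window described in this example can be modeled as follows (decryption itself is elided; we assume any in-window, never-before-seen nonce would decrypt successfully):

```rust
use std::collections::HashSet;

// Sketch of the decryptor's sliding valid window of nonces.
struct NonceWindow {
    half_size: u64,     // the window extends half_size on each side
    max_accepted: u64,  // largest nonce accepted so far
    seen: HashSet<u64>, // accepted nonces still inside the window
}

impl NonceWindow {
    fn try_accept(&mut self, nonce: u64) -> bool {
        let low = self.max_accepted.saturating_sub(self.half_size);
        let high = self.max_accepted + self.half_size;
        if nonce < low || nonce > high {
            return false; // outside the valid window, discarded
        }
        if !self.seen.insert(nonce) {
            return false; // already seen: replay, rejected
        }
        if nonce > self.max_accepted {
            self.max_accepted = nonce; // advance the window
            let new_low = self.max_accepted.saturating_sub(self.half_size);
            self.seen.retain(|&n| n >= new_low); // forget out-of-window nonces
        }
        true
    }
}

fn main() {
    // Window of size 10 (5 on each side); the largest accepted nonce is
    // 13, so nonces 8..=18 are currently acceptable.
    let mut w = NonceWindow { half_size: 5, max_accepted: 13, seen: HashSet::from([13]) };
    assert!(w.try_accept(14));  // in window, advances the window to 9..=19
    assert!(w.try_accept(12));  // in window, max stays at 14
    assert!(!w.try_accept(14)); // replay, rejected
    assert!(!w.try_accept(25)); // far future, rejected
    assert!(!w.try_accept(8));  // now below the window, rejected
}
```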
The encryptor and decryptor must implement both of the following in the same manner:
The rekeying interval, which defines the key buckets. The current rekeying interval is 32.
The key derivation algorithm.
However, the concept of a valid window is entirely up to the decryptor to implement. It only affects how tolerant the communication will be to out-of-order packets. The encryptor side is neither aware of nor affected by this choice.
In our Elixir implementation of secure channels, the valid window is tied to the choice of how often to rekey. If the current k in use is kn (the k that corresponds to the maximum nonce accepted so far), the valid window is defined as nonces falling into the kn-1, kn, or kn+1 buckets.
Our Rust version is similar but defines a window of 32 positions around the expected nonce.
Create Ockam Inlet and Outlet Nodes using a CloudFormation template
Create Ockam Kafka Outlet and Inlet Nodes using a CloudFormation template
Create Ockam Postgres Outlet and Inlet Nodes using a CloudFormation template
Create Ockam Amazon Timestream InfluxDB Outlet and Inlet Nodes using a CloudFormation template
Create Ockam Amazon Redshift Outlet and Inlet Nodes using a CloudFormation template
Create Ockam Amazon Bedrock Outlet and Inlet Nodes using a CloudFormation template
This handshake is based on the XX pattern described in the Noise Protocol Framework. The security properties of the messages in the XX pattern and their payloads have been studied and described in the Noise Protocol Framework specification and in related formal analyses.
OCKAM_XX_25519_AES256_GCM_SHA256 enables the Ockam_XX secure channel handshake with Curve25519, AES-256-GCM, and SHA256. This is our current default.
OCKAM_XX_25519_AES128_GCM_SHA256 enables the Ockam_XX secure channel handshake with Curve25519, AES-128-GCM, and SHA256.
OCKAM_XX_25519_ChaChaPolyBLAKE2s enables the Ockam_XX secure channel handshake with Curve25519, ChaCha20-Poly1305, and BLAKE2s.
After the second message in the handshake is received by the Initiator, the Initiator is assured that the Responder possesses the secret keys of rs, the remote SecureChannelStatic. The payload of the second message contains the serialized IdentityAndCredentials data of the Responder. The Initiator deserializes and verifies this data structure:
After the third message in the handshake is received by the Responder, the Responder is assured that the Initiator possesses the secret keys of rs, the remote SecureChannelStatic. The payload of the third message contains the serialized IdentityAndCredentials data of the Initiator. The Responder, similar to the Initiator, deserializes and verifies this data structure.
Rekeying is the process of periodically updating the symmetric key in use (see the section on rekeying above).
Typescript
Coming Soon.