Intro to Ockam

Ockam empowers you to build secure-by-design apps that can trust data-in-motion.

With Ockam:

  • Impossible connections become possible. Establish secure channels between systems in private networks that previously could not be connected because doing so was either too difficult or too insecure.

  • All public endpoints become private. Connect your applications and databases without exposing anything publicly.

At its core, Ockam is a toolkit for developers to build applications that can create end-to-end encrypted, mutually authenticated, secure communication channels:

  • From anywhere to anywhere: Ockam works across any network, cloud, or on-prem infrastructure.

  • Over any transport topology: Ockam is compatible with every transport layer, including TCP, UDP, Kafka, and even Bluetooth.

  • With no infrastructure, network, or application changes: Ockam works at the application layer, so you don’t need to make complex changes.

  • While ensuring the risky things are impossible to get wrong: Ockam’s protocols do the heavy lifting to establish end-to-end encrypted, mutually authenticated secure channels.

Why Ockam is unique

Traditionally, connections made over TCP are secured with TLS. However, the security guarantees of a TLS secure channel only apply for the length of the underlying TCP connection. It is not possible to connect two systems in different private networks over a single TCP connection. Thus, connecting these two systems requires exposing one of them over the Internet, and breaking the security guarantees of TLS.

Ockam works differently. Our secure channel protocol sits on top of an application layer routing protocol. This routing protocol can hand over messages from one transport layer connection to another. This can be done over any transport protocol, with any number of transport layer hops: TCP to TCP to TCP, TCP to UDP to TCP, UDP to Bluetooth to TCP to Kafka, etc.

Over these transport layer connections, Ockam sets up an end-to-end encrypted, mutually authenticated connection. This unlocks the ability to create secure channels between systems that live in entirely private networks, without exposing either end to the Internet.

Examples of Ockam Secure Channels over multiple hops of TCP, Kafka, UDP, or anything else.

Since Ockam’s routing protocol is at the application layer, complex network and infrastructure changes are not required to make these connections. Rather than a months-long infrastructure project, you can connect private systems in minutes while ensuring the risky things are impossible to get wrong. NATs are traversed; Keys are stored in vaults; Credentials are short-lived; Messages are authenticated; Data-integrity is guaranteed; Senders are protected from key compromise impersonation; Encryption keys are ratcheted; Nonces are never reused; Strong forward secrecy is ensured; Sessions recover from network failures; and a lot more.

Ockam is easy to use

The magic of Ockam is its simplicity. All you need to do is subscribe to Ockam Orchestrator, and then deploy one of the following distributions next to the applications you'd like to connect:

  • Ockam Programming Libraries (Rust …)

  • Ockam Command

  • Ockam Docker Images

  • Redpanda Connect

  • Managed Ockam Nodes from the AWS Marketplace

  • Snowflake Native Apps

  • Lambda/Serverless Functions

Ockam's core concepts

Ockam empowers you to build secure-by-design apps that can trust data-in-motion.

You can use Ockam to create end-to-end encrypted and mutually authenticated channels. Ockam secure channels authenticate using cryptographic identities and credentials. They give your apps granular control over all trust and access decisions. This control makes it easy to enforce fine-grained, attribute-based authorization policies – at scale.

These core capabilities are composed to enable private and secure communication in a wide variety of application architectures. For example, with one simple command, an app in your cloud can create an encrypted portal to a micro-service in another cloud. The service doesn’t need to be exposed to the Internet. You don’t have to change anything about networks or firewalls.

# Create a TCP Portal Inlet to a Postgres server that is running in
# a remote private VPC in another cloud.
ockam tcp-inlet create --from 15432 --via postgres

# Access the Postgres server on localhost.
psql --host localhost --port 15432

Similarly, with another simple command, a Kafka producer can publish end-to-end encrypted messages for a specific Kafka consumer. Kafka brokers in the middle can’t see, manipulate, or accidentally leak sensitive enterprise data. This minimizes risk to sensitive business data and makes it easy to comply with data governance policies.
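As a sketch, the Kafka setup uses Ockam Command's kafka-outlet and kafka-inlet sub-commands; the exact flags and addresses below are illustrative and may differ by version:

```shell
# Next to the Kafka broker: create a Kafka Outlet.
# (--bootstrap-server address is an example value.)
ockam kafka-outlet create --bootstrap-server 127.0.0.1:9092

# Next to each producer and consumer: create a Kafka Inlet that exposes
# the broker locally, while payloads stay end-to-end encrypted in transit.
ockam kafka-inlet create --from 127.0.0.1:19092
```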

Encrypted Portals

Portals carry various application protocols over end-to-end encrypted Ockam secure channels.

For example: a TCP Portal carries TCP over Ockam, a Kafka Portal carries the Kafka protocol over Ockam, etc. Since portals work with existing application protocols, you can use them through companion Ockam Nodes that run adjacent to your application, without changing any of your application’s code.

A TCP Portal makes a remote TCP server virtually adjacent to the server’s clients. It has two parts: an inlet and an outlet. The outlet runs adjacent to the TCP server, and inlets run adjacent to TCP clients. An inlet and the outlet work together to create a portal that makes the remote TCP server appear on localhost, adjacent to a client. The client can then interact with this localhost server exactly as it would with the remote server. All communication between inlets and outlets is end-to-end encrypted.
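Concretely, a TCP portal to a Postgres server can be set up with a pair of commands like these (the relay name postgres and the port numbers are example values):

```shell
# Next to the TCP server: create an Outlet that connects to Postgres on
# localhost:5432, and a Relay so inlets can reach it.
ockam tcp-outlet create --to 5432
ockam relay create postgres

# Next to the TCP client: create an Inlet that exposes the remote server
# on localhost:15432 via the relay.
ockam tcp-inlet create --from 15432 --via postgres
```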

You can use Ockam Command to start nodes with one or more inlets or outlets. The underlying protocols handle the hard parts:

  • NATs are traversed

  • Keys are stored in vaults

  • Credentials are short-lived

  • Messages are authenticated

  • Data-integrity is guaranteed

  • Senders are protected from key compromise impersonation

  • Encryption keys are ratcheted

  • Nonces are never reused

  • Strong forward secrecy is ensured (validated by an audit from Trail of Bits)

  • Sessions recover from network failures

  • ...and lots more!

How Ockam is different from a network-layer connector

Databases

Create an Ockam Portal from any application, to any database, in any environment.

In each example, we connect a nodejs app in one private network with a database in another private network.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

The examples below use PostgreSQL, MongoDB, and InfluxDB. However, the same setup works for any database: MySQL, ClickHouse, Cassandra, SQL Server, Databricks, Snowflake, etc.

PostgreSQL

In each example, we will connect a nodejs app in one private network with a postgres database in another private network.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

InfluxDB

This section contains hands-on examples that use Ockam to create encrypted portals to InfluxDB databases running in various environments.

In each example, we connect a nodejs app in one private network with an InfluxDB database in another private network. To understand how end-to-end trust is established, and how the portal works even though the two networks are isolated with no exposed ports, please read: “How does Ockam work?”

Please select an example to dig in:

MongoDB

In each example, we connect a nodejs app in one private network with a MongoDB database in another private network.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

Quickstarts

Try one of these demos yourself, or get a video walkthrough.

Add secure connectivity to your SaaS Product
Snowflake federated queries to Postgres
Postgres to Snowflake Migrations
Snowflake to Postgres for CDC (Change Data Capture)
Run federated queries from inside of Snowflake
Stream Kafka events to Snowflake
Real-Time CDC (Change Data Capture) Pipelines from Snowflake to Kafka
Access a Snowflake stage with SFTP
Access a Snowflake stage with WebDAV
Call a remote private API from within Snowflake

Identities and Vaults

Generate cryptographically provable unique identities and store their secret keys in safe vaults.

// examples/vault-and-identities.rs
use ockam::node;
use ockam::{Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create default node to safely store secret keys for Alice
    let mut node = node(ctx).await?;

    // Create an Identity to represent Alice.
    let _alice = node.create_identity().await?;

    // Stop the node.
    node.shutdown().await
}
How does Ockam work?

We connect a nodejs app in one private network with a PostgreSQL database in another private network.

We connect a nodejs app in one private network with a MongoDB database in another private network.

We connect a nodejs app in one private network with an InfluxDB database in another private network.

Use one of the Ockam Snowflake connectors to build private connections to Snowflake in minutes.

How does Ockam work?

We connect a nodejs app in one virtual private network with a postgres database in another virtual private network. The example uses docker and docker compose to create these virtual networks.

We connect a nodejs app in one private kubernetes cluster with a postgres database in another private kubernetes cluster. The example uses docker and kind to create these kubernetes clusters.

We connect a nodejs app in one Amazon VPC with an Amazon Aurora managed Postgres database in another Amazon VPC. The example uses the AWS CLI to create these VPCs.

We connect a nodejs app in one Amazon VPC with an Amazon RDS managed Postgres database in another Amazon VPC. The example uses the AWS CLI to create these VPCs.

How does Ockam work?

We connect a nodejs app in one Amazon VPC with an InfluxDB database in another Amazon VPC. The example uses the AWS CLI to create these VPCs.

How does Ockam work?

We connect a nodejs app in one virtual private network with a MongoDB database in another virtual private network. The example uses docker and docker compose to create these virtual networks.


Code Repos

Create an Ockam Portal from any application, to any code repo, in any environment.

In each example, we connect a nodejs app in one company's private network with a git repository managed by gitlab in another company's private network.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

The examples below use a GitLab server; however, the same setup works for any other code repository: GitHub, Gitea, Bitbucket, etc.

Programming Libraries


Rust

Available now.


Typescript

Coming Soon.

Implementation and Internals

Nodes and Workers

How does Ockam work?

We connect a nodejs app in an AWS virtual private network with a GitLab-hosted code repository in another AWS virtual private network. The example uses the AWS CLI to instantiate AWS resources.

APIs

Create an Ockam Portal from any application, to any API, in any environment.

In each example, we connect a client app in one private network with an API service in another private network.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

AI

Create an Ockam Portal from any application, to any AI model, in any environment.

In each example, we connect a nodejs app in one private network with an AI service in another private network.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

The Amazon EC2 example uses a LLaMA model and the Amazon Bedrock example uses an Amazon Titan model. However, the same setup works for any other AI model: GPT, Claude, LaMDA, etc.

Apache Kafka

Create an Ockam Portal to send end-to-end encrypted messages through Apache Kafka.

Ockam encrypts messages from a Producer to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Kafka. Operators of the Kafka cluster only see end-to-end encrypted data. Any compromise of an operator's infrastructure cannot compromise your business data.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

Redpanda

Create an Ockam Portal to send end-to-end encrypted messages through Redpanda.

Ockam encrypts messages from a Producer all the way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Redpanda or the network where it is hosted. The operators of Redpanda can only see encrypted data in the networks and services that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

Confluent

Create an Ockam Portal to send end-to-end encrypted messages through Confluent Cloud.

Ockam encrypts messages from a Producer all the way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Confluent Cloud or the network where it is hosted. The operators of Confluent Cloud can only see encrypted data in the networks and services that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

Warpstream

Create an Ockam Portal to send end-to-end encrypted messages through Warpstream Cloud.

Ockam encrypts messages from a Producer all the way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Warpstream Cloud or the network where it is hosted. The operators of Warpstream Cloud can only see encrypted data in the networks and services that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

Instaclustr

Create an Ockam Portal to send end-to-end encrypted messages through Instaclustr.

Ockam encrypts messages from a Producer all the way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Instaclustr or the network where it is hosted. The operators of Instaclustr can only see encrypted data in the networks and services that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

Get started demo

Let’s build a simple example together. We will create an encrypted portal from a psql microservice in Azure to a Postgres database in AWS.

By the time you finish this page, you will understand:

  1. the basic building blocks of Ockam,

  2. the first steps you should take in your architecture, and

  3. how to build an end-to-end encrypted portal between two private services.

Create an Orchestrator Project

Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io. After you complete this step you will have a Project in Ockam Orchestrator. A Project offers two services: a Membership Authority and a Relay service. More on both of those later.

Set up Command on your local dev machine

Run the following commands to install Ockam Command on your dev machine.
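On macOS and Linux, the install-and-enroll sequence looks like this:

```shell
# Install Ockam Command and add it to your PATH.
curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

# Enroll this machine with your Ockam Orchestrator Project.
ockam enroll
```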

The `enroll` command does a lot! All at once it...

  1. creates an Ockam Node on your machine.

  2. generates a private key as your local Node’s cryptographic Identity.

  3. creates a local Vault to store keys.

  4. guides you to sign in to your new Ockam Orchestrator Project.

  5. asks your Project’s Membership Authority to issue and sign a membership Credential for this Node.

  6. makes you the administrator of your Project.

  7. creates a Secure Channel between your local Ockam Node and your Project in Orchestrator.

Congrats! Your dev machine Node has a secure, encrypted Ockam Portal connection to your Project Node inside of Ockam Orchestrator over a Secure Channel!

Install Ockam Command and create an Ockam Node in AWS

The process is repeated in AWS through the same set of commands.
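On the AWS machine, next to the Postgres server:

```shell
# Install Ockam Command and enroll this node with your Project.
curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"
ockam enroll
```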

You now have an Ockam Node running in your VPC. As before, this Node will have

  1. a set of private keys and an Identifier, stored in a local Vault

  2. a Membership Credential that will allow this Ockam Node to join your Project in Orchestrator.

Create a Portal Outlet in this Ockam Node

An Outlet is created in the Ockam Node and a raw TCP connection is created to the postgres server on localhost port 5432.
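The command for this step is:

```shell
# Create a TCP Portal Outlet that connects to Postgres on localhost:5432.
ockam tcp-outlet create --to 5432
```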

Create a Secure Channel to Orchestrator, and create a Relay in your Project
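On the AWS node, run:

```shell
# Create a Relay named "postgres" in your Orchestrator Project.
ockam relay create postgres
```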

This command

  1. initiates an outgoing TCP connection from the Ockam Node in AWS to your Project in Ockam Orchestrator.

  2. creates a Secure Channel over the TCP connection.

  3. creates a Relay in your Project at the address: postgres

Notice that we didn’t have to change anything in the AWS network settings. This is possible because Bank Corp’s network allows outgoing TCP connections to the Internet. We use this connection to create the Secure Channel.

Create an Ockam Node in Azure
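As before, install Ockam Command and enroll:

```shell
# Install Ockam Command and enroll this Azure node with your Project.
curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"
ockam enroll
```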

Create a Portal Inlet in this Node in Azure
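The command for this step is:

```shell
# Create a TCP Portal Inlet on localhost:15432 that reaches the Outlet
# via the Relay named "postgres".
ockam tcp-inlet create --from 15432 --via postgres
```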

This command

  1. creates a TCP Portal Inlet.

  2. creates a TCP listener on localhost port 15432.

  3. creates an outgoing TCP connection to your Project.

  4. creates a Secure Channel to your Project over this TCP connection.

  5. creates an end-to-end Secure Channel from the Inlet to the Outlet in Bank Corp’s VPC, via the Relay in your Project at address: postgres

Congrats! The psql microservice at Analysis Corp and the Postgres database at Bank Corp are connected with an Ockam Portal.

Local Query

The psql service now has an end-to-end encrypted, mutually authenticated, secure channel connection with the postgres database on localhost:15432.
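From the Azure side, the remote database can now be queried as if it were local:

```shell
# Connect to the remote Postgres database through the local Inlet.
psql --host localhost --port 15432
```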

All of the data-in-motion is end-to-end encrypted, with strong forward secrecy, as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and Credentials are automatically rotated. Access to connect with postgres can be easily revoked.

There’s so much more….

This is just one simple example. Ockam’s stack of protocols work together to ensure security, privacy, and trust in data. They can be combined and composed in all sorts of ways.

In the next section we will dive into all sorts of ways to build portals across different infrastructures, networks, and applications.

The Trick behind Ockam's Magic, by our Founders

Kafka

Create an Ockam Portal to send end-to-end encrypted messages through Kafka - from any producer, to any consumer, through any Kafka API compatible data streaming platform.

Ockam encrypts messages from a Producer to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Kafka. Operators of the Kafka cluster only see end-to-end encrypted data. Any compromise of an operator's infrastructure cannot compromise your business data.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig in:

The examples below use Apache Kafka, Redpanda, Confluent, Aiven, Warpstream, and Instaclustr. However, the same setup works for any Kafka API compatible data streaming platform.

Command

Command line tools to build and orchestrate secure by design applications.

Ockam Command is our command line interface to build secure by design applications that can trust all data in motion. It makes it easy to orchestrate end-to-end encryption, mutual authentication, key management, credential management, and authorization policy enforcement – at a massive scale.

No more having to design error-prone ad-hoc ways to distribute sensitive credentials and roots of trust. Ockam's integrated approach takes away this complexity and gives you simple tools for:

End-to-end data authenticity, integrity, and privacy in any communication topology

  • Create end-to-end encrypted, authenticated secure channels over any transport topology.

  • Create secure channels over multi-hop, multi-protocol routes over TCP, UDP, WebSockets, BLE, etc.

  • Provision encrypted relays for applications distributed across many edge, cloud and data-center private networks.

  • Make any protocol secure by tunneling it through mutually authenticated and encrypted portals.

  • Bring end-to-end encryption to enterprise messaging, pub/sub and event streams - Kafka, Kinesis, RabbitMQ, etc.

Identity-based, policy driven, application layer trust – granular authentication and authorization

  • Generate cryptographically provable unique identities.

  • Store private keys in safe vaults - hardware secure enclaves and cloud key management systems.

  • Operate scalable credential authorities to issue lightweight, short-lived, revocable, attribute-based credentials.

  • Onboard fleets of self-sovereign application identities using secure enrollment protocols.

  • Rotate and revoke keys and credentials – at scale, across fleets.

  • Define and enforce project-wide attribute-based access control policies. Choose ABAC, RBAC or ACLs.

  • Integrate with enterprise identity providers and policy providers for seamless employee access.

A step by step introduction

Ockam Command provides the above collection of composable building blocks that are accessible through various sub-commands. In a step-by-step guide let's walk through various Ockam sub-commands to understand how you can use them to build end-to-end trustful communication for any application in any communication topology.

Install Ockam Command

If you haven't already, the first step is to install Ockam Command:

If you use Homebrew, you can install Ockam using brew.
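On Homebrew:

```shell
# Tap and install Ockam Command.
brew install build-trust/ockam/ockam
```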

This will download a precompiled binary and add it to your path. If you don't use Homebrew, you can also install on Linux and macOS systems using curl. See instructions for other systems in the next tab.

On Linux and macOS, you can download precompiled binaries for your architecture using curl.
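The curl installer:

```shell
# Download a precompiled binary for your architecture and add it to your PATH.
curl --proto '=https' --tlsv1.2 -sSf \
    https://raw.githubusercontent.com/build-trust/ockam/develop/install.sh | bash
```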

This will download a precompiled binary and add it to your path. If the above instructions don't work on your machine, please post a question, we'd love to help.

Check that everything was installed correctly by enrolling with Ockam Orchestrator. This will create a Space and a Project for you in Ockam Orchestrator.
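To enroll, run:

```shell
# Enroll this machine with Ockam Orchestrator.
ockam enroll
```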

Next, let's dive in and learn how to use Identities and Vaults.

Identities and Vaults

Ockam Identities are unique, cryptographically verifiable digital identities. These identities authenticate by proving possession of secret keys. Ockam Vaults safely store these secret keys.

In order to make decisions about trust, we must authenticate senders of messages.

Vaults

Ockam Identities authenticate by cryptographically proving possession of specific secret keys. Ockam Vaults safely store these secret keys in cryptographic hardware and cloud key management systems.

You can create a vault as follows:
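For example:

```shell
# Create a vault named v1; its secret keys are stored on the file system.
ockam vault create v1
```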

This command will, by default, create a file system based vault, where your secret keys are stored at a specific file path.

Vaults are designed to be used in a way that secret keys never have to leave a vault. There is a growing base of Ockam Vault implementations in the Ockam GitHub Repository that safely store secret keys in specific KMSs, HSMs, Secure Enclaves, etc.

Identities

Ockam Identities are unique, cryptographically verifiable digital identities.

You can create new identities by typing:
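For example (v1 is the vault created in the previous section):

```shell
# Create an identity named i1 whose secret keys live in vault v1.
ockam identity create i1 --vault v1
```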

The secret keys belonging to this identity are stored in the specified vault. This can be any type of vault - File Vault, AWS KMS, Azure KeyVault, YubiKey, etc. If no vault is specified, the default vault is used. If a default vault doesn't exist yet, a new file system based vault is created, set as default, and then used to generate secret keys.

To ensure privacy and eliminate the possibility of correlation of behavior across trust contexts, we've made it easy to generate and use different identities and identifiers for separate trust contexts.

Secret Keys

Each Ockam Identity starts its life by generating a secret key and its corresponding public key. Secret keys must remain secret, while public keys can be shared with the world.

Ockam Identities support two types of Elliptic Curve secret keys that live in vaults - Curve25519 or NIST P-256.

Identifiers

Each Ockam Identity has a unique public identifier, called the Ockam Identifier of this identity:
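You can print an identity's Identifier with (assuming an identity named i1 exists):

```shell
# Show the Identifier of identity i1.
ockam identity show i1
```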

This Identifier is generated by hashing the first public key of the Identity.

Change History

Ockam Identities can periodically rotate their keys to indicate that the latest public key is the one that should be used for authentication. Each Ockam Identity maintains a self-signed change history of key rotation events. You can see this full history by running:
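For example (assuming an identity named i1 exists):

```shell
# Show the full self-signed change history of identity i1.
ockam identity show i1 --full
```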

Identifier Authentication

Authentication, within Ockam, starts by proving control of a specific Ockam Identifier. To prove control of a specific Identifier, the prover must present the identifier, the full signed change history of the identifier, and a signature on a challenge using the secret key corresponding to the latest public key in the identifier's change history.

If you're stuck or have questions at any point, please reach out to us.

Next, let's combine everything we've learnt so far to create mutually authenticated and end-to-end encrypted secure channels that guarantee data authenticity, integrity, and confidentiality.

Rust

Rust crates to build secure by design applications for any environment – from highly scalable cloud infrastructure to tiny battery operated microcontroller based devices.

Ockam Rust crates are a library of tools to build secure by design applications for any environment – from highly scalable cloud infrastructure to tiny battery operated microcontroller based devices. They make it easy to orchestrate end-to-end encryption, mutual authentication, key management, credential management, and authorization policy enforcement – at massive scale.

No more having to think about creating unique cryptographic keys and issuing credentials to your fleet of application entities. No more designing ways to safely store secrets in hardware and securely distribute roots of trust.

End-to-end data authenticity, integrity, and privacy in any communication topology

  • Create end-to-end encrypted, authenticated secure channels over any transport topology.

  • Create secure channels over multi-hop, multi-protocol routes over TCP, UDP, WebSockets, BLE, etc.

  • Provision encrypted relays for applications distributed across many edge, cloud and data-center private networks.

  • Make any protocol secure by tunneling it through mutually authenticated and encrypted portals.

  • Bring end-to-end encryption to enterprise messaging, pub/sub and event streams - Kafka, Kinesis, RabbitMQ etc.

Identity-based, policy driven, application layer trust – granular authentication and authorization

  • Generate cryptographically provable unique identities.

  • Store private keys in safe vaults - hardware secure enclaves and cloud key management systems.

  • Operate scalable credential authorities to issue lightweight, short-lived, revocable, attribute-based credentials.

  • Onboard fleets of self-sovereign application identities using secure enrollment protocols.

  • Rotate and revoke keys and credentials – at scale, across fleets.

  • Define and enforce project-wide attribute-based access control policies. Choose ABAC, RBAC or ACLs.

  • Integrate with enterprise identity providers and policy providers for seamless employee access.

A step by step introduction

Ockam Rust crates provide the above collection of composable building blocks. In a step-by-step hands-on guide let’s walk through each building block to understand how you can use them to build end-to-end trustful communication for any application in any communication topology.

The first step is to install Rust and create a cargo project called hello_ockam. We’ll use this project to try out various examples.

If you don't have it, please install the latest version of Rust.
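The standard rustup installer:

```shell
# Install the Rust toolchain via rustup.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```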

Next, create a new cargo project to get started:
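The project setup:

```shell
# Create a library crate with an examples directory, and add the
# ockam and r3bl_ansi_color dependencies.
cargo new --lib hello_ockam && cd hello_ockam && mkdir examples \
  && cargo add ockam r3bl_ansi_color && cargo build
```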

If the above instructions don't work on your machine, please post a question, we’d love to help.


nodejs

We connect a nodejs app in one AWS VPC with a nodejs API service in another AWS VPC.

python

We connect a python app in one AWS VPC with a python API service in another AWS VPC.

Amazon EC2

We connect a nodejs app in an AWS virtual private network with a LLaMA model provisioned on an EC2 instance in another AWS virtual private network. The example uses the AWS CLI to instantiate AWS resources.

Amazon Bedrock

We connect a nodejs app in an AWS virtual private network with an Amazon Bedrock API in another AWS virtual private network. The example uses the AWS CLI to instantiate AWS resources.

Apache Kafka - Docker

We send end-to-end encrypted messages through Apache Kafka.

Redpanda - Self Hosted

Send end-to-end encrypted messages through Redpanda.

Confluent Cloud

Send end-to-end encrypted messages through Confluent Cloud.

Instaclustr - Cloud

Send end-to-end encrypted messages through Instaclustr.


Warpstream Cloud

Send end-to-end encrypted messages through Warpstream Cloud.

curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll
curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll
ockam tcp-outlet create --to 5432
ockam relay create postgres
curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll
ockam tcp-inlet create --from 15432 --via postgres
psql --host localhost --port 15432

We send end-to-end encrypted messages through Apache Kafka.

We send end-to-end encrypted messages through Redpanda.

We send end-to-end encrypted messages through Instaclustr.

We send end-to-end encrypted messages through Confluent.

We send end-to-end encrypted messages through Aiven.

We send end-to-end encrypted messages through Warpstream.

Protocols

Cryptographic and Messaging Protocols that provide the foundation for end-to-end application layer trust in data.

Ockam is composed of a collection of cryptographic and messaging protocols. These protocols make it possible to create private, secure-by-design applications that provide end-to-end application layer trust in data. The following pages explain, in detail, how each of the protocols works:

  • Nodes and Workers

  • Routing and Transports

  • Keys and Vaults

  • Identities and Credentials

  • Secure Channels

  • Access Controls and Policies

Ockam has been audited and verified by experts

In October of 2023, a team of security and cryptography experts, from Trail of Bits, conducted an extensive review of Ockam’s protocols. Trail of Bits is renowned for their comprehensive third-party audits of the security of many other critical projects, including Kubernetes and the Linux kernel.

The auditors from Trail of Bits conducted in-depth manual analysis and formal modeling of the security properties of Ockam’s protocols. After this review was complete, they highlighted:

Ockam’s protocols use robust cryptographic primitives according to industry best practices. None of the identified issues pose an immediate risk to the confidentiality and integrity of data handled by the system in the context of the two in-scope use cases. The majority of identified issues relate to information that should be added to the design documentation, such as threat model details and increased specification for certain aspects.

— Trail of Bits

Download the unabridged audit by Trail of Bits:

Ockam - Design Review - Comprehensive Report.pdf (4MB)

Cloud

In this hands-on example we send end-to-end encrypted messages through Aiven Cloud.

Ockam encrypts messages from a Producer all the way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Aiven Cloud or the network where it is hosted. The operators of Aiven Cloud can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Aiven CLI, Bash, JQ, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then creates a new Kafka cluster using the Aiven CLI.

  • An Ockam relay is then started; it transmits end-to-end encrypted Kafka messages over a secure portal.

  • We then generate two new enrollment tickets, each valid for 10 minutes and redeemable only once. The two tickets are meant for the Consumer and Producer Ockam nodes that will run in the Application Team’s network.

  • In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It creates a Kafka relay using a pre-baked Ockam Kafka addon, which will host the Aiven Kafka server, and provisions the Consumer and Producer nodes, passing them their tickets using environment variables.

  • For the Application Team, the run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create the Application Team’s network.
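The enrollment-ticket behavior described above (a 10-minute expiry, single-use redemption, attributes attested on redemption) can be sketched in a few lines. This is a conceptual illustration only, not Ockam’s implementation; the class and attribute names are made up.

```python
# Conceptual sketch of enrollment-ticket semantics: valid for a limited
# time, redeemable once, and carrying the attributes that the project
# authority will attest to for whichever node redeems it.
import time
import secrets

class EnrollmentTicket:
    def __init__(self, attributes, ttl_seconds=600, usage_count=1):
        self.token = secrets.token_hex(16)   # one-time secret
        self.attributes = attributes          # e.g. {"consumer": "true"}
        self.expires_at = time.time() + ttl_seconds
        self.remaining_uses = usage_count

    def redeem(self):
        """Return the attested attributes, or raise if expired or used up."""
        if time.time() > self.expires_at:
            raise ValueError("ticket expired")
        if self.remaining_uses <= 0:
            raise ValueError("ticket already redeemed")
        self.remaining_uses -= 1
        return dict(self.attributes)

ticket = EnrollmentTicket({"consumer": "true"})
assert ticket.redeem() == {"consumer": "true"}   # first redemption works
try:
    ticket.redeem()                               # second redemption fails
except ValueError as e:
    print(e)                                      # → ticket already redeemed
```

In the real flow, redeeming a ticket is what turns a fresh identity into a project member holding a membership credential with these attributes.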

Application Team

  • Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.

  • The Kafka consumer node container is created using this dockerfile and this entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via an environment variable.

  • When the Kafka consumer node container starts in the Application Team’s network, it runs its entrypoint. The entrypoint enrolls with your project and then calls the Ockam kafka-consumer command, which starts the Kafka inlet that listens for connections on localhost port 9092 and forwards traffic through the Ockam relay.

  • Next, at the end, the entrypoint executes the command present in the docker-compose configuration, which launches a Kafka consumer waiting for messages in the demo topic. Once the messages are received, they are printed out.

  • In the producer container, the process is analogous: once the Ockam kafka-producer inlet is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.

  • You can view the Aiven website to see the encrypted messages as they are being sent by the producer.

Recap

We sent end-to-end encrypted messages through Aiven Cloud.

Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Aiven Cloud and other Consumers can only see encrypted messages.

All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.

Cleanup

To delete all containers and images:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/kafka/aiven/aiven/

# Run the example, use Ctrl-C to exit at any point.
./run.sh
# Create a dedicated and isolated virtual network for application_team.
networks:
  application_team:
    driver: bridge
./run.sh cleanup

Kubernetes

Let's connect a nodejs app in one kubernetes cluster with a postgres database in another private kubernetes cluster.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, Kind, and Kubectl. Please set up these tools for your operating system, then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/databases/postgres/kubernetes

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s kubernetes cluster. The second ticket is for the Ockam node that will run in Analysis Corp.’s kubernetes cluster.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It uses kubernetes secrets to give tickets to Ockam nodes that are being provisioned in Bank Corp.’s and Analysis Corp.’s kubernetes clusters.

  • The run function takes the enrollment tickets, sets them as kubernetes secrets, and uses kind with kubectl to create Bank Corp.’s and Analysis Corp.’s kubernetes clusters.

Bank Corp

  • Bank Corp.’s kubernetes manifest defines a pod and containers to run in Bank Corp’s isolated kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images and calls kubectl apply to start the pod and its containers.

  • One of the containers defined in Bank Corp.’s kubernetes manifest runs a PostgreSQL database and makes it available on localhost:5432 inside its pod.

  • Another container defined inside that same pod runs an Ockam node as a companion to the postgres container. The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.

  • When the Ockam node container starts in the Bank Corp cluster, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-outlet=true. The run function assigned this attribute to the enrollment ticket.

  • The entrypoint script then creates a node that uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay address: postgres. The run function gave the enrollment ticket permission to use this relay address.

  • Next, the entrypoint sets an access control policy that only allows project members that possess a credential with attribute postgres-inlet="true" to connect to tcp portal outlets on this node. It then creates a tcp portal outlet to postgres at localhost:5432.
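The policy step above amounts to an attribute check: admit a peer only if it presents a valid project membership credential whose attributes satisfy the policy. A minimal sketch, assuming a simplified credential represented as a dictionary (this is not Ockam’s actual policy engine):

```python
# Conceptual sketch of the attribute-based policy used above: the tcp
# portal outlet only admits project members whose credential attests
# to the attribute postgres-inlet="true".

def is_authorized(credential: dict, required: dict) -> bool:
    """A peer is authorized only if its credential is a valid membership
    credential and carries every required attribute with the required value."""
    if not credential.get("valid_membership"):
        return False
    attrs = credential.get("attributes", {})
    return all(attrs.get(k) == v for k, v in required.items())

policy = {"postgres-inlet": "true"}

analysis_corp = {"valid_membership": True,
                 "attributes": {"postgres-inlet": "true"}}
stranger = {"valid_membership": False, "attributes": {}}
wrong_role = {"valid_membership": True,
              "attributes": {"postgres-outlet": "true"}}

assert is_authorized(analysis_corp, policy) is True
assert is_authorized(stranger, policy) is False     # no membership credential
assert is_authorized(wrong_role, policy) is False   # wrong attribute
```

In Ockam, the credential itself is cryptographically issued by the project authority and verified inside the secure channel, so the attributes cannot be forged by the peer.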

Analysis Corp

  • Analysis Corp.’s kubernetes manifest defines a pod and containers to run in Analysis Corp.’s isolated kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images and calls kubectl apply to start the pod and its containers. The manifest defines a pod with two containers: an Ockam node container and an app container.

  • The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.

  • When the Ockam node container starts in the Analysis Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-inlet=true. The run function assigned this attribute to the enrollment ticket.

  • The entrypoint script then creates a node that uses this identity and membership credential. It then sets an access control policy that only allows project members that possess a credential with attribute postgres-outlet="true" to connect to tcp portal inlets on this node.

  • Next, the entrypoint creates a tcp portal inlet that makes the remote postgres available on all localhost IPs at 0.0.0.0:15432. This makes postgres available at localhost:15432 within Analysis Corp’s pod that also has the app container.

  • The app container is created using this dockerfile which runs this app.js file on startup. The app.js file is a nodejs app that connects with postgres on localhost:15432, then creates a table in the database, inserts some data into the table, queries it back, and prints it.
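To make the inlet/outlet mechanics concrete, here is a plain-socket sketch of the portal idea: the app only ever talks to a local inlet address, and bytes are piped through to the target service. This is illustration only; the real Ockam portal carries this stream over an end-to-end encrypted, mutually authenticated channel between the two nodes.

```python
# Conceptual sketch of the inlet/outlet idea with plain sockets: a local
# listener accepts a connection and pipes bytes to the target service.
# In Ockam, the inlet and outlet live on different nodes and the stream
# travels through an end-to-end encrypted channel; here everything is on
# localhost purely for illustration.
import socket
import threading

def serve_echo(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # the "service": echo one message back

def forward(inlet_sock, target_addr):
    conn, _ = inlet_sock.accept()       # the app connects to the inlet...
    with conn, socket.create_connection(target_addr) as upstream:
        upstream.sendall(conn.recv(1024))   # ...inlet pipes bytes to the service
        conn.sendall(upstream.recv(1024))   # ...and the reply back to the app

# "Service" (a stand-in for postgres) on an ephemeral port.
service = socket.socket(); service.bind(("127.0.0.1", 0)); service.listen(1)
# "Inlet" on another ephemeral port, forwarding to the service.
inlet = socket.socket(); inlet.bind(("127.0.0.1", 0)); inlet.listen(1)

threading.Thread(target=serve_echo, args=(service,)).start()
threading.Thread(target=forward, args=(inlet, service.getsockname())).start()

# The "app" only ever talks to the local inlet address.
with socket.create_connection(inlet.getsockname()) as app:
    app.sendall(b"SELECT 1;")
    print(app.recv(1024))               # → b'SELECT 1;'
```

The design point: the app needs no Ockam-specific changes, because the inlet looks like the database itself, just on a local port.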

Recap

We connected a nodejs app in one kubernetes cluster with a postgres database in another kubernetes cluster over an end-to-end encrypted portal.

Sensitive business data in the postgres database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with postgres can be easily revoked.

Analysis Corp. does not get unfettered access to Bank Corp.’s cluster. It gets access only to run queries on the postgres server. Bank Corp. does not get unfettered access to Analysis Corp.’s cluster. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their kubernetes clusters are completely closed and protected from any attacks from the Internet.
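The NAT-traversal point is worth making concrete: neither side listens publicly; both make outgoing connections to a relay, which splices the two streams together. A toy plain-socket sketch of that rendezvous idea (not Ockam’s relay protocol, and with none of its encryption or authentication):

```python
# Toy sketch of relay-based NAT traversal: both peers dial OUT to a
# relay; the relay splices their streams together. Neither peer ever
# listens for inbound connections from the Internet.
import socket
import threading

def relay(listener):
    a, _ = listener.accept()        # first outgoing connection (outlet side)
    b, _ = listener.accept()        # second outgoing connection (inlet side)
    with a, b:
        b.sendall(a.recv(1024))     # splice one message in each direction
        a.sendall(b.recv(1024))

listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # stands in for the relay in your project
listener.listen(2)
threading.Thread(target=relay, args=(listener,)).start()

# Both "private" peers only ever make outgoing connections.
peer_a = socket.create_connection(listener.getsockname())
peer_b = socket.create_connection(listener.getsockname())
peer_a.sendall(b"hello from A")
print(peer_b.recv(1024))            # → b'hello from A'
peer_b.sendall(b"hello from B")
print(peer_a.recv(1024))            # → b'hello from B'
peer_a.close(); peer_b.close()
```

Because both connections are outgoing, this works through firewalls and NATs; in Ockam the relay only ever sees ciphertext, since the channel is encrypted end-to-end between the two peers.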

Cleanup

To delete all containers and images:

./run.sh cleanup

Kubernetes

Let's connect a nodejs app in one private kubernetes cluster with a mongodb database in another private kubernetes cluster. The example uses kind and kubectl to create these clusters.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, Kind, and Kubectl. Please set up these tools for your operating system, then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/databases/mongodb/kubernetes

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s kubernetes cluster. The second ticket is for the Ockam node that will run in Analysis Corp.’s kubernetes cluster.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It uses kubernetes secrets to give tickets to Ockam nodes that are being provisioned in Bank Corp.’s and Analysis Corp.’s kubernetes clusters.

  • The run function takes the enrollment tickets, sets them as kubernetes secrets, and uses kind with kubectl to create Bank Corp.’s and Analysis Corp.’s kubernetes clusters.

Bank Corp

  • Bank Corp.’s kubernetes manifest defines a pod and containers to run in Bank Corp’s isolated kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images and calls kubectl apply to start the pod and its containers.

  • One of the containers defined in Bank Corp.’s kubernetes manifest runs a MongoDB database and makes it available on localhost:27017 inside its pod.

  • Another container defined inside that same pod runs an Ockam node as a companion to the mongodb container. The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.

  • When the Ockam node container starts in the Bank Corp cluster, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute mongodb-outlet=true. The run function assigned this attribute to the enrollment ticket.

  • The entrypoint script then creates a node that uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay address: mongodb. The run function gave the enrollment ticket permission to use this relay address.

  • Next, the entrypoint sets an access control policy that only allows project members that possess a credential with attribute mongodb-inlet="true" to connect to tcp portal outlets on this node. It then creates a tcp portal outlet to mongodb at localhost:27017.

Analysis Corp

  • Analysis Corp.’s kubernetes manifest defines a pod and containers to run in Analysis Corp.’s isolated kubernetes cluster. The run.sh script invokes kind to create the cluster, prepares container images and calls kubectl apply to start the pod and its containers. The manifest defines a pod with two containers: an Ockam node container and an app container.

  • The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.

  • When the Ockam node container starts in the Analysis Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute mongodb-inlet=true. The run function assigned this attribute to the enrollment ticket.

  • The entrypoint script then creates a node that uses this identity and membership credential. It then sets an access control policy that only allows project members that possess a credential with attribute mongodb-outlet="true" to connect to tcp portal inlets on this node.

  • Next, the entrypoint creates a tcp portal inlet that makes the remote mongodb available on all localhost IPs at 0.0.0.0:15432. This makes mongodb available at localhost:15432 within Analysis Corp’s pod that also has the app container.

  • The app container is created using this dockerfile which runs this app.js file on startup. The app.js file is a nodejs app that connects with mongodb on localhost:15432, then inserts some data, queries it back, and prints it.

Recap

We connected a nodejs app in one kubernetes cluster with a mongodb database in another kubernetes cluster over an end-to-end encrypted portal.

Sensitive business data in the mongodb database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with mongodb can be easily revoked.

Analysis Corp. does not get unfettered access to Bank Corp.’s cluster. It gets access only to run queries on the mongodb server. Bank Corp. does not get unfettered access to Analysis Corp.’s cluster. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their kubernetes clusters are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all containers and images:

./run.sh cleanup

Docker

Let's connect a nodejs app in one private network with a postgres database in another private network.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/databases/postgres/docker

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It uses environment variables to give tickets to and provision Ockam nodes in Bank Corp.’s and Analysis Corp.’s network.

  • The run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create Bank Corp.’s and Analysis Corp.’s networks.

Bank Corp

# Create a dedicated and isolated virtual network for bank_corp.
networks:
  bank_corp:
    driver: bridge
  • Bank Corp.’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Bank Corp.

  • In this network, docker compose starts a container with a PostgreSQL database. This container becomes available at postgres:5432 in the Bank Corp network.

  • Once the postgres container is ready, docker compose starts an Ockam node in a container as a companion to the postgres container. The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.

  • When the Ockam node container starts in the Bank Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-outlet=true. The run function assigned this attribute to the enrollment ticket.

  • The entrypoint script then creates a node that uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay address: postgres. The run function gave the enrollment ticket permission to use this relay address.

  • Next, the entrypoint sets an access control policy that only allows project members that possess a credential with attribute postgres-inlet="true" to connect to tcp portal outlets on this node. It then creates a tcp portal outlet to postgres at postgres:5432.

Analysis Corp

# Create a dedicated and isolated virtual network for analysis_corp.
networks:
  analysis_corp:
    driver: bridge
  • Analysis Corp.’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Analysis Corp. In this network, docker compose starts an Ockam node container and an app container.

  • The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.

  • When the Ockam node container starts in the Analysis Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute postgres-inlet=true. The run function assigned this attribute to the enrollment ticket.

  • The entrypoint script then creates a node that uses this identity and membership credential. It then sets an access control policy that only allows project members that possess a credential with attribute postgres-outlet="true" to connect to tcp portal inlets on this node.

  • Next, the entrypoint creates a tcp portal inlet that makes the remote postgres available on all localhost IPs at 0.0.0.0:15432. This makes postgres available at ockam:15432 within Analysis Corp’s virtual private network.

  • Once the Ockam node container is ready, docker compose starts an app container. The app container is created using this dockerfile which runs this app.js file on startup.

  • The app.js file is a nodejs app that connects with postgres on ockam:15432, then creates a table in the database, inserts some data into the table, queries it back, and prints it.

Recap

We connected a nodejs app in one virtual private network with a postgres database in another virtual private network over an end-to-end encrypted portal.

Sensitive business data in the postgres database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with postgres can be easily revoked.

Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to run queries on the postgres server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all containers and images:

./run.sh cleanup

Docker

Let's connect a nodejs app in one virtual private network with a MongoDB database in another virtual private network. The example uses docker and docker compose to create these virtual networks.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/databases/mongodb/docker

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It uses environment variables to give tickets to and provision Ockam nodes in Bank Corp.’s and Analysis Corp.’s network.

  • The run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create Bank Corp.’s and Analysis Corp.’s networks.

Bank Corp

# Create a dedicated and isolated virtual network for bank_corp.
networks:
  bank_corp:
    driver: bridge
  • Bank Corp.’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Bank Corp.

  • In this network, docker compose starts a container with a MongoDB database. This container becomes available at mongodb:27017 in the Bank Corp network.

  • Once the mongodb container is ready, docker compose starts an Ockam node in a container as a companion to the mongodb container. The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.

  • When the Ockam node container starts in the Bank Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute mongodb-outlet=true. The run function assigned this attribute to the enrollment ticket.

  • The entrypoint script then creates a node that uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay address: mongodb. The run function gave the enrollment ticket permission to use this relay address.

  • Next, the entrypoint sets an access control policy that only allows project members who possess a credential with the attribute mongodb-inlet="true" to connect to tcp portal outlets on this node. It then creates a tcp portal outlet to mongodb at mongodb:27017.
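The attribute-based check behind such a policy can be pictured with a small Python sketch. This is illustrative only, not Ockam's policy engine; the function and variable names are hypothetical:

```python
# Illustrative sketch of an attribute-based access control check
# (NOT Ockam's policy engine): the outlet admits a project member
# only if its credential attests the required attributes.
def policy_allows(credential_attributes, required_attributes):
    return all(credential_attributes.get(key) == value
               for key, value in required_attributes.items())

outlet_policy = {"mongodb-inlet": "true"}

inlet_node = {"mongodb-inlet": "true"}    # attribute attested by its credential
other_node = {"mongodb-outlet": "true"}   # wrong attribute for this policy

allowed = policy_allows(inlet_node, outlet_policy)    # True
rejected = policy_allows(other_node, outlet_policy)   # False
```

The important property is that attributes are attested by the project's credential issuer, so a node cannot simply claim `mongodb-inlet="true"` for itself.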

Analysis Corp

# Create a dedicated and isolated virtual network for analysis_corp.
networks:
  analysis_corp:
    driver: bridge
  • Analysis Corp.’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Analysis Corp. In this network, docker compose starts an Ockam node container and an app container.

  • The Ockam node container is created using this dockerfile and this entrypoint script. The enrollment ticket from run.sh is passed to the container.

  • When the Ockam node container starts in the Analysis Corp network, it runs its entrypoint. The entrypoint script creates a new identity and uses the enrollment ticket to enroll with your project and get a project membership credential that attests to the attribute mongodb-inlet=true. The run function assigned this attribute to the enrollment ticket.

  • The entrypoint script then creates a node that uses this identity and membership credential. It then sets an access control policy that only allows project members who possess a credential with the attribute mongodb-outlet="true" to connect to tcp portal inlets on this node.

  • Next, the entrypoint creates a tcp portal inlet that makes the remote mongodb available on all interfaces at 0.0.0.0:17017. This makes mongodb available at ockam:17017 within Analysis Corp’s virtual private network.

  • Once the Ockam node container is ready, docker compose starts an app container. The app container is created using this dockerfile which runs this app.js file on startup.

  • The app.js file is a nodejs app. It connects to mongodb at ockam:17017, inserts some data, queries it back, and prints it.

Recap

We connected a nodejs app in one virtual private network with a MongoDB database in another virtual private network over an end-to-end encrypted portal.

Sensitive business data in the MongoDB database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with MongoDB can be easily revoked.

Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to run queries on the MongoDB server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all containers and images:

./run.sh cleanup

Amazon RDS

Let's connect a nodejs app in one Amazon VPC with an Amazon RDS managed Postgres database in another Amazon VPC.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, and AWS CLI. Please set up these tools for your operating system. In particular you need to login to your AWS account with aws sso login.

Then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/databases/postgres/amazon_rds/aws_cli

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.

  • The run function passes the enrollment tickets as variables to the run scripts that provision Bank Corp.'s network and Analysis Corp.'s network.

Bank Corp

First, the bank_corp/run.sh script creates a network to host the database:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create two subnets, located in two distinct availability zones, and associate them with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one ingress to Postgres from within our two subnets.

Then, the bank_corp/run.sh script creates a RDS database:

  • This requires a subnet group.

  • Once the subnet group is created, we create a database cluster and a database instance.

  • Finally the address of the database is saved in an environment variable.

We are now ready to create an EC2 instance where the Ockam outlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

    • POSTGRES_ADDRESS is replaced by the database address that we previously saved.

  • We tag the created instance and wait for it to be available.

When the instance is started, the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP outlet.

    • A policy associated to the outlet. The policy authorizes identities with a credential containing the attribute postgres-inlet="true".

    • With a relay capable of forwarding the TCP traffic to the TCP outlet.

Analysis Corp

First, the analysis_corp/run.sh script creates a network to host the nodejs application:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to download and install the nodejs application.

We are now ready to create an EC2 instance where the Ockam inlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

The instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP inlet.

    • A policy associated to the inlet. The policy authorizes identities with a credential containing the attribute postgres-outlet="true".

We finally wait for the instance to be ready and install the nodejs application:

  • The app.js file is copied to the instance (this uses the previously created key.pem file for authentication).

  • We can then SSH to the instance and:

    • Install nodejs.

    • Install the Postgres client library.

    • Start the nodejs application.

Once the nodejs application is started:

  • It will connect to the Ockam inlet at port 12345.

  • It creates a database table and runs some SQL queries to check that the connection with the Postgres database works.

Recap

We connected a nodejs app in one virtual private network with a postgres database in another virtual private network over an end-to-end encrypted portal.

Sensitive business data in the postgres database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with postgres can be easily revoked.

Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to run queries on the postgres server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all AWS resources:

./run.sh cleanup

Cloud

In this hands-on example we send end-to-end encrypted messages through Confluent Cloud.

Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Confluent Cloud or the network where it is hosted. The operators of Confluent Cloud can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Confluent CLI, JQ, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, login to Confluent using your Confluent CLI so that clusters can be created and deleted, then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/kafka/confluent/

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then creates a new Kafka cluster using the Confluent CLI.

  • An Ockam relay is then started using the ockam confluent addon which creates an encrypted relay that transmits Kafka messages over a secure portal.

  • We then generate two new enrollment tickets, each valid for 10 minutes and redeemable only once. The two tickets are meant for the Consumer and the Producer in the Ockam nodes that will run in Application Team’s network.

  • In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It creates a Kafka relay using a pre-baked Ockam confluent addon that fronts the Confluent Kafka server, and provisions Application Team’s network, passing the nodes their tickets using environment variables.

  • For the Application team, the run function takes the enrollment tickets, sets them as the values of environment variables, passes the Confluent authentication variables, and invokes docker-compose to create the Application Team’s network.

Application Teams

# Create a dedicated and isolated virtual network for application_team.
networks:
  application_team:
      driver: bridge
  • Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.

  • The Kafka consumer node container is created using this dockerfile and this entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via environment variable.

  • When the Kafka consumer node container starts in the Application Team’s network, it runs its entrypoint. The entrypoint enrolls with your project and then calls the Ockam kafka-consumer command, which starts a Kafka inlet that listens for connections on localhost port 9092 and forwards traffic through the Ockam relay.

  • Finally, the entrypoint executes the command in the docker-compose configuration, which launches a Kafka consumer waiting for messages on the demo topic. Once messages are received, they are printed out.

  • In the producer container, the process is analogous: once the Ockam kafka-producer inlet is set up, the command in the docker-compose configuration launches a Kafka producer that sends messages.

  • You can view the Confluent website to see the encrypted messages as they are being sent by the producer.

Recap

We sent end-to-end encrypted messages through Confluent Cloud.

Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Confluent Cloud and other Consumers can only see encrypted messages.

All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.
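The core guarantee can be pictured with a deliberately toy Python model, where XOR with a shared secret stands in for real encryption. This is illustrative only; Ockam actually uses authenticated key agreement and modern AEAD ciphers, not XOR:

```python
# Deliberately toy model (NOT real cryptography): XOR with a shared
# secret stands in for encryption, to show that the broker between the
# producer and consumer only ever sees ciphertext.
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

shared_secret = b"known-only-to-producer-and-consumer"

plaintext = b"order:42"
ciphertext = xor_cipher(plaintext, shared_secret)   # what the broker stores
decrypted = xor_cipher(ciphertext, shared_secret)   # only a key holder can do this
```

Whatever the cipher, the property is the same: the operator in the middle relays and stores bytes it cannot read.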

Cleanup

To delete all containers and images:

./run.sh cleanup

Cloud

In this hands-on example we send end-to-end encrypted messages through Warpstream Cloud.

Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Warpstream Cloud or the network where it is hosted. The operators of Warpstream Cloud can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system. You also need to pass your Warpstream application key as an argument when running the example, as follows:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/kafka/warpstream/

# Run the example by calling the run.sh script and passing your warpstream application key as an argument, use Ctrl-C to exit at any point.
./run.sh _warpstream_application_key_

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then creates a new Kafka cluster by using your Warpstream's application key.

  • An Ockam relay is then started using the Ockam Kafka addon which creates an encrypted relay that transmits Kafka messages over a secure portal.

  • We then generate two new enrollment tickets, each valid for 10 minutes and redeemable only once. The two tickets are meant for the Consumer and the Producer in the Ockam nodes that will run in Application Team’s network.

  • In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It creates a Kafka relay using a pre-baked Ockam Kafka addon that fronts the Warpstream Kafka server, and provisions Application Team’s network, passing the nodes their tickets using environment variables.

  • For the Application team, the run function takes the enrollment tickets, sets them as the values of environment variables, passes the Warpstream authentication variables, and invokes docker-compose to create the Application Team’s network.

Application Teams

# Create a dedicated and isolated virtual network for application_team.
networks:
  application_team:
      driver: bridge
  • Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.

  • The Kafka consumer node container is created using this dockerfile and this entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via environment variable.

  • When the Kafka consumer node container starts in the Application Team’s network, it runs its entrypoint. The entrypoint enrolls with your project and then calls the Ockam kafka-consumer command, which starts a Kafka inlet that listens for connections on localhost port 9092 and forwards traffic through the Ockam relay.

  • Finally, the entrypoint executes the command in the docker-compose configuration, which launches a Kafka consumer waiting for messages on the demo topic. Once messages are received, they are printed out.

  • In the producer container, the process is analogous: once the Ockam kafka-producer inlet is set up, the command in the docker-compose configuration launches a Kafka producer that sends messages.

Recap

We sent end-to-end encrypted messages through Warpstream Cloud.

Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Warpstream Cloud and other Consumers can only see encrypted messages.

All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.

Cleanup

To delete all containers and images:

./run.sh cleanup _warpstream_application_key_

Relays and Portals

Ockam Relays make it easy to traverse NATs and run end-to-end protocols between Ockam Nodes in far away private networks. Ockam Portals make existing protocols work over Ockam Routing.

In the previous section, we learned how Ockam Routing and Transports create a foundation for end-to-end application layer protocols. When discussing Transports, we put together a specific example communication topology – a transport bridge.

Bridges

Node n1 wishes to access a service on node n3, but it can't directly connect to n3. This can happen for many reasons, maybe because n3 is in a separate IP subnet, or it could be that the communication from n1 to n2 uses UDP while from n2 to n3 uses TCP or other similar constraints. The topology makes n2 a bridge or gateway between these two separate networks to enable end-to-end protocols between n1 and n3 even though they are not directly connected.
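The bridge topology above can be pictured with a toy model of hop-by-hop routing. This is illustrative only, not the Ockam wire protocol: each hop pops its own address off the onward route and pushes it onto the return route, which is how a reply can retrace the path.

```python
# Toy model of hop-by-hop routing (NOT the Ockam wire protocol):
# each hop pops its own address from the onward route and prepends
# itself to the return route, so replies can retrace the path.
def hop(message):
    onward, return_route, payload = message
    current = onward[0]
    return (onward[1:], [current] + return_route, payload)

# n1 sends to a service on n3 through the bridge n2.
message = (["n2", "n3", "uppercase"], ["n1"], "hello")
while len(message[0]) > 1:      # forward until only the target remains
    message = hop(message)

onward, return_route, payload = message
# onward == ["uppercase"], return_route == ["n3", "n2", "n1"]
```

Because the route is just application-layer data, it does not matter whether a given hop is reached over TCP, UDP, or any other transport.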

Relays

It is common, however, to encounter communication topologies where the machine that provides a service is unwilling or is not allowed to open a listening port or expose a bridge node to other networks. This is a common security best practice in enterprise environments, home networks, OT networks, and VPCs across clouds. Application developers may not have control over these choices from the infrastructure / operations layer. This is where relays are useful.

Relays make it possible to establish end-to-end protocols with services operating in a remote private network, without requiring a remote service to expose listening ports to an outside hostile network like the Internet.

Delete any existing nodes and then try this new example:

» ockam node create n2 --tcp-listener-address=127.0.0.1:7000

» ockam node create n3
» ockam service start hop --at n3
» ockam relay create n3 --at /node/n2 --to /node/n3
     ✔︎ Now relaying messages from /node/n2/service/25716d6f86340c3f594e99dede6232df → /node/n3/service/forward_to_n3

» ockam node create n1
» ockam tcp-connection create --from n1 --to 127.0.0.1:7000
» ockam message send hello --from n1 --to /worker/603b62d245c9119d584ba3d874eb8108/service/forward_to_n3/service/uppercase
HELLO

In this example, the direction of the second TCP connection is reversed in comparison to our first example that used a bridge. n2 is the only node that has to listen for TCP connections.

Node n2 is running a relay service. n3 makes an outgoing TCP connection to n2 and requests a forwarding address from the relay service. n3 then becomes reachable via n2 at the address /service/forward_to_n3.

Node n1 connects with n2 and routes messages to n3 via its forwarding relay.

The message in the above example took the following route. This is very similar to our earlier example except for the direction of the second TCP connection. The relay worker remembers the route back to n3. n1 just has to get the message to the forwarding relay and everything works.
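The relay mechanics can be sketched in Python. This is a toy model, not Ockam's relay service, and the names are hypothetical: the private node dials out and registers, and the relay stores the route back to it under a public forwarding address.

```python
# Toy model of a forwarding relay (NOT Ockam's relay service): the
# private node dials out and registers; the relay remembers the route
# back to it, keyed by a public forwarding address.
class RelayService:
    def __init__(self):
        self.return_routes = {}

    def register(self, name, return_route):
        # Called when a node like n3 makes an *outgoing* connection.
        address = f"forward_to_{name}"
        self.return_routes[address] = return_route
        return address

    def forward(self, address, payload):
        # Messages to the forwarding address travel back along the stored route.
        return self.return_routes[address], payload

relay = RelayService()
address = relay.register("n3", ["tcp_connection_to_n3"])
route_back, payload = relay.forward(address, "hello")
```

Since the stored route rides on a connection that n3 opened outward, n3 never needs a listening port of its own.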

Using this simple topology rearrangement, Ockam Routing makes it possible to establish end-to-end protocols between applications that are running in completely private networks.

We can traverse NATs and pierce through network boundaries. And since this is all built using a very simple application layer routing protocol, we can have any number of transport connection hops in any transport protocol, and we can mix-match bridges with relays to create end-to-end protocols in any communication topology.

Portals

Portals make existing protocols work over Ockam Routing without changing any code in the existing applications.

Continuing from our Relays example, create a Python-based web server to represent a sample web service. This web service is listening on 127.0.0.1:9000.

» python3 -m http.server --bind 127.0.0.1 9000

» ockam tcp-outlet create --at n3 --from /service/outlet --to 127.0.0.1:9000
» ockam tcp-inlet create --at n1 --from 127.0.0.1:6000 \
    --to /worker/603b62d245c9119d584ba3d874eb8108/service/forward_to_n3/service/hop/service/outlet

» curl --head 127.0.0.1:6000
HTTP/1.0 200 OK
...

Then create a TCP Portal Outlet that makes 127.0.0.1:9000 available on worker address /service/outlet on n3. We already have a forwarding relay for n3 on n2 at service/forward_to_n3.

We then create a TCP Portal Inlet on n1 that will listen for TCP connections to 127.0.0.1:6000. For every new connection, the inlet creates a portal following the --to route all the way to the outlet. As it receives TCP data, it chunks and wraps them into Ockam Routing messages and sends them along the supplied route. The outlet receives Ockam Routing messages, unwraps them to extract TCP data and sends that data along to the target web service on 127.0.0.1:9000. It all just seamlessly works.
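The inlet/outlet data flow can be pictured with a toy sketch. This is illustrative only, not Ockam's portal implementation: the inlet chunks the TCP byte stream into routed messages, and the outlet unwraps and reassembles them.

```python
# Toy sketch of a portal (NOT Ockam's implementation): the inlet chunks
# TCP data into routing messages; the outlet unwraps them and replays
# the original byte stream to the target service.
CHUNK_SIZE = 4  # unrealistically small, so the example produces several messages

def inlet_wrap(stream: bytes, route):
    return [(route, stream[i:i + CHUNK_SIZE])
            for i in range(0, len(stream), CHUNK_SIZE)]

def outlet_unwrap(messages):
    return b"".join(payload for _route, payload in messages)

route = ["n2", "forward_to_n3", "outlet"]
messages = inlet_wrap(b"HEAD / HTTP/1.0\r\n\r\n", route)
restored = outlet_unwrap(messages)   # identical to the original stream
```

The application on either end only ever sees ordinary TCP, which is why no code changes are needed.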

The HTTP requests from curl enter the inlet on n1, travel to n2, and are relayed back to n3 via its forwarding relay to reach the outlet and onward to the Python-based web service. Responses take the same return route back to curl.

The TCP Inlet/Outlet work for a large number of TCP-based protocols like HTTP. It is also simple to implement portals for other transport protocols. There is a growing base of Ockam Portal Add-Ons in our GitHub Repository.

Recap

To clean up and delete all nodes, run: ockam node delete --all

Ockam Routing and Transports combined with the ability to model Bridges and Relays make it possible to create end-to-end, application layer protocols in any communication topology - across networks, clouds, and boundaries.

Portals take this powerful capability a huge step forward by making it possible to apply these end-to-end protocols and their guarantees to existing applications, without changing any code!

This lays the foundation to make both new and existing applications - end-to-end encrypted and secure-by-design.

If you're stuck or have questions at any point, please reach out to us.

Next, let's learn how to create cryptographic identities and store secret keys in safe vaults.

Amazon EC2

Let's connect a nodejs app in one AWS VPC with a MongoDB database that resides in another AWS VPC. The example uses docker and docker compose to create these virtual networks.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, and AWS CLI. Please set up these tools for your operating system, then run the following commands:

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.

  • In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It passes one enrollment ticket as a variable to provision the Ockam node in Bank Corp.’s network, and the other as a variable to provision the Ockam node in Analysis Corp.’s network.

  • For Bank Corp, the run function calls a script which creates an Amazon VPC that hosts our MongoDB instance in a closed network managed by Bank Corp.

  • For Analysis Corp, we also call a script which runs a nodejs app that writes to the MongoDB database hosted by Bank Corp.

Bank Corp

First, the bank_corp/run.sh script creates a network to host the database:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet and associate it with the route table.

  • We finally create a security group so that there is:

    • ,

    • No Outbound connection is allowed for this VPC

Then, the bank_corp/run.sh script creates an EC2 instance where the Ockam outlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh, where the ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

  • We tag the created instance and wait for it to be available.

Next, the instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP outlet.

    • A policy associated to the outlet. The policy authorizes identities with a credential containing the attribute monitoring-api-inlet="true".

    • With a relay capable of forwarding the TCP traffic to the TCP outlet.

Analysis Corp

First, the analysis_corp/run.sh script creates a network to host the nodejs application:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to download and install the nodejs application.

Then, we create an EC2 instance where the Ockam inlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh, where the ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

Next, the instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP inlet.

    • A policy associated to the inlet. The policy authorizes identities with a credential containing the attribute monitoring-api-outlet="true".

Finally, we wait for the instance to be ready and run the nodejs application:

  • The app.js file is copied to the instance (this uses the previously created key.pem file for authentication).

  • We can then SSH to the instance and:

    • Install nodejs.

    • Start the nodejs application.

Recap

We connected a nodejs app in one AWS VPC with a MongoDB database in another AWS VPC over an end-to-end encrypted portal.

The MongoDB database is only accessible to enrolled members of the project with the appropriate attributes. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to the database can be easily revoked.

Analysis Corp. does not get unfettered access to Bank Corp.’s network. It only gets access to run queries on the MongoDB server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to requests over a tcp connection. Bank Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints to the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all AWS resources created by this example:

./run.sh cleanup

Self Hosted

In this hands-on example we send end-to-end encrypted messages through Redpanda.

Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Redpanda or the network where it is hosted. The operators of Redpanda can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates three new enrollment tickets, each valid for 10 minutes and redeemable only once. The first ticket is meant for the Ockam node that will run in Redpanda Operator’s network. The second and third tickets are meant for the Consumer and Producer, in the Ockam node that will run in Application Team’s network.

  • In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It provisions Ockam nodes in Redpanda Operator’s network and Application Team’s network, passing them their tickets using environment variables.

  • The run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create Redpanda Operator’s and Application Team’s networks.

Redpanda Operator

  • Redpanda Operator’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Redpanda Operator.

  • In this network, docker compose starts a container with a Redpanda event store. This container becomes available at redpanda:9092 in the Redpanda Operator network.

  • In the same network, docker compose also starts a Redpanda console, connecting directly to redpanda:9092. The console will be reachable throughout the example at http://127.0.0.1:8080.

  • Once the Redpanda container is ready, docker compose starts an Ockam node in a container as a companion to the Redpanda container. The node, described by ockam.yaml embedded in the script, will automatically create an identity, enroll with your project using the ticket passed to the container, and set up a Kafka outlet.

  • The Ockam node then uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay: redpanda. The run function gave the enrollment ticket permission to use this relay address.

Application Team

  • Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.

  • The Kafka consumer node container is created using this dockerfile and this entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via environment variable.

  • When the Kafka consumer node container starts in the Application Team's network, it runs its entrypoint. The entrypoint creates the Ockam node described by ockam.yaml, embedded in the script. The node will automatically create an identity, enroll with your project, and set up a Kafka inlet.

  • Next, at the end, the entrypoint executes the command present in the docker-compose configuration, which launches a Kafka consumer waiting for messages in the demo topic. Once the messages are received, they are printed out.

  • In the producer container, the process is analogous: once the Ockam node is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.

  • You can view the Redpanda console, available at http://127.0.0.1:8080, to see the encrypted messages.

Recap

We sent end-to-end encrypted messages through Redpanda.

Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Redpanda and other Consumers can only see encrypted messages.

All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.

Cleanup

To delete all containers and images:

./run.sh cleanup

Verifiable Credentials

Scale mutual trust using lightweight, short-lived, revocable, attribute-based credentials.

Credentials

An Ockam Credential is a signed attestation by an Issuer about the Attributes of a Subject. The Issuer and Subject are both Ockam Identities. Attributes is a list of name and value pairs.
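The shape of such an attestation can be sketched as follows. This is a toy illustration using an HMAC in place of Ockam's actual public-key signature scheme; the function names and fields are illustrative, not Ockam's API.

```python
import hashlib
import hmac
import json

def issue_credential(issuer_key: bytes, subject: str, attributes: dict) -> dict:
    # Canonically serialize what the issuer is attesting to.
    payload = json.dumps({"subject": subject, "attributes": attributes}, sort_keys=True)
    signature = hmac.new(issuer_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"subject": subject, "attributes": attributes, "signature": signature}

def verify_credential(issuer_key: bytes, credential: dict) -> bool:
    payload = json.dumps(
        {"subject": credential["subject"], "attributes": credential["attributes"]},
        sort_keys=True,
    )
    expected = hmac.new(issuer_key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

key = b"issuer-secret"
cred = issue_credential(key, "subject-identifier", {"location": "Chicago"})
assert verify_credential(key, cred)

# Tampering with the attributes invalidates the signature.
cred["attributes"]["location"] = "Boston"
assert not verify_credential(key, cred)
```

The point of the sketch is only that the signature covers both the subject and the attributes, so neither can be changed without detection.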

Issuing Credentials

Any Ockam Identity can issue credentials about another Ockam Identity.

The Issuer can include specific attributes in the attestation:

Verifying Credentials

Storing Credentials

Trust Anchors

Trust and authorization decisions must be anchored in some pre-existing knowledge.

Anchoring Trust in an Access Control List (ACL) of Identifiers

In the previous section about Ockam Secure Channels we ran an example of mutual authorization using pre-existing knowledge of Ockam Identifiers. In this example n1 knows i2 and n2 knows i1:

Anchoring Trust in a Credential Issuer

Managed Authorities

» ockam identity create a
     ✔︎ Identity P8b604a07640ecd944f379b5a1a5da0748f36f76327b00193067d1d8c6092dfae
       created successfully as a

» ockam identity create b
     ✔︎ Identity P5c14d09f32dd27255913d748d276dcf6952b7be5d0be4023e5f40787b53274ae
       created successfully as b

» ockam credential issue --as a --for $(ockam identity show b)
Subject:    P5c14d09f32dd27255913d748d276dcf6952b7be5d0be4023e5f40787b53274ae
Issuer:     P8b604a07640ecd944f379b5a1a5da0748f36f76327b00193067d1d8c6092dfae
Created:    2023-04-06T17:05:36Z
Expires:    2023-05-06T17:05:36Z
Attributes: {}
Signature:  6feeb038f0cdc28a16fbe3ed4f69feee5ccce3d2a6ac8be83e76180e7bbd3c6e0adbe37ed73c75bb3c283807ec63aeda42dd79afd3813d4658222078cad12705
» ockam credential issue --as a --for $(ockam identity show b) \
    --attribute location=Chicago --attribute department=Operations
Subject:    P5c14d09f32dd27255913d748d276dcf6952b7be5d0be4023e5f40787b53274ae
Issuer:     P8b604a07640ecd944f379b5a1a5da0748f36f76327b00193067d1d8c6092dfae (OCKAM_RK)
Created:    2023-04-06T17:26:40Z
Expires:    2023-05-06T17:26:40Z
Attributes: {"department": "Operations", "location": "Chicago"}
Signature:  b235429f8dc7be2e79bca0b8f59bdb6676b06f608408085097e7fb5a2029de0d27d6352becaecd0a5488e0bf56c5e5031613c2af2e6713b03b57e08340d99002
» ockam reset -y

» ockam identity create a
» ockam identity create b

» ockam credential issue --as a --for $(ockam identity show b) \
    --encoding hex > b.credential

» ockam credential verify --issuer $(ockam identity show a) \
    --credential-path b.credential
✔︎ Credential is valid
» ockam credential store c1 --issuer $(ockam identity show a --full --encoding hex) \
    --credential-path b.credential
✔︎ Credential c1 stored
» ockam reset -y

» ockam identity create i1
» ockam identity show i1 > i1.identifier
» ockam node create n1 --identity i1

» ockam identity create i2
» ockam identity show i2 > i2.identifier
» ockam node create n2 --identity i2

» ockam secure-channel-listener create l --at n2 \
    --identity i2 --authorized $(cat i1.identifier)

» ockam secure-channel create \
    --from n1 --to /node/n2/service/l \
    --identity i1 --authorized $(cat i2.identifier) \
      | ockam message send hello --from n1 --to -/service/uppercase
HELLO
» ockam reset -y

» ockam identity create authority
» ockam identity show authority > authority.identifier
» ockam identity show authority --full --encoding hex > authority

» ockam identity create i1
» ockam identity show i1 > i1
» ockam credential issue --as authority \
    --for $(cat i1) --attribute city="New York" \
    --encoding hex > i1.credential
» ockam credential store c1 --issuer $(cat authority) --credential-path i1.credential
» ockam trust-context create tc --credential c1 --authority-identity $(cat authority)

» ockam identity create i2
» ockam identity show i2 > i2
» ockam credential issue --as authority \
	--for $(cat i2) --attribute city="San Francisco" \
	--encoding hex > i2.credential
» ockam credential store c2 --issuer $(cat authority) --credential-path i2.credential

» ockam node create n1 --identity i1 --authority-identity $(cat authority) --trust-context tc
» ockam node create n2 --identity i2 --authority-identity $(cat authority) --credential c2

» ockam secure-channel create --from n1 --to /node/n2/service/api --credential c1 --identity i1 \
    | ockam message send hello --from n1 --to -/service/uppercase
» ockam reset -y
» ockam enroll

» ockam node create a
» ockam node create b

» ockam relay create b --at /project/default --to /node/a/service/forward_to_b

» ockam secure-channel create --from a --to /project/default/service/forward_to_b/service/api \
    | ockam message send hello --from a --to -/service/uppercase
HELLO
# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/databases/mongodb/amazon_vpc

# Run the example, use Ctrl-C to exit at any point.
./run.sh
./run.sh cleanup
# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/kafka/redpanda/docker/

# Run the example, use Ctrl-C to exit at any point.
./run.sh
# Create a dedicated and isolated virtual network for redpanda_operator.
networks:
  redpanda_operator:
    driver: bridge
# Create a dedicated and isolated virtual network for application_team.
networks:
  application_team:
      driver: bridge
./run.sh cleanup

Amazon Aurora

Let's connect a nodejs app in one Amazon VPC with an Amazon RDS managed Postgres database in another Amazon VPC.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, and the AWS CLI. Please set up these tools for your operating system. In particular, you need to log in to your AWS account with aws sso login.

Then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/databases/postgres/amazon_aurora/aws_cli

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.

  • The run function passes the enrollment tickets as environment variables to the run scripts that provision Bank Corp.'s network and Analysis Corp.'s network.
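The ticket properties described above (a short expiry window, single redemption) can be modeled in a few lines. This is an illustrative sketch of the behavior, not Ockam's implementation; the class and method names are hypothetical.

```python
import secrets
import time

class TicketAuthority:
    """Issues one-time enrollment tickets that expire after a fixed window."""

    def __init__(self):
        self._tickets = {}  # ticket value -> expiry timestamp

    def issue(self, ttl_seconds: float) -> str:
        ticket = secrets.token_hex(16)
        self._tickets[ticket] = time.monotonic() + ttl_seconds
        return ticket

    def redeem(self, ticket: str) -> bool:
        # pop() removes the ticket, so a second redemption always fails.
        expiry = self._tickets.pop(ticket, None)
        return expiry is not None and time.monotonic() <= expiry

authority = TicketAuthority()
ticket = authority.issue(ttl_seconds=600)  # valid for 10 minutes
assert authority.redeem(ticket)            # first redemption succeeds
assert not authority.redeem(ticket)        # second redemption fails
```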

Bank Corp

First, the bank_corp/run.sh script creates a network to host the database:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create two subnets, located in two distinct availability zones, and associate them with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one ingress to Postgres from within our two subnets.

Then, the bank_corp/run.sh script creates an Aurora database:

  • This requires a subnet group.

  • Once the subnet group is created, we create a database cluster and a database instance.

  • Finally the address of the database is saved in an environment variable.

We are now ready to create an EC2 instance where the Ockam outlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

    • POSTGRES_ADDRESS is replaced by the database address that we previously saved.

  • We tag the created instance and wait for it to be available.

When the instance is started, the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP outlet.

    • A policy associated to the outlet. The policy authorizes identities with a credential containing the attribute postgres-inlet="true".

    • With a relay capable of forwarding the TCP traffic to the TCP outlet.
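The policy attached to the outlet is attribute-based: it admits only identities presenting a credential with postgres-inlet="true". A policy check of that shape can be sketched as below; this is an illustrative model, not Ockam's policy engine.

```python
def policy_allows(required: dict, presented_attributes: dict) -> bool:
    """Grant access only if every required attribute is present with the right value."""
    return all(presented_attributes.get(k) == v for k, v in required.items())

# The outlet requires the attribute postgres-inlet="true".
outlet_policy = {"postgres-inlet": "true"}

# An identity whose credential carries the attribute is admitted.
assert policy_allows(outlet_policy, {"postgres-inlet": "true"})

# An identity with other attributes, or none, is rejected.
assert not policy_allows(outlet_policy, {"postgres-outlet": "true"})
assert not policy_allows(outlet_policy, {})
```

Because the attributes are attested by the project's credential issuer, the outlet never has to maintain a list of individual identifiers; it only checks attributes.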

Analysis Corp

First, the analysis_corp/run.sh script creates a network to host the nodejs application:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to download and install the nodejs application.

We are now ready to create an EC2 instance where the Ockam inlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

The instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP inlet.

    • A policy associated to the inlet. The policy authorizes identities with a credential containing the attribute postgres-outlet="true".

We finally wait for the instance to be ready and install the nodejs application:

  • The app.js file is copied to the instance (this uses the previously created key.pem file to authenticate).

  • We can then SSH to the instance and:

    • Install nodejs.

    • Install the Postgres client library.

    • Start the nodejs application.

Once the nodejs application is started:

  • It will connect to the Ockam inlet at port 12345.

  • It creates a database table and runs some SQL queries to check that the connection with the Postgres database works.

Recap

We connected a nodejs app in one virtual private network with a postgres database in another virtual private network over an end-to-end encrypted portal.

Sensitive business data in the postgres database is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with postgres can be easily revoked.

Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to run queries on the postgres server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing tcp connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all AWS resources:

./run.sh cleanup

Routing and Transports

Ockam Routing and Transports enable protocols that provide end-to-end guarantees to messages traveling across many network connection hops and protocol boundaries.

Data, within modern applications, routinely flows over complex, multi-hop, multi-protocol routes before reaching its end destination. It's common for application layer requests and data to move across network boundaries, beyond data centers, via shared or public networks, through queues and caches, from gateways and brokers to reach remote services and other distributed parts of an application.

Ockam is designed to enable end-to-end application layer guarantees in any communication topology.

For example, Ockam Secure Channels provide end-to-end guarantees of data authenticity, integrity, and privacy in any of the above communication topologies. In contrast, traditional secure communication implementations are typically tightly coupled with transport protocols in a way that all their security is limited to the length and duration of one underlying transport connection.

For example, most TLS implementations are tightly coupled with the underlying TCP connection. If your application's data and requests travel over two TCP connection hops TCP -> TCP, then all TLS guarantees break at the bridge between the two networks. This bridge, gateway or load balancer then becomes a point of weakness for application data.

To make matters worse, if you don't set up another mutually authenticated TLS connection on the second hop between the gateway and your destination server, then the entire second hop network – which may have thousands of applications and machines within it – becomes an attack vector to your application and its data. If any of these neighboring applications or machines are compromised, then your application and its data can also be easily compromised.

Traditional secure communication protocols are also unable to protect your application's data if it travels over multiple different transport protocols. They can't guarantee data authenticity or data integrity if your application's communication path is UDP -> TCP or BLE -> TCP.

Ockam Routing is a simple and lightweight message-based protocol that makes it possible to bidirectionally exchange messages over a large variety of communication topologies: TCP -> TCP or TCP -> TCP -> TCP or BLE -> UDP -> TCP or BLE -> TCP -> TCP or TCP -> Kafka -> TCP or any other topology you can imagine.

Ockam Transports adapt Ockam Routing to various transport protocols. By layering Ockam Secure Channels and other protocols over Ockam Routing, we can provide end-to-end guarantees over arbitrary transport topologies that span many networks and clouds.

Routing

Let's start by creating a node and sending a message to a service on that node.

» ockam reset -y
» ockam node create n1
» ockam message send 'Hello Ockam!' --to /node/n1/service/echo
Hello Ockam!

We get a reply back and the message flow looked like this.

To achieve this, Ockam Routing Protocol messages carry with them two metadata fields: onward_route and return_route. A route is an ordered list of addresses describing a message's path of travel. All of this information is carried in a compact binary format.

Pay very close attention to the Sender, Hop, and Replier rules in the sequence diagrams below. Note how onward_route and return_route are handled as the message travels.
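The Hop rule can be sketched in a few lines: each hop removes its own address from the head of onward_route and prepends its own address to return_route before forwarding. This is a simplified model of the protocol for intuition, not Ockam's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    onward_route: list                               # addresses still to visit
    return_route: list = field(default_factory=list) # accumulated path back to the sender
    body: str = ""

def hop_forward(message: Message, own_address: str) -> Message:
    # Hop rule: pop own address from onward_route, prepend it to return_route.
    assert message.onward_route[0] == own_address
    return Message(
        onward_route=message.onward_route[1:],
        return_route=[own_address] + message.return_route,
        body=message.body,
    )

# A message from the app, routed through two hops to the echo service.
msg = Message(onward_route=["h1", "h2", "echo"], return_route=["app"], body="hello")
msg = hop_forward(msg, "h1")
msg = hop_forward(msg, "h2")

assert msg.onward_route == ["echo"]
# The echo service replies by using the accumulated return_route as its onward_route.
assert msg.return_route == ["h2", "h1", "app"]
```

Because every hop records itself on return_route, the replier never needs global knowledge of the topology: it simply sends its reply along the route the request accumulated on the way in.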

The above was just one message hop. We can extend this to two hops:

» ockam service start hop --addr h1
» ockam message send hello --to /node/n1/service/h1/service/echo
hello

This very simple protocol can extend to any number of hops; try the following commands:

» ockam service start hop --addr h2
» ockam message send hello --to /node/n1/service/h1/service/h2/service/echo
hello

So far, we've routed messages between Workers on one Node. Next, let's see how we can route messages across nodes and machines using Ockam Routing adapters called Transports.

Transports

Ockam Transports adapt Ockam Routing to specific transport protocols, like TCP, UDP, WebSockets, Bluetooth, etc. There is a growing base of Ockam Transport implementations in the Ockam GitHub Repository.

Let's start by exploring the TCP transport. Create two new nodes, n2 and n3, and explicitly specify that they should listen on the local TCP addresses 127.0.0.1:7000 and 127.0.0.1:8000, respectively:

» ockam node create n2 --tcp-listener-address=127.0.0.1:7000
» ockam node create n3 --tcp-listener-address=127.0.0.1:8000

Next, let's create two TCP connections, one from n1 to n2 and the other from n2 to n3. Let's also add a hop for routing purposes:

» ockam service start hop --at n2
» ockam tcp-connection create --from n1 --to 127.0.0.1:7000
» ockam tcp-connection create --from n2 --to 127.0.0.1:8000

Note, from the output, that the TCP connection from n1 to n2 on n1 has worker address ac40f7edbf7aca346b5d44acf82d43ba and the TCP connection from n2 to n3 on n2 has the worker address 7d2f9587d725311311668075598e291e. We can combine this information to send a message over two TCP hops.

» ockam message send hello --from n1 --to /worker/ac40f7edbf7aca346b5d44acf82d43ba/service/hop/worker/7d2f9587d725311311668075598e291e/service/uppercase
HELLO

The message in the above command took the following route:

In this example, we ran a simple uppercase request and response protocol between n1 and n3, two nodes that weren't directly connected to each other. This simple combination of Ockam Routing and Transports is the foundation of end-to-end protocols in Ockam.

We can have any number of TCP hops along the route to the uppercase service. We can also easily have some hops that use a completely different transport protocol, like UDP or Bluetooth. Transport protocols are pluggable, and there is a growing base of Ockam Transport Add-Ons in our GitHub Repository.

Recap

To clean up and delete all nodes, run: ockam node delete --all

Ockam Routing is a simple and lightweight message-based protocol that makes it possible to bidirectionally exchange messages over a large variety of communication topologies: TCP -> TCP or TCP -> TCP -> TCP or BLE -> UDP -> TCP or BLE -> TCP -> TCP or TCP -> Kafka -> TCP or any other topology you can imagine. Ockam Transports adapt Ockam Routing to various transport protocols.

Together they give us a simple yet extremely flexible foundation to describe end-to-end, application layer protocols that can operate in any communication topology.

If you're stuck or have questions at any point, please reach out to us.

Next, let's explore how Ockam Relays and Portals make it simple to connect existing applications across networks.

Ockam Node

Create an ockam node using Cloudformation template

This guide contains instructions to launch the following within an AWS environment:

  • An Ockam Outlet Node

  • An Ockam Inlet Node

The walkthrough demonstrates running both outlet and inlet nodes and verifying communication between them.

Read: “How does Ockam work?” to learn about end-to-end trust establishment.

Create an Orchestrator Project

Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.

Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.

curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll

Completing this step creates a Project in Ockam Orchestrator.

Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.

Generate enrollment tickets

# Enrollment ticket for Ockam Outlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute example-outlet \
  --relay outlet \
    > "outlet.ticket"

# Enrollment ticket for Ockam Inlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute example-inlet \
    > "inlet.ticket"
    

Setup Ockam Outlet Node

  • Login to AWS Account you would like to use

  • Subscribe to "Ockam - Node" in AWS Marketplace

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions. Select Actions -> Launch CloudFormation stack.

  • Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch CloudFormation.

  • Create stack with below details

    • Stack name: example-outlet or any name you prefer

    • Network Configuration

      • Select suitable values for VPC ID and Subnet ID

        • Default instance type is m6a.8xlarge because of the predictable network bandwidth of 12.5 Gbps. Adjust instance type if you need to

    • Ockam Configuration

      • Enrollment ticket: Copy and paste the content of the outlet.ticket generated above

      • JSON Node Configuration: Copy and paste the below configuration.

{
    "relay": "outlet",
    "tcp-outlet": {
        "to": "localhost:7777",
        "allow": "example-inlet"
    }
}
  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run configures the Ockam outlet node on an EC2 machine.

  • EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

  • Connect to the EC2 machine via AWS Session Manager. To view the log file, run sudo cat /var/log/cloud-init-output.log.

  • View the Ockam node status in CloudWatch.

    • Navigate to CloudWatch -> Log Group and select example-outlet-ockam-status-logs. Select the Logstream for the EC2 instance.

    • The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm, example-outlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.

  • An Autoscaling group ensures at least one EC2 instance is running at all times.

Set up a webhook on the ec2 machine to validate connectivity

  • Run python3 /opt/webhook_receiver.py to start the webhook that will listen on port 7777. We will send traffic to this webhook after inlet is setup, so keep the terminal window open.

Setup Ockam Inlet Node

  • Login to AWS Account you would like to use

  • Subscribe to "Ockam - Node" in AWS Marketplace

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions. Select Actions -> Launch CloudFormation stack.

  • Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch CloudFormation.

  • Create stack with below details

    • Stack name: example-inlet or any name you prefer

    • Network Configuration

      • Select suitable values for VPC ID and Subnet ID

        • Default instance type is m6a.8xlarge because of the predictable network bandwidth of 12.5 Gbps. Adjust instance type if you need to

    • Ockam Configuration

      • Enrollment ticket: Copy and paste the content of the inlet.ticket generated above

      • JSON Node Configuration: Copy and paste the below configuration.

{
    "tcp-inlet": {
      "from": "0.0.0.0:17777",
      "via": "outlet",
      "allow": "example-outlet"
    }
}
  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run configures the Ockam inlet node on an EC2 machine.

  • EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

  • Connect to the EC2 machine via AWS Session Manager. To view the log file, run sudo cat /var/log/cloud-init-output.log.

  • View the Ockam node status in CloudWatch.

    • Navigate to Cloudwatch -> Log Group and select example-inlet-ockam-status-logs. Select the Logstream for the EC2 instance.

    • The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm, example-inlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.

  • An Autoscaling group ensures at least one EC2 instance is running at all times.

Validate Connectivity

  • Connect to the EC2 machine via AWS Session Manager.

  • Run the command below to post a request to the Inlet address. You should receive a response. Verify that the request reaches the webhook running on the Outlet machine.

curl -X POST http://localhost:17777/webhook -H "Content-Type: application/json" -d "{\"from\": \"$(hostname)\"}"

A successful setup receives a response back:

# Inlet EC2
sh-5.2$ curl -X POST http://localhost:17777/webhook -H "Content-Type: application/json" -d "{\"from\": \"$(hostname)\"}"
Webhook received

You will also see the request received in the Outlet EC2 machine

# Outlet EC2
sh-5.2$ python3 /opt/webhook_receiver.py
2024-07-24 19:56:32,984 - __main__ - INFO - Webhook server running on port 7777...
127.0.0.1 - - [24/Jul/2024 19:56:36] "POST /webhook HTTP/1.1" 200 -
2024-07-24 19:58:01,341 - __main__ - INFO - Received webhook: {"from": "REDACTED.REDACTED.compute.internal"}

You have now successfully created an Ockam Portal and verified secure communication 🎉.

Cleanup

  • Delete the example-outlet CloudFormation stack from the AWS Account.

  • Delete the example-inlet CloudFormation stack from the AWS Account.

  • Delete ockam configuration files from the machine that the administrator used to generate enrollment tickets.

ockam reset

Aiven

Create an Ockam Portal to send end-to-end encrypted messages through Aiven.

Ockam encrypts messages from a Producer all-of-the-way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Aiven or the network where it is hosted. The operators of Aiven can only see encrypted data in the network and in service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Please select an example to dig into:

Secure Channels

Create end-to-end encrypted and mutually authenticated secure channels over any transport topology.

Now that we understand the basics of Nodes, Workers, and Routing ... let's create our first encrypted secure channel.

Establishing a secure channel requires establishing a shared secret key between the two entities that wish to communicate securely. This is usually achieved using a cryptographic key agreement protocol to safely derive a shared secret without transporting it over the network.

Running such protocols requires a stateful exchange of multiple messages. Having a worker and routing system allows Ockam to hide the complexity of creating and maintaining a secure channel behind two simple functions:

  • create_secure_channel_listener(...) which waits for requests to create a secure channel.

  • create_secure_channel(...) which initiates the protocol to create a secure channel with a listener.
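The core idea of a key agreement protocol, deriving a shared secret without ever sending it over the network, can be sketched with a toy Diffie-Hellman exchange. This is illustrative only; Ockam's secure channels use a vetted, authenticated key agreement protocol, not this construction:

```rust
// Toy Diffie-Hellman over a small prime field. Illustrative only:
// Ockam's secure channels use a vetted authenticated key agreement,
// not this toy construction with tiny parameters.
fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
    let mut result = 1u64;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

fn main() {
    let (p, g) = (2_147_483_647u64, 5u64); // public parameters (demo-sized)
    let (a, b) = (123_456u64, 654_321u64); // each party's private key

    let big_a = mod_pow(g, a, p); // the initiator sends A = g^a mod p
    let big_b = mod_pow(g, b, p); // the listener sends B = g^b mod p

    // Each side derives the same shared secret; the secret itself
    // never crosses the network.
    let initiator_secret = mod_pow(big_b, a, p);
    let listener_secret = mod_pow(big_a, b, p);
    assert_eq!(initiator_secret, listener_secret);
    println!("shared secret derived: {}", initiator_secret);
}
```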

Responder node

Create a new file at:

Add the following code to this file:

Middle node

Create a new file at:

Add the following code to this file:

Initiator node

Create a new file at:

Add the following code to this file:

Run

Run the responder in a separate terminal tab and keep it running:

Run the middle node in a separate terminal tab and keep it running:

Run the initiator:

Note the message flow.

Nodes and Workers

Ockam Nodes and Workers decouple applications from the host environment and enable simple interfaces for stateful, asynchronous, and bi-directional message-based protocols.

At Ockam's core is a collection of cryptographic and messaging protocols. These protocols enable private and secure by design applications that provide end-to-end application layer trust in data.

Ockam is designed to make these protocols easy and safe to use in any application environment – from highly scalable cloud services to tiny battery operated microcontroller based devices.

Many included protocols require multiple steps and have complicated internal state that must be managed with care. Protocol steps can often be initiated by any participant so it can be quite challenging to make these protocols simple to use, secure, and platform independent.

Ockam Nodes, Workers, and Routing help hide this complexity to provide simple interfaces for stateful and asynchronous message-based protocols.

Nodes

An Ockam Node is any program that can interact with other Ockam Nodes using various Ockam protocols like Ockam Routing and Ockam Secure Channels.

A typical Ockam Node is implemented as an asynchronous execution environment that can run very lightweight, concurrent, stateful actors called Ockam Workers. Using Ockam Routing, a node can deliver messages from one worker to another local worker. Using Ockam Transports, nodes can also route messages to workers on other remote nodes.

In the following code snippet we create a node in Rust and then immediately stop it:

A node requires an asynchronous runtime to concurrently execute workers. The default Ockam Node implementation in Rust uses tokio, a popular asynchronous runtime in the Rust ecosystem. There are also Ockam Node implementations that support various no_std embedded targets.

Nodes can be implemented in any language. The only requirement is that they understand various Ockam protocols like Routing, Secure Channels, Identities, etc.

Workers

Ockam Nodes run very lightweight, concurrent, and stateful actors called Ockam Workers. They are like processes on your operating system, except that they all live inside one node and are very lightweight, so a node can have hundreds of thousands of them, depending on the capabilities of the machine hosting the node.

When a worker is started on a node, it is given one or more addresses. The node maintains a mailbox for each address and whenever a message arrives for a specific address it delivers that message to the corresponding worker. In response to a message, a worker can: make local decisions, change internal state, create more workers, or send more messages.
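The mailbox-and-address model can be sketched with plain threads and channels. This is a conceptual illustration, not the Ockam API: an envelope carries a message body and a return route, and the node delivers it to the mailbox of the worker at the target address.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Conceptual sketch only, not the Ockam API: a message carries its body
// plus a return route, mirroring how a Routed message carries a
// return_route in Ockam.
struct Envelope {
    body: String,
    return_route: Sender<String>,
}

// Start an echoer worker, route one message to it, and wait for the reply.
fn echo_via_worker(body: &str) -> String {
    // The node keeps one mailbox (channel) per worker address.
    let (echoer_tx, echoer_rx) = channel::<Envelope>();

    // The echoer worker: send each message body back on its return route.
    let worker = thread::spawn(move || {
        for msg in echoer_rx {
            msg.return_route.send(msg.body).unwrap();
        }
    });

    // The "app" worker sends to the "echoer" mailbox and awaits a reply.
    let (app_tx, app_rx) = channel::<String>();
    echoer_tx
        .send(Envelope { body: body.to_string(), return_route: app_tx })
        .unwrap();
    let reply = app_rx.recv().unwrap();

    drop(echoer_tx); // closing the mailbox lets the worker loop end
    worker.join().unwrap();
    reply
}

fn main() {
    let reply = echo_via_worker("Hello Ockam!");
    assert_eq!(reply, "Hello Ockam!");
    println!("App Received: {}", reply);
}
```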

Echoer worker

To create a worker, we create a struct that can optionally have some fields to store the worker's internal state. If the worker is stateless, it can be defined as a field-less unit struct.

This struct:

  • Must implement the ockam::Worker trait.

  • Must have the #[ockam::worker] attribute on the Worker trait implementation.

  • Must define two associated types Context and Message

    • The Context type is set to ockam::Context.

    • The Message type must be set to the type of messages the worker wishes to handle.

App worker

When a new node starts and calls an async main function, it turns that function into a worker with the address "app". This makes it easy to send and receive messages from the main function (i.e., the "app" worker).

In the code below, we start a new Echoer worker at address "echoer", send this "echoer" a message "Hello Ockam!" and then wait to receive a String reply back from the "echoer".

Run the above example:

Message Flow

The message flow looked like this:

Next, let’s explore how Ockam’s Application Layer Routing enables us to create protocols that provide end-to-end guarantees.

AWS Marketplace

AWS Marketplace listings guides

Please select specific marketplace listings to view

touch examples/05-secure-channel-over-two-transport-hops-responder.rs
// examples/05-secure-channel-over-two-transport-hops-responder.rs
// This node starts a tcp listener, a secure channel listener, and an echoer worker.
// It then runs forever waiting for messages.

use hello_ockam::Echoer;
use ockam::identity::SecureChannelListenerOptions;
use ockam::tcp::{TcpListenerOptions, TcpTransportExtension};
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport.
    let tcp = node.create_tcp_transport()?;

    node.start_worker("echoer", Echoer)?;

    let bob = node.create_identity().await?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:4000", TcpListenerOptions::new()).await?;

    // Create a secure channel listener for Bob that will wait for requests to
    // initiate an Authenticated Key Exchange.
    let secure_channel_listener = node.create_secure_channel_listener(
        &bob,
        "bob_listener",
        SecureChannelListenerOptions::new().as_consumer(listener.flow_control_id()),
    )?;

    // Allow access to the Echoer via Secure Channels
    node.flow_controls()
        .add_consumer(&"echoer".into(), secure_channel_listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}
touch examples/05-secure-channel-over-two-transport-hops-middle.rs
// examples/05-secure-channel-over-two-transport-hops-middle.rs
// This node creates a tcp connection to a node at 127.0.0.1:4000
// Starts a relay worker to forward messages to 127.0.0.1:4000
// Starts a tcp listener at 127.0.0.1:3000
// It then runs forever waiting to route messages.

use hello_ockam::Relay;
use ockam::tcp::{TcpConnectionOptions, TcpListenerOptions, TcpTransportExtension};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create a TCP connection to Bob.
    let connection_to_bob = tcp.connect("127.0.0.1:4000", TcpConnectionOptions::new()).await?;

    // Start a Relay to forward messages to Bob using the TCP connection.
    node.start_worker("forward_to_bob", Relay::new(route![connection_to_bob]))?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:3000", TcpListenerOptions::new()).await?;

    node.flow_controls()
        .add_consumer(&"forward_to_bob".into(), listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}
touch examples/05-secure-channel-over-two-transport-hops-initiator.rs
// examples/05-secure-channel-over-two-transport-hops-initiator.rs
// This node creates an end-to-end encrypted secure channel over two tcp transport hops.
// It then routes a message, to a worker on a different node, through this encrypted channel.

use ockam::identity::SecureChannelOptions;
use ockam::tcp::{TcpConnectionOptions, TcpTransportExtension};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Create an Identity to represent Alice.
    let alice = node.create_identity().await?;

    // Create a TCP connection to the middle node.
    let tcp = node.create_tcp_transport()?;
    let connection_to_middle_node = tcp.connect("localhost:3000", TcpConnectionOptions::new()).await?;

    // Connect to a secure channel listener and perform a handshake.
    let r = route![connection_to_middle_node, "forward_to_bob", "bob_listener"];
    let channel = node
        .create_secure_channel(&alice, r, SecureChannelOptions::new())
        .await?;

    // Send a message to the echoer worker via the channel.
    // Wait to receive a reply and print it.
    let reply: String = node
        .send_and_receive(route![channel, "echoer"], "Hello Ockam!".to_string())
        .await?;
    println!("App Received: {}", reply); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}
cargo run --example 05-secure-channel-over-two-transport-hops-responder
cargo run --example 05-secure-channel-over-two-transport-hops-middle
cargo run --example 05-secure-channel-over-two-transport-hops-initiator
// examples/01-node.rs
// This program creates and then immediately stops a node.

use ockam::{node, Context, Result};

/// Create and then immediately stop a node.
#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node.
    let mut node = node(ctx).await?;

    // Stop the node as soon as it starts.
    node.shutdown().await
}
// src/echoer.rs
use ockam::{Context, Result, Routed, Worker};

pub struct Echoer;

#[ockam::worker]
impl Worker for Echoer {
    type Context = Context;
    type Message = String;

    async fn handle_message(&mut self, ctx: &mut Context, msg: Routed<String>) -> Result<()> {
        println!("Address: {}, Received: {:?}", ctx.primary_address(), msg);

        // Echo the message body back on its return_route.
        ctx.send(msg.return_route().clone(), msg.into_body()?).await
    }
}
// examples/02-worker.rs
// This node creates a worker, sends it a message, and receives a reply.

use hello_ockam::Echoer;
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Start a worker, of type Echoer, at address "echoer"
    node.start_worker("echoer", Echoer)?;

    // Send a message to the worker at address "echoer".
    node.send("echoer", "Hello Ockam!".to_string()).await?;

    // Wait to receive a reply and print it.
    let reply = node.receive::<String>().await?;
    println!("App Received: {}", reply.into_body()?); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}
cargo run --example 02-worker

Aiven - Cloud

Send end-to-end encrypted messages through Aiven.

Create Ockam Inlet and Outlet Nodes using a CloudFormation template

Create Ockam Kafka Outlet and Inlet Nodes using a CloudFormation template

Create Ockam Postgres Outlet and Inlet Nodes using a CloudFormation template

Create Ockam Amazon Timestream InfluxDB Outlet and Inlet Nodes using a CloudFormation template

Create Ockam Amazon Redshift Outlet and Inlet Nodes using a CloudFormation template

Create Ockam Amazon Bedrock Outlet and Inlet Nodes using a CloudFormation template

Python

Let's connect a python app in one AWS VPC with a python API in another AWS VPC. The example uses AWS CLI to create these VPCs.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, and the AWS CLI. Please set up these tools for your operating system. In particular, you need to log in to your AWS account with aws sso login.

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/apis/python/amazon_ec2/aws_cli

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script that you ran above, and its accompanying files, are full of comments and are meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 60 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Monitoring Corp.’s network. The second ticket is meant for the Ockam node that will run in Travel App Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.

  • The run function passes the enrollment tickets as variables to the run scripts that provision Monitoring Corp.’s network and Travel App Corp.’s network.

Monitoring Corp

First, the monitoring_corp/run.sh script creates a network to host the database:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet, and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to install python and run the API service.

Then, the monitoring_corp/run.sh script creates an EC2 instance where the Ockam outlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where the

    ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

  • We tag the created instance and wait for it to be available.

Next, the instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP outlet.

    • A policy associated to the outlet. The policy authorizes identities with a credential containing the attribute monitoring-api-inlet="true".

    • With a relay capable of forwarding the TCP traffic to the TCP outlet.

Finally, we wait for the instance to be ready and run the python api application:

  • The app.py file is copied to the instance (this uses the previously created key.pem file to authenticate).

  • We can then SSH to the instance and:

    • Install python.

    • Install dependencies.

    • Run the python flask api application.

Travel App Corp

First, the travel_app_corp/run.sh script creates a network to host the python application:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet, and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to download and install the python application.

Then, we create an EC2 instance where the Ockam inlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where the

    ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

Next, the instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP inlet.

    • A policy associated to the inlet. The policy authorizes identities with a credential containing the attribute monitoring-api-outlet="true".

Finally, we wait for the instance to be ready and run the python client application:

  • The client.py file is copied to the instance (this uses the previously created key.pem file to authenticate).

  • We can then SSH to the instance and:

    • Install python.

    • Install dependencies.

    • Run the python client application.

Recap

We connected a python app in one AWS VPC with a python API service in another AWS VPC over an end-to-end encrypted portal.

The private API endpoint is not public. It is accessible only to enrolled members of the project with the appropriate attributes. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to the API can be easily revoked.

Travel App Corp. does not get unfettered access to Monitoring Corp.’s network. It only gets access to the API service. Monitoring Corp. does not get unfettered access to Travel App Corp.’s network. It gets access only to respond to requests over a TCP connection. Monitoring Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither Monitoring Corp. nor Travel App Corp. exposes any listening endpoints to the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all AWS resources created by this example:

./run.sh cleanup

Azure OpenAI

Let's connect a python app in one virtual private network with an Azure OpenAI model configured with a private endpoint in another virtual private network. You will use the Azure CLI to create these virtual networks and resources.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Create an Orchestrator Project

  1. Sign up for Ockam and pick a subscription plan through the guided workflow

  2. Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator. This step creates a Project in Ockam Orchestrator.

curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll

Run

This example requires Bash, Git, Curl, and the Azure CLI. Please set up these tools for your operating system. In particular, you need to log in to your Azure account with az login.

Then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/ai/azure_openai

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the answer to the question: "What is Ockham's Razor?".

Walkthrough

The run.sh script that you ran above, and its accompanying files, are full of comments and are meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 60 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in AI Corp.’s network. The second ticket is meant for the Ockam node that will run in Health Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.

  • The run function passes the enrollment tickets as variables to the run scripts that provision AI Corp.'s network and Health Corp.'s network.

AI Corp

First, the ai_corp/run.sh script creates a network to host the application exposing the Azure OpenAI Service Endpoint:

  • Network Infrastructure:

    • We create an Azure Resource Group to contain all resources.

    • We create a Virtual Network (VNet) with a subnet to host the services.

  • Azure OpenAI Service Configuration:

    • We deploy an Azure OpenAI Service instance.

    • We set up a private endpoint for secure access:

      • Create a private endpoint connection.

      • Establish a private DNS zone.

      • Link the DNS zone to the virtual network.

      • Configure DNS records for private endpoint resolution.

      • Disable public network access and update network ACLs to deny public access.

  • OpenAI Model Deployment:

    • We deploy the specified model (gpt-4o-mini) on the OpenAI service.

    • We retrieve the API key for authentication.

    • We create an environment file (.env.azure) containing:

      • The Azure OpenAI endpoint URL.

      • The API key for authentication.

  • Virtual Machine Deployment:

    • We process the Ockam setup script (run_ockam.sh) by replacing variables:

      • Replaces SERVICE_NAME and TICKET placeholders.

    • We create a Red Hat Enterprise Linux VM:

      • Place it in the configured VNet/subnet.

      • Generate SSH keys for access.

      • Inject the processed Ockam setup script as custom data.

      • The default Network Security Group (NSG) is configured with basic rules: inbound SSH access (port 22), internal virtual network communication, Azure Load Balancer access, and a final deny rule for all other inbound traffic. For outbound, it allows virtual network and internet traffic, with a final deny rule for all other outbound traffic.

Ensure your Azure Subscription has access to deploy the "gpt-4o-mini" model (version: 2024-07-18). You may need to request quota/access for this model through the Azure Portal if not already enabled for your subscription.

Health Corp

First, the health_corp/run.sh script creates a network to host the client.py application which will connect to the Azure OpenAI model:

  • Network Infrastructure Setup:

    • We create an Azure Resource Group to contain all resources.

    • We create a Virtual Network (VNet) with a subnet to host the services.

  • VM Deployment and Ockam Setup:

    • We process the run_ockam.sh script by replacing:

      • ${SERVICE_NAME} with the configured service name.

      • ${TICKET} with the provided enrollment ticket.

    • We create a Red Hat Enterprise Linux 8 VM where the Ockam inlet node will run:

      • Use latest RHEL 8 LVM Gen2 image.

      • Generate SSH keys automatically.

      • Inject the processed Ockam setup script as custom data.

  • Client Application Deployment:

    • We wait for the VM to be accessible.

    • We copy required files to the VM:

      • Transfers client.py to the VM.

      • Copies .env.azure configuration file containing OpenAI credentials.

    • We set up the Python environment:

      • Install Python 3.9 and pip.

      • Install the OpenAI SDK.

    • We execute the client application.

  • Client Application Operation:

    • The client.py application:

      • Connects to the Azure OpenAI service using credentials from .env.azure.

      • Sends queries to the model.

      • Receives and displays responses on the console.

Recap

We connected a Python application in one virtual network with an application serving an Azure OpenAI model in another virtual network over an end-to-end encrypted portal.

Sensitive business data coming from the Azure OpenAI model is only accessible to AI Corp. and Health Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with the model API can be easily revoked.

Health Corp. does not get unfettered access to AI Corp.'s network. It gets access only to run API queries to the Azure OpenAI service. AI Corp. does not get unfettered access to Health Corp.'s network. It gets access only to respond to queries over a TCP connection. AI Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither AI Corp. nor Health Corp. exposes any listening endpoints on the Internet. Their Azure virtual networks are completely closed and protected from any attacks from the Internet through Network Security Groups (NSGs) that only allow essential communications.

Cleanup

To delete all Azure resources:

./run.sh cleanup

Ockam Node for Amazon MSK

Create an Ockam Kafka Outlet Node using a CloudFormation template

This guide contains instructions to launch:

  • An Ockam Kafka Outlet Node within an AWS environment

  • An Ockam Kafka Inlet Node:

    • Within an AWS environment, or

    • Using Docker in any environment

The walkthrough demonstrates:

  1. Running an Ockam Kafka Outlet Node in the AWS environment that contains your Amazon MSK instance.

  2. Setting up Ockam Kafka Inlet Nodes using either AWS or Docker, from any location.

  3. Verifying secure communication between Kafka clients and the Amazon MSK cluster.

Read: “How does Ockam work?” to learn about end-to-end trust establishment.

Prerequisites

Amazon MSK Cluster Configuration: Ensure that your Amazon MSK cluster is configured with the following settings:

  1. Access Control Methods: Unauthenticated access should be enabled.

  2. Encryption between Clients and Brokers: PLAINTEXT should be enabled.

  3. Network Access to Amazon MSK Cluster: Verify that the Security Group associated with the Amazon MSK cluster allows inbound traffic on the required port(s) (e.g., 9092) from the subnet where the EC2 instance will reside.

Create an Orchestrator Project

  1. Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.

  2. Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.

curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll

Completing this step creates a Project in Ockam Orchestrator.

  3. Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.

# Enrollment ticket for Ockam Outlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-msk-kafka-outlet \
  --relay kafka \
    > "outlet.ticket"

# Enrollment ticket for Ockam Inlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-msk-kafka-inlet \
    > "inlet.ticket"

Setup Ockam Kafka Outlet Node

  • Log in to the AWS account you would like to use.

  • Subscribe to "Ockam - Node for Amazon MSK" in AWS Marketplace

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon MSK from the list of subscriptions. Select Actions -> Launch CloudFormation stack.

  • Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch CloudFormation.

  • Create stack with the following details

    • Stack name: msk-ockam-outlet or any name you prefer

    • Network Configuration

      • VPC ID: Choose a VPC ID where the EC2 instance will be deployed.

      • Subnet ID: Select a suitable Subnet ID within the chosen VPC that has access to Amazon MSK cluster.

      • EC2 Instance Type: The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust to a smaller instance type depending on your use case.

    • Ockam Configuration

      • Enrollment ticket: Copy and paste the content of the outlet.ticket generated above

      • Amazon MSK Bootstrap Server with Port: To configure the Ockam Kafka Outlet Node, you'll need to specify the bootstrap servers for your Amazon MSK cluster. This configuration allows the Ockam Kafka Outlet Node to connect to the Kafka brokers.

        • Go to the MSK cluster in the AWS Management Console and select the cluster name.

        • In the Connectivity Summary section, select View Client information, copy the Bootstrap servers (plaintext) string with port 9092.

      • JSON Node Configuration: Copy and paste the configuration below. Note that the configuration values match the enrollment tickets created in the previous step.

{
    "relay": "kafka",
    "kafka-outlet": {
      "bootstrap-server": "$BOOTSTRAP_SERVER_WITH_PORT",
      "allow": "amazon-msk-kafka-inlet"
    }
  }
  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run configures the Ockam Kafka outlet node on an EC2 machine.

    • The EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

    • A security group with egress access to the internet will be attached to the EC2 machine.

  • Connect to the EC2 machine via AWS Session Manager.

    • To view the log file, run sudo cat /var/log/cloud-init-output.log.

      • A successful run will show Ockam node setup completed successfully in the logs.

    • To view the status of the Ockam node, run curl http://localhost:23345/show | jq.

  • View the Ockam node status in CloudWatch.

    • Navigate to CloudWatch -> Log Groups and select msk-outlet-ockam-status-logs. Select the log stream for the EC2 instance.

    • The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm named msk-ockam-outlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.

  • An Auto Scaling group ensures at least one EC2 instance is running at all times.

Ockam Kafka outlet node setup is complete. You can now create Ockam Kafka inlet nodes in any network to establish secure communication.

Setup Ockam Kafka Inlet Node

You can set up an Ockam Kafka Inlet Node either in AWS or locally using Docker. Here are both options:

Option 1: Setup Inlet Node in AWS

To set up an Inlet Node in AWS, follow similar steps as the Outlet Node setup, with these modifications:

  • Use the same CloudFormation template as before.

  • When configuring the stack,

    • Use the inlet.ticket instead of the outlet.ticket.

    • VPC and Subnet: You can choose any VPC and subnet for the Inlet Node. It doesn't need to be in the same network as the MSK cluster or the Outlet Node.

  • For the JSON Node Configuration, use the following:

{
    "kafka-inlet": {
      "from": "127.0.0.1:9092",
      "disable-content-encryption": true,
      "avoid-publishing": true,
      "allow": "amazon-msk-kafka-outlet",
      "to": "/project/default/service/forward_to_kafka/secure/api"
    }
  }
  • Use any Kafka client and connect to 127.0.0.1:9092 as the bootstrap server, from the same machine running the Ockam Kafka Inlet node.

Option 2: Setup Inlet Node Locally with Docker Compose

To set up an Inlet Node locally and interact with it outside of AWS, use Docker Compose.

  • Create a file named docker-compose.yml with the following content:

services:
  ockam:
    image: ghcr.io/build-trust/ockam
    environment:
      ENROLLMENT_TICKET: ${ENROLLMENT_TICKET:-}
      OCKAM_DEVELOPER: ${OCKAM_DEVELOPER:-false}
      OCKAM_LOGGING: true
      OCKAM_LOG_LEVEL: info
    command:
      - node
      - create
      - --foreground
      - --node-config
      - |
        ticket: ${ENROLLMENT_TICKET}

        kafka-inlet:
          from: 0.0.0.0:19092
          disable-content-encryption: true
          avoid-publishing: true
          allow: amazon-msk-kafka-outlet
          to: /project/default/service/forward_to_kafka/secure/api
    network_mode: host

  kafka-tools:
    image: apache/kafka
    container_name: kafka-tools
    command: /bin/sh -c "while true; do sleep 30; done"
    depends_on:
      - ockam
    network_mode: host
  • Run the following command from the same location as the docker-compose.yml and the inlet.ticket to create an Ockam Kafka inlet that can connect to the outlet running in AWS, along with a Kafka client tools container:

ENROLLMENT_TICKET=$(cat inlet.ticket) docker-compose up -d
  • Exec into the kafka-tools container and run commands to produce and consume Kafka messages.

# Exec into tools container
docker exec -it kafka-tools /bin/bash

# List topics
/opt/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:19092

# Create a topic
/opt/kafka/bin/kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:19092 --partitions 1 --replication-factor 1

# Publish a message
date | /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server localhost:19092 --topic test-topic

# Read messages
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:19092 --topic test-topic --from-beginning

This setup allows you to run an Ockam Kafka Inlet Node locally and communicate securely with the Outlet Node running in AWS.

Ockam Node for Amazon Bedrock

Create an Ockam Bedrock outlet node using Cloudformation template

Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI companies and Amazon available for your use through a unified API. Organizations building innovative generative AI applications with Amazon Bedrock often need to ensure their proprietary data remains secure and private while accessing these powerful models.

By default, you can access Amazon Bedrock over the public internet, which means:

  1. Your API calls to Bedrock travel across the public internet.

  2. Your client must have public internet connectivity.

  3. You must implement additional security measures to protect your data in transit.

The Security Challenge

When you build AI applications with sensitive or proprietary data, exposing them to the public internet creates several risks:

  • Your data may travel through unknown network paths

  • Attackers gain more potential entry points

  • Your compliance requirements may prohibit public internet usage

  • You must maintain extra security controls and monitoring

Understanding VPC Endpoints for Amazon Bedrock

How VPC Endpoints Work

AWS PrivateLink powers VPC endpoints, which let you access Amazon Bedrock privately without exposing data to the public internet. When you create a private connection between your VPC and Bedrock:

  1. Your traffic stays within AWS network infrastructure

  2. You eliminate the need for public endpoints

  3. Your data remains on private AWS networks

However, organizations often need additional capabilities:

  • Access to Bedrock from outside AWS

  • Secure connections from other cloud providers

  • Private access from on-premises environments

This is where Ockam helps.

Read: “How does Ockam work?” to learn about end-to-end trust establishment.

Prerequisites

  • You have permission to subscribe and launch Cloudformation stack from AWS Marketplace on the AWS Account running Amazon Bedrock.

  • Make sure AWS Bedrock is available in the region you are deploying the cloudformation template.

Create an Orchestrator Project

  1. Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.

  2. Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.

curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll
  3. Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.

# Enrollment ticket for Ockam Outlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-bedrock-outlet \
  --relay bedrock \
    > "outlet.ticket"

# Enrollment ticket for Ockam Inlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-bedrock-inlet --tls \
    > "inlet.ticket"

Setup Ockam Bedrock Outlet Node

  • Log in to the AWS account you would like to use.

  • Subscribe to "Ockam - Node for Amazon Bedrock" in AWS Marketplace.

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon Bedrock from the list of subscriptions. Select Actions -> Launch CloudFormation stack.

  • Select the Region you want to deploy to and click Continue to Launch. Under Actions, select Launch CloudFormation.

  • Create stack with the following details

    • Stack name: bedrock-ockam-outlet or any name you prefer

    • Network Configuration

      • VPC ID: Choose a VPC ID where the VPC Endpoint for Bedrock and EC2 instance will be deployed.

      • Subnet ID: Select a suitable Subnet ID within the chosen VPC.

      • EC2 Instance Type: The default instance type is m6a.large. Choose a different instance type based on your use case.

    • Ockam Node Configuration

      • Enrollment ticket: Copy and paste the content of the outlet.ticket generated above

      • JSON Node Configuration: Copy and paste the below configuration. Note that the configuration values (relay, allow attribute) match with the enrollment tickets created in the previous step. $BEDROCK_RUNTIME_ENDPOINT will be replaced during runtime.

{
    "relay": "bedrock",
    "tcp-outlet": {
        "to": "$BEDROCK_RUNTIME_ENDPOINT:443",
        "allow": "amazon-bedrock-inlet",
        "tls": true
    }
}
  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run

    • Creates a VPC Endpoint for the Bedrock Runtime API.

    • Configures an Ockam Bedrock Outlet node on an EC2 machine.

    • EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

    • A security group with ingress access within the security group and egress access to the internet will be attached to the EC2 machine and VPC Endpoint.

  • Connect to the EC2 machine via AWS Session Manager.

    • To view the log file, run sudo cat /var/log/cloud-init-output.log.

    • Note: DNS resolution for the EFS drive may take up to 10 minutes. The script will retry.

    • A successful run will show Ockam node setup completed successfully in the above log.

    • To view the status of the Ockam node, run curl http://localhost:23345/show | jq

  • View the Ockam node status in CloudWatch.

    • Navigate to Cloudwatch -> Log Group and select bedrock-ockam-outlet-status-logs. Select the Logstream for the EC2 instance.

    • The CloudFormation template creates a subscription filter which sends data to a CloudWatch alarm bedrock-ockam-outlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.

  • An Auto Scaling group ensures at least one EC2 instance is running at all times.

Ockam bedrock outlet node setup is complete. You can now create Ockam bedrock inlet nodes in any network to establish secure communication.

Setup Bedrock Ockam Inlet Node

You can set up an Ockam Bedrock Inlet Node locally using Docker. You can then use any library (AWS CLI, Python, JavaScript, etc.) to access Amazon Bedrock via the Ockam inlet.

  • Create a file named docker-compose.yml with the following content:

services:
  ockam:
    image: ghcr.io/build-trust/ockam
    container_name: bedrock-inlet
    environment:
      ENROLLMENT_TICKET: ${ENROLLMENT_TICKET:-}
      OCKAM_DEVELOPER: ${OCKAM_DEVELOPER:-false}
      OCKAM_LOGGING: true
      OCKAM_LOG_LEVEL: debug
    ports:
      - "443:443"  # Explicitly expose port 443
    command:
      - node
      - create
      - --enrollment-ticket
      - ${ENROLLMENT_TICKET}
      - --foreground
      - --configuration
      - |
        tcp-inlet:
          from: 0.0.0.0:443
          via: bedrock
          allow: amazon-bedrock-outlet
          tls: true
    network_mode: bridge

Run the following command from the same location as the docker-compose.yml and the inlet.ticket to create an Ockam Bedrock inlet that can connect to the outlet running in AWS.

ENROLLMENT_TICKET=$(cat inlet.ticket) docker-compose up -d
  • Check the status of the Ockam inlet node. You will see The node is UP when Ockam is configured successfully and ready to accept connections.

docker exec -it bedrock-inlet /ockam node show
  • Find your Ockam project id and use it to construct the endpoint URL for Bedrock.

    # Below command will find your ockam project id 
    ockam project show --jq .id 
  • Construct the Bedrock endpoint URL:

https://ANY_STRING_YOU_LIKE.YOUR_PROJECT_ID.ockam.network
  • An example Bedrock endpoint URL looks like this:

BEDROCK_ENDPOINT=https://bedrock-runtime.d8eafd41-ff3e-40ab-8dbe-936edbe3ad3c.ockam.network
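The two steps above can be combined in shell. A minimal sketch: the PROJECT_ID value below is the example id from this guide, so substitute the output of ockam project show --jq .id, and SUBDOMAIN can be any string you like:

```shell
# Compose the Bedrock endpoint URL from your Ockam project id.
# PROJECT_ID is a placeholder; get yours with: ockam project show --jq .id
PROJECT_ID="d8eafd41-ff3e-40ab-8dbe-936edbe3ad3c"
SUBDOMAIN="bedrock-runtime"   # any string you like
BEDROCK_ENDPOINT="https://${SUBDOMAIN}.${PROJECT_ID}.ockam.network"
echo "$BEDROCK_ENDPOINT"
```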
  • Run the AWS CLI command below.

NOTE:

1) You should have the amazon.titan-text-lite-v1 model enabled on the account/region.

2) You need AWS credentials for the account with permission to run the command below.

export AWS_REGION=<YOUR_REGION> 
aws bedrock-runtime invoke-model \
--endpoint-url $BEDROCK_ENDPOINT \
--model-id amazon.titan-text-lite-v1 \
--body '{"inputText": "Describe the purpose of a \"hello world\" program in one line.", "textGenerationConfig" : {"maxTokenCount": 512, "temperature": 0.5, "topP": 0.9}}' \
--cli-binary-format raw-in-base64-out \
invoke-model-output-text.txt

The above command should produce a similar result:

> cat invoke-model-output-text.txt
{"inputTextTokenCount":15,"results":[{"tokenCount":26,"outputText":"\nThe purpose of a \"hello world\" program is to print the text \"hello world\" to the console.","completionReason":"FINISH"}]}
  • Cleanup

docker compose down --volumes --remove-orphans

Summary

This guide walked you through:

  • Understanding the security challenges of accessing Amazon Bedrock over the public internet

  • How VPC endpoints secure your Bedrock communications within AWS

  • Setting up Ockam to extend this security beyond AWS boundaries

  • Deploying and configuring both Outlet and Inlet nodes

  • Testing your secure connection with a simple Bedrock API call

Nodejs

Let's connect a nodejs app in one AWS VPC with a nodejs API in another AWS VPC. The example uses AWS CLI to create these VPCs.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, and AWS CLI. Please set up these tools for your operating system. In particular you need to login to your AWS account with aws sso login.

Then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/apis/nodejs/amazon_ec2/aws_cli

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 60 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Monitoring Corp.’s network. The second ticket is meant for the Ockam node that will run in Travel App Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.

  • The run function passes the enrollment tickets as variables of the run scripts provisioning Monitoring Corp.’s network and Travel App Corp.’s network.

Monitoring Corp

First, the monitoring_corp/run.sh script creates a network to host the API service:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet, and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to install nodejs and run the API service.

Then, the monitoring_corp/run.sh script creates an EC2 instance where the Ockam outlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where the ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

  • We tag the created instance and wait for it to be available.

Next, the instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP outlet.

    • A policy associated to the outlet. The policy authorizes identities with a credential containing the attribute monitoring-api-inlet="true".

    • With a relay capable of forwarding the TCP traffic to the TCP outlet.

Finally, we wait for the instance to be ready and run the nodejs api application:

  • The api.js file is copied to the instance (this uses the previously created key.pem file to identify).

  • We can then SSH to the instance and:

    • Install nodejs.

    • Install dependencies.

    • Run the nodejs api application.

Travel App Corp

First, the travel_app_corp/run.sh script creates a network to host the nodejs application:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet, and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to download and install the nodejs application.

Then, we create an EC2 instance where the Ockam inlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where the ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

Next, the instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP inlet.

    • A policy associated to the inlet. The policy authorizes identities with a credential containing the attribute monitoring-api-outlet="true".

Finally, we wait for the instance to be ready and run the nodejs client application:

  • The client.js file is copied to the instance (this uses the previously created key.pem file to identify).

  • We can then SSH to the instance and:

    • Install nodejs.

    • Run the nodejs client application.

Recap

We connected a nodejs app in one AWS VPC with a nodejs API service in another AWS VPC over an end-to-end encrypted portal.

The private API endpoint is not public; it is only accessible to enrolled members of the project with the appropriate attributes. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to the API can be easily revoked.

Travel App Corp. does not get unfettered access to Monitoring Corp.’s network. It only gets access to the API service. Monitoring Corp. does not get unfettered access to Travel App Corp.’s network. It gets access only to respond to requests over a tcp connection. Monitoring Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NAT’s are traversed using a relay and outgoing tcp connections. Neither Monitoring Corp. nor Travel App Corp. expose any listening endpoints to the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all AWS resources created by this example:

./run.sh cleanup

Amazon EC2

Let's connect a nodejs app in one virtual private network with an application serving a self hosted model in another virtual private network. The example uses the AWS CLI to create these virtual networks.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, and the AWS CLI. Please set up these tools for your operating system. In particular you need to login to your AWS account with aws sso login.

Then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/ai/amazon_ec2

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the answer to the question: "What is Ockham's Razor?".

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in AI Corp.’s network. The second ticket is meant for the Ockam node that will run in Health Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.

  • The run function passes the enrollment tickets as variables of the run scripts provisioning AI Corp.'s network and Health Corp.'s network.

AI Corp

First, the ai_corp/run.sh script creates a network to host the application exposing the LLaMA model API:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet, and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to install the model and its application.

We are now ready to create an EC2 instance where the Ockam outlet node will run:

  • We select an AMI.

  • We create a key pair in order to access the EC2 instance via SSH.

  • Before creating the EC2 instance, we check that the AWS region we are using proposes this kind of instance type. Indeed, we need a properly sized instance in order to run a large language model, and those instances are not available in all regions. If the instance is not available in the current region, we return the list of all the regions where that instance type is available.

  • We start an instance using the selected AMI and right instance type. Starting the instance executes a start script based on ai_corp/run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to ai_corp/run.sh.

  • We tag the created instance and wait for it to be available.

When the instance is started, the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP outlet.

    • A policy associated to the outlet. The policy authorizes identities with a credential containing the attribute ai-inlet="true".

    • With a relay capable of forwarding the TCP traffic to the TCP outlet.

Health Corp

First, the health_corp/run.sh script creates a network to host the client.js application which will connect to the LLaMA model:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet, and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to download and install the nodejs client application.

We are now ready to create an EC2 instance where the Ockam inlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

The instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP inlet.

    • Connected to the ai relay.

    • A policy associated to the inlet. The policy authorizes identities with a credential containing the attribute ai-outlet="true".

We finally wait for the instance to be ready and install the client.js application:

  • The client.js file is copied to the instance (this uses the previously created key.pem file to identify).

  • We can then SSH to the instance and:

    • Install nodejs.

    • Start the client.js application.

Once the client.js application is started:

  • It will connect to the Ockam inlet at port 3000.

  • It sends a query and waits for a response from the model.

  • The response is then printed on the console.

Recap

We connected a nodejs application in one virtual private network with an application serving a LLaMA model in another virtual private network over an end-to-end encrypted portal.

Sensitive business data coming from the model is only accessible to AI Corp. and Health Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with the model API can be easily revoked.

Health Corp. does not get unfettered access to AI Corp.’s network. It gets access only to run API queries. AI Corp. does not get unfettered access to Health Corp.’s network. It gets access only to respond to queries over a TCP connection. AI Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NAT’s are traversed using a relay and outgoing tcp connections. Neither AI Corp. nor Health Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all AWS resources:

./run.sh cleanup


Gitlab Enterprise

Let's connect a nodejs app in one company's Amazon VPC with a code repository hosted on a Gitlab server in another company's Amazon VPC. The example uses AWS CLI to create these VPCs.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, AWS CLI, and jq. Please set up these tools for your operating system. In particular you need to login to your AWS account with aws sso login.

Then run the following commands:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/coderepos/gitlab/amazon_ec2/aws_cli

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign into Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Bank Corp.’s network. The second ticket is meant for the Ockam node that will run in Analysis Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.

  • The run function passes the enrollment tickets as variables of the run scripts provisioning Bank Corp.'s network and Analysis Corp.'s network.

Bank Corp

First, the bank_corp/run.sh script creates a network to host the Gitlab server:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create two subnets, located in two distinct availability zones, and associate them with the route table.

  • We finally create a security group so that there is:

    • one TCP egress to the Internet,

    • one ingress to the EC2 instance running Gitlab from the local machine running the example, to access Gitlab on ports 22 and 80.

We are now ready to create an EC2 instance where the Gitlab server and Ockam outlet node will run:

  • An SSH key pair to access the Gitlab repository is created, and the public key is saved in a variable.

  • We select an AMI.

  • We create an EC2 key pair to access the EC2 instance and to obtain the Gitlab password needed to log in to the Gitlab console.

  • We start an instance using the AMI above and a start script based on run_ockam.sh and run_gitlab.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

    • SSH_PUBLIC_KEY is replaced with the Public IP of the EC2 instance in run_gitlab.sh script

  • We tag the created instance and wait for it to be available.

  • We wait 3 minutes for Gitlab to be set up and check its availability.

When the instance is started, the run_gitlab.sh script is executed:

  • Gitlab and its dependencies are installed.

    • The Gitlab SSH port is mapped to 222.

  • The Gitlab root password is obtained to create an access token.

    • The password can be used to access the Gitlab console from the local machine.

  • Disable public signups.

  • Create demo_project.

  • Configure access via created SSH Key.

When the instance is started, the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP outlet.

    • A policy associated to the outlet. The policy authorizes identities with a credential containing the attribute gitlab-inlet="true".

    • With a relay capable of forwarding the TCP traffic to the TCP outlet.

Analysis Corp

First, the analysis_corp/run.sh script creates a network to host the nodejs application:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and create a route to the Internet via the gateway.

  • We create a subnet, and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to download and install the nodejs application from the local machine running the script.

We are now ready to create an EC2 instance where the Ockam inlet node will run:

  • We select an AMI.

  • We start an instance using the AMI above and a start script based on run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

The instance is started and the run_repoaccess.sh script is executed:

  • The SSH config file is created on the EC2 instance with details of the private SSH key, and permissions are updated.

The instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP inlet.

    • A policy associated to the inlet. The policy authorizes identities with a credential containing the attribute gitlab-outlet="true".

We finally wait for the instance to be ready and install the nodejs application:

  • The app.js file has code to access the code repository on port 1222, configured in the TCP inlet.

  • We can then SSH to the instance and:

    • Copy app.js.

    • Copy SSH Private key for Repository access.

    • Install nodejs.

    • Start the nodejs application.

Once the nodejs application is started:

  • It will connect to the Ockam inlet at port 1222.

  • It executes the run function that clones the repository, makes sure the README.md file exists, inserts a line into the README.md file, commits, and pushes the commit to the remote Gitlab server.

Recap

We connected a nodejs app in one virtual private network with a Gitlab CodeRepository in another virtual private network over an end-to-end encrypted portal.

Sensitive business data in the Gitlab codebase is only accessible to Bank Corp. and Analysis Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with the Gitlab code repository can be easily revoked.

Analysis Corp. does not get unfettered access to Bank Corp.’s network. It gets access only to the codebase hosted on the Gitlab server. Bank Corp. does not get unfettered access to Analysis Corp.’s network. It gets access only to respond to queries over a tcp connection. Bank Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither Bank Corp. nor Analysis Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all AWS resources:

./run.sh cleanup

Ockam Node for Amazon Redshift

Create an Ockam Redshift outlet node using Cloudformation template

This guide contains instructions to launch:

  • An Ockam Redshift Outlet Node within an AWS environment

  • An Ockam Redshift Inlet Node:

    • Within an AWS environment, or

    • Using Docker in any environment

The walkthrough demonstrates:

  1. Running an Ockam Redshift Outlet node in your AWS environment that contains a private Amazon Redshift Serverless or Amazon Redshift Provisioned Cluster

  2. Setting up Ockam Redshift inlet nodes using either AWS or Docker from any location.

  3. Verifying secure communication between Redshift clients and Amazon Redshift Database.

Read: “How does Ockam work?” to learn about end-to-end trust establishment.

Prerequisites

  • A private Amazon Redshift Database (Serverless or Provisioned) is created and accessible from the VPC and Subnet where the Ockam Node will be launched.

  • Security Group associated with the Amazon Redshift Database allows inbound traffic on the required default port (5439) from the subnet where the Ockam Outlet Node will reside.

  • You have permission to subscribe and launch Cloudformation stack from AWS Marketplace on the AWS Account running Amazon Redshift.

Create an Orchestrator Project

  1. Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.

  2. Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.

curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll
  3. Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.

# Enrollment ticket for Ockam Outlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-redshift-outlet \
  --relay redshift \
    > "outlet.ticket"

# Enrollment ticket for Ockam Inlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-redshift-inlet \
    > "inlet.ticket"

Setup Ockam Redshift Outlet Node

  • Login to AWS Account you would like to use

  • Subscribe to "Ockam - Node for Amazon Redshift" in AWS Marketplace

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon Redshift from the list of subscriptions. Select Actions-> Launch Cloudformation stack

  • Select the Region you want to deploy and click Continue to Launch. Under Actions, select Launch Cloudformation

  • Create stack with the following details

    • Stack name: redshift-ockam-outlet or any name you prefer

    • Network Configuration

      • VPC ID: Choose a VPC ID where the EC2 instance will be deployed.

      • Subnet ID: Select a suitable Subnet ID within the chosen VPC that has access to Amazon Redshift. Note: Security Group associated with Amazon Redshift should allow inbound traffic on the required default port (5439) from the IP address of the Subnet or VPC.

      • EC2 Instance Type: The default instance type is m6a.large. If you need predictable network bandwidth of 12.5 Gbps, use m6a.8xlarge; or choose a smaller instance type like t3.medium, depending on your use case.

    • Ockam Node Configuration

      • Enrollment ticket: Copy and paste the content of the outlet.ticket generated above

      • Redshift Database Endpoint: To configure the Ockam Redshift Outlet Node, you'll need to specify the Amazon Redshift Endpoint. This configuration allows the Ockam Redshift Outlet Node to connect to the database.

        • Example: cluster-name.xxxx.region.redshift.amazonaws.com:5439 or workgroup.account.region.redshift-serverless.amazonaws.com:5439

        • Note: If you are copy-pasting the Redshift Endpoint value from the AWS Console, make sure to remove the /DATABASE_NAME suffix at the end, as it is not needed.

      • JSON Node Configuration: Copy and paste the configuration below. Note that the configuration values match the enrollment tickets created in the previous step. $REDSHIFT_ENDPOINT will be replaced at runtime.

{
    "http-server-port": 23345,
    "relay": "redshift",
    "tcp-outlet": {
        "to": "$REDSHIFT_ENDPOINT",
        "allow": "amazon-redshift-inlet"
    }
}
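The $REDSHIFT_ENDPOINT placeholder is substituted when the stack runs. Conceptually, the substitution behaves like this Python sketch (the endpoint value here is a made-up example, not a real cluster):

```python
import json
from string import Template

# Illustrative only: the Cloudformation template performs an equivalent
# substitution at runtime before handing the configuration to the Ockam node.
template = """
{
    "http-server-port": 23345,
    "relay": "redshift",
    "tcp-outlet": {
        "to": "$REDSHIFT_ENDPOINT",
        "allow": "amazon-redshift-inlet"
    }
}
"""

# Hypothetical endpoint value for illustration.
endpoint = "cluster-name.xxxx.us-east-1.redshift.amazonaws.com:5439"
config = json.loads(Template(template).substitute(REDSHIFT_ENDPOINT=endpoint))
print(config["tcp-outlet"]["to"])
```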
  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run configures the Ockam Redshift Outlet node on an EC2 machine.

    • EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

    • A security group with egress access to the internet will be attached to the EC2 machine.

  • Connect to the EC2 machine via AWS Session Manager.

    • To view the log file, run sudo cat /var/log/cloud-init-output.log.

      • Note: DNS resolution for the EFS drive may take up to 10 minutes; you will see the script retry every 30 seconds until it resolves.

      • A successful run will show Ockam node setup completed successfully in the logs.

      • To view the status of the Ockam node, run curl http://localhost:23345/show | jq

    • View the Ockam node status in CloudWatch.

      • Navigate to Cloudwatch -> Log Group and select redshift-ockam-outlet-status-logs. Select the Logstream for the EC2 instance.

      • The Cloudformation template creates a subscription filter that sends data to a Cloudwatch alarm named redshift-ockam-outlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.

    • An Autoscaling group ensures at least one EC2 instance is running at all times.

The Ockam Redshift Outlet node setup is complete. You can now create Ockam Redshift Inlet nodes in any network to establish secure communication.

Setup Ockam Inlet Node

You can set up an Ockam Redshift Inlet Node either in AWS or locally using Docker. Here are both options:

Option 1: Setup Inlet Node in AWS

  • Login to AWS Account you would like to use

  • Subscribe to "Ockam - Node" in AWS Marketplace

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions. Select Actions-> Launch Cloudformation stack

  • Select the Region you want to deploy and click Continue to Launch. Under Actions, select Launch Cloudformation

  • Create stack with the following details

    • Stack name: redshift-ockam-inlet or any name you prefer

    • Network Configuration

      • Select suitable values for VPC ID and Subnet ID

      • EC2 Instance Type: The default instance type is m6a.large. If you need predictable network bandwidth of 12.5 Gbps, use m6a.8xlarge; or choose a smaller instance type like t3.medium, depending on your use case.

    • Ockam Configuration

      • Enrollment ticket: Copy and paste the content of the inlet.ticket generated above

      • JSON Node Configuration: Copy and paste the below configuration.

{
    "http-server-port": 23345,
    "tcp-inlet": {
        "from": "0.0.0.0:15439",
        "via": "redshift",
        "allow": "amazon-redshift-outlet"
    }
}
  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run configures the Ockam inlet node on an EC2 machine.

  • EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

  • Connect to the EC2 machine via AWS Session Manager.

    • To view the log file, run sudo cat /var/log/cloud-init-output.log.

      • A successful run will show Ockam node setup completed successfully in the logs.

    • To view the status of the Ockam node, run curl http://localhost:23345/show | jq

  • View the Ockam node status in CloudWatch.

    • Navigate to Cloudwatch -> Log Group and select redshift-ockam-inlet-status-logs. Select the Logstream for the EC2 instance.

    • The Cloudformation template creates a subscription filter that sends data to a Cloudwatch alarm named redshift-ockam-inlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.

  • An Autoscaling group ensures at least one EC2 instance is running at all times.

Use any PostgreSQL client and connect to localhost:15439 (PGHOST=localhost, PGPORT=15439) from the machine running the Ockam Redshift Inlet node.

Option 2: Setup Inlet Node Locally with Docker Compose

To set up an Inlet Node locally and interact with it outside of AWS, use Docker Compose.

  • Create a file named docker-compose.yml with the following content:

services:
  ockam:
    image: ghcr.io/build-trust/ockam
    container_name: redshift-inlet
    environment:
      ENROLLMENT_TICKET: ${ENROLLMENT_TICKET:-}
      OCKAM_LOGGING: true
      OCKAM_LOG_LEVEL: info
    command:
      - node
      - create
      - --enrollment-ticket
      - ${ENROLLMENT_TICKET}
      - --foreground
      - --configuration
      - |
        tcp-inlet:
          via: redshift
          allow: amazon-redshift-outlet
          from: 127.0.0.1:15439
    network_mode: host

  psql-client:
    image: postgres
    container_name: psql-client
    command: /bin/bash -c "while true; do sleep 30; done"
    depends_on:
      - ockam
    network_mode: host
  • Run the following command from the same directory as docker-compose.yml and inlet.ticket to create an Ockam Redshift inlet that can connect to the outlet running in AWS, along with a psql client container.

ENROLLMENT_TICKET=$(cat inlet.ticket) docker-compose up -d
  • Check the status of the Ockam inlet node. You will see The node is UP when Ockam is configured successfully and ready to accept connections.

docker exec -it redshift-inlet /ockam node show
  • Connect to the psql-client container and run commands:

# Connect to the container
docker exec -it psql-client /bin/bash

# Update the *_REPLACE placeholder variables
export PGUSER="PGUSER_REPLACE";
export PGPASSWORD="PGPASSWORD_REPLACE";
export PGDATABASE="PGDATABASE_REPLACE";
export PGHOST="localhost";
export PGPORT="15439";

# list tables
psql -c "\dt";

# Create a table
psql -c "CREATE TABLE __test__ (key VARCHAR(255), value VARCHAR(255));";

# Insert some data
psql -c "INSERT INTO __test__ (key, value) VALUES ('0', 'Hello');";

# Query the data
psql -c "SELECT * FROM __test__;";

# Drop table if it exists
psql -c "DROP TABLE IF EXISTS __test__;";

This setup allows you to run an Ockam Redshift Inlet Node locally and communicate securely with a private Amazon Redshift database running in AWS.

  • Cleanup

docker compose down --volumes --remove-orphans

Amazon Timestream

Let's connect a nodejs app in one Amazon VPC with an Amazon Timestream managed InfluxDB database in another Amazon VPC. We’ll create an end-to-end encrypted Ockam Portal to InfluxDB.

To understand the details of how end-to-end trust is established, and how the portal works even though the two networks are isolated with no exposed ports, please read: “How does Ockam work?”

Run

This example requires Bash, Git, AWS CLI. Please set up these tools for your operating system. In particular you need to login to your AWS account.

Amazon Timestream for InfluxDB was added very recently. To run this example, please install the latest version of AWS CLI.

Then run the following commands:

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in Metrics Corp.’s network. The second ticket is meant for the Ockam node that will run in Datastream Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.

  • The run function passes the enrollment tickets as variables to the run scripts provisioning Metrics Corp.'s network and Datastream Corp.'s network.

Metrics Corp

First, the metrics_corp/run.sh script creates a network to host the database:

  • It creates a VPC and tags it.

  • It creates an Internet gateway and attaches it to the VPC.

  • It creates a route table and a route to the Internet via the gateway.

  • It creates a subnet and associates it with the route table.

  • It creates a security group which allows:

    • TCP egress to the Internet.

    • Ingress to InfluxDB from within the subnet.

    • SSH ingress to provision EC2 instances.

Then, the metrics_corp/run.sh script creates an InfluxDB database using Timestream. Next the script creates an EC2 instance. This instance runs an Ockam TCP Outlet.

  • It selects an AMI.

  • It then starts an instance using this AMI and a start script based on run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

    • INFLUXDB_ADDRESS is replaced by the database address that we previously saved.

  • It tags the created instance and waits for it to be available.

When EC2 starts the instance, it executes the run_ockam.sh script:

  • It installs the InfluxDB client and configures it.

  • It generates an InfluxDB auth token to send to Datastream Corp and saves it to a file.

  • It installs the ockam command.

  • It uses the enrollment ticket to create a default identity and make it a project member.

  • It then creates an Ockam node with:

    • A TCP outlet.

    • An access control policy associated to the outlet. The policy authorizes only identities with a credential attesting to the attribute influxdb-inlet="true".

    • A relay that can forward TCP traffic to the TCP outlet.

Datastream Corp

First, the datastream_corp/run.sh script creates a network to host the nodejs application:

  • It creates a VPC and tags it.

  • It creates an Internet gateway and attaches it to the VPC.

  • It creates a route table and a route to the Internet via the gateway.

  • It creates a subnet and associates it with the route table.

  • It creates a security group that allows:

    • TCP egress to the Internet,

    • SSH ingress to provision EC2 instances.

Next, the script creates an EC2 instance. This instance runs an Ockam TCP Inlet.

  • It selects an AMI.

  • It then starts an instance using that AMI and a start script based on run_ockam.sh in which:

    • The ENROLLMENT_TICKET variable is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

When EC2 starts the instance, it executes the run_ockam.sh script:

  • It installs the ockam command.

  • It uses the enrollment ticket to create a default identity and make it a project member.

  • It then creates an Ockam node with:

    • A TCP inlet.

    • An access control policy associated with the inlet. The policy authorizes identities with a credential attesting to the attribute influxdb-outlet="true".

Next datastream_corp/run.sh waits for the instance to be ready and provisions it using SSH:

  • It copies app.js and token.txt into the instance using SCP.

  • It then runs a script, using SSH, which:

    • Installs nodejs.

    • Installs the InfluxDB client library.

    • Starts the nodejs application.

Finally, the nodejs application is started:

  • It connects to the Ockam inlet at localhost:8086.

  • It inserts a few system metrics into a bucket and retrieves them back to show that the connection with the InfluxDB database is working.

Recap

We connected a nodejs app in one virtual private network with an InfluxDB database in another virtual private network over an end-to-end encrypted portal.

Sensitive business data in the InfluxDB database is only accessible to Metrics Corp. and Datastream Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with InfluxDB can be easily revoked.

Datastream Corp. does not get unfettered access to Metrics Corp.’s network. It gets access only to query InfluxDB. Metrics Corp. does not get unfettered access to Datastream Corp.’s network. It gets access only to respond to queries over a TCP connection. Metrics Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither Metrics Corp. nor Datastream Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all AWS resources:

Amazon Bedrock

Let's connect a nodejs app in one virtual private network with an application serving an Amazon Bedrock model in another virtual private network. The example uses the AWS CLI to create these virtual networks.

Each company’s network is private, isolated, and doesn't expose ports. To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, and the AWS CLI. Please set up these tools for your operating system. In particular you need to login to your AWS account with aws sso login.

Then run the following commands:

If everything runs as expected, you'll see the answer to the question: "What is Ockham's Razor?".

Walkthrough

The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates two new enrollment tickets. The tickets are valid for 10 minutes. Each ticket can be redeemed only once and assigns attributes to its redeemer. The first ticket is meant for the Ockam node that will run in AI Corp.’s network. The second ticket is meant for the Ockam node that will run in Health Corp.’s network.

  • In a typical production setup an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project.

  • The run function passes the enrollment tickets as variables to the run scripts provisioning AI Corp.'s network and Health Corp.'s network.

AI Corp

First, the ai_corp/run.sh script creates a network to host the application exposing the Bedrock model API:

  • We create a VPC and tag it.

  • We enable DNS attributes and hostnames for the VPC. This will be used to create the private link below.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and a route to the Internet via the gateway.

  • We create a subnet and associate it with the route table.

  • We create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to install the application accessing the model.

  • We finally create a private link to the Amazon Bedrock service to allow the Bedrock client inside the server application to access the Bedrock model.

We are now ready to create an EC2 instance where the Ockam outlet node will run:

  • We select an AMI.

  • We create a key pair in order to access the EC2 instance via SSH.

  • We start an instance using the selected AMI. Starting the instance executes a start script based on ai_corp/run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to ai_corp/run.sh.

  • We tag the created instance.

  • We create an IAM profile with a role allowing the EC2 instance to access the Bedrock API.

  • We wait for it to be available.

When the instance is started, the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP outlet.

    • A policy associated to the outlet. The policy authorizes identities with a credential containing the attribute ai-inlet="true".

    • With a relay capable of forwarding the TCP traffic to the TCP outlet.

The model used in this example is the "Titan Text G1 - Lite" model. In order to use it, you will need to request access to this model.

Health Corp

First, the health_corp/run.sh script creates a network to host the client.js application which will connect to the Bedrock model:

  • We create a VPC and tag it.

  • We create an Internet gateway and attach it to the VPC.

  • We create a route table and a route to the Internet via the gateway.

  • We create a subnet and associate it with the route table.

  • We finally create a security group so that there is:

    • One TCP egress to the Internet,

    • And one SSH ingress to download and install the nodejs client application.

We are now ready to create an EC2 instance where the Ockam inlet node will run:

  • We select an AMI.

  • We start an instance using the AMI selected above and a start script based on run_ockam.sh where:

    • ENROLLMENT_TICKET is replaced by the enrollment ticket created by the administrator and given as a parameter to run.sh.

The instance is started and the run_ockam.sh script is executed:

  • The ockam executable is installed.

  • The enrollment ticket is used to create a default identity and make it a project member.

  • We then create an Ockam node:

    • With a TCP inlet.

    • Forwarding messages to the ai relay.

    • A policy associated to the inlet. The policy authorizes identities with a credential containing the attribute ai-outlet="true".

We finally wait for the instance to be ready and install the client.js application:

  • The client.js file is copied to the instance (this uses the previously created key.pem file to identify).

  • We can then SSH to the instance and:

    • Install nodejs.

    • Start the client.js application.

Once the client.js application is started:

  • It will connect to the Ockam inlet at port 3000.

  • It sends a query and waits for a response from the model.

  • The response is then printed on the console.

Recap

We connected a nodejs application in one virtual private network with an application serving an Amazon Bedrock model in another virtual private network over an end-to-end encrypted portal.

Sensitive business data coming from the model is only accessible to AI Corp. and Health Corp. All data is encrypted with strong forward secrecy as it moves through the Internet. The communication channel is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access to connect with the model API can be easily revoked.

Health Corp. does not get unfettered access to AI Corp.’s network. It gets access only to run API queries. AI Corp. does not get unfettered access to Health Corp.’s network. It gets access only to respond to queries over a TCP connection. AI Corp. cannot initiate connections.

All access controls are secure-by-default. Only project members, with valid credentials, can connect with each other. NATs are traversed using a relay and outgoing TCP connections. Neither AI Corp. nor Health Corp. exposes any listening endpoints on the Internet. Their networks are completely closed and protected from any attacks from the Internet.

Cleanup

To delete all AWS resources:

Identities and Credentials

Ockam Identities are cryptographically verifiable digital identities. Each Identity has a unique Identifier. An Ockam Credential is a signed attestation by an Issuer about the Attributes of a Subject.

Identities

Ockam Identities are cryptographically verifiable digital identities. Each Identity maintains one or more secret keys and has a unique Ockam Identifier.

When an Ockam Identity is first created, it generates a random primary secret key inside an Ockam Vault. This secret key must be capable of performing a ChangeSignature. We support two types of change signatures - EdDSACurve25519Signature or ECDSASHA256CurveP256Signature. When both options are supported by a vault implementation, EdDSACurve25519Signature is our preferred option.

The public part of the primary secret key is then written into a Change (see data structure below) and this Change includes a signature using the primary secret key. The SHA256 hash of this first Change, truncated to its first 20 bytes, becomes the forever Ockam Identifier of this Identity. Each change includes a created_at timestamp to indicate when the change was created and an expires_at timestamp to indicate when the primary_public_key included in the change should stop being relied on as the primary public key of this identity.

Whenever the identity wishes to rotate to a new primary public key and revoke all previous primary public keys it can create a new Change. This new change includes two signatures - one by the previous primary secret key and another by a newly generated primary secret key. Over time, this creates a signed ChangeHistory, the latest Change in this history indicates the self-attested latest primary public key of this Identity.
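The derivation of the Identifier from the first Change can be sketched in a few lines (illustrative only — a real Change is a CBOR-encoded, signed structure, and the placeholder bytes below stand in for it):

```python
import hashlib

def derive_identifier(first_change_bytes: bytes) -> str:
    # SHA256 of the first Change, truncated to its first 20 bytes,
    # becomes the Identity's forever Identifier.
    return hashlib.sha256(first_change_bytes).digest()[:20].hex()

# Placeholder bytes standing in for a serialized first Change.
identifier = derive_identifier(b"example-serialized-first-change")
print("I" + identifier)  # Ockam displays Identifiers with an "I" prefix
```

The derivation is deterministic: anyone holding the signed ChangeHistory can recompute the Identifier and verify it matches the Identity they are talking to.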

Purpose Key Attestations

An Ockam Identity can use its primary secret key to sign PurposeKeyAttestations (see data structure below). These attestations indicate which public keys (and corresponding secret keys) the identity wishes to use for issuing credentials and authenticating itself within secure channels.

Each attestation includes an expires_at timestamp to indicate when the included public key should no longer be relied on for its indicated purpose. The Identity's ChangeHistory can include a Change which has revoke_all_purpose_keys set to true. All purpose key attestations created before the created_at timestamp of this change are also considered expired.
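The expiry rules above can be sketched as a small predicate (timestamps as plain integers; revoke_all_at stands for the created_at of a Change with revoke_all_purpose_keys set to true — the names here are illustrative, not the library's API):

```python
def purpose_key_is_valid(attested_created_at, attestation_expires_at, now,
                         revoke_all_at=None):
    # Expired by the attestation's own expires_at timestamp?
    if now >= attestation_expires_at:
        return False
    # Expired because a later Change set revoke_all_purpose_keys to true?
    if revoke_all_at is not None and attested_created_at < revoke_all_at:
        return False
    return True

# A key attested at t=100, expiring at t=1000, checked at t=500: still valid.
print(purpose_key_is_valid(100, 1000, 500))                      # True
# The same key after a revoke-all Change created at t=200: no longer valid.
print(purpose_key_is_valid(100, 1000, 500, revoke_all_at=200))   # False
```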

Credentials

An Ockam Credential is a signed attestation by an Issuer about the Attributes of a Subject. The Issuer and Subject are both Ockam Identities. Attributes is a map of name and value pairs.

Any Identity can issue credentials attesting to attributes of another Ockam Identity. This does not imply that these attestations should be considered authoritative about the subject's attributes. Who is an authority on which attributes of which subjects is defined using Ockam Trust Contexts.

Each signed credential includes an expires_at field to indicate a timestamp beyond which the attestation made in the credential should no longer be relied on.

The Attributes type above includes a schema identifier that refers to a schema that defines the meaning of each attribute. For example, Project Membership Authorities within an Ockam Orchestrator Project use a specific schema identifier and define attributes like enroller, which indicates that an Identity that possesses a credential with the enroller attribute set to true can request one-time use enrollment tokens to invite new members to the project.
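The allow policies used throughout the guides above combine these checks: a peer is authorized only if it presents an unexpired credential, issued by a trusted authority, containing the required attribute. A simplified Python sketch (signature verification reduced to an issuer comparison; the identifiers and attribute names are illustrative):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Credential:
    issuer: str                              # Identifier of the issuing authority
    subject: str                             # Identifier of the attested member
    attributes: dict = field(default_factory=dict)
    expires_at: int = 0                      # unix timestamp

def authorize(credential, trusted_issuer, required_attribute, now=None):
    now = int(time.time()) if now is None else now
    if credential.issuer != trusted_issuer:
        return False                         # not issued by an authority we trust
    if now >= credential.expires_at:
        return False                         # attestation no longer relied upon
    # e.g. required_attribute = "amazon-redshift-inlet"
    return credential.attributes.get(required_attribute) == "true"

member = Credential(issuer="Iauthority", subject="Imember",
                    attributes={"amazon-redshift-inlet": "true"},
                    expires_at=2_000_000_000)
print(authorize(member, "Iauthority", "amazon-redshift-inlet", now=1_700_000_000))  # True
```

A real verifier additionally checks the credential's signature against the authority's purpose keys and the subject's identity against the secure channel peer.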

Docker

In this hands-on example we send end-to-end encrypted messages through Apache Kafka.

Ockam encrypts messages from a Producer to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Kafka. Operators of the Kafka cluster only see end-to-end encrypted data. Any compromise of an operator's infrastructure cannot compromise your business data.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates new enrollment tickets, each valid for 10 minutes, and each can be redeemed only once. The first ticket is meant for the Ockam node that will run in Kafka Operator's network. The remaining tickets are meant for the Ockam nodes that will run in Application Team’s network.

  • In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It provisions Ockam nodes in Kafka Operator's network and Application Team's network, passing them their tickets using environment variables.

  • The run function invokes docker-compose for both Kafka Operator and Application Team.

Kafka Operator

  • Kafka Operator’s is used when run.sh invokes docker-compose. It creates an for Kafka Operator.

  • In this network, docker compose starts a . This container becomes available at kafka:9092 in the Kafka Operator's network.

  • Once the Kafka container , docker compose starts an as a companion to the Kafka container described by ockam.yaml, . The node will automatically create an identity, using the ticket , and set up Kafka outlet.

  • The Ockam node then uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay: kafka. The run function to use this relay address.

Application Team

  • Application Team’s is used when run.sh invokes docker-compose. It creates an for Application Team. In this network, docker compose starts a and a .

  • The Kafka consumer container is created using and an . The consumer enrollment ticket from run.sh is via an environment variable.

  • When the Kafka consumer node container starts in the Application Team's network, it runs , creating the Ockam node described by ockam.yaml, . The node will automatically create an identity, enroll with your project, and set up the Kafka inlet.

  • Next, the entrypoint at the end executes the , which launches a Kafka consumer waiting for messages in the demo topic. Once the messages are received, they are printed out.

  • In the producer container, the process is analogous. Once the Ockam node is setup, the launches a Kafka producer that sends messages.

Recap

We sent end-to-end encrypted messages through Apache Kafka.

Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Kafka brokers and other Consumers can only see encrypted messages.

All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.

Cleanup

To delete all containers and images:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/databases/influxdb/amazon_timestream/aws_cli

# Run the example, use Ctrl-C to exit at any point.
./run.sh
./run.sh cleanup
# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/ai/amazon_bedrock

# Run the example, use Ctrl-C to exit at any point.
./run.sh
./run.sh cleanup
#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct Identifier(#[cbor(n(0), with = "minicbor::bytes")] pub [u8; 20]);

/// SHA256 hash of a Change, truncated to its first 20 bytes.
#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct ChangeHash(#[cbor(n(0), with = "minicbor::bytes")] pub [u8; 20]);

#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct ChangeHistory(#[n(0)] pub Vec<Change>);

#[derive(Encode, Decode)]
pub struct Change {
    #[cbor(with = "minicbor::bytes")]
    #[n(0)]
    pub data: Vec<u8>,

    #[n(1)]
    pub signature: ChangeSignature,

    #[n(2)]
    pub previous_signature: Option<ChangeSignature>,
}

#[derive(Encode, Decode)]
pub enum ChangeSignature {
    #[n(0)]
    EdDSACurve25519(#[n(0)] EdDSACurve25519Signature),

    #[n(1)]
    ECDSASHA256CurveP256(#[n(0)] ECDSASHA256CurveP256Signature),
}

#[derive(Encode, Decode)]
pub struct ChangeData {
    #[n(0)]
    pub previous_change: Option<ChangeHash>,

    #[n(1)]
    pub primary_public_key: PrimaryPublicKey,

    #[n(2)]
    pub revoke_all_purpose_keys: bool,

    #[n(3)]
    pub created_at: TimestampInSeconds,

    #[n(4)]
    pub expires_at: TimestampInSeconds,
}

#[derive(Encode, Decode)]
pub enum PrimaryPublicKey {
    #[n(0)]
    EdDSACurve25519(#[n(0)] EdDSACurve25519PublicKey),

    #[n(1)]
    ECDSASHA256CurveP256(#[n(0)] ECDSASHA256CurveP256PublicKey),
}

#[derive(Encode, Decode)]
pub struct VersionedData {
    #[n(0)]
    pub version: u8,

    #[cbor(with = "minicbor::bytes")]
    #[n(1)]
    pub data: Vec<u8>,
}

#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct TimestampInSeconds(#[n(0)] pub u64);
#[derive(Encode, Decode)]
pub struct PurposeKeyAttestation {
    #[cbor(with = "minicbor::bytes")]
    #[n(0)]
    pub data: Vec<u8>,

    #[n(1)]
    pub signature: PurposeKeyAttestationSignature,
}

#[derive(Encode, Decode)]
pub enum PurposeKeyAttestationSignature {
    #[n(0)]
    EdDSACurve25519(#[n(0)] EdDSACurve25519Signature),

    #[n(1)]
    ECDSASHA256CurveP256(#[n(0)] ECDSASHA256CurveP256Signature),
}

#[derive(Encode, Decode)]
pub struct PurposeKeyAttestationData {
    #[n(0)]
    pub subject: Identifier,

    #[n(1)]
    pub subject_latest_change_hash: ChangeHash,

    #[n(2)]
    pub public_key: PurposePublicKey,

    #[n(3)]
    pub created_at: TimestampInSeconds,

    #[n(4)]
    pub expires_at: TimestampInSeconds,
}

#[derive(Encode, Decode)]
pub enum PurposePublicKey {
    #[n(0)]
    SecureChannelStatic(#[n(0)] X25519PublicKey),

    #[n(1)]
    CredentialSigning(#[n(0)] VerifyingPublicKey),
}
#[derive(Encode, Decode)]
pub struct Credential {
    #[cbor(with = "minicbor::bytes")]
    #[n(0)]
    pub data: Vec<u8>,

    #[n(1)]
    pub signature: CredentialSignature,
}

#[derive(Encode, Decode)]
pub enum CredentialSignature {
    #[n(0)]
    EdDSACurve25519(#[n(0)] EdDSACurve25519Signature),

    #[n(1)]
    ECDSASHA256CurveP256(#[n(0)] ECDSASHA256CurveP256Signature),
}

#[derive(Encode, Decode)]
pub struct CredentialData {
    #[n(0)]
    pub subject: Option<Identifier>,

    #[n(1)]
    pub subject_latest_change_hash: Option<ChangeHash>,

    #[n(2)]
    pub subject_attributes: Attributes,

    #[n(3)]
    pub created_at: TimestampInSeconds,

    #[n(4)]
    pub expires_at: TimestampInSeconds,
}

#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct CredentialSchemaIdentifier(#[n(0)] pub u64);

#[derive(Encode, Decode)]
pub struct Attributes {
    #[n(0)]
    pub schema: CredentialSchemaIdentifier,

    #[n(1)]
    pub map: BTreeMap<Vec<u8>, Vec<u8>>,
}
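The `#[cbor(transparent)]` attribute, combined with `with = "minicbor::bytes"`, means a newtype like `ChangeHash` or `Identifier` serializes as a single bare CBOR byte string rather than a wrapped array. As a rough illustration of that wire form (a hand-rolled sketch, not minicbor itself), a 20-byte truncated hash encodes as one header byte followed by the bytes:

```python
def encode_cbor_bytes(data: bytes) -> bytes:
    # CBOR major type 2 (byte string); lengths under 24 fit in the initial byte.
    assert len(data) < 24
    return bytes([0x40 | len(data)]) + data

# A ChangeHash is the first 20 bytes of a SHA256 digest; 20 < 24, so the
# header byte is 0x40 | 20 == 0x54 and the whole encoding is 21 bytes.
change_hash = bytes(20)
encoded = encode_cbor_bytes(change_hash)
print(hex(encoded[0]), len(encoded))  # 0x54 21
```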
Note the green lines that indicate which signature is verified by which public key.
# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/kafka/apache/docker/

# Run the example, use Ctrl-C to exit at any point.
./run.sh
# Create a dedicated and isolated virtual network for kafka_operator.
networks:
  kafka_operator:
    driver: bridge
# Create a dedicated and isolated virtual network for application_team.
networks:
  application_team:
    driver: bridge
./run.sh cleanup

Nodes and Workers

Ockam Nodes and Workers decouple applications from the host environment and enable simple interfaces for stateful and asynchronous message-based protocols.

At Ockam's core is a collection of cryptographic and messaging protocols. These protocols make it possible to create private and secure by design applications that provide end-to-end application layer trusted data.

Ockam is designed to make these powerful protocols easy and safe to use in any application environment – from highly scalable cloud services to tiny battery operated microcontroller based devices.

However, many of these protocols require multiple steps and have complicated internal state that must be managed with care. It can be quite challenging to make them simple to use, secure, and platform independent.

Ockam Nodes, Workers, and Services help hide this complexity and decouple from the host environment - to provide simple interfaces for stateful and asynchronous message-based protocols.

Nodes

An Ockam Node is any program that can interact with other Ockam Nodes using various Ockam protocols like Ockam Routing and Ockam Secure Channels.

You can create a standalone node using Ockam Command or embed one directly into your application using various Ockam programming libraries. Nodes are built to leverage the strengths of their operating environment. Our Rust implementation, for example, makes it easy to adapt to various architectures and processors. It can run efficiently on tiny microcontrollers or scale horizontally in cloud environments.

A typical Ockam Node is implemented as an asynchronous execution environment that can run very lightweight, concurrent, stateful actors called Ockam Workers. Using Ockam Routing, a node can deliver messages from one worker to another local worker. Using Ockam Transports, nodes can also route messages to workers on other remote nodes.

Ockam Command makes it super easy to create and manage local or remote nodes. If you run ockam node create, it will create and start a node in the background and give it a random name:

» ockam node create
✔︎ Node sharp-falconet created successfully

Similarly, you can also create a node with a name of your choice:

» ockam node create n1
✔︎ Node n1 created successfully

You could also start a node in the foreground and optionally tell it to display verbose logs:

» ockam node create n2 --foreground --verbose
2023-05-18T09:54:24.281248Z  INFO ockam_node::node: Initializing ockam node
2023-05-18T09:54:24.298089Z  INFO ockam_command::node::util: node state initialized name=n2
2023-05-18T09:54:24.298906Z  INFO ockam_node::processor_builder: Initializing ockam processor '0#c20e2e4aeb9fbae2b5be1529c83af54d' with access control in:DenyAll out:DenyAll
2023-05-18T09:54:24.299627Z  INFO ockam_api::cli_state::nodes: setup config updated name=n2
2023-05-18T09:54:24.302206Z  INFO ockam_api::nodes::service: NodeManager::create: n2
2023-05-18T09:54:24.302218Z  INFO ockam_api::nodes::service: NodeManager::create: starting default services
2023-05-18T09:54:24.302286Z  INFO ockam_node::worker_builder: Initializing ockam worker '0#_internal.nodemanager' with access control in:AllowAll out:AllowAll
2023-05-18T09:54:24.302719Z  INFO ockam_node::worker_builder: Initializing ockam worker '0#ockam.ping.collector' with access control in:AllowAll out:DenyAll
2023-05-18T09:54:24.302728Z  INFO ockam_node::worker_builder: Initializing ockam worker '0#identity_service' with access control in:AllowAll out:AllowAll
2023-05-18T09:54:24.303179Z  INFO ockam_node::worker_builder: Initializing ockam worker '0#authenticated' with access control in:AllowAll out:AllowAll
2023-05-18T09:54:24.303364Z  INFO ockam_node::worker_builder: Initializing ockam worker '0#uppercase' with access control in:AllowAll out:AllowAll
2023-05-18T09:54:24.303527Z  INFO ockam_node::worker_builder: Initializing ockam worker '0#forwarding_service' with access control in:AllowAll out:DenyAll
2023-05-18T09:54:24.303851Z  INFO ockam_node::worker_builder: Initializing ockam worker '0#api' with access control in:AllowAll out:DenyAll
2023-05-18T09:54:24.304009Z  INFO ockam_node::worker_builder: Initializing ockam worker '0#echo' with access control in:AllowAll out:AllowAll
2023-05-18T09:54:24.304056Z  INFO ockam_node::worker_builder: Initializing ockam worker '0#rpc_proxy_service' with access control in:AllowAll out:AllowAll
...

To stop the foreground node, you can press Ctrl-C. This will stop the node but won't delete its state.

You can see all running nodes with ockam node list

» ockam node list

       ┌───────────────────┐
       │       Nodes       │
       └───────────────────┘

     │ Node n1  UP
     │ Process id 42218

     │ Node sharp-falconet (default) UP
     │ Process id 42083

     │ Node n2  DOWN
     │ No process running
...

You can stop a running node with ockam node stop.

» ockam node stop n1

You can start a stopped node with ockam node start.

» ockam node start n1

You can permanently delete a node by running:

» ockam node delete n1
✔︎ The node named 'n1' has been deleted.

You can also delete all nodes with:

» ockam node delete --all

Workers

Ockam Nodes run very lightweight, concurrent, and stateful actors called Ockam Workers. They are like processes on your operating system, except that they all live inside one node and are very lightweight so a node can have hundreds of thousands of them, depending on the capabilities of the machine hosting the node.

When a worker is started on a node, it is given one or more addresses. The node maintains a mailbox for each address and whenever a message arrives for a specific address it delivers that message to the corresponding worker. In response to a message, a worker can: make local decisions, change internal state, create more workers, or send more messages.
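The mailbox model can be sketched in a few lines of Python. This is only a toy illustration of the idea, not Ockam's implementation; the `Node` and `Worker` classes below are hypothetical. Each address gets one mailbox, and the worker behind it processes messages one at a time:

```python
import queue
import threading

class Worker:
    """Toy actor: one mailbox, messages handled one at a time."""
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply = self.mailbox.get()
            reply.put(self.handler(msg))

class Node:
    """Toy node: delivers a message to the worker registered at an address."""
    def __init__(self):
        self.workers = {}

    def start_worker(self, address, handler):
        self.workers[address] = Worker(handler)

    def send(self, address, msg):
        reply = queue.Queue()
        self.workers[address].mailbox.put((msg, reply))
        return reply.get()

node = Node()
node.start_worker("uppercase", str.upper)
print(node.send("uppercase", "hello"))  # HELLO
```

In response to a message, a real worker can do everything this toy handler cannot: change internal state, create more workers, or send more messages.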

You can see the list of workers in a node by running:

» ockam node create n1
» ockam worker list --at n1
       ┌───────────────────────────┐
       │       Workers on n1       │
       └───────────────────────────┘

     │ Worker 0c240525017e2273fa58fc0d5497b62a

     │ Worker 31482d2647246b47667cf12428626723

     │ Worker 4248c83401c77176967715caca9d82dd

     │ Worker _internal.nodemanager
...

Note the workers in node n1 with addresses echo and uppercase. We'll send them some messages below as we look at services. A node can also deliver messages to workers on a different node using the Ockam Routing Protocol and its Transports. Later in this guide, when we dig into routing, we'll send some messages across nodes.

From ockam command, we don't usually create workers directly but instead start predefined services like Transports and Secure Channels that in turn start one or more workers. Using our libraries you can also develop your own workers.

Workers are stateful and can asynchronously send and receive messages. This makes them a potent abstraction that can take over the responsibility of running multistep, stateful, and asynchronous message-based protocols. This enables ockam command and Ockam Programming Libraries to expose very simple and safe interfaces for powerful protocols.

Services

One or more Ockam Workers can work as a team to offer a Service. Services can also be attached to a trust context and authorization policies to enforce attribute based access control rules.

For example, nodes that are created with Ockam Command come with some predefined services including an example service /service/uppercase that responds with an uppercased version of whatever message you send it:

» ockam message send hello --to /node/n1/service/uppercase
HELLO

Services have addresses represented by /service/{ADDRESS}. You can see a list of all services on a node by running:

» ockam service list --at n1
       ┌────────────────────────────┐
       │       Services on n1       │
       └────────────────────────────┘

     │ Service uppercase
     │ Address /service/uppercase

     │ Service echo
     │ Address /service/echo

     │ Service credentials
     │ Address /service/credentials

Later in this guide, we'll explore other commands that interact with pre-defined services. For example every node created with ockam command starts a secure channel listener at the address /service/api, which allows other nodes to create mutually authenticated secure channels with it.

Spaces

Ockam Spaces are infinitely scalable Ockam Nodes in the cloud. Ockam Orchestrator can create, manage, and scale spaces for you. Like other nodes, Spaces offer services. For example, you can create projects within a space, invite teammates to it, or attach payment subscriptions.

When you run ockam enroll for the first time, we create a space for you to host your projects.

» ockam enroll
...

» ockam space list
       ┌────────────────────┐
       │       Spaces       │
       └────────────────────┘

     │ Space f27d39e1
     │ Id 877c7a4d-b1be-4f36-8da6-be045ab64b60
     │ [email protected]

Projects

Ockam Projects are also infinitely scalable Ockam Nodes in the cloud. Ockam Orchestrator can create, manage, and scale projects for you. Projects are created within a Space and can inherit permissions and subscriptions from their parent space. There can be many projects within one space.

When you run ockam enroll for the first time, we create a default project for you, within your default space.

» ockam enroll
...

» ockam project list
       ┌──────────────────────┐
       │       Projects       │
       └──────────────────────┘

     │ Project default
     │ Space f27d39e1

Like other nodes, Projects offer services. For example, the default project has an echo service just like the local nodes we created above. We can send messages and get replies from it. The echo service replies with the same message we send it.

» ockam message send hello --to /project/default/service/echo
hello

Recap

To clean up and delete all nodes, run: ockam node delete --all

Ockam Nodes are programs that interact with other nodes using one or more Ockam protocols like Routing and Secure Channels. Nodes run very lightweight, concurrent, and stateful actors called Workers. Nodes and Workers hide complexities of environment and state to enable simple interfaces for stateful, asynchronous, message-based protocols.

One or more Workers can work as a team to offer a Service. Services can be attached to trust contexts and authorization policies to enforce attribute based access control rules. Ockam Orchestrator can create and manage infinitely scalable nodes in the cloud called Spaces and Projects that offer managed services that are designed for scale and reliability.

If you're stuck or have questions at any point, please reach out to us.

Next, let's learn about Ockam's Application Layer Routing and how it enables protocols that provide end-to-end guarantees to messages traveling across many network connection hops and protocol boundaries.

Cloud

In this hands-on example we send end-to-end encrypted messages through Instaclustr.

Ockam encrypts messages from a Producer all the way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Instaclustr or the network where it is hosted. The operators of Instaclustr can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, jq, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

This example requires an Instaclustr username and API key to create a Kafka cluster for the example. You can create a trial account at https://www.instaclustr.com/platform/managed-apache-kafka/

Administrator

  • The run.sh script calls the run function, which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.

  • The run function then generates three new enrollment tickets, each valid for 10 minutes and redeemable only once. The first ticket is meant for the Ockam node that will run in Instaclustr Operator’s network. The second and third tickets are meant for the Consumer and Producer, in the Ockam nodes that will run in Application Team’s network.

  • The run function then uses the username and API key to authorize with Instaclustr and set up a free trial Instaclustr Kafka cluster.

    • Once logged in to the Instaclustr console, account API keys can be created by going to the gear icon at the top right > Account Settings > API Keys. Create a Provisioning API key and note it down.

    • As an alternative to entering the username and API key, you can export them as the environment variables INSTACLUSTR_USER_NAME and INSTACLUSTR_API_KEY.

  • The cluster_manager.sh script gets invoked, which:

    • Creates a trial cluster.

    • Creates a user for the Kafka consumer and producer to use.

    • Sets up firewall rules to access the cluster from the machine running the script.

    • Obtains the bootstrap server public address.

  • In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It provisions Ockam nodes in Instaclustr Operator’s and Application Team’s networks, passing them their tickets using environment variables.

  • The run function takes the enrollment tickets, sets them as the value of an environment variable, and invokes docker-compose to create Instaclustr Operator’s and Application Team’s networks.
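If you prefer the environment variable route mentioned above, the shape is as follows. The values here are placeholders you would replace with your own Instaclustr account details:

```shell
# Placeholder credentials -- substitute your real Instaclustr account values.
export INSTACLUSTR_USER_NAME="your-username"
export INSTACLUSTR_API_KEY="your-provisioning-api-key"

# The example script reads these variables instead of prompting for them.
# ./run.sh
```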

Instaclustr Operator

  • Instaclustr Operator’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Instaclustr Operator.

  • In the same network, docker compose starts a Kafka UI container, connecting directly to ${BOOTSTRAPSERVER}:9092. The console will be reachable throughout the example at http://127.0.0.1:8080.

  • Docker compose starts an Ockam node in a container, described by the ockam.yaml file embedded in the script. The node will automatically create an identity, enroll with your project using the ticket passed to the container, and set up a Kafka outlet with the bootstrap server details.

  • The Ockam node then uses this identity and membership credential to authenticate and create a relay in the project, back to the node, at relay: instaclustr. The run function gave the enrollment ticket permission to use this relay address.

Application Teams

  • Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.

  • The Kafka consumer node container is created using a dockerfile and an entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via an environment variable.

  • When the Kafka consumer node container starts in the Application Team network, it runs its entrypoint. The entrypoint creates the Ockam node described by the ockam.yaml file embedded in the script. The node will automatically create an identity, enroll with your project, and set up a Kafka inlet.

  • Next, the entrypoint executes the consumer commands, which launch a Kafka consumer waiting for messages in the demo topic. Once messages are received, they are printed out.

  • In the producer container, the process is analogous: once the Ockam node is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.

  • Both consumer and producer use kafka.config, which has the credentials of the Kafka user created when setting up the cluster.

  • You can view the Kafka UI, available at http://127.0.0.1:8080, to see the encrypted messages.

Recap

We sent end-to-end encrypted messages through Instaclustr.

Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Instaclustr and other Consumers can only see encrypted messages.

All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.

Cleanup

To delete all containers, images, and the Instaclustr cluster, run:

# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/kafka/instaclustr/docker/

# Run the example, use Ctrl-C to exit at any point.
./run.sh
# Create a dedicated and isolated virtual network for instaclustr_operator.
networks:
  instaclustr_operator:
    driver: bridge
# Create a dedicated and isolated virtual network for application_team.
networks:
  application_team:
    driver: bridge
./run.sh cleanup

Secure Channels

Ockam Secure Channels are mutually authenticated and end-to-end encrypted messaging channels that guarantee data authenticity, integrity, and confidentiality.

To trust data-in-motion, applications need end-to-end guarantees of data authenticity, integrity, and confidentiality.

In previous sections, we saw how Ockam Routing and Transports, when combined with the ability to model Bridges and Relays, make it possible to create end-to-end, application layer protocols in any communication topology - across networks, clouds, and protocols over many transport layer hops.

Ockam Secure Channels is an end-to-end protocol built on top of Ockam Routing. This cryptographic protocol guarantees data authenticity, integrity, and confidentiality over any communication topology that can be traversed with Ockam Routing.

Distributed applications that are connected in this way can communicate without the risk of spoofing, tampering, or eavesdropping attacks, irrespective of transport protocols, communication topologies, and network configuration. As application data flows across data centers, through queues and caches, via gateways and brokers - these intermediaries, like the relay in the above picture, can facilitate communication but cannot eavesdrop on or tamper with the data.

In contrast, traditional secure communication implementations are typically tightly coupled with transport protocols in a way that all their security is limited to the length and duration of one underlying transport connection.

For example, most TLS implementations are tightly coupled with the underlying TCP connection. If your application's data and requests travel over two TCP connection hops TCP -> TCP, then all TLS guarantees break at the bridge between the two networks. This bridge, gateway, or load balancer then becomes a point of weakness for application data.

To make matters worse, if you don't set up another mutually authenticated TLS connection on the second hop between the gateway and your destination server, then the entire second hop network – which may have thousands of applications and machines within it – becomes an attack vector to your application and its data. If any of these neighboring applications or machines are compromised, then your application and its data can also be easily compromised.

Traditional secure communication protocols are also unable to protect your application's data if it travels over multiple different transport protocols. They can't guarantee data authenticity or data integrity if your application's communication path is UDP -> TCP or BLE -> TCP.

Ockam Routing and Transports, when combined with the ability to model Bridges and Relays make it possible to bidirectionally exchange messages over a large variety of communication topologies: TCP -> TCP or TCP -> TCP -> TCP or BLE -> UDP -> TCP or BLE -> TCP -> TCP or TCP -> Kafka -> TCP, etc.

By layering Ockam Secure Channels over Ockam Routing, it becomes simple to provide end-to-end, application layer guarantees of data authenticity, integrity, and confidentiality in any communication topology.

Secure Channels

Ockam Secure Channels provides the following end-to-end guarantees:

  1. Authenticity: Each end of the channel knows that messages received on the channel must have been sent by someone who possesses the secret keys of a specific Ockam Identifier.

  2. Integrity: Each end of the channel knows that the messages received on the channel could not have been tampered with en route and are exactly what was sent by the authenticated sender at the other end of the channel.

  3. Confidentiality: Each end of the channel knows that the contents of messages received on the channel could not have been observed en route between the sender and the receiver.

To establish the secure channel, the two ends run an authenticated key establishment protocol and then authenticate each other's Ockam Identifier by signing the transcript hash of the key establishment protocol. The cryptographic key establishment safely derives shared secrets without transporting these secrets on the wire.

Once the shared secrets are established, they are used for authenticated encryption that ensures data integrity and confidentiality of application data.
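The core idea of deriving a shared secret without sending it on the wire can be illustrated with a toy Diffie-Hellman exchange. This is only a sketch of the principle: Ockam's actual handshake is a Noise-based protocol using elliptic curve keys, not the insecure toy parameters below:

```python
import secrets

# Toy public parameters: a Mersenne prime and a small generator. Real
# protocols use elliptic curves (e.g. X25519); these are for illustration.
p, g = 2**127 - 1, 3

a = secrets.randbelow(p - 2) + 1   # initiator's private key, never sent
b = secrets.randbelow(p - 2) + 1   # responder's private key, never sent

A = pow(g, a, p)   # initiator's public value, sent on the wire
B = pow(g, b, p)   # responder's public value, sent on the wire

# Each side combines its own private key with the peer's public value.
# Both arrive at the same shared secret, which never crossed the wire.
shared_a = pow(B, a, p)
shared_b = pow(A, b, p)
print(shared_a == shared_b)  # True
```

The derived shared secret can then seed keys for authenticated encryption of application data.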

Our secure channel protocol is based on a handshake design pattern described in the Noise Protocol Framework. Designs based on this framework are widely deployed and the described patterns have formal security proofs. The specific pattern that we use in Ockam Secure Channels provides sender and receiver authentication and is resistant to key compromise impersonation attacks. It also ensures the integrity and secrecy of application data and provides strong forward secrecy.

Now that you're familiar with the basics let's create some secure channels. If you haven't already, install ockam command, run ockam enroll, and delete any nodes from previous examples.

Hello Secure Channels

In this example, we'll create a secure channel from Node a to node b. Every node, created with Ockam Command, starts a secure channel listener at address /service/api.

» ockam node create a
» ockam node create b
» ockam secure-channel create --from a --to /node/b/service/api
     ✔︎ Secure Channel at /service/d92ef0aea946ec01cdbccc5b9d3f2e16 created successfully
       From /node/a to /node/b/service/api

» ockam message send hello --from a --to /service/d92ef0aea946ec01cdbccc5b9d3f2e16/service/uppercase
HELLO

In the above example, a and b mutually authenticate using the default Ockam Identity that is generated when we create the first node. Both nodes, in this case, are using the same identity.

Once the channel is created, note above how we used the service address of the channel on a to send messages through the channel. This can be shortened to the one-liner:

» ockam secure-channel create --from a --to /node/b/service/api |
    ockam message send hello --from a --to -/service/uppercase
HELLO

The first command writes /service/d92ef0aea946ec01cdbccc5b9d3f2e16, the address of a new secure channel on a, to standard output and the second command replaces the - in the to argument with the value from standard input. Everything else works the same.

Over Bridges

In a previous section, we learned that Bridges enable end-to-end protocols between applications in separate networks in cases where we have a bridge node that is connected to both networks. Since Ockam Secure Channels are built on top of Ockam Routing, we can establish end-to-end secure channels over a route that may include one or more bridges.

Delete any existing nodes and then try this example:

» ockam node create a
» ockam node create bridge1 --tcp-listener-address=127.0.0.1:7000
» ockam service start hop --at bridge1
» ockam node create bridge2 --tcp-listener-address=127.0.0.1:8000
» ockam service start hop --at bridge2
» ockam node create b --tcp-listener-address=127.0.0.1:9000

» ockam tcp-connection create --from a --to 127.0.0.1:7000
» ockam tcp-connection create --from bridge1 --to 127.0.0.1:8000
» ockam tcp-connection create --from bridge2 --to 127.0.0.1:9000

» ockam message send hello --from a --to /worker/ec8d523a2b9261c7fff5d0c66abc45c9/service/hop/worker/f0ea25511025c3a262b5dbd7b357f686/service/hop/worker/dd2306d6b98e7ca57ce660750bc84a53/service/uppercase
HELLO

» ockam secure-channel create --from a --to /worker/ec8d523a2b9261c7fff5d0c66abc45c9/service/hop/worker/f0ea25511025c3a262b5dbd7b357f686/service/hop/worker/dd2306d6b98e7ca57ce660750bc84a53/service/api \
    | ockam message send hello --from a --to -/service/uppercase
HELLO

Through Relays

In a previous section, we also saw how Relays make it possible to establish end-to-end protocols with services operating in a remote private network without requiring a remote service to expose listening ports on an outside hostile network like the Internet.

Since Ockam Secure Channels are built on top of Ockam Routing, we can establish end-to-end secure channels over a route that may include one or more relays.

Delete any existing nodes and then try this example:

» ockam node create relay --tcp-listener-address=127.0.0.1:7000

» ockam node create b
» ockam relay create b --at /node/relay --to b
    ✔︎ Now relaying messages from /node/relay/service/34df708509a28abf3b4c1616e0b37056 → /node/b/service/forward_to_b

» ockam node create a
» ockam tcp-connection create --from a --to 127.0.0.1:7000

» ockam secure-channel create --from a --to /worker/1fb75f2e7234035461b261602a714b72/service/forward_to_b/service/api \
    | ockam message send hello --from a --to -/service/uppercase
HELLO

The Routing Sandwich

Ockam Secure Channels are built on top of Ockam Routing. But they also carry Ockam Routing messages.

Any protocol that is implemented in this way melds with and becomes a seamless part of Ockam Routing. This means that we can run any Ockam Routing based protocol through Secure Channels. This also means that we can create Secure Channels that pass through other Secure Channels.

The on-the-wire overhead of a new secure channel is only 20 bytes per message. This makes passing secure channels through other secure channels a powerful tool in many real-world topologies.
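As a rough illustration of that constant per-layer cost, here is a toy sketch (the framing bytes are invented for illustration; only the 20-byte figure mirrors the text):

```python
# Toy model: each nested channel layer adds a constant number of
# framing/authentication bytes to every message. The header content
# here is fake; only the 20-byte constant mirrors the text above.
OVERHEAD_PER_LAYER = 20

def wrap(payload: bytes, layers: int) -> bytes:
    """Simulate sending a payload through `layers` nested channels."""
    for _ in range(layers):
        payload = b"\x00" * OVERHEAD_PER_LAYER + payload
    return payload

for n in range(4):
    print(n, len(wrap(b"hello", n)))  # 5, 25, 45, 65: linear growth
```

Message size grows linearly with nesting depth, which is why deeply layered topologies remain practical.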

Elastic Encrypted Relays

Ockam Orchestrator can create and manage Elastic Encrypted Relays in the cloud within your Orchestrator project. These managed relays are designed for high availability, high throughput, and low latency.

Let's create an end-to-end secure channel through an elastic relay in your Orchestrator project.

The Project that was created when you ran ockam enroll offers an Elastic Relay Service. Delete any existing nodes and then try this new example:

» ockam enroll

» ockam node create a
» ockam node create b

» ockam relay create b --at /project/default --to /node/b
     ✔︎ Now relaying messages from /project/default/service/70c63af6590869c9bf9aa5cad45d1539 → /node/b/service/forward_to_b

» ockam secure-channel create --from a --to /project/default/service/forward_to_b/service/api \
    | ockam message send hello --from a --to -/service/uppercase
HELLO

Nodes a and b (the two ends) are mutually authenticated and are cryptographically guaranteed data authenticity, integrity, and confidentiality - even though their messages are traveling over the public Internet over two different TCP connections.

Secure Portals

In a previous section, we saw how Portals make existing application protocols work over Ockam Routing without changing any code in the existing applications.

We can combine Secure Channels with Portals to create Secure Portals.

Continuing from the above example on Elastic Encrypted Relays, create a Python-based web server to represent a sample web service. This web service listens on 127.0.0.1:9000.

» python3 -m http.server --bind 127.0.0.1 9000

» ockam tcp-outlet create --at b --from /service/outlet --to 127.0.0.1:9000
» ockam secure-channel create --from a --to /project/default/service/forward_to_b/service/api \
    | ockam tcp-inlet create --at a --from 127.0.0.1:6000 --to -/service/outlet

» curl --head 127.0.0.1:6000
HTTP/1.0 200 OK
...

Then create a TCP Portal Outlet that makes 127.0.0.1:9000 available on worker address /service/outlet on b. We already have a forwarding relay for b on orchestrator /project/default at /service/forward_to_b.

We then create a TCP Portal Inlet on a that will listen for TCP connections to 127.0.0.1:6000. For every new connection, the inlet creates a portal following the --to route all the way to the outlet. As it receives TCP data, it chunks and wraps the data into Ockam Routing messages and sends them along the supplied route. The outlet receives Ockam Routing messages, unwraps them to extract the TCP data, and sends that data along to the target web service on 127.0.0.1:9000. It all just seamlessly works.
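The chunk-and-wrap behavior described above can be sketched as a simplified model (the message shape and helper names here are hypothetical, not Ockam's actual framing):

```python
# Simplified model of a TCP portal: the inlet chunks a TCP byte stream
# into routing messages, and the outlet unwraps them back into bytes.
# The message structure is invented for illustration only.
CHUNK_SIZE = 4096

def inlet_chunk(stream: bytes, route: list) -> list:
    """Wrap TCP data into routing messages along the given route."""
    return [
        {"onward_route": route, "payload": stream[i:i + CHUNK_SIZE]}
        for i in range(0, len(stream), CHUNK_SIZE)
    ]

def outlet_unwrap(messages: list) -> bytes:
    """Extract and reassemble the TCP data at the outlet."""
    return b"".join(m["payload"] for m in messages)

data = b"GET / HTTP/1.0\r\n\r\n" * 1000
msgs = inlet_chunk(data, ["forward_to_b", "outlet"])
assert outlet_unwrap(msgs) == data  # the byte stream survives the round trip
```

Because the payload is opaque to the routing layer, any TCP-based protocol can ride through a portal unchanged.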

The HTTP requests from curl enter the inlet on a, travel to the orchestrator project node, and are relayed back to b via its forwarding relay to reach the outlet and onward to the Python-based web service. Responses take the same return route back to curl.

The TCP Inlet/Outlet works for a large number of TCP-based protocols like HTTP. It is also simple to implement portals for other transport protocols. There is a growing base of Ockam Portal Add-Ons in our GitHub Repository.

Mutual Authorization

Trust and authorization decisions must be anchored in some pre-existing knowledge.

Delete any existing nodes and then try this new example:

» ockam identity create i1
» ockam identity show i1 > i1.identifier
» ockam node create n1 --identity i1

» ockam identity create i2
» ockam identity show i2 > i2.identifier
» ockam node create n2 --identity i2

» ockam secure-channel-listener create l --at n2 \
    --identity i2 --authorized $(cat i1.identifier)

» ockam secure-channel create \
    --from n1 --to /node/n2/service/l \
    --identity i1 --authorized $(cat i2.identifier) \
      | ockam message send hello --from n1 --to -/service/uppercase
HELLO

Recap

To clean up and delete all nodes, run: ockam node delete --all

Ockam Secure Channels is an end-to-end protocol built on top of Ockam Routing. This cryptographic protocol guarantees data authenticity, integrity, and confidentiality over any communication topology that can be traversed with Ockam Routing.

If you're stuck or have questions at any point, please reach out to us.

Next, let's explore how we can scale mutual authentication with Ockam Credentials.

Ockam Node for Amazon RDS Postgres

Create an Ockam Postgres outlet node using Cloudformation template

This guide contains instructions to launch:

  • An Ockam Postgres Outlet Node within an AWS environment

  • An Ockam Postgres Inlet Node:

    • Within an AWS environment, or

    • Using Docker in any environment

The walkthrough demonstrates:

  1. Running an Ockam Postgres Outlet node in your AWS environment that contains a private Amazon RDS for PostgreSQL Database

  2. Setting up Ockam Postgres inlet nodes using either AWS or Docker from any location.

  3. Verifying secure communication between Postgres clients and Amazon RDS for Postgres Database.

Read: “How does Ockam work?” to learn about end-to-end trust establishment.

Prerequisites

  • A private Amazon RDS Postgres Database is created and accessible from the VPC and Subnet where the Ockam Node will be launched.

  • Security Group associated with the Amazon RDS Postgres Database allows inbound traffic on the required port (5432) from the subnet where the Ockam Outlet Node will reside.

  • You have permission to subscribe and launch a CloudFormation stack from AWS Marketplace on the AWS account running the RDS Postgres Database.

Create an Orchestrator Project

  1. Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.

  2. Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.

Completing this step creates a Project in Ockam Orchestrator.

  3. Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.

Setup Ockam Postgres Outlet Node

  • Log in to the AWS account you would like to use

  • Subscribe to "Ockam - Node for Amazon RDS Postgres" in AWS Marketplace

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon RDS Postgres from the list of subscriptions. Select Actions -> Launch CloudFormation stack

  • Select the Region you want to deploy and click Continue to Launch. Under Actions, select Launch Cloudformation

  • Create stack with the following details

    • Stack name: postgres-ockam-outlet or any name you prefer

    • Network Configuration

      • VPC ID: Choose a VPC ID where the EC2 instance will be deployed.

      • Subnet ID: Select a suitable Subnet ID within the chosen VPC that has access to Amazon RDS PostgreSQL Database.

      • EC2 Instance Type: The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust to a smaller instance type depending on your use case, e.g. m6a.large

    • Ockam Node Configuration

      • Enrollment ticket: Copy and paste the content of the outlet.ticket generated above

      • RDS Postgres Database Endpoint: To configure the Ockam Postgres Outlet Node, you'll need to specify the Amazon RDS Postgres endpoint. This configuration allows the Ockam Postgres Outlet Node to connect to the database.

      • JSON Node Configuration: Copy and paste the configuration below. Note that the configuration values match the enrollment tickets created in the previous step. $POSTGRES_ENDPOINT will be replaced at runtime.

  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run configures the Ockam Postgres Outlet node on an EC2 machine.

    • EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

    • A security group with egress access to the internet will be attached to the EC2 machine.

  • Connect to the EC2 machine via AWS Session Manager.

    • To view the log file, run sudo cat /var/log/cloud-init-output.log.

      • A successful run will show Ockam node setup completed successfully in the logs

    • To view the status of the Ockam node, run curl http://localhost:23345/show | jq

  • View the Ockam node status in CloudWatch.

    • Navigate to CloudWatch -> Log groups and select postgres-ockam-outlet-status-logs. Select the log stream for the EC2 instance.

    • The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm postgres-ockam-outlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.

  • An Auto Scaling group ensures at least one EC2 instance is running at all times.

The Ockam Postgres Outlet node setup is complete. You can now create Ockam Postgres Inlet nodes in any network to establish secure communication.

Setup Ockam Inlet Node

You can set up an Ockam Postgres Inlet Node either in AWS or locally using Docker. Here are both options:

Option 1: Setup Inlet Node in AWS

  • Log in to the AWS account you would like to use

  • Subscribe to "Ockam - Node" in AWS Marketplace

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions. Select Actions -> Launch CloudFormation stack

  • Select the Region you want to deploy and click Continue to Launch. Under Actions, select Launch Cloudformation

  • Create the stack with the following details

    • Stack name: postgres-ockam-inlet or any name you prefer

    • Network Configuration

      • Select suitable values for VPC ID and Subnet ID

      • EC2 Instance Type: The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust to a smaller instance type depending on your use case, e.g. m6a.large

    • Ockam Configuration

      • Enrollment ticket: Copy and paste the content of the inlet.ticket generated above

      • JSON Node Configuration: Copy and paste the configuration below.

  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run configures the Ockam inlet node on an EC2 machine.

  • EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

  • Connect to the EC2 machine via AWS Session Manager.

    • To view the log file, run sudo cat /var/log/cloud-init-output.log.

      • A successful run will show Ockam node setup completed successfully in the logs

    • To view the status of the Ockam node, run curl http://localhost:23345/show | jq

  • View the Ockam node status in CloudWatch.

    • Navigate to CloudWatch -> Log groups and select postgres-ockam-inlet-status-logs. Select the log stream for the EC2 instance.

    • The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm postgres-ockam-inlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.

  • An Auto Scaling group ensures at least one EC2 instance is running at all times.

Use any PostgreSQL client and connect to localhost:15432 (PGHOST=localhost, PGPORT=15432) from the machine running the Ockam Postgres Inlet node.

Option 2: Setup Inlet Node Locally with Docker Compose

To set up an Inlet Node locally and interact with it outside of AWS, use Docker Compose.

  • Create a file named docker-compose.yml with the following content:

  • Run the following command from the same location as docker-compose.yml and inlet.ticket to create an Ockam Postgres inlet that can connect to the outlet running in AWS, along with a psql client container

  • Check the status of the Ockam inlet node. You will see The node is UP when Ockam is configured successfully and ready to accept connections

  • Connect to the psql-client container and run commands

This setup allows you to run an Ockam Postgres Inlet Node locally and communicate securely with a private Amazon RDS Postgres database running in AWS.

  • Cleanup

Access Controls and Policies

Attribute names can be used to define policies and policies can be used to define access controls:

  • Policies are expressions involving attribute names, which can be evaluated to true or false given an environment containing attribute values.

  • Access controls were discussed earlier. They restrict the messages which can be received or sent by a worker.

Policies

Policies are boolean expressions constructed using attribute names. For example:

In the expression above:

  • and, =, member? are operators.

  • resource.version, subject.name, resource.admins are identifiers.

  • 1, "John" are values.

Values can have one of the following 5 types:

  • String

  • Int

  • Float

  • Bool

  • Seq: a sequence of values

This table lists all the available operators:

Operator
Number of operands
Description

Examples

Here are a few more examples of policies.

The subject must have a component attribute with a value that is either web or database:

Note that attribute names can contain dots, so you could also write:

You can also declare more complex logical expressions, by nesting and and or operators:

The subject must either be the "Smart Factory" application or be a member of the "Field Engineering" department in San Francisco:

Boolean policies

Since many policies just need to test for the presence of an attribute, we provide simpler ways to write them.

For example we can write:

Simply as (note that logical operators can now be written as infix operators):

String comparisons are still supported, so you could also have a component attribute and write:

More complex expressions require parentheses:

Since identities are frequently used in policies, we provide a shortcut for them. For example, this is a valid boolean policy:

It translates to:

This table summarizes the elements you can use in a simple boolean policy:

Operator
Description

Evaluation

We evaluate a policy by doing the following:

  • Each attribute attribute_name/attribute_value is added to the environment as an identifier subject.attribute_name associated with the value attribute_value (always as a String). In the example policy given above, the identifier subject.name means that we expect an attribute called name to be associated with the identity which sent the message.

  • The top-level expression of the policy is recursively evaluated by evaluating each operator and taking values from the environment when an expression is referencing an identifier.

  • The end result of a policy evaluation is simply a boolean saying if the policy succeeded or not.
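The evaluation steps above can be sketched with a minimal interpreter, assuming a toy tuple-based representation of expressions (this illustrates the semantics only; it is not Ockam's implementation):

```python
# Toy evaluator for s-expression policies such as
# (and (= resource.version 1) (= subject.name "John") (member? "John" resource.admins)).
# Identifiers are marked with the Ident wrapper; plain Python values are literals.

class Ident(str):
    """Marks a string as an identifier to look up in the environment."""

def evaluate(expr, env):
    if isinstance(expr, Ident):
        return env[expr]                  # identifier: read from the environment
    if not isinstance(expr, tuple):
        return expr                       # literal value
    op, *args = expr
    vals = [evaluate(a, env) for a in args]
    if op == "and":
        return all(vals)
    if op == "or":
        return any(vals)
    if op == "not":
        return not vals[0]
    if op == "=":
        return vals[0] == vals[1]
    if op == "!=":
        return vals[0] != vals[1]
    if op == "member?":
        return vals[0] in vals[1]         # the second value must be a Seq
    raise ValueError(f"unknown operator: {op}")

env = {"resource.version": 1, "subject.name": "John",
       "resource.admins": ["John", "Jane"]}
policy = ("and",
          ("=", Ident("resource.version"), 1),
          ("=", Ident("subject.name"), "John"),
          ("member?", "John", Ident("resource.admins")))
assert evaluate(policy, env) is True
```

Changing subject.name in the environment to any other value makes the same policy evaluate to false.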

Access controls

The library offers two types of access controls using policies:

  1. AbacAccessControl.

  2. PolicyAccessControl.

AbacAccessControl

This access control type is used as an IncomingAccessControl (so it restricts incoming messages).

We define an AbacAccessControl with the following:

  1. A Policy which specifies which attributes are required for a given identity.

  2. An IdentityRepository which stores a list of the known authenticated attributes for a given identity.

When a LocalMessage arrives at a worker using such an incoming access control, we do the following:

  • If no identity is associated with this message (as LocalInfo), the message is rejected.

  • Otherwise the attributes for this identity are retrieved from the repository.

  • The attributes are used to populate the policy environment.

  • The policy expression is evaluated. If it returns true the message is accepted.
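The flow above can be modeled roughly as follows (all names and shapes here are illustrative stand-ins for the worker's message, repository, and policy types):

```python
# Rough model of the AbacAccessControl flow: reject messages without an
# identity, otherwise look up the identity's authenticated attributes,
# populate the policy environment, and evaluate the policy.

def is_authorized(message: dict, attribute_repository: dict, policy) -> bool:
    identity = message.get("identity")            # would come from LocalInfo
    if identity is None:
        return False                              # no identity: reject
    attributes = attribute_repository.get(identity, {})
    # Populate the policy environment with subject.<attribute_name> entries.
    env = {f"subject.{name}": value for name, value in attributes.items()}
    return bool(policy(env))

repository = {"I1234": {"component": "web"}}
policy = lambda env: env.get("subject.component") == "web"
assert is_authorized({"identity": "I1234", "payload": b"hi"}, repository, policy)
assert not is_authorized({"payload": b"hi"}, repository, policy)
```

Only attributes previously stored for the sending identity ever reach the policy, so an unauthenticated peer cannot influence the decision.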

PolicyAccessControl

This access control type is used as an IncomingAccessControl (so it restricts incoming messages).

We define a PolicyAccessControl with the following:

  • A PolicyRepository which stores a list of policies.

  • A Resource and an Action. They represent the access which we want to restrict.

  • An IdentityRepository which stores a list of the known authenticated attributes for a given identity.

When a LocalMessage arrives at a worker using this type of incoming access control, we do the following:

  • If no identity is associated with this message (as LocalInfo), the message is rejected.

  • Otherwise the attributes for this identity are retrieved from the repository.

  • The most recent policy for the resource and the action is retrieved from the policy repository.

  • The attributes are used to populate the policy environment.

  • The policy expression is evaluated. If it returns true the message is accepted.

The two major differences between this access control and the previous one are:

  1. The PolicyAccessControl models a Resource/Action pair.

  2. Policies for that resource and action can be modified even if the worker they are attached to is already started.

(and (= resource.version 1)
     (= subject.name "John")
     (member? "John" resource.admins))

Operator   Number of operands   Description
and        >= 2                 Produce the logical conjunction of n expressions
or         >= 2                 Produce the logical disjunction of n expressions
not        1                    Produce the negation of an expression
if         3                    Evaluate the first expression to select either the second expression or the third one
<          2                    Return true if the first value is less than the second one
>          2                    Return true if the second value is less than the first one
=          2                    Return true if the two values are equal
!=         2                    Return true if the two values are different
member?    2                    Return true if the first value is present in the second expression, which must be a sequence Seq of values
exists?    >= 1                 Return true if all the expressions are identifiers with values present in the environment

(or (= subject.component "web")
    (= subject.component "database"))
(or (= subject.component.web "true")
    (= subject.component.database "true"))
(or (= subject.application "Smart Factory") 
    (and (= subject.department "Field Engineering") 
         (= subject.city "San Francisco")))
(or (= subject.web "true")
    (= subject.database "true"))
web or database
component="web" or component="database"
(web or not database) and analytics
I84502ce0d9a0a91bae29026b84e19be69fb4203a6bdd1424c85a43c812772a00
(= subject.identifier "I84502ce0d9a0a91bae29026b84e19be69fb4203a6bdd1424c85a43c812772a00")

Operator              Description
name                  Equivalent to (= subject.name "true")
name="string value"   Equivalent to (= subject.name "string value")
and                   Conjunction of 2 expressions
or                    Disjunction of 2 expressions
not                   Negation of an expression
identifier            Equivalent to (= subject.identifier "identifier")
()                    Parentheses, used to group expressions. Precedence: not binds tighter than and, which binds tighter than or
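Because Python's boolean operators happen to use the same precedence order (not, then and, then or), a boolean policy can be mirrored directly in Python to check one's reading of it (illustrative only):

```python
# Evaluate the boolean policy "(web or not database) and analytics" for a
# given subject. Attribute presence is modeled as a boolean, as in
# boolean policies.
def check(web: bool, database: bool, analytics: bool) -> bool:
    return (web or not database) and analytics

assert check(web=True, database=True, analytics=True) is True
assert check(web=False, database=False, analytics=True) is True   # not database
assert check(web=False, database=True, analytics=True) is False
assert check(web=True, database=False, analytics=False) is False  # needs analytics
```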

curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll
# Enrollment ticket for Ockam Outlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-rds-postgresql-outlet \
  --relay postgresql \
    > "outlet.ticket"

# Enrollment ticket for Ockam Inlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-rds-postgresql-inlet \
    > "inlet.ticket"
{
    "relay": "postgresql",
    "tcp-outlet": {
        "to": "$POSTGRES_ENDPOINT:5432",
        "allow": "amazon-rds-postgresql-inlet"
    }
}
    
{
    "tcp-inlet": {
      "from": "0.0.0.0:15432",
      "via": "postgresql",
      "allow": "amazon-rds-postgresql-outlet"
    }
}
services:
  ockam:
    image: ghcr.io/build-trust/ockam
    container_name: postgres-inlet
    environment:
      ENROLLMENT_TICKET: ${ENROLLMENT_TICKET:-}
      OCKAM_DEVELOPER: ${OCKAM_DEVELOPER:-false}
      OCKAM_LOGGING: true
      OCKAM_LOG_LEVEL: info
    command:
      - node
      - create
      - --foreground
      - --node-config
      - |
        ticket: ${ENROLLMENT_TICKET}
        tcp-inlet:
          from: 127.0.0.1:15432
          via: postgresql
          allow: amazon-rds-postgresql-outlet
    network_mode: host

  psql-client:
    image: postgres
    container_name: psql-client
    command: /bin/bash -c "while true; do sleep 30; done"
    depends_on:
      - ockam
    network_mode: host
ENROLLMENT_TICKET=$(cat inlet.ticket) docker-compose up -d
docker exec -it postgres-inlet /ockam node show
# Connect to the container
docker exec -it psql-client /bin/bash

# Update the *_REPLACE placeholder variables
export PGUSER="$PGUSER_REPLACE";
export PGPASSWORD="$PGPASSWORD_REPLACE";
export PGDATABASE="$PGDATABASE_REPLACE";
export PGHOST="localhost";
export PGPORT="15432";

# list tables
psql -c "\dt";

# Create a table
psql -c "CREATE TABLE __test__ (key VARCHAR(255), value VARCHAR(255));";

# Insert some data
psql -c "INSERT INTO __test__ (key, value) VALUES ('0', 'Hello');";

# Query the data
psql -c "SELECT * FROM __test__;";

# Drop table if it exists
psql -c "DROP TABLE IF EXISTS __test__;";
docker compose down --volumes --remove-orphans

Keys and Vaults

Ockam Vaults store secret cryptographic keys in hardware and cloud key management systems. These keys remain behind a stricter security boundary and can be used without being revealed.

Ockam Identities, Credentials, and Secure Channels rely on cryptographic proofs of possession of specific secret keys. Ockam Vaults safely store these secret keys in cryptographic hardware and cloud key management systems.
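The handle-based idea, where callers use keys without ever seeing them, can be sketched like this (a toy software vault using HMAC as a stand-in for real asymmetric signatures; all names are illustrative):

```python
import hashlib
import hmac
import os

class ToyVault:
    """Keys live inside the vault; callers only ever hold opaque handles."""
    def __init__(self):
        self._secrets = {}

    def generate_key(self) -> bytes:
        handle = os.urandom(8)                  # opaque handle, not the key
        self._secrets[handle] = os.urandom(32)  # the key never leaves the vault
        return handle

    def sign(self, handle: bytes, data: bytes) -> bytes:
        return hmac.new(self._secrets[handle], data, hashlib.sha256).digest()

    def verify(self, handle: bytes, data: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(handle, data), sig)

vault = ToyVault()
h = vault.generate_key()
sig = vault.sign(h, b"message")
assert vault.verify(h, b"message", sig)
assert not vault.verify(h, b"tampered", sig)
```

A real Vault backed by an HSM or cloud KMS applies the same pattern with asymmetric keys: the handle crosses the security boundary, the secret key does not.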

Serialization

// The types below that are annotated with #[derive(Encode, Decode)] are
// serialized using [CBOR](1). The various annotations and their effects on the
// encoding are defined in the [minicbor_derive](3) crate.
//
// #[derive(Encode, Decode)] on structs and enums implies #[cbor(array)]
// and CBOR [array encoding](4). The #[n(..)] annotation specifies the index
// position of the field in the CBOR encoded array.
//
// #[cbor(transparent)] annotation on structs with exactly one field forwards
// the respective encode and decode calls to the inner type, i.e. the resulting
// CBOR representation will be identical to the one of the inner type.
//
// [1]: https://www.rfc-editor.org/rfc/rfc8949.html
// [2]: https://docs.rs/minicbor/latest/minicbor
// [3]: https://docs.rs/minicbor-derive/latest/minicbor_derive/index.html
// [4]: https://docs.rs/minicbor-derive/latest/minicbor_derive/index.html#array-encoding
use minicbor::{Decode, Encode};

Signatures

Vaults can cryptographically sign data. We support two types of Signatures: EdDSA signatures using Curve 25519 and ECDSA signatures using SHA256 + Curve P-256.

Our preferred signature scheme is EdDSA with Curve 25519, also called Ed25519 signatures. ECDSA is supported only because, as of this writing, cloud KMS services don't support Ed25519.

/// A cryptographic signature.
#[derive(Encode, Decode)]
pub enum Signature {
    /// An EdDSA signature using Curve 25519.
    #[n(0)]
    EdDSACurve25519(#[n(0)] EdDSACurve25519Signature),

    /// An ECDSA signature using SHA-256 and Curve P-256.
    #[n(1)]
    ECDSASHA256CurveP256(#[n(0)] ECDSASHA256CurveP256Signature),
}

/// An EdDSA Signature using Curve25519.
///
/// - EdDSA Signature as defined [here][1].
/// - Curve25519 as defined in [here][2].
///
/// [1]: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-5.pdf
/// [2]: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-186.pdf
#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct EdDSACurve25519Signature(#[cbor(n(0), with = "minicbor::bytes")] pub [u8; 64]);

/// An ECDSA Signature using SHA256 and Curve P-256.
///
/// - ECDSA Signature as defined [here][1].
/// - SHA256 as defined [here][2].
/// - Curve P-256 as defined [here][3].
///
/// [1]: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-5.pdf
/// [2]: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf
/// [3]: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-186.pdf
#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct ECDSASHA256CurveP256Signature(#[cbor(n(0), with = "minicbor::bytes")] pub [u8; 64]);

Public Keys

In addition to VerifyingPublicKeys for the above two signature schemes we also support X25519PublicKeys for ECDH in Ockam Secure Channels using X25519.

/// A public key for verifying signatures.
#[derive(Encode, Decode)]
pub enum VerifyingPublicKey {
    /// Curve25519 Public Key for verifying EdDSA signatures.
    #[n(0)]
    EdDSACurve25519(#[n(0)] EdDSACurve25519PublicKey),

    /// Curve P-256 Public Key for verifying ECDSA SHA256 signatures.
    #[n(1)]
    ECDSASHA256CurveP256(#[n(0)] ECDSASHA256CurveP256PublicKey),
}

/// A Curve25519 Public Key that is only used for EdDSA signatures.
///
/// - EdDSA Signature as defined [here][1] and [here][2].
/// - Curve25519 as defined [here][3].
///
/// [1]: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-5.pdf
/// [2]: https://ed25519.cr.yp.to/papers.html
/// [3]: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-186.pdf
#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct EdDSACurve25519PublicKey(#[cbor(n(0), with = "minicbor::bytes")] pub [u8; 32]);

/// A Curve P-256 Public Key that is only used for ECDSA SHA256 signatures.
///
/// This type only supports the uncompressed form which is 65 bytes and
/// has the first byte - 0x04. The uncompressed form is defined [here][1] in
/// section 2.3.3.
///
/// - ECDSA Signature as defined [here][2].
/// - SHA256 as defined [here][3].
/// - Curve P-256 as defined [here][4].
///
/// [1]: https://www.secg.org/SEC1-Ver-1.0.pdf
/// [2]: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-5.pdf
/// [3]: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf
/// [4]: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-186.pdf
#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct ECDSASHA256CurveP256PublicKey(#[cbor(n(0), with = "minicbor::bytes")] pub [u8; 65]);

/// X25519 Public Key is used for ECDH.
///
/// - X25519 as defined [here][1].
/// - Curve25519 as defined [here][2].
///
/// [1]: https://datatracker.ietf.org/doc/html/rfc7748
/// [2]: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-186.pdf
#[derive(Encode, Decode)]
#[cbor(transparent)]
pub struct X25519PublicKey(#[cbor(n(0), with = "minicbor::bytes")] pub [u8; 32]);

Vaults and Secrets

Three Rust traits - VaultForVerifyingSignatures, VaultForSigning, and VaultForSecureChannels - define abstract functions that an Ockam Vault implementation can implement to support Ockam Identities, Credentials, and Secure Channels.

Identities and Credentials require VaultForVerifyingSignatures and VaultForSigning while Secure Channels require VaultForSecureChannels.

VaultForVerifyingSignatures

Implementations of VaultForVerifyingSignatures provide two simple and stateless functions that don't require any secrets, so they can usually be provided in software.

use async_trait::async_trait;

pub struct Sha256Output([u8; 32]);

#[async_trait]
pub trait VaultForVerifyingSignatures: Send + Sync + 'static {
    async fn sha256(&self, data: &[u8]) -> Result<Sha256Output>;

    async fn verify_signature(
        &self,
        verifying_public_key: &VerifyingPublicKey,
        data: &[u8],
        signature: &Signature,
    ) -> Result<bool>;
}

VaultForSigning

Implementations of VaultForSigning enable using a secret signing key to sign Credentials, PurposeKeyAttestations, and Identity Change events. The signing key remains inside the tighter security boundary of a KMS or an HSM.

use ockam_core::Result;

/// A handle to a secret inside a vault.
pub struct HandleToSecret(Vec<u8>);

/// A handle to a signing secret key inside a vault.
pub enum SigningSecretKeyHandle {
    /// Curve25519 key that is only used for EdDSA signatures.
    EdDSACurve25519(HandleToSecret),

    /// Curve P-256 key that is only used for ECDSA SHA256 signatures.
    ECDSASHA256CurveP256(HandleToSecret),
}

/// An enum to represent the supported types of signing keys.
pub enum SigningKeyType {
    // Curve25519 key that is only used for EdDSA signatures.
    EdDSACurve25519,

    /// Curve P-256 key that is only used for ECDSA SHA256 signatures.
    ECDSASHA256CurveP256,
}

#[async_trait]
pub trait VaultForSigning: Send + Sync + 'static {
    async fn sign(
        &self,
        signing_secret_key_handle: &SigningSecretKeyHandle,
        data: &[u8],
    ) -> Result<Signature>;

    async fn generate_signing_secret_key(
        &self,
        signing_key_type: SigningKeyType,
    ) -> Result<SigningSecretKeyHandle>;

    async fn get_verifying_public_key(
        &self,
        signing_secret_key_handle: &SigningSecretKeyHandle,
    ) -> Result<VerifyingPublicKey>;

    async fn get_secret_key_handle(
        &self,
        verifying_public_key: &VerifyingPublicKey,
    ) -> Result<SigningSecretKeyHandle>;

    async fn delete_signing_secret_key(
        &self,
        signing_secret_key_handle: SigningSecretKeyHandle,
    ) -> Result<bool>;
}

VaultForSecureChannels

Implementations of VaultForSecureChannels enable using a secret X25519 key for ECDH within Ockam Secure Channels. They rely on compile-time feature flags to choose between three possible combinations of primitives:

  • OCKAM_XX_25519_AES256_GCM_SHA256 enables Ockam_XX secure channel handshake with AEAD_AES_256_GCM and SHA256. This is our current default.

  • OCKAM_XX_25519_AES128_GCM_SHA256 enables Ockam_XX secure channel handshake with AEAD_AES_128_GCM and SHA256.

  • OCKAM_XX_25519_ChaChaPolyBLAKE2s enables Ockam_XX secure channel handshake with AEAD_CHACHA20_POLY1305 and Blake2s.

use cfg_if::cfg_if;
use ockam_core::compat::{collections::BTreeMap, vec::Vec};

/// A handle to X25519 secret key inside a vault.
///
/// - X25519 as defined [here][1].
/// - Curve25519 as defined [here][2].
///
/// [1]: https://datatracker.ietf.org/doc/html/rfc7748
/// [2]: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-186.pdf
pub struct X25519SecretKeyHandle(pub HandleToSecret);

pub struct SecretBufferHandle {
    pub handle: HandleToSecret,
    pub length: usize,
}

/// The number of hkdf outputs to produce from the hkdf function.
pub enum HKDFNumberOfOutputs {
    Two,
    Three,
}

cfg_if! {
    if #[cfg(feature = "OCKAM_XX_25519_ChaChaPolyBLAKE2s")] {
        pub struct Blake2sOutput([u8; 32]);
        pub struct HashOutput(pub Blake2sOutput);

        pub struct Blake2sHkdfOutput(Vec<SecretBufferHandle>);
        pub struct HkdfOutput(pub Blake2sHkdfOutput);

        pub struct Chacha20Poly1305SecretKeyHandle(pub HandleToSecret);
        pub struct AeadSecretKeyHandle(pub Chacha20Poly1305SecretKeyHandle);

    } else if #[cfg(feature = "OCKAM_XX_25519_AES128_GCM_SHA256")] {
        pub struct HashOutput(pub Sha256Output);

        pub struct Sha256HkdfOutput(Vec<SecretBufferHandle>);
        pub struct HkdfOutput(pub Sha256HkdfOutput);

        pub struct Aes128GcmSecretKeyHandle(pub HandleToSecret);
        pub struct AeadSecretKeyHandle(pub Aes128GcmSecretKeyHandle);

    } else {
        // OCKAM_XX_25519_AES256_GCM_SHA256
        pub struct HashOutput(pub Sha256Output);

        pub struct Sha256HkdfOutput(Vec<SecretBufferHandle>);
        pub struct HkdfOutput(pub Sha256HkdfOutput);

        pub struct Aes256GcmSecretKeyHandle(pub HandleToSecret);
        pub struct AeadSecretKeyHandle(pub Aes256GcmSecretKeyHandle);
    }
}

#[async_trait]
pub trait VaultForSecureChannels: Send + Sync + 'static {

    /// [1]: http://www.noiseprotocol.org/noise.html#dh-functions
    async fn dh(
        &self,
        secret_key_handle: &X25519SecretKeyHandle,
        peer_public_key: &X25519PublicKey,
    ) -> Result<SecretBufferHandle>;

    /// [1]: http://www.noiseprotocol.org/noise.html#hash-functions
    async fn hash(&self, data: &[u8]) -> Result<HashOutput>;

    /// [1]: http://www.noiseprotocol.org/noise.html#hash-functions
    async fn hkdf(
        &self,
        salt: &SecretBufferHandle,
        input_key_material: Option<&SecretBufferHandle>,
        number_of_outputs: HKDFNumberOfOutputs,
    ) -> Result<HkdfOutput>;

    /// AEAD Encrypt
    /// [1]: http://www.noiseprotocol.org/noise.html#cipher-functions
    async fn encrypt(
        &self,
        secret_key_handle: &AeadSecretKeyHandle,
        plain_text: &[u8],
        nonce: &[u8],
        aad: &[u8],
    ) -> Result<Vec<u8>>;

    /// AEAD Decrypt
    /// [1]: http://www.noiseprotocol.org/noise.html#cipher-functions
    async fn decrypt(
        &self,
        secret_key_handle: &AeadSecretKeyHandle,
        cipher_text: &[u8],
        nonce: &[u8],
        aad: &[u8],
    ) -> Result<Vec<u8>>;

    async fn generate_ephemeral_x25519_secret_key(&self) -> Result<X25519SecretKeyHandle>;

    async fn delete_ephemeral_x25519_secret_key(
        &self,
        secret_key_handle: X25519SecretKeyHandle,
    ) -> Result<bool>;

    async fn get_x25519_public_key(
        &self,
        secret_key_handle: &X25519SecretKeyHandle,
    ) -> Result<X25519PublicKey>;

    async fn get_x25519_secret_key_handle(
        &self,
        public_key: &X25519PublicKey,
    ) -> Result<X25519SecretKeyHandle>;

    async fn import_secret_buffer(&self, buffer: Vec<u8>) -> Result<SecretBufferHandle>;

    async fn delete_secret_buffer(&self, secret_buffer_handle: SecretBufferHandle) -> Result<bool>;

    async fn convert_secret_buffer_to_aead_key(
        &self,
        secret_buffer_handle: SecretBufferHandle,
    ) -> Result<AeadSecretKeyHandle>;

    async fn delete_aead_secret_key(&self, secret_key_handle: AeadSecretKeyHandle) -> Result<bool>;
}

Nodes and Workers

Ockam Nodes and Workers decouple applications from the host environment and enable simple interfaces for stateful and asynchronous message-based protocols.

At Ockam’s core are a collection of cryptographic and messaging protocols. These protocols make it possible to create private and secure-by-design applications that provide end-to-end, application-layer trust in data.

Ockam is designed to make these powerful protocols easy and safe to use in any application environment, from highly scalable cloud services to tiny battery operated microcontroller based devices.

However, many of these protocols require multiple steps and have complicated internal state that must be managed with care. It can be quite challenging to make them simple to use, secure, and platform independent.

Ockam Nodes and Workers help hide this complexity and decouple applications from the host environment - providing simple interfaces for stateful and asynchronous message-based protocols.

Nodes

An Ockam Node is any program that can interact with other Ockam Nodes using various Ockam protocols like Ockam Routing and Ockam Secure Channels.

Using the Ockam Rust crates, you can easily turn any application into a lightweight Ockam Node. This flexible approach allows you to build secure-by-design applications that can run efficiently on tiny microcontrollers or scale horizontally in cloud environments.

Rust based Ockam Nodes run very lightweight, concurrent, stateful actors called Ockam Workers. Using Ockam Routing, a node can deliver messages from one worker to another local worker. Using Ockam Transports, nodes can also route messages to workers on other remote nodes.

A node requires an asynchronous runtime to concurrently execute workers. The default Ockam Node implementation in Rust uses tokio, a popular asynchronous runtime in the Rust ecosystem. We also support Ockam Node implementations for various no_std embedded targets.

Create a node

The first thing any Ockam Rust program must do is initialize and start an Ockam node. This setup can be done manually, but the most convenient way is to use the #[ockam::node] attribute that injects the initialization code. It creates the asynchronous environment, initializes worker management, sets up routing, and initializes the node context.

For your new node, create a new file at examples/01-node.rs in your project:

Add the following code to this file:

Here we add the #[ockam::node] attribute to an async main function that receives the node execution context as a parameter and returns ockam::Result which helps make our error reporting better.

As soon as the main function starts, we use node.shutdown() to immediately stop the node that was just started. If we don't add this line, the node will run forever.

To run the node program:

The clear command is used to clear the terminal before running the program. The OCKAM_LOG=none environment variable is used to disable logging. You can remove this to see the logs.

This will download various dependencies, compile and then run our code. When it runs, you'll see colorized output showing that the node starts up and then shuts down immediately 🎉.

Workers

Ockam Nodes run very lightweight, concurrent, and stateful actors called Ockam Workers.

When a worker is started on a node, it is given one or more addresses. The node maintains a mailbox for each address and whenever a message arrives for a specific address it delivers that message to the corresponding registered worker.

Workers can handle messages from other workers running on the same or a different node. In response to a message, a worker can: make local decisions, change its internal state, create more workers, or send more messages to other workers running on the same or a different node.

Above we've created our first node. Now let's create a new worker, send it a message, and receive a reply.

Echoer worker

To create a worker, we create a struct that can optionally have some fields to store the worker's internal state. If the worker is stateless, it can be defined as a field-less unit struct.

This struct:

  • Must implement the ockam::Worker trait.

  • Must have the #[ockam::worker] attribute on the Worker trait implementation

  • Must define two associated types Context and Message

    • The Context type is usually set to ockam::Context which is provided by the node implementation.

    • The Message type must be set to the type of message the worker wishes to handle.

For a new Echoer worker, create a new file at src/echoer.rs in your project. We're creating this inside the src directory so we can easily reuse the Echoer in other examples that we'll write later in this guide:

Add the following code to this file:

Note that we define the Message associated type of the worker as String, which specifies that this worker expects to handle String messages. We then go on to define a handle_message(..) function that will be called whenever a new message arrives for this worker.

In the Echoer's handle_message(..), we print any incoming message, along with the address of the Echoer. We then take the body of the incoming message and echo it back on its return route (more about routes soon).

To make this Echoer type accessible to our main program, export it from src/lib.rs file by adding the following to it:

App worker

When a new node starts and calls an async main function, it turns that function into a worker with the address "app". This makes it easy to send and receive messages from the main function (i.e. the "app" worker).

In the code below, we start a new Echoer worker at address "echoer", send this "echoer" a message "Hello Ockam!" and then wait to receive a String reply back from the "echoer".

Create a new file at:

Add the following code to this file:

To run this new node program:

You'll see console output that shows "Hello Ockam!" received by the "echoer" and then an echo of it received by the "app".

Message Flow

The message flow looked like this:

Next, let’s explore how Ockam’s Application Layer Routing enables us to create protocols that provide end-to-end security and privacy guarantees.

touch examples/01-node.rs
// examples/01-node.rs
// This program creates and then immediately stops a node.

use ockam::{node, Context, Result};

/// Create and then immediately stop a node.
#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node.
    let mut node = node(ctx).await?;

    // Stop the node as soon as it starts.
    node.shutdown().await
}
clear; OCKAM_LOG=none cargo run --example 01-node
touch src/echoer.rs
// src/echoer.rs
use ockam::{Context, Result, Routed, Worker};

pub struct Echoer;

#[ockam::worker]
impl Worker for Echoer {
    type Context = Context;
    type Message = String;

    async fn handle_message(&mut self, ctx: &mut Context, msg: Routed<String>) -> Result<()> {
        println!("Address: {}, Received: {:?}", ctx.primary_address(), msg);

        // Echo the message body back on its return_route.
        ctx.send(msg.return_route().clone(), msg.into_body()?).await
    }
}
mod echoer;

pub use echoer::*;
touch examples/02-worker.rs
// examples/02-worker.rs
// This node creates a worker, sends it a message, and receives a reply.

use hello_ockam::Echoer;
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Start a worker, of type Echoer, at address "echoer"
    node.start_worker("echoer", Echoer)?;

    // Send a message to the worker at address "echoer".
    node.send("echoer", "Hello Ockam!".to_string()).await?;

    // Wait to receive a reply and print it.
    let reply = node.receive::<String>().await?;
    println!("App Received: {}", reply.into_body()?); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}
cargo run --example 02-worker

Credentials and Authorities

Scale mutual trust using lightweight, short-lived, revokable, attribute-based credentials.

Ockam Secure Channels enable you to set up mutually authenticated and end-to-end encrypted communication. Once a channel is established, it has the following guarantees:

  1. Authenticity: Each end of the channel knows that messages received on the channel must have been sent by someone who possesses the secret keys of a specific Ockam Cryptographic Identifier.

  2. Integrity: Each end of the channel knows that the messages received on the channel could not have been tampered with en route and are exactly what was sent by the authenticated sender at the other end of the channel.

  3. Confidentiality: Each end of the channel knows that the contents of messages received on the channel could not have been observed en route between the sender and the receiver.

These guarantees however don't automatically imply trust. They don't tell us if a particular sender is trusted to inform us about a particular topic or if the sender is authorized to get a response to a particular request.

One way to create trust and authorize requests would be to use Access Control Lists (ACLs), where every receiver of messages would have a preconfigured list of identifiers that are trusted to inform about a certain topic or trigger certain requests. This approach works but doesn't scale very well. It becomes very cumbersome to manage mutual trust if you have more than a few nodes communicating with each other.

Another, and significantly more scalable, approach is to use Ockam Credentials combined with Attribute Based Access Control (ABAC). In this setup every participant starts off by trusting a single Credential Issuer to be the authority on the attributes of an Identifier. This authority issues cryptographically signed credentials to attest to these attributes. Participants can then exchange and authenticate each other's credentials to collect authenticated attributes about an identifier. Every participant uses these authenticated attributes to make authorization decisions based on attribute-based access control policies.

Let’s walk through an example of setting up ABAC using cryptographically verifiable credentials.

Setup

To get started, please create the initial hello_ockam project and define an echoer worker. We'll also need the hex crate for this example, so add that to your Cargo.toml using cargo add:

cargo add hex

Credential Issuer

Any Ockam Identity can issue Credentials. As a first step we’ll create a credential issuer that will act as an authority for our example application:

touch examples/06-credential-exchange-issuer.rs

This issuer knows a predefined list of identifiers that are members of an application’s production cluster.

In a later guide, we'll explore how Ockam enables you to define various pluggable Enrollment Protocols to decide who should be issued credentials. For this example we'll assume that this list is known in advance.

// examples/06-credential-exchange-issuer.rs
use ockam::access_control::AllowAll;
use ockam::access_control::IdentityIdAccessControl;
use ockam::compat::collections::BTreeMap;
use ockam::compat::sync::Arc;
use ockam::identity::utils::now;
use ockam::identity::SecureChannelListenerOptions;
use ockam::identity::{Identifier, Vault};
use ockam::tcp::{TcpListenerOptions, TcpTransportExtension};
use ockam::vault::{EdDSACurve25519SecretKey, SigningSecret, SoftwareVaultForSigning};
use ockam::{Context, Node, Result};
use ockam_api::authenticator::credential_issuer::CredentialIssuerWorker;
use ockam_api::authenticator::{AuthorityMembersRepository, AuthorityMembersSqlxDatabase, PreTrustedIdentity};
use ockam_api::DefaultAddress;

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    let identity_vault = SoftwareVaultForSigning::create().await?;
    // Import the signing secret key to the Vault
    let secret = identity_vault
        .import_key(SigningSecret::EdDSACurve25519(EdDSACurve25519SecretKey::new(
            hex::decode("0127359911708ef4de9adaaf27c357501473c4a10a5326a69c1f7f874a0cd82e")
                .unwrap()
                .try_into()
                .unwrap(),
        )))
        .await?;

    // Create a default Vault but use the signing vault with our secret in it
    let mut vault = Vault::create().await?;
    vault.identity_vault = identity_vault;

    let node = Node::builder().await?.with_vault(vault).build(&ctx)?;

    let issuer_identity = hex::decode("81825837830101583285f68200815820afbca9cf5d440147450f9f0d0a038a337b3fe5c17086163f2c54509558b62ef4f41a654cf97d1a7818fc7d8200815840650c4c939b96142546559aed99c52b64aa8a2f7b242b46534f7f8d0c5cc083d2c97210b93e9bca990e9cb9301acc2b634ffb80be314025f9adc870713e6fde0d").unwrap();
    let issuer = node.import_private_identity(None, &issuer_identity, &secret).await?;
    println!("issuer identifier {}", issuer);

    // Tell the credential issuer about a set of public identifiers that are
    // known, in advance, to be members of the production cluster.
    let known_identifiers = vec![
        Identifier::try_from("Ie70dc5545d64724880257acb32b8851e7dd1dd57076838991bc343165df71bfe")?, // Client Identifier
        Identifier::try_from("Ife42b412ecdb7fda4421bd5046e33c1017671ce7a320c3342814f0b99df9ab60")?, // Server Identifier
    ];

    let members = Arc::new(AuthorityMembersSqlxDatabase::create().await?);

    // Tell this credential issuer about the attributes to include in credentials
    // that will be issued to each of the above known_identifiers, after and only
    // if, they authenticate with their corresponding latest private key.
    //
    // Since this issuer knows that the above identifiers are for members of the
    // production cluster, it will issue a credential that attests to the attribute
    // set: [{cluster, production}] for all identifiers in the above list.
    //
    // For a different application this attested attribute set can be different and
    // distinct for each identifier, but for this example we'll keep things simple.
    let credential_issuer = CredentialIssuerWorker::new(
        members.clone(),
        node.identities_attributes(),
        node.credentials(),
        &issuer,
        "test".to_string(),
        None,
        None,
        true,
    );

    let mut pre_trusted_identities = BTreeMap::<Identifier, PreTrustedIdentity>::new();
    let attributes = PreTrustedIdentity::new(
        [(b"cluster".to_vec(), b"production".to_vec())].into(),
        now()?,
        None,
        issuer.clone(),
    );
    for identifier in &known_identifiers {
        pre_trusted_identities.insert(identifier.clone(), attributes.clone());
    }
    members
        .bootstrap_pre_trusted_members(&issuer, &pre_trusted_identities.into())
        .await?;

    let tcp_listener_options = TcpListenerOptions::new();
    let sc_listener_options =
        SecureChannelListenerOptions::new().as_consumer(&tcp_listener_options.spawner_flow_control_id());
    let sc_listener_flow_control_id = sc_listener_options.spawner_flow_control_id();

    // Start a secure channel listener that only allows channels where the identity
    // at the other end of the channel can authenticate with the latest private key
    // corresponding to one of the above known public identifiers.
    node.create_secure_channel_listener(&issuer, DefaultAddress::SECURE_CHANNEL_LISTENER, sc_listener_options)?;

    // Start a credential issuer worker that will only accept incoming requests from
    // authenticated secure channels with our known public identifiers.
    let allow_known = IdentityIdAccessControl::new(known_identifiers);
    node.flow_controls()
        .add_consumer(&DefaultAddress::CREDENTIAL_ISSUER.into(), &sc_listener_flow_control_id);
    node.start_worker_with_access_control(
        DefaultAddress::CREDENTIAL_ISSUER,
        credential_issuer,
        allow_known,
        AllowAll,
    )?;

    // Initialize TCP Transport, create a TCP listener, and wait for connections.
    let tcp = node.create_tcp_transport()?;
    tcp.listen("127.0.0.1:5000", tcp_listener_options).await?;

    // Don't call node.shutdown() here so this node runs forever.
    println!("issuer started");
    Ok(())
}
cargo run --example 06-credential-exchange-issuer

Server

touch examples/06-credential-exchange-server.rs
// examples/06-credential-exchange-server.rs
// This node starts a tcp listener, a secure channel listener, and an echoer worker.
// It then runs forever waiting for messages.
use hello_ockam::Echoer;
use ockam::abac::{IncomingAbac, OutgoingAbac};
use ockam::identity::{SecureChannelListenerOptions, Vault};
use ockam::tcp::{TcpListenerOptions, TcpTransportExtension};
use ockam::vault::{EdDSACurve25519SecretKey, SigningSecret, SoftwareVaultForSigning};
use ockam::{Context, Node, Result};
use ockam_api::enroll::enrollment::Enrollment;
use ockam_api::nodes::NodeManager;
use ockam_api::DefaultAddress;
use ockam_multiaddr::MultiAddr;

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    let identity_vault = SoftwareVaultForSigning::create().await?;
    // Import the signing secret key to the Vault
    let secret = identity_vault
        .import_key(SigningSecret::EdDSACurve25519(EdDSACurve25519SecretKey::new(
            hex::decode("5FB3663DF8405379981462BABED7507E3D53A8D061188105E3ADBD70E0A74B8A")
                .unwrap()
                .try_into()
                .unwrap(),
        )))
        .await?;

    // Create a default Vault but use the signing vault with our secret in it
    let mut vault = Vault::create().await?;
    vault.identity_vault = identity_vault;

    let node = Node::builder().await?.with_vault(vault).build(&ctx)?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create an Identity representing the server
    // Load an identity corresponding to the following public identifier
    // Ife42b412ecdb7fda4421bd5046e33c1017671ce7a320c3342814f0b99df9ab60
    //
    // We're hard coding this specific identity because its public identifier is known
    // to the credential issuer as a member of the production cluster.
    let change_history = hex::decode("81825837830101583285f682008158201d387ce453816d91159740a55e9a62ad3b58be9ecf7ef08760c42c0d885b6c2ef41a654cf9681a7818fc688200815840dc10ba498655dac0ebab81c6e1af45f465408ddd612842f10a6ced53c06d4562117e14d656be85685aa5bfbd5e5ede6f0ecf5eb41c19a5594e7a25b7a42c5c07").unwrap();
    let server = node.import_private_identity(None, &change_history, &secret).await?;

    let issuer_identity = "81825837830101583285f68200815820afbca9cf5d440147450f9f0d0a038a337b3fe5c17086163f2c54509558b62ef4f41a654cf97d1a7818fc7d8200815840650c4c939b96142546559aed99c52b64aa8a2f7b242b46534f7f8d0c5cc083d2c97210b93e9bca990e9cb9301acc2b634ffb80be314025f9adc870713e6fde0d";
    let issuer = node.import_identity_hex(None, issuer_identity).await?;

    // Connect with the credential issuer and authenticate using the latest private
    // key of this program's hardcoded identity.
    //
    // The credential issuer already knows the public identifier of this identity
    // as a member of the production cluster so it returns a signed credential
    // attesting to that knowledge.
    let authority_node = NodeManager::authority_node_client(
        tcp.clone(),
        node.secure_channels().clone(),
        &issuer,
        &MultiAddr::try_from("/dnsaddr/localhost/tcp/5000/secure/api").unwrap(),
        &server,
        None,
    )
    .await?;
    let credential = authority_node.issue_credential(node.context()).await.unwrap();

    // Verify that the received credential has indeed been signed by the issuer.
    // The issuer identity must be provided out-of-band from a trusted source
    // and match the identity used to start the issuer node
    node.credentials()
        .credentials_verification()
        .verify_credential(Some(&server), &[issuer.clone()], &credential)
        .await?;

    // Start an echoer worker that will only accept incoming requests from
    // identities that have authenticated credentials issued by the above credential
    // issuer. These credentials must also attest that the requesting identity is
    // a member of the production cluster.
    let tcp_listener_options = TcpListenerOptions::new();
    let sc_listener_options = SecureChannelListenerOptions::new()
        .with_authority(issuer.clone())
        .with_credential(credential)?
        .as_consumer(&tcp_listener_options.spawner_flow_control_id());

    node.flow_controls().add_consumer(
        &DefaultAddress::ECHO_SERVICE.into(),
        &sc_listener_options.spawner_flow_control_id(),
    );
    let allow_production_incoming = IncomingAbac::create_name_value(
        node.identities_attributes(),
        Some(issuer.clone()),
        "cluster",
        "production",
    );
    let allow_production_outgoing = OutgoingAbac::create_name_value(
        ctx.get_router_context(),
        node.identities_attributes(),
        Some(issuer),
        "cluster",
        "production",
    )?;
    node.start_worker_with_access_control(
        DefaultAddress::ECHO_SERVICE,
        Echoer,
        allow_production_incoming,
        allow_production_outgoing,
    )?;

    // Start a secure channel listener that only allows channels with
    // authenticated identities.
    node.create_secure_channel_listener(&server, DefaultAddress::SECURE_CHANNEL_LISTENER, sc_listener_options)?;

    // Create a TCP listener and wait for incoming connections
    tcp.listen("127.0.0.1:4000", tcp_listener_options).await?;

    // Don't call node.shutdown() here so this node runs forever.
    println!("server started");
    Ok(())
}
cargo run --example 06-credential-exchange-server

Client

touch examples/06-credential-exchange-client.rs
// examples/06-credential-exchange-client.rs
use ockam::identity::{SecureChannelOptions, Vault};
use ockam::tcp::TcpConnectionOptions;
use ockam::vault::{EdDSACurve25519SecretKey, SigningSecret, SoftwareVaultForSigning};
use ockam::{route, Context, Node, Result};
use ockam_api::enroll::enrollment::Enrollment;
use ockam_api::nodes::NodeManager;
use ockam_api::DefaultAddress;
use ockam_multiaddr::MultiAddr;
use ockam_transport_tcp::TcpTransportExtension;

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    let identity_vault = SoftwareVaultForSigning::create().await?;
    // Import the signing secret key to the Vault
    let secret = identity_vault
        .import_key(SigningSecret::EdDSACurve25519(EdDSACurve25519SecretKey::new(
            hex::decode("31FF4E1CD55F17735A633FBAB4B838CF88D1252D164735CB3185A6E315438C2C")
                .unwrap()
                .try_into()
                .unwrap(),
        )))
        .await?;

    // Create a default Vault but use the signing vault with our secret in it
    let mut vault = Vault::create().await?;
    vault.identity_vault = identity_vault;

    let mut node = Node::builder().await?.with_vault(vault).build(&ctx)?;
    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create an Identity representing the client
    // We preload the client vault with a change history and secret key corresponding to the identifier
    // Ie70dc5545d64724880257acb32b8851e7dd1dd57076838991bc343165df71bfe
    // which is an identifier known to the credential issuer, with some preset attributes
    //
    // We're hard coding this specific identity because its public identifier is known
    // to the credential issuer as a member of the production cluster.
    let change_history = hex::decode("81825837830101583285f68200815820530d1c2e9822433b679a66a60b9c2ed47c370cd0ce51cbe1a7ad847b5835a963f41a654cf98e1a7818fc8e820081584085054457d079a67778f235a90fa1b926d676bad4b1063cec3c1b869950beb01d22f930591897f761c2247938ce1d8871119488db35fb362727748407885a1608").unwrap();
    let client = node.import_private_identity(None, &change_history, &secret).await?;
    println!("client identifier {}", client);

    // Connect to the authority node and ask that node to create a
    // credential for the client.
    let issuer_identity = "81825837830101583285f68200815820afbca9cf5d440147450f9f0d0a038a337b3fe5c17086163f2c54509558b62ef4f41a654cf97d1a7818fc7d8200815840650c4c939b96142546559aed99c52b64aa8a2f7b242b46534f7f8d0c5cc083d2c97210b93e9bca990e9cb9301acc2b634ffb80be314025f9adc870713e6fde0d";
    let issuer = node.import_identity_hex(None, issuer_identity).await?;

    // The authority node already knows the public identifier of the client
    // as a member of the production cluster so it returns a signed credential
    // attesting to that knowledge.
    let authority_node = NodeManager::authority_node_client(
        tcp.clone(),
        node.secure_channels().clone(),
        &issuer,
        &MultiAddr::try_from("/dnsaddr/localhost/tcp/5000/secure/api")?,
        &client,
        None,
    )
    .await?;
    let credential = authority_node.issue_credential(node.context()).await.unwrap();

    // Verify that the received credential has indeed been signed by the issuer.
    // The issuer identity must be provided out-of-band from a trusted source
    // and match the identity used to start the issuer node
    node.credentials()
        .credentials_verification()
        .verify_credential(Some(&client), &[issuer.clone()], &credential)
        .await?;

    // Create a secure channel to the node that is running the Echoer service.
    let server_connection = tcp.connect("127.0.0.1:4000", TcpConnectionOptions::new()).await?;
    let channel = node
        .create_secure_channel(
            &client,
            route![server_connection, DefaultAddress::SECURE_CHANNEL_LISTENER],
            SecureChannelOptions::new()
                .with_authority(issuer.clone())
                .with_credential(credential)?,
        )
        .await?;

    // Send a message to the worker at address "echoer".
    // Wait to receive a reply and print it.
    let reply: String = node
        .send_and_receive(
            route![channel, DefaultAddress::ECHO_SERVICE],
            "Hello Ockam!".to_string(),
        )
        .await?;
    println!("Received: {}", reply); // should print "Hello Ockam!"

    node.shutdown().await
}
cargo run --example 06-credential-exchange-client

Secure Channels

Ockam Secure Channels are mutually authenticated and end-to-end encrypted messaging channels that guarantee data authenticity, integrity, and confidentiality.

Ockam Routing and Transports, combined with the ability to model Bridges and Relays, make it possible to run end-to-end, application layer protocols in a variety of communication topologies - across many network connection hops and protocol boundaries.

Ockam Secure Channels is an end-to-end protocol built on top of Ockam Routing. This cryptographic protocol guarantees data authenticity, integrity, and confidentiality over any communication topology that can be traversed with Ockam Routing.

Secure Channel

A secure channel has two participants (ends). One participant starts a Listener, which creates a dedicated Responder whenever a new protocol session is initiated. The other participant, called the Initiator, initiates the protocol with the Listener.

Running this protocol requires a stateful exchange of multiple messages. Having a worker and routing system allows Ockam to hide the complexity of creating and maintaining a secure channel behind two simple functions:

/// Creates a secure channel listener and waits for messages from secure channel Initiator.
pub async fn create_secure_channel_listener(
    &self,
    identifier: &Identifier,
    address: impl Into<Address>,
    options: impl Into<SecureChannelListenerOptions>,
) -> Result<SecureChannelListener> { ... }

/// Initiates the protocol to create a secure channel with a secure channel Listener.
pub async fn create_secure_channel(
    &self,
    identifier: &Identifier,
    route_to_a_secure_channel_listener: impl Into<Route>,
    options: impl Into<SecureChannelOptions>,
) -> Result<SecureChannel> { ... }

Let's see this in action before we dive into the protocol. The following example is similar to the earlier multi hop routing example, but this time the echoer is accessed through an end-to-end secure channel.

Responder node

// examples/05-secure-channel-over-two-transport-hops-responder.rs
// This node starts a tcp listener, a secure channel listener, and an echoer worker.
// It then runs forever waiting for messages.

use hello_ockam::Echoer;
use ockam::identity::SecureChannelListenerOptions;
use ockam::tcp::{TcpListenerOptions, TcpTransportExtension};
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport.
    let tcp = node.create_tcp_transport()?;

    node.start_worker("echoer", Echoer)?;

    let bob = node.create_identity().await?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:4000", TcpListenerOptions::new()).await?;

    // Create a secure channel listener for Bob that will wait for requests to
    // initiate an Authenticated Key Exchange.
    let secure_channel_listener = node.create_secure_channel_listener(
        &bob,
        "bob_listener",
        SecureChannelListenerOptions::new().as_consumer(listener.flow_control_id()),
    )?;

    // Allow access to the Echoer via Secure Channels
    node.flow_controls()
        .add_consumer(&"echoer".into(), secure_channel_listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}

Middle node

// examples/05-secure-channel-over-two-transport-hops-middle.rs
// This node creates a tcp connection to a node at 127.0.0.1:4000
// Starts a relay worker to forward messages to 127.0.0.1:4000
// Starts a tcp listener at 127.0.0.1:3000
// It then runs forever waiting to route messages.

use hello_ockam::Relay;
use ockam::tcp::{TcpConnectionOptions, TcpListenerOptions, TcpTransportExtension};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create a TCP connection to Bob.
    let connection_to_bob = tcp.connect("127.0.0.1:4000", TcpConnectionOptions::new()).await?;

    // Start a Relay to forward messages to Bob using the TCP connection.
    node.start_worker("forward_to_bob", Relay::new(route![connection_to_bob]))?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:3000", TcpListenerOptions::new()).await?;

    node.flow_controls()
        .add_consumer(&"forward_to_bob".into(), listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}

Initiator node

// examples/05-secure-channel-over-two-transport-hops-initiator.rs
// This node creates an end-to-end encrypted secure channel over two tcp transport hops.
// It then routes a message, to a worker on a different node, through this encrypted channel.

use ockam::identity::SecureChannelOptions;
use ockam::tcp::{TcpConnectionOptions, TcpTransportExtension};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Create an Identity to represent Alice.
    let alice = node.create_identity().await?;

    // Create a TCP connection to the middle node.
    let tcp = node.create_tcp_transport()?;
    let connection_to_middle_node = tcp.connect("localhost:3000", TcpConnectionOptions::new()).await?;

    // Connect to a secure channel listener and perform a handshake.
    let r = route![connection_to_middle_node, "forward_to_bob", "bob_listener"];
    let channel = node
        .create_secure_channel(&alice, r, SecureChannelOptions::new())
        .await?;

    // Send a message to the echoer worker via the channel.
    // Wait to receive a reply and print it.
    let reply: String = node
        .send_and_receive(route![channel, "echoer"], "Hello Ockam!".to_string())
        .await?;
    println!("App Received: {}", reply); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}

Run

Run the responder in a separate terminal tab and keep it running:

cargo run --example 05-secure-channel-over-two-transport-hops-responder

Run the middle node in a separate terminal tab and keep it running:

cargo run --example 05-secure-channel-over-two-transport-hops-middle

Run the initiator:

cargo run --example 05-secure-channel-over-two-transport-hops-initiator

Initialization

Using SecureChannelListenerOptions and SecureChannelOptions, each participant is initialized with the following initial state:

  1. An Ockam Identifier that will be used as the Ockam Identity of this secure channel participant. Access to a Vault that contains the primary secret key for this Identifier is not required during the creation of the secure channel. We assume that a PurposeKeyAttestation for a SecureChannelStatic has already been created.

  2. The SecureChannelStatic purpose key and access to its secret inside a Vault. This vault should be an implementation of the VaultForSecureChannels and VaultForVerifyingSignatures traits described earlier.

  3. A Trust Context and Access Controls that are used for authorization.

  4. The following IdentityAndCredentials data structure that contains:

    1. The complete ChangeHistory of the Identity of this participant.

    2. A purpose key attestation, issued by the Identity of this participant, attesting to a SecureChannelStatic purpose key. This must be the same SecureChannelStatic that the participant can access the secret for inside a vault.

    3. Zero or more Credentials and corresponding PurposeKeyAttestations that can be used to verify the signature on the credential and tie a CredentialSigning verification key to the Ockam Identifier of the Credential Issuer.

The Listener runs on the specified Worker address and the Initiator knows a Route to reach the Listener. The Listener starts new Responder workers dedicated to each protocol session that is started by any Initiator.

#[derive(Encode, Decode)]
pub struct IdentityAndCredentials {
    #[n(0)] pub change_history: ChangeHistory,
    #[n(1)] pub purpose_key_attestation: PurposeKeyAttestation,
    #[n(2)] pub credentials: Vec<CredentialAndPurposeKeyAttestation>,
}

#[derive(Encode, Decode)]
pub struct CredentialAndPurposeKeyAttestation {
    #[n(0)] pub credential: Credential,
    #[n(1)] pub purpose_key_attestation: PurposeKeyAttestation,
}

Authenticated Key Establishment

The Initiator uses the above described initial state to begin a handshake with the Listener. The Listener initializes and starts a Responder in response to the first message from an initiator.

This handshake is based on the XX pattern described in the Noise Protocol Framework. The security properties of the messages in the XX pattern and their payloads have been studied and described at the following locations - 1, 2, 3, 4.

Each participant maintains the following variables:

  • s, e: The local participant's static and ephemeral key pairs.

  • rs, re: The remote participant's static and ephemeral public keys (which may be empty).

  • h: A handshake transcript hash that hashes all the data that's been sent and received.

  • ck: A chaining key that hashes all previous DH outputs. Once the handshake completes, the chaining key will be used to derive the encryption keys for transport messages.

  • k, n: An encryption key k (which may be empty) and a counter-based nonce n. Whenever a new DH output causes a new ck to be calculated, a new k is also calculated. The key k and nonce n are used to encrypt static public keys and handshake payloads. Encryption with k uses some AEAD cipher mode and uses the current h value as associated data which is covered by the AEAD authentication. Encryption of static public keys and payloads provides some confidentiality and key confirmation during the handshake phase.
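As a dependency-free sketch of how h and ck evolve, the toy below mixes data into both variables the way the handshake does. This is illustrative only: the real construction uses SHA-256 or BLAKE2s with HKDF, while this sketch substitutes std's DefaultHasher.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy mixing function: a real implementation uses SHA-256/BLAKE2s and HKDF;
// std's DefaultHasher is used here only to keep the sketch dependency-free.
fn mix(state: u64, data: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    state.hash(&mut hasher);
    data.hash(&mut hasher);
    hasher.finish()
}

// h: transcript hash over everything sent and received.
// ck: chaining key, updated with every DH output.
struct HandshakeState {
    h: u64,
    ck: u64,
}

impl HandshakeState {
    fn new(protocol_name: &[u8]) -> Self {
        let h = mix(0, protocol_name);
        HandshakeState { h, ck: h }
    }

    // Every handshake byte sent or received is mixed into the transcript hash.
    fn mix_hash(&mut self, data: &[u8]) {
        self.h = mix(self.h, data);
    }

    // Every new DH output updates the chaining key, from which k is derived.
    fn mix_key(&mut self, dh_output: &[u8]) {
        self.ck = mix(self.ck, dh_output);
    }
}
```

Both participants feed the same bytes through the same sequence of operations, so they arrive at the same h and ck; any divergence in what was sent or received produces different values.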

As described in the section on VaultForSecureChannels, we rely on compile-time feature flags to choose among three possible combinations of primitives:

  • OCKAM_XX_25519_AES256_GCM_SHA256 enables Ockam_XX secure channel handshake with AEAD_AES_256_GCM and SHA256. This is our current default.

  • OCKAM_XX_25519_AES128_GCM_SHA256 enables Ockam_XX secure channel handshake with AEAD_AES_128_GCM and SHA256.

  • OCKAM_XX_25519_ChaChaPolyBLAKE2s enables Ockam_XX secure channel handshake with AEAD_CHACHA20_POLY1305 and Blake2s.

This is purely a compile-time choice, made so we can study the performance of the various options in different runtime environments. We intentionally have no negotiation of primitives in the handshake. All participants in a live system are deployed with the same compile-time choice of secure channel primitives.

The s variable is initialized with the SecureChannelStatic of this participant, and the functions described in VaultForSecureChannels and VaultForVerifyingSignatures are used to run the handshake as follows:

At any point, if there is an error in decrypting incoming data, the participant simply exits the protocol without signaling any failure to the other participant.

Mutual Authentication

After the second message in the handshake is received by the Initiator, the Initiator is convinced that the Responder possesses the secret keys of rs, the remote SecureChannelStatic. The payload of the second message contains serialized IdentityAndCredentials data of the Responder. The Initiator deserializes and verifies this data structure:

  • It verifies the chain of signatures on the change history. It checks that the expires_at timestamp on the latest change is greater than now.

  • It checks that the public_key in the PurposeKeyAttestation is the same as the rs that has been authenticated. It checks that the PurposeKeyAttestation subject is the Identifier whose change history was presented. It verifies that the primary public key in the latest change has correctly signed the PurposeKeyAttestation for the SecureChannelStatic. It checks that the expires_at timestamp on the PurposeKeyAttestation is greater than now.

  • For each included credential, it verifies:

    • That the subject of the credential is the Identifier whose change history was presented.

    • That the expires_at timestamp of the Credential is greater than now.

    • That the credential is correctly signed by the purpose key in the PurposeKeyAttestation included with the Credential as part of the corresponding CredentialAndPurposeKeyAttestation.

    • That the expires_at timestamp of the PurposeKeyAttestation is greater than now.
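The subject and expiry checks above can be sketched as plain predicates. The struct shapes below are hypothetical simplifications: real attestations and credentials carry signatures and key material that are verified cryptographically, which is omitted here.

```rust
// Hypothetical, simplified shapes for illustration only. Real attestations
// and credentials carry signatures that are verified cryptographically.
struct PurposeKeyAttestation {
    subject: String,
    public_key: Vec<u8>,
    expires_at: u64,
}

struct Credential {
    subject: String,
    expires_at: u64,
}

fn verify_purpose_key(
    attestation: &PurposeKeyAttestation,
    authenticated_rs: &[u8],
    presented_identifier: &str,
    now: u64,
) -> bool {
    // The attested key must equal the rs authenticated during the handshake,
    // the subject must be the Identifier whose change history was presented,
    // and the attestation must not be expired.
    attestation.public_key == authenticated_rs
        && attestation.subject == presented_identifier
        && attestation.expires_at > now
}

fn verify_credential(credential: &Credential, presented_identifier: &str, now: u64) -> bool {
    // The credential subject must match the presented Identifier,
    // and the credential must not be expired.
    credential.subject == presented_identifier && credential.expires_at > now
}
```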

After the third message in the handshake is received by the Responder, the Responder is convinced that the Initiator possesses the secret keys of rs, the remote SecureChannelStatic. The payload of the third message contains serialized IdentityAndCredentials data of the Initiator. The Responder, similar to the Initiator, deserializes and verifies this data structure.

At this point both sides have mutually authenticated each other's rs, Ockam Identifier, and any Credentials issued about this Identifier by one or more Issuers.

Authorization

Each participant in a Secure Channel is initialized with a Trust Context and Access Controls.

The simplest form of mutual authorization is achieved by defining an Access Control that only allows the SecureChannel handshake to complete if the remote participant authenticates with a specific Ockam Identifier. Both participants have pre-existing knowledge of each other's Ockam Identifier.

A more scalable form of mutual authorization is achieved by specifying a Trust Context where each participant must present a specific type of credential issued by a specific Credential Issuer. Both participants have pre-existing knowledge of the Ockam Identifier of this Credential Issuer (the Authority).
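As a minimal sketch, the two authorization styles differ only in what they compare. Identifiers and issuers are simplified to plain strings here; real access controls operate on Ockam Identifier values and verified credentials.

```rust
// Simple form: the remote participant must be one specific, known Identifier.
fn authorize_by_identifier(remote_identifier: &str, expected_identifier: &str) -> bool {
    remote_identifier == expected_identifier
}

// Scalable form: the remote participant must present a credential issued by a
// trusted Authority, regardless of which Identifier it is.
fn authorize_by_issuer(credential_issuers: &[&str], trusted_authority: &str) -> bool {
    credential_issuers.contains(&trusted_authority)
}
```

The second form scales because new participants only need a credential from the Authority; no participant's access list has to change.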

Rekeying

After performing the XX handshake, peers have agreed on a pair of symmetric encryption keys they will use to encrypt data on the channel, one for each direction.

Rekeying is the process of periodically updating the symmetric key in use (refer to the Noise specification for a complete description and rationale).

With each direction of the secure channel, we associate a nonce variable. It holds a 64-bit unsigned integer. That integer is prepended to each ciphertext, and the nonce variable is incremented by 1 each time a message is sent.

This nonce allows us to count the number of sent messages and define a series of contiguous buckets of messages where each bucket is of size N. N is a constant value known by both the initiator and the responder. We can then associate an encryption key to each bucket, and decide to create a new symmetric key once we need to send a message corresponding to the next bucket.

This approach implies that we don't need to communicate a "Rekey" operation between the secure channel parties. They both know that they need to perform rekeying every N messages.

In the previous figure:

  • Messages 0 to N-1 are encrypted with k0 (the initial key agreed during the handshake).

  • Messages N to 2N-1 with k1, etc.

  • Each kn is derived from the previous kn-1.
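The bucket arithmetic can be sketched as follows. The interval size of 32 matches the value mentioned later in this section; the rekey function is a toy stand-in for the real key derivation.

```rust
// Bucket arithmetic for rekeying: bucket i covers nonces [i*N, (i+1)*N - 1]
// and is encrypted with key k_i, where each k_i is derived from k_{i-1}.
const N: u64 = 32; // rekeying interval

fn bucket(nonce: u64) -> u64 {
    nonce / N
}

// Toy derivation of k_i from k_{i-1} (the real one is a KDF over the AEAD key).
fn rekey(k: u64) -> u64 {
    k.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407)
}

// Derive the key for a given nonce by rekeying once per completed bucket.
fn key_for_nonce(k0: u64, nonce: u64) -> u64 {
    (0..bucket(nonce)).fold(k0, |k, _| rekey(k))
}
```

Because both sides compute bucket(nonce) identically, no explicit "rekey" message ever needs to cross the channel.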

In the simplest scenario, the encryptor keeps track of the last nonce it generated and increments it by one each time it generates a new message, while the decryptor keeps track of the nonce it expects to receive next and increments it every time it receives a valid message:

However, this simple approach doesn't work for Ockam Secure Channels, since they offer no message delivery guarantees. For example, when using a transport protocol like UDP:

  1. Packets can be completely lost.

  2. Packets can be delayed/reordered.

  3. Packets can be repeated.

This introduces a complication to the rekeying operation since the encryptor and the decryptor must agree on the nonce to use for every message on the channel.

In order to allow for out-of-order delivery each secure channel message includes the nonce that was used to encrypt it. The encryptor side keeps incrementing the nonce by 1 each time it generates a new message and prepends this nonce to the message.

Then the decryptor extracts this nonce from the message and uses it as part of the decryption operation.

With the nonce being part of the transmitted message, the synchronization problem is solved. Even if messages are lost or arrive out-of-order, the decryptor can still process them.
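A minimal sketch of this framing, assuming an 8-byte big-endian nonce prefix (the byte order is an assumption of this sketch, not a statement about the wire format):

```rust
// Frame a message as: 8-byte nonce prefix || ciphertext.
fn encode(nonce: u64, ciphertext: &[u8]) -> Vec<u8> {
    let mut message = nonce.to_be_bytes().to_vec();
    message.extend_from_slice(ciphertext);
    message
}

// Split a received message back into its nonce and ciphertext.
fn decode(message: &[u8]) -> Option<(u64, &[u8])> {
    if message.len() < 8 {
        return None; // too short to carry a nonce prefix
    }
    let mut prefix = [0u8; 8];
    prefix.copy_from_slice(&message[..8]);
    Some((u64::from_be_bytes(prefix), &message[8..]))
}
```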

But other important difficulties arise:

  • Since the nonce is part of the message and transmitted in plaintext, how can the decryptor protect itself against duplicate packets and replay attacks? Even if the decryptor keeps track of every nonce ever received (and accepted) during the channel's lifetime, this is a problem for long-lived channels, since it would require a prohibitive amount of memory.

  • Even keeping track of all the nonces would be problematic, since it would mean being able to decrypt old messages with old keys. This defeats Forward Secrecy, the protection against decryption of previous messages that the rekeying process is precisely meant to provide.

  • Moreover, since each K is derived from the previous one, suppose an attacker sends a forged message with a nonce far in the future (beyond the one the decryptor is currently expecting). This would force the decryptor to perform a time-consuming series of rekey() operations to reach the K needed to attempt to decrypt the message. This makes the decryptor an easy target for denial-of-service attacks.

These problems are solved by introducing a sliding window of valid nonces that the decryptor will accept.

  1. The decryptor keeps track of the largest accepted nonce received so far on the channel.

  2. It defines an interval around it for nonces that it will accept.

  3. Messages with nonces outside of this window are discarded.

In the following example:

  • The decryptor uses a valid window of size 10.

  • Given that the largest nonce it has accepted so far is 13, the decryptor can accept packets with nonces between 8 and 18.

  • Nonces outside of that interval will be discarded without any further processing.

When the decryptor receives a message with nonce = 14 (an allowed value), it tries to decrypt the message. If the decryption succeeds, it accepts the nonce and advances the window:

Note that the set of already-seen nonces is bounded in size. This size is (at most) half the valid window size.

Since the valid window is always centered on the highest received nonce, the nonces we track will always fall between the lower part of the window and that nonce. If we receive a nonce greater than the nonce at the window center, the whole window will have a new center and will move further along.

On the flip side, if the missing message with nonce 8 arrives at this point, it will be rejected, even if it is a valid message that was emitted by the sender but delayed in the network. That message is effectively lost; it is too out-of-order to be handled.

Now suppose the next message received has nonce 12. It will be accepted, but the window won't move forward as it is less than the current maximum nonce accepted:

Here's another caveat. What happens if, say, messages 15 to 20 were lost? Then the channel is effectively stuck: even if the next messages (21, 22, ...) arrive, the decryptor will reject them all, because they too fall outside the valid window. At this point, the secure channel will need to be re-established.
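The sliding-window behavior described above can be sketched as follows. The field names and the exact acceptance rule are illustrative assumptions, not the actual implementation.

```rust
use std::collections::HashSet;

// Sketch of the decryptor's sliding window of valid nonces.
struct NonceWindow {
    size: u64,          // total window size (e.g. 10 accepts max-5..=max+5)
    max: u64,           // largest nonce accepted so far
    seen: HashSet<u64>, // accepted nonces still inside the window
}

impl NonceWindow {
    fn new(size: u64) -> Self {
        NonceWindow { size, max: 0, seen: HashSet::new() }
    }

    // A nonce is acceptable if it falls inside the window and is not a replay.
    fn is_valid(&self, nonce: u64) -> bool {
        let half = self.size / 2;
        nonce >= self.max.saturating_sub(half)
            && nonce <= self.max + half
            && !self.seen.contains(&nonce)
    }

    // Called only after the message decrypted successfully.
    fn accept(&mut self, nonce: u64) {
        if nonce > self.max {
            self.max = nonce;
            // Forget tracked nonces that fell below the new window.
            let low = self.max.saturating_sub(self.size / 2);
            self.seen.retain(|&n| n >= low);
        }
        self.seen.insert(nonce);
    }
}
```

With a window of size 10 and a largest accepted nonce of 13, this sketch accepts nonces 8 to 18, rejects replays of already-seen nonces, and rejects nonce 8 once the window has advanced past it.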

Implementation

The encryptor and decryptor must both implement the following in the same manner:

  1. The rekeying interval, which defines the key buckets. The current rekeying interval size is 32.

  2. The key derivation algorithm.

However, the concept of a valid window is entirely up to the decryptor to implement. It only determines how tolerant the communication will be to out-of-order packets. The encryptor side is neither aware of nor affected by this choice.

In our Elixir implementation of secure channels, the valid window is tied to the choice of how often to rekey. If the current k in use is kn (the k that corresponds to the maximum nonce accepted so far), the valid window is defined as the nonces falling into the kn-1, kn, or kn+1 buckets.

Our Rust version is similar but defines a window of 32 positions around the expected nonce.


Routing and Transports

Ockam Routing and Transports enable other Ockam protocols to provide end-to-end guarantees like trust, security, privacy, reliable delivery, and ordering at the application layer.

Data, within modern applications, routinely flows over complex, multi-hop, multi-protocol routes before reaching its end destination. It’s common for application layer requests and data to move across network boundaries, beyond data centers, via shared or public networks, through queues and caches, from gateways and brokers to reach remote services and other distributed parts of an application.

Our goal is to enable end-to-end application layer guarantees in any communication topology. For example, Ockam Secure Channels can provide end-to-end guarantees of data authenticity, integrity, and confidentiality in any of the above communication topologies.

In contrast, traditional secure communication protocol implementations are typically tightly coupled with transport protocols in a way that all their security is limited to the length and duration of the underlying transport connections.

For example, most implementations are coupled to the underlying TCP connection. If your application’s data and requests travel over two TCP connection hops TCP -> TCP then all TLS guarantees break at the bridge between the two networks. This bridge, gateway, or load balancer then becomes a point of weakness for application data. To make matters worse, if you don't set up another mutually authenticated TLS connection on the second hop between the gateway and your destination server, then the entire second hop network – all applications and machines within it – becomes an attack vector against your application and its data.

Traditional secure communication protocols are also unable to protect your application’s data if it travels over multiple different transport protocols. They can’t guarantee data authenticity or data integrity if your application’s communication path is UDP -> TCP or BLE -> TCP.

Ockam Routing is a simple and lightweight message-based protocol that makes it possible to bidirectionally exchange messages over a large variety of communication topologies: TCP -> TCP or TCP -> TCP -> TCP or BLE -> UDP -> TCP or BLE -> TCP -> TCP or TCP -> Kafka -> TCP and more.

By layering Ockam Secure Channels and other protocols over Ockam Routing, we can provide end-to-end guarantees over arbitrary transport topologies.

Routing

So far, we've created an "echoer" worker in our node, sent it a message, and received a reply. This worker was just one hop away from our "app" worker.

To achieve this, messages carry with them two metadata fields: onward_route and return_route, where a route is a list of addresses.

To get a sense of how that works, let's route a message over two hops.

Protocol

Sender:

  • Needs to know the route to a destination, makes that route the onward_route of a new message

  • Makes its own address the return_route of the new message

Hop:

  • Removes its own address from beginning of onward_route

  • Adds its own address to beginning of return_route

Replier:

  • Makes the return_route of the incoming message the onward_route of the outgoing message

  • Makes its own address the return_route of the new message
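The protocol steps above can be modeled with plain vectors of address strings. This is a toy model, not the real Route type.

```rust
// Toy model of a routed message: routes are vectors of address strings.
struct Message {
    onward_route: Vec<String>,
    return_route: Vec<String>,
    body: String,
}

// A hop removes itself from the front of onward_route
// and adds itself to the front of return_route.
fn hop(msg: &mut Message, own_address: &str) {
    assert_eq!(msg.onward_route.remove(0), own_address);
    msg.return_route.insert(0, own_address.to_string());
}

// A replier turns the incoming return_route into the outgoing onward_route
// and makes its own address the new return_route.
fn reply(incoming: &Message, own_address: &str, body: &str) -> Message {
    Message {
        onward_route: incoming.return_route.clone(),
        return_route: vec![own_address.to_string()],
        body: body.to_string(),
    }
}
```

Tracing a message sent along route ["h1", "echoer"] shows the return_route growing at each hop, so the reply can retrace the path without the replier knowing the topology.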

Hop worker

For demonstration, we'll create a simple worker, called Hop, that takes every incoming message and forwards it to the next address in the onward_route of that message.

Just before forwarding the message, Hop's handle_message function will:

  1. Print the message

  2. Remove its own address (the first address) from the beginning of the onward_route

  3. Insert its own address as the first address in the return_route

In the code below, steps 2 and 3 are performed together by the step_forward() call.

Create a new file at:

touch src/hop.rs

Add the following code to this file:

// src/hop.rs
use ockam::{Any, Context, Result, Routed, Worker};

pub struct Hop;

#[ockam::worker]
impl Worker for Hop {
    type Context = Context;
    type Message = Any;

    /// This handle function takes any incoming message and forwards
    /// it to the next hop in its onward route
    async fn handle_message(&mut self, ctx: &mut Context, msg: Routed<Any>) -> Result<()> {
        println!("Address: {}, Received: {:?}", ctx.primary_address(), msg);

        // Send the message to the next worker on its onward_route
        ctx.forward(msg.into_local_message().step_forward(ctx.primary_address().clone())?)
            .await
    }
}

To make this Hop type accessible to our main program, export it from src/lib.rs by adding the following to it:

mod hop;
pub use hop::*;

Echoer worker

We'll also use the Echoer worker that we created in the previous example. So make sure that it stays exported from src/lib.rs.

App worker

Next, let's create our main "app" worker.

In the code below we start an Echoer worker at address "echoer" and a Hop worker at address "h1". Then, we send a message along the h1 => echoer route by passing route!["h1", "echoer"] to send(..).

Create a new file at:

touch examples/03-routing.rs

Add the following code to this file:

// examples/03-routing.rs
// This node routes a message.

use hello_ockam::{Echoer, Hop};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Start a worker, of type Echoer, at address "echoer"
    node.start_worker("echoer", Echoer)?;

    // Start a worker, of type Hop, at address "h1"
    node.start_worker("h1", Hop)?;

    // Send a message to the worker at address "echoer",
    // via the worker at address "h1"
    node.send(route!["h1", "echoer"], "Hello Ockam!".to_string()).await?;

    // Wait to receive a reply and print it.
    let reply = node.receive::<String>().await?;
    println!("App Received: {}", reply.into_body()?); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}

To run this new node program:

cargo run --example 03-routing

Note the message flow and how routing information is manipulated as the message travels.

Routing over many hops

Routing is not limited to one or two hops; we can easily create routes with many hops. Let's try that in a quick example:

This time we'll create multiple hop workers between the "app" and the "echoer" and route our message through them.

Create a new file at:

touch examples/03-routing-many-hops.rs

Add the following code to this file:

// examples/03-routing-many-hops.rs
// This node routes a message through many hops.

use hello_ockam::{Echoer, Hop};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Start an Echoer worker at address "echoer"
    node.start_worker("echoer", Echoer)?;

    // Start 3 hop workers at addresses "h1", "h2" and "h3".
    node.start_worker("h1", Hop)?;
    node.start_worker("h2", Hop)?;
    node.start_worker("h3", Hop)?;

    // Send a message to the echoer worker via the "h1", "h2", and "h3" workers
    let r = route!["h1", "h2", "h3", "echoer"];
    node.send(r, "Hello Ockam!".to_string()).await?;

    // Wait to receive a reply and print it.
    let reply = node.receive::<String>().await?;
    println!("App Received: {}", reply.into_body()?); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}

To run this new node program:

cargo run --example 03-routing-many-hops

Note the message flow.

Transport

An Ockam Transport is a plugin for Ockam Routing. It moves Ockam Routing messages using a specific transport protocol like TCP, UDP, WebSockets, Bluetooth etc.

In previous examples, we routed messages locally within one node. Routing messages over transport layer connections looks very similar.

Let's try the TcpTransport, we'll need to create two nodes: a responder and an initiator.

Create a new file at:

touch examples/04-routing-over-transport-responder.rs

Add the following code to this file:

// examples/04-routing-over-transport-responder.rs
// This node starts a tcp listener and an echoer worker.
// It then runs forever waiting for messages.

use hello_ockam::Echoer;
use ockam::tcp::{TcpListenerOptions, TcpTransportExtension};
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create an echoer worker
    node.start_worker("echoer", Echoer)?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:4000", TcpListenerOptions::new()).await?;

    // Allow access to the Echoer via TCP connections from the TCP listener
    node.flow_controls()
        .add_consumer(&"echoer".into(), listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}

Initiator node

Create a new file at:

touch examples/04-routing-over-transport-initiator.rs

Add the following code to this file:

// examples/04-routing-over-transport-initiator.rs
// This node routes a message, to a worker on a different node, over the tcp transport.

use ockam::tcp::{TcpConnectionOptions, TcpTransportExtension};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Initialize the TCP Transport.
    let tcp = node.create_tcp_transport()?;

    // Create a TCP connection to a different node.
    let connection_to_responder = tcp.connect("localhost:4000", TcpConnectionOptions::new()).await?;

    // Send a message to the "echoer" worker on a different node, over a tcp transport.
    // Wait to receive a reply and print it.
    let r = route![connection_to_responder, "echoer"];
    let reply: String = node.send_and_receive(r, "Hello Ockam!".to_string()).await?;

    println!("App Received: {}", reply); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}

Run

Run the responder in a separate terminal tab and keep it running:

cargo run --example 04-routing-over-transport-responder

Run the initiator:

cargo run --example 04-routing-over-transport-initiator

Note the message flow.

Routing over two transport hops

Relay worker

For demonstration, we'll create another worker, called Relay, that takes every incoming message and forwards it along a predefined route.

Just before forwarding the message, Relay's handle_message function will:

  1. Print the message

  2. Remove its own address (the first address) from the onward_route, by calling pop_front_onward_route()

  3. Insert the predefined route at the front of the onward_route, by calling prepend_front_onward_route()

Create a new file at:

touch src/relay.rs

Add the following code to this file:

// src/relay.rs
use ockam::{Any, Context, Result, Route, Routed, Worker};

pub struct Relay {
    route: Route,
}

impl Relay {
    pub fn new(route: impl Into<Route>) -> Self {
        let route = route.into();

        if route.is_empty() {
            panic!("Relay can't forward messages to an empty route");
        }

        Self { route }
    }
}

#[ockam::worker]
impl Worker for Relay {
    type Context = Context;
    type Message = Any;

    /// This handle function takes any incoming message and forwards
    /// it along this relay's predefined route
    async fn handle_message(&mut self, ctx: &mut Context, msg: Routed<Any>) -> Result<()> {
        println!("Address: {}, Received: {:?}", ctx.primary_address(), msg);

        let next_on_route = self.route.next()?.clone();

        // Convert to a LocalMessage so its routes can be manipulated
        let mut local_message = msg.into_local_message();

        local_message = local_message.pop_front_onward_route()?;
        local_message = local_message.prepend_front_onward_route(self.route.clone()); // Prepend predefined route to the onward_route

        let prev_hop = local_message.return_route().next()?.clone();

        if let Some(info) = ctx
            .flow_controls()
            .find_flow_control_with_producer_address(&next_on_route)
        {
            ctx.flow_controls().add_consumer(&prev_hop, info.flow_control_id());
        }

        if let Some(info) = ctx.flow_controls().find_flow_control_with_producer_address(&prev_hop) {
            ctx.flow_controls().add_consumer(&next_on_route, info.flow_control_id());
        }

        // Send the message on its onward_route
        ctx.forward(local_message).await
    }
}

To make this Relay type accessible to our main program, export it from src/lib.rs by adding the following to it:

mod relay;
pub use relay::*;

Responder node

Create a new file at:

touch examples/04-routing-over-transport-two-hops-responder.rs

Add the following code to this file:

// examples/04-routing-over-transport-two-hops-responder.rs
// This node starts a tcp listener and an echoer worker.
// It then runs forever waiting for messages.

use hello_ockam::Echoer;
use ockam::tcp::{TcpListenerOptions, TcpTransportExtension};
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create an echoer worker
    node.start_worker("echoer", Echoer)?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:4000", TcpListenerOptions::new()).await?;

    // Allow access to the Echoer via TCP connections from the TCP listener
    node.flow_controls()
        .add_consumer(&"echoer".into(), listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}

Middle node

Create a new file at:

touch examples/04-routing-over-transport-two-hops-middle.rs

Add the following code to this file:

// examples/04-routing-over-transport-two-hops-middle.rs
// This node creates a tcp connection to a node at 127.0.0.1:4000
// Starts a relay worker to forward messages to 127.0.0.1:4000
// Starts a tcp listener at 127.0.0.1:3000
// It then runs forever waiting to route messages.

use hello_ockam::Relay;
use ockam::tcp::{TcpConnectionOptions, TcpListenerOptions, TcpTransportExtension};
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create a TCP connection to the responder node.
    let connection_to_responder = tcp.connect("127.0.0.1:4000", TcpConnectionOptions::new()).await?;

    // Create and start a Relay worker
    node.start_worker("forward_to_responder", Relay::new(connection_to_responder))?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:3000", TcpListenerOptions::new()).await?;

    // Allow access to the Relay via TCP connections from the TCP listener
    node.flow_controls()
        .add_consumer(&"forward_to_responder".into(), listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}

Initiator node

Create a new file at:

touch examples/04-routing-over-transport-two-hops-initiator.rs

Add the following code to this file:

// examples/04-routing-over-transport-two-hops-initiator.rs
// This node routes a message, to a worker on a different node, over two tcp transport hops.

use ockam::tcp::{TcpConnectionOptions, TcpTransportExtension};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create a TCP connection to the middle node.
    let connection_to_middle_node = tcp.connect("localhost:3000", TcpConnectionOptions::new()).await?;

    // Send a message to the "echoer" worker, on a different node, over two tcp hops.
    // Wait to receive a reply and print it.
    let r = route![connection_to_middle_node, "forward_to_responder", "echoer"];
    let reply: String = node.send_and_receive(r, "Hello Ockam!".to_string()).await?;
    println!("App Received: {}", reply); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}

Run

Run the responder in a separate terminal tab and keep it running:

cargo run --example 04-routing-over-transport-two-hops-responder

Run the middle node in a separate terminal tab and keep it running:

cargo run --example 04-routing-over-transport-two-hops-middle

Run the initiator:

cargo run --example 04-routing-over-transport-two-hops-initiator

Note how the message is routed.

Routing and Transports

Ockam Routing and Transports enable higher level protocols that provide end-to-end guarantees to messages traveling across many network connection hops and protocol boundaries.

Ockam Routing is a simple and lightweight message-based protocol that makes it possible to bidirectionally exchange messages over a large variety of communication topologies.

Ockam Transports adapt Ockam Routing to various transport protocols like TCP, UDP, WebSockets, Bluetooth etc.

By layering Ockam Secure Channels and other higher level protocols over Ockam Routing, it is possible to build systems that provide end-to-end guarantees over arbitrary transport topologies that span many networks, connections, gateways, queues, and clouds.

Routing

Let's dive into how the routing protocol works. So far, in the section on Nodes and Workers, we've come across this simple message exchange:

Ockam Routing Protocol messages carry with them two metadata fields: an onward_route and a return_route. A route is an ordered list of addresses describing the path a message should travel. This information is carried with the message in compact binary form.

Pay close attention to the Sender, Hop, and Replier rules in the sequence diagrams below. Note how onward_route and return_route are handled as the message travels.

The above was one message hop. We may extend this to two hops:

This very simple protocol extends to any number of hops:
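The hop rules above can be sketched with plain Rust collections. This is an illustration of the route bookkeeping only, not the ockam crate's API: the sender fills onward_route with the full forward path, each hop pops its own address off the front of onward_route and pushes it onto the front of return_route, and the replier can then use the accumulated return_route as the onward route of its reply.

```rust
use std::collections::VecDeque;

// A message with the two routing metadata fields, modeled with plain strings.
pub struct Message {
    pub onward_route: VecDeque<String>,
    pub return_route: VecDeque<String>,
}

// Hop rule: pop own address off the front of onward_route, then push own
// address onto the front of return_route.
pub fn hop(msg: &mut Message, own_address: &str) {
    assert_eq!(msg.onward_route.pop_front().as_deref(), Some(own_address));
    msg.return_route.push_front(own_address.to_string());
}

fn main() {
    // Sender rule: onward_route lists the full forward path, while
    // return_route starts with just the sender's own address.
    let mut msg = Message {
        onward_route: ["h1", "h2", "echoer"].iter().map(|s| s.to_string()).collect(),
        return_route: VecDeque::from([String::from("app")]),
    };

    // Each intermediate hop applies the same rule.
    hop(&mut msg, "h1");
    hop(&mut msg, "h2");

    // On arrival at "echoer", return_route is the exact path back, so the
    // replier can simply use it as the onward_route of its reply.
    assert_eq!(msg.onward_route, VecDeque::from([String::from("echoer")]));
    assert_eq!(
        msg.return_route,
        ["h2", "h1", "app"].iter().map(|s| s.to_string()).collect::<VecDeque<_>>()
    );
    println!("reply will travel via: {:?}", msg.return_route);
}
```

Note how no hop needs to know the whole topology: each one only recognizes its own address at the front of the onward route.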

Routing over two hops

So far, we've created an "echoer" worker in our node, sent it a message, and received a reply. This worker was just one hop away from our "app" worker.

To achieve this, messages carry with them two metadata fields: onward_route and return_route, where a route is a list of addresses.

To get a sense of how that works, let's route a message over two hops.

Hop worker

For demonstration, we'll create a simple worker, called Hop, that takes every incoming message and forwards it to the next address in the onward_route of that message.

Just before forwarding the message, Hop's handle message function will:

  1. Print the message

  2. Remove its own address (the first address) from the onward_route

  3. Insert its own address as the first address in the return_route

Steps 2 and 3 are performed together by the call to step_forward().

// src/hop.rs
use ockam::{Any, Context, Result, Routed, Worker};

pub struct Hop;

#[ockam::worker]
impl Worker for Hop {
    type Context = Context;
    type Message = Any;

    /// This handle function takes any incoming message and forwards
    /// it to the next hop in its onward route
    async fn handle_message(&mut self, ctx: &mut Context, msg: Routed<Any>) -> Result<()> {
        println!("Address: {}, Received: {:?}", ctx.primary_address(), msg);

        // Send the message to the next worker on its onward_route
        ctx.forward(msg.into_local_message().step_forward(ctx.primary_address().clone())?)
            .await
    }
}

App worker

Next, let's create our main "app" worker.

In the code below we start an Echoer worker at address "echoer" and a Hop worker at address "h1". Then, we send a message along the h1 => echoer route by passing route!["h1", "echoer"] to send(..).

// examples/03-routing.rs
// This node routes a message.

use hello_ockam::{Echoer, Hop};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Start a worker, of type Echoer, at address "echoer"
    node.start_worker("echoer", Echoer)?;

    // Start a worker, of type Hop, at address "h1"
    node.start_worker("h1", Hop)?;

    // Send a message to the worker at address "echoer",
    // via the worker at address "h1"
    node.send(route!["h1", "echoer"], "Hello Ockam!".to_string()).await?;

    // Wait to receive a reply and print it.
    let reply = node.receive::<String>().await?;
    println!("App Received: {}", reply.into_body()?); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}

To run this new node program:

cargo run --example 03-routing

Routing over many hops

Similarly, we can also route the message via many hop workers:

// examples/03-routing-many-hops.rs
// This node routes a message through many hops.

use hello_ockam::{Echoer, Hop};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Start an Echoer worker at address "echoer"
    node.start_worker("echoer", Echoer)?;

    // Start 3 hop workers at addresses "h1", "h2" and "h3".
    node.start_worker("h1", Hop)?;
    node.start_worker("h2", Hop)?;
    node.start_worker("h3", Hop)?;

    // Send a message to the echoer worker via the "h1", "h2", and "h3" workers
    let r = route!["h1", "h2", "h3", "echoer"];
    node.send(r, "Hello Ockam!".to_string()).await?;

    // Wait to receive a reply and print it.
    let reply = node.receive::<String>().await?;
    println!("App Received: {}", reply.into_body()?); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}

To run this new node program:

cargo run --example 03-routing-many-hops

Transport

An Ockam Transport is a plugin for Ockam Routing. It moves Ockam Routing messages using a specific transport protocol like TCP, UDP, WebSockets, Bluetooth etc.

In previous examples, we routed messages locally within one node. Routing messages over transport layer connections looks very similar.

Let's try the TcpTransport. We'll need to create two nodes: a responder and an initiator.

Responder node

// examples/04-routing-over-transport-responder.rs
// This node starts a tcp listener and an echoer worker.
// It then runs forever waiting for messages.

use hello_ockam::Echoer;
use ockam::tcp::{TcpListenerOptions, TcpTransportExtension};
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create an echoer worker
    node.start_worker("echoer", Echoer)?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:4000", TcpListenerOptions::new()).await?;

    // Allow access to the Echoer via TCP connections from the TCP listener
    node.flow_controls()
        .add_consumer(&"echoer".into(), listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}

Initiator node

// examples/04-routing-over-transport-initiator.rs
// This node routes a message, to a worker on a different node, over the tcp transport.

use ockam::tcp::{TcpConnectionOptions, TcpTransportExtension};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Initialize the TCP Transport.
    let tcp = node.create_tcp_transport()?;

    // Create a TCP connection to a different node.
    let connection_to_responder = tcp.connect("localhost:4000", TcpConnectionOptions::new()).await?;

    // Send a message to the "echoer" worker on a different node, over a tcp transport.
    // Wait to receive a reply and print it.
    let r = route![connection_to_responder, "echoer"];
    let reply: String = node.send_and_receive(r, "Hello Ockam!".to_string()).await?;

    println!("App Received: {}", reply); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}

Run

Run the responder in a separate terminal tab and keep it running:

cargo run --example 04-routing-over-transport-responder

Run the initiator:

cargo run --example 04-routing-over-transport-initiator

Bridge

A common real world topology is a transport bridge.

Node n1 wishes to access a service on node n3, but it can't directly connect to n3. This can happen for many reasons, maybe because n3 is in a separate IP subnet, or it could be that the communication from n1 to n2 uses UDP while from n2 to n3 uses TCP or other similar constraints. The topology makes n2 a bridge or gateway between these two separate networks.

We can set up this topology with Ockam Routing as follows:

Responder node

// examples/04-routing-over-transport-two-hops-responder.rs
// This node starts a tcp listener and an echoer worker.
// It then runs forever waiting for messages.

use hello_ockam::Echoer;
use ockam::tcp::{TcpListenerOptions, TcpTransportExtension};
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create an echoer worker
    node.start_worker("echoer", Echoer)?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:4000", TcpListenerOptions::new()).await?;

    // Allow access to the Echoer via TCP connections from the TCP listener
    node.flow_controls()
        .add_consumer(&"echoer".into(), listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}

Middle node

Relay worker

We'll create a worker, called Relay, that takes every incoming message and forwards it along a predefined route.

// src/relay.rs
use ockam::{Any, Context, Result, Route, Routed, Worker};

pub struct Relay {
    route: Route,
}

impl Relay {
    pub fn new(route: impl Into<Route>) -> Self {
        let route = route.into();

        if route.is_empty() {
            panic!("Relay can't forward messages to an empty route");
        }

        Self { route }
    }
}

#[ockam::worker]
impl Worker for Relay {
    type Context = Context;
    type Message = Any;

    /// This handle function takes any incoming message and forwards
    /// it along the predefined route
    async fn handle_message(&mut self, ctx: &mut Context, msg: Routed<Any>) -> Result<()> {
        println!("Address: {}, Received: {:?}", ctx.primary_address(), msg);

        let next_on_route = self.route.next()?.clone();

        // Some type conversion
        let mut local_message = msg.into_local_message();

        local_message = local_message.pop_front_onward_route()?;
        local_message = local_message.prepend_front_onward_route(self.route.clone()); // Prepend predefined route to the onward_route

        let prev_hop = local_message.return_route().next()?.clone();

        if let Some(info) = ctx
            .flow_controls()
            .find_flow_control_with_producer_address(&next_on_route)
        {
            ctx.flow_controls().add_consumer(&prev_hop, info.flow_control_id());
        }

        if let Some(info) = ctx.flow_controls().find_flow_control_with_producer_address(&prev_hop) {
            ctx.flow_controls().add_consumer(&next_on_route, info.flow_control_id());
        }

        // Send the message on its onward_route
        ctx.forward(local_message).await
    }
}
// examples/04-routing-over-transport-two-hops-middle.rs
// This node creates a tcp connection to a node at 127.0.0.1:4000
// Starts a relay worker to forward messages to 127.0.0.1:4000
// Starts a tcp listener at 127.0.0.1:3000
// It then runs forever waiting to route messages.

use hello_ockam::Relay;
use ockam::tcp::{TcpConnectionOptions, TcpListenerOptions, TcpTransportExtension};
use ockam::{node, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create a TCP connection to the responder node.
    let connection_to_responder = tcp.connect("127.0.0.1:4000", TcpConnectionOptions::new()).await?;

    // Create and start a Relay worker
    node.start_worker("forward_to_responder", Relay::new(connection_to_responder))?;

    // Create a TCP listener and wait for incoming connections.
    let listener = tcp.listen("127.0.0.1:3000", TcpListenerOptions::new()).await?;

    // Allow access to the Relay via TCP connections from the TCP listener
    node.flow_controls()
        .add_consumer(&"forward_to_responder".into(), listener.flow_control_id());

    // Don't call node.shutdown() here so this node runs forever.
    Ok(())
}

Initiator node

// examples/04-routing-over-transport-two-hops-initiator.rs
// This node routes a message, to a worker on a different node, over two tcp transport hops.

use ockam::tcp::{TcpConnectionOptions, TcpTransportExtension};
use ockam::{node, route, Context, Result};

#[ockam::node]
async fn main(ctx: Context) -> Result<()> {
    // Create a node with default implementations
    let mut node = node(ctx).await?;

    // Initialize the TCP Transport
    let tcp = node.create_tcp_transport()?;

    // Create a TCP connection to the middle node.
    let connection_to_middle_node = tcp.connect("localhost:3000", TcpConnectionOptions::new()).await?;

    // Send a message to the "echoer" worker, on a different node, over two tcp hops.
    // Wait to receive a reply and print it.
    let r = route![connection_to_middle_node, "forward_to_responder", "echoer"];
    let reply: String = node.send_and_receive(r, "Hello Ockam!".to_string()).await?;
    println!("App Received: {}", reply); // should print "Hello Ockam!"

    // Stop all workers, stop the node, cleanup and return.
    node.shutdown().await
}

Run

Run the responder in a separate terminal tab and keep it running:

cargo run --example 04-routing-over-transport-two-hops-responder

Run the middle node in a separate terminal tab and keep it running:

cargo run --example 04-routing-over-transport-two-hops-middle

Run the initiator:

cargo run --example 04-routing-over-transport-two-hops-initiator

Relay

It is common, however, to encounter communication topologies where the machine that provides a service is unwilling or is not allowed to open a listening port or expose a bridge node to other networks. This is a common security best practice in enterprise environments, home networks, OT networks, and VPCs across clouds. Application developers may not have control over these choices from the infrastructure / operations layer. This is where relays are useful.

Relays make it possible to establish end-to-end protocols with services operating in a remote private network, without requiring a remote service to expose listening ports to an outside hostile network like the Internet.

Serialization

Ockam Routing messages, when transported over the wire, have the following structure. TransportMessage is serialized using BARE encoding. We intend to transition to CBOR in the near future, since we already use CBOR for other protocols built on top of Ockam Routing.

pub struct TransportMessage {
    pub version: u8,
    pub onward_route: Route,
    pub return_route: Route,
    pub payload: Vec<u8>,
}

pub struct Route {
    addresses: VecDeque<Address>
}

pub struct Address {
    transport_type: TransportType,
    transport_protocol_address: Vec<u8>,
}

pub struct TransportType(u8);

Each transport type has a conventional value: TCP has transport type 1, UDP has transport type 2, and so on. Node-local messages have transport type 0.
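In textual form, an address is conventionally written as the transport type, a `#`, and then the protocol-specific address, e.g. `1#127.0.0.1:4000` for a TCP address, while node-local addresses (type 0) are usually written without the prefix. A minimal sketch of that convention in plain Rust (not the ockam crate's actual parser):

```rust
// Parse the "<transport_type>#<address>" textual form into its two parts.
// Addresses without a '#' default to transport type 0 (node-local).
pub fn parse_address(s: &str) -> (u8, String) {
    match s.split_once('#') {
        Some((t, rest)) => (t.parse().unwrap_or(0), rest.to_string()),
        None => (0, s.to_string()),
    }
}

fn main() {
    // A TCP address carries transport type 1.
    assert_eq!(parse_address("1#127.0.0.1:4000"), (1, String::from("127.0.0.1:4000")));
    // A bare worker name is a node-local address, transport type 0.
    assert_eq!(parse_address("echoer"), (0, String::from("echoer")));
    println!("ok");
}
```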

As a message moves within a node, it gathers additional metadata in structures like LocalMessage and RelayMessage, which are used for the node's internal operation.

Access Control

Each Worker has one or more addresses that it uses to send and receive messages. We assign each Address an Incoming Access Control and an Outgoing Access Control.

#[async_trait]
pub trait IncomingAccessControl: Debug + Send + Sync + 'static {
    /// Return true if the message is allowed to pass, and false if not.
    async fn is_authorized(&self, relay_msg: &RelayMessage) -> Result<bool>;
}

#[async_trait]
pub trait OutgoingAccessControl: Debug + Send + Sync + 'static {
    /// Return true if the message is allowed to pass, and false if not.
    async fn is_authorized(&self, relay_msg: &RelayMessage) -> Result<bool>;
}

Concrete instances of these traits inspect a message's onward_route, return_route, metadata, etc., along with other node-local state, to decide whether a message should be allowed to pass. Incoming Access Control filters which messages reach an address, while Outgoing Access Control decides which messages can be sent from it.
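The decision logic can be sketched in isolation. This is a simplified, synchronous stand-in for the idea, assuming an allow-list of destination addresses; the real Ockam traits are async and inspect a full RelayMessage:

```rust
use std::collections::HashSet;

// The idea behind an access control: consult node-local state (here, an
// allow-list of destination addresses) and let a message pass only if the
// next address on its onward_route is in the list.
pub struct AllowDestinations {
    allowed: HashSet<String>,
}

impl AllowDestinations {
    pub fn new(allowed: &[&str]) -> Self {
        Self {
            allowed: allowed.iter().map(|s| s.to_string()).collect(),
        }
    }

    // Analogue of is_authorized(): return true if the message may pass.
    pub fn is_authorized(&self, next_onward_address: &str) -> bool {
        self.allowed.contains(next_onward_address)
    }
}

fn main() {
    let ac = AllowDestinations::new(&["echoer"]);
    assert!(ac.is_authorized("echoer"));
    assert!(!ac.is_authorized("some_other_worker"));
    println!("ok");
}
```

A deny-by-default policy falls out naturally: any address not explicitly added to the list is rejected.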

Flow Control

In our threat model, we assume that Workers within a Node are not malicious against each other. If programmed correctly they intend no harm.

However, there are certain types of Workers that forward messages that were created on other nodes. We don't implicitly trust other Ockam Nodes, so messages from them can be dangerous. Workers that can receive messages from another node are therefore implemented with an Outgoing Access Control that denies all messages by default.

For example, a TCP Transport Listener spawns a TCP Receiver for every new TCP connection. These receivers are implemented with an Outgoing Access Control that, by default, denies all messages from entering the node that is running the receiver. We can then explicitly allow messages to flow to specific addresses.

In the middle node example above, we do this by explicitly allowing flow of messages from the TCP Receivers (spawned by TCP Transport Listener) to the forward_to_responder worker.

// Create a TCP listener and wait for incoming connections.
let listener = tcp.listen("127.0.0.1:3000", TcpListenerOptions::new()).await?;

// Allow access to the Relay via TCP connections from the TCP listener
node.flow_controls()
    .add_consumer(&"forward_to_responder".into(), listener.flow_control_id());

Ockam Node for Amazon Timestream InfluxDB

Create an Ockam Timestream InfluxDB outlet node using Cloudformation template

This guide contains instructions to launch:

  • An Ockam Timestream InfluxDB Outlet Node within an AWS environment

  • An Ockam Timestream InfluxDB Inlet Node:

    • Within an AWS environment, or

    • Using Docker in any environment

The walkthrough demonstrates:

  • Running an Ockam Timestream InfluxDB Outlet node in your AWS environment that contains a private Amazon Timestream InfluxDB Database

  • Setting up Ockam Timestream InfluxDB inlet nodes using either AWS or Docker from any location.

  • Verifying secure communication between InfluxDB clients and Amazon Timestream InfluxDB Database.

Read: “How does Ockam work?” to learn about end-to-end trust establishment.

Prerequisites

  • A private Amazon Timestream InfluxDB Database is created and accessible from the VPC and Subnet where the Ockam Node will be launched. You have the Organization, Username, and Password details.

  • The Security Group associated with the Amazon Timestream InfluxDB Database allows inbound traffic on the required port (TCP 8086) from the subnet where the Ockam Outlet Node will reside.

  • You have permission to subscribe to, and launch a CloudFormation stack from, AWS Marketplace on the AWS account running the Timestream InfluxDB Database.

  • You have permission to create an "All Access" InfluxDB token for use by the Ockam Node, and to store it in AWS Secrets Manager.

Create an Orchestrator Project

  1. Sign up for Ockam and pick a subscription plan through the guided workflow on Ockam.io.

  2. Run the following commands to install Ockam Command and enroll with the Ockam Orchestrator.

curl --proto '=https' --tlsv1.2 -sSfL https://install.command.ockam.io | bash
source "$HOME/.ockam/env"

ockam enroll

Completing this step creates a Project in Ockam Orchestrator.

  3. Control which identities are allowed to enroll themselves into your project by issuing unique one-time use enrollment tickets. Generate two enrollment tickets, one for the Outlet and one for the Inlet.

# Enrollment ticket for Ockam Outlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-influxdb-outlet \
  --relay influxdb \
    > "outlet.ticket"

# Enrollment ticket for Ockam Inlet Node
ockam project ticket --expires-in 10h --usage-count 1 \
  --attribute amazon-influxdb-inlet --tls \
    > "inlet.ticket"

Create an All Access InfluxDB Token and obtain Org ID

  • Use Influx CLI to create a token. For instructions, please see: Install and use the influx CLI.

  • Configure your CLI to use --username-password to be able to create the operator token:

INFLUXDB_ORG="REPLACE_WITH_ORG_NAME"
INFLUXDB_USERNAME="REPLACE_WITH_USERNAME"
INFLUXDB_PASSWORD="REPLACE_WITH_PASSWORD"
INFLUXDB_ENDPOINT="https://REPLACE_WITH_INFLUXDB_ENDPOINT:8086"

influx config create --active --config-name testconfig \
  --host-url $INFLUXDB_ENDPOINT \
  --org $INFLUXDB_ORG \
  --username-password "$INFLUXDB_USERNAME:$INFLUXDB_PASSWORD"
  • Find the Org ID to use as an input to the CloudFormation template

influx org list
  • Create your new token.

influx auth create --all-access --json | jq -r .token
  • Store the InfluxDB token as a secret in AWS Secrets Manager. Note the ARN of the secret.

SECRET_NAME="influxdb-token" #Update as necessary
INFLUXDB_TOKEN="REPLACE_WITH_TOKEN"
AWS_REGION="us-east-1"

# Create secret
aws secretsmanager create-secret \
--region $AWS_REGION \
--name $SECRET_NAME \
--description "Ockam node InfluxDB lessor token" \
--secret-string "$INFLUXDB_TOKEN"

# Get the ARN of the secret

aws secretsmanager describe-secret --secret-id $SECRET_NAME --query ARN --output text

Setup Ockam Timestream InfluxDB Outlet Node

  • Login to AWS Account you would like to use

  • Subscribe to "Ockam - Node for Amazon Timestream InfluxDB" in AWS Marketplace

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node for Amazon Timestream InfluxDB from the list of subscriptions. Select Actions-> Launch Cloudformation stack

  • Select the Region you want to deploy and click Continue to Launch. Under Actions, select Launch Cloudformation

  • Create stack with the following details

    • Stack name: influxdb-ockam-outlet or any name you prefer

    • Network Configuration

      • VPC ID: Choose a VPC ID where the EC2 instance will be deployed.

      • Subnet ID: Select a suitable Subnet ID within the chosen VPC that has access to Amazon Timestream InfluxDB Database.

      • EC2 Instance Type: The default instance type is m6a.large. Adjust the instance type depending on your use case. If you would like predictable network bandwidth of 12.5 Gbps, use m6a.8xlarge. Make sure the instance type is available in the subnet you are launching in.

    • Ockam Node Configuration

      • Enrollment ticket: Copy and paste the content of the outlet.ticket generated above

      • InfluxDBEndpoint: To configure the Ockam Timestream InfluxDB Outlet Node, you'll need to specify the Amazon Timestream InfluxDB Endpoint. This configuration allows the Ockam Timestream InfluxDB Outlet Node to connect to the database. In the AWS Console, go to Timestream -> InfluxDB databases, select your InfluxDB database, and copy the "Endpoint" details

      • InfluxDBOrgID: Enter the Organization ID of the InfluxDB instance.

      • InfluxDBTokenSecretArn: Enter the ARN of the Secret that contains the all access token.

      • InfluxDBLeasedTokenPermissions: A JSON array of permission objects for the InfluxDB leased token, in the format below. Update as needed. Leave the INFLUX_ORG_ID variable as-is; it will be replaced at runtime.

[
    {
      "action": "read",
      "resource": {
        "type": "buckets",
        "orgID": "INFLUX_ORG_ID"
      }
    },
    {
      "action": "write",
      "resource": {
        "type": "buckets",
        "orgID": "INFLUX_ORG_ID"
      }
    }
]
  • NodeConfig: Copy and paste the configuration below. Note that the configuration values match the enrollment tickets created in the previous step. INFLUX_ENDPOINT, INFLUX_ORG_ID and INFLUX_TOKEN will be replaced at runtime.

{
    "relay": "influxdb",
    "influxdb-outlet": {
      "to": "INFLUX_ENDPOINT:8086",
      "tls": true,
      "allow": "amazon-influxdb-inlet",
      "org-id": "INFLUX_ORG_ID",
      "all-access-token": "INFLUX_TOKEN",
      "leased-token-expires-in": "300",
      "leased-token-permissions": "LEASED_TOKEN_PERMISSIONS"
    }
  }
  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run configures the Ockam Timestream InfluxDB Outlet node on an EC2 machine.

    • EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

    • A security group with egress access to the internet will be attached to the EC2 machine.

  • Connect to the EC2 machine via AWS Session Manager.

    • To view the log file, run sudo cat /var/log/cloud-init-output.log.

      • A successful run will show Ockam node setup completed successfully in the logs

    • To view the status of the Ockam node, run curl http://localhost:23345/show | jq

  • View the Ockam node status in CloudWatch.

    • Navigate to Cloudwatch -> Log Group and select influxdb-ockam-outlet-status-logs. Select the Logstream for the EC2 instance.

    • The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm named influxdb-ockam-outlet-OckamNodeDownAlarm. The alarm turns green once the Ockam node is running successfully.

  • An Auto Scaling group ensures at least one EC2 instance is running at all times.

Ockam Timestream InfluxDB outlet node setup is complete. You can now create Ockam Timestream InfluxDB inlet nodes in any network to establish secure communication.

Setup Ockam InfluxDB Inlet Node

You can set up an Ockam Timestream InfluxDB Inlet Node either in AWS or locally using Docker. Here are both options:

Option 1: Setup Inlet Node Locally with Docker Compose

To set up an Inlet Node locally and interact with it outside of AWS, use Docker Compose.

  • Find your Ockam project id by running the following command where you created the enrollment tickets, and use it to replace REPLACE_WITH_YOUR_PROJECT_ID in the endpoint below

# The command below prints your Ockam project id
ockam project show --jq .id
  • Create a file named docker-compose.yml with the following content:

docker-compose.yml
services:
  ockam:
    image: ghcr.io/build-trust/ockam
    container_name: influxdb-inlet
    environment:
      ENROLLMENT_TICKET: ${ENROLLMENT_TICKET:-}
      OCKAM_DEVELOPER: ${OCKAM_DEVELOPER:-false}
      OCKAM_LOGGING: true
      OCKAM_LOG_LEVEL: info
    command:
      - node
      - create
      - --foreground
      - --node-config
      - |
        ticket: ${ENROLLMENT_TICKET}

        influxdb-inlet:
          from: 0.0.0.0:8086
          via: influxdb
          allow: amazon-influxdb-outlet
          tls: true
    network_mode: host

  node-app:
    image: node:18
    container_name: node-app
    volumes:
      - ./:/app
    working_dir: /app
    command: /bin/sh -c "while true; do sleep 30; done"
    depends_on:
      - ockam
    network_mode: host
  • Create a file named app.mjs and package.json.

    • Update REPLACE_WITH_* variables

    • The value of token doesn't matter, as Ockam will replace it with a temporary leased token

app.mjs

"use strict";

import { InfluxDB, Point, flux } from "@influxdata/influxdb-client";
import os from "os";
import { execSync } from "child_process";
import * as https from "https";

// Update below URL 
const url = "https://influxdb-inlet.REPLACE_WITH_YOUR_PROJECT_ID.ockam.network:8086";
const token = "OCKAM_MANAGED";
const org = "REPLACE_WITH_YOUR_ORG_NAME";
const bucket = "REPLACE_WITH_YOUR_BUCKET_NAME";

const httpsAgent = new https.Agent({ rejectUnauthorized: true });
const influxDB = new InfluxDB({ url, token, transportOptions: { agent: httpsAgent } });

const writeApi = influxDB.getWriteApi(org, bucket);

async function writeData() {
  const hostname = os.hostname();
  let cpuLoad;
  let freeDiskSpace;

  try {
    cpuLoad = parseFloat(execSync("uptime | awk '{print $(NF-2)}' | sed 's/,//'").toString().trim());
    freeDiskSpace = parseInt(execSync("df -BG / | tail -n 1 | awk '{print $4}' | sed 's/G//'").toString().trim(), 10);
  } catch (error) {
    console.error("Error extracting system metrics:", error);
    return;
  }

  if (isNaN(cpuLoad) || isNaN(freeDiskSpace)) {
    console.error("Extracted metrics are NaN", { cpuLoad, freeDiskSpace });
    return;
  }

  const point = new Point("system_metrics")
    .tag("host", hostname)
    .floatField("cpu_load", cpuLoad)
    .intField("free_disk_space", freeDiskSpace);

  console.log(`Writing point: ${point.toLineProtocol(writeApi)}`);

  writeApi.writePoint(point);

  await writeApi
    .close()
    .then(() => {
      console.log("WRITE FINISHED");
    })
    .catch((e) => {
      console.error("Write failed", e);
    });
}

async function queryData() {
  const queryApi = influxDB.getQueryApi(org);
  const query = flux`
    from(bucket: "${bucket}")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "system_metrics")
  `;

  console.log("Querying data:");

  queryApi.queryRows(query, {
    next(row, tableMeta) {
      const fieldValue = row[5];
      const fieldName = row[6];

      let cpuLoad = "N/A";
      let freeDiskSpace = "N/A";

      if (fieldName === "cpu_load") {
        cpuLoad = fieldValue;
      } else if (fieldName === "free_disk_space") {
        freeDiskSpace = fieldValue;
      }

      console.log(`cpu_load=${cpuLoad}, free_disk_space=${freeDiskSpace}`);
    },
    error(error) {
      console.error("Query failed", error);
    },
    complete() {
      console.log(
        "\nThe example run was successful 🥳.\n" +
          "\nThe app connected with the database through an encrypted portal." +
          "\nInserted some data into a bucket, and querried it back.\n",
      );
    },
  });
}

writeData().then(() => {
  setTimeout(() => {
    queryData();
  }, 3000);
});
package.json
{
  "dependencies": {
    "@influxdata/influxdb-client": "^1.35.0"
  }
}
  • Run the following command from the same location as docker-compose.yml and inlet.ticket to create an Ockam Timestream InfluxDB inlet that can connect to the outlet running in AWS, along with the Node.js client container

ENROLLMENT_TICKET=$(cat inlet.ticket) docker-compose up -d
  • Check the status of the Ockam inlet node. You will see The node is UP when Ockam is configured successfully and ready to accept connections

docker exec -it influxdb-inlet /ockam node show
  • Connect to the node-app client container and run the following commands

# Connect to the container
docker exec -it node-app /bin/bash

# Install dependencies
npm install

# Run the app that writes and reads data to a bucket in the private InfluxDB via Ockam
node app.mjs

# You will see the message below upon a successful run
# The example run was successful 🥳.

Option 2: Set Up an Inlet Node in AWS

  • Log in to the AWS account you would like to use

  • Subscribe to "Ockam - Node" in AWS Marketplace

  • Navigate to AWS Marketplace -> Manage subscriptions. Select Ockam - Node from the list of subscriptions. Select Actions -> Launch CloudFormation stack

  • Select the Region you want to deploy in and click Continue to Launch. Under Actions, select Launch CloudFormation

  • Create a stack with the details below

    • Stack name: influxdb-ockam-inlet or any name you prefer

    • Network Configuration

      • Select suitable values for VPC ID and Subnet ID

      • EC2 Instance Type: The default instance type is m6a.8xlarge because of its predictable network bandwidth of 12.5 Gbps. Adjust to a smaller instance type depending on your use case, e.g., m6a.large

    • Ockam Configuration

      • Enrollment ticket: Copy and paste the content of the inlet.ticket generated above

      • JSON Node Configuration: Copy and paste the configuration below.

{
  "influxdb-inlet": {
    "from": "0.0.0.0:8086",
    "allow": "amazon-influxdb-outlet",
    "via": "influxdb",
    "tls": true
  }
}
  • Click Next to launch the CloudFormation run.

  • A successful CloudFormation stack run configures the Ockam inlet node on an EC2 machine.

  • The EC2 machine mounts an EFS volume created in the same subnet. Ockam state is stored in the EFS volume.

  • Connect to the EC2 machine via AWS Session Manager.

    • To view the log file, run sudo cat /var/log/cloud-init-output.log.

      • A successful run will show Ockam node setup completed successfully in the logs

    • To view the status of the Ockam node, run curl http://localhost:23345/show | jq

  • View the Ockam node status in CloudWatch.

    • Navigate to CloudWatch -> Log Groups and select influxdb-ockam-inlet-status-logs. Select the log stream for the EC2 instance.

    • The CloudFormation template creates a subscription filter that sends data to a CloudWatch alarm, influxdb-ockam-inlet-OckamNodeDownAlarm. The alarm will turn green once the Ockam node is running successfully.

  • An Auto Scaling group ensures at least one EC2 instance is running at all times.

  • Find your Ockam project ID and use it to construct the endpoint for INFLUXDB_ENDPOINT

# The command below prints your Ockam project ID
ockam project show --jq .id
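Putting the two steps together, the endpoint can be assembled in your shell. The project ID below is a made-up placeholder; in practice you would capture it from the ockam command above.

```shell
# Placeholder ID for illustration; in practice capture it with:
#   PROJECT_ID=$(ockam project show --jq .id)
PROJECT_ID="1a2b3c4d"
INFLUXDB_ENDPOINT="https://influxdb-inlet.${PROJECT_ID}.ockam.network:8086"
echo "$INFLUXDB_ENDPOINT"
```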
  • Follow the testing steps in the Docker example above for Node.js, or use the InfluxDB CLI client with the details below

INFLUXDB_ENDPOINT="https://influxdb-inlet.REPLACE_WITH_YOUR_PROJECT_ID.ockam.network:8086"
# The InfluxDB client requires a token, so any placeholder value works
INFLUXDB_TOKEN="OCKAM_MANAGED"
INFLUXDB_ORG="REPLACE_WITH_YOUR_ORG_NAME"

# Create config
influx config create -n testconfig -u $INFLUXDB_ENDPOINT -o $INFLUXDB_ORG -t "OCKAM_MANAGED"

# View buckets
influx bucket list
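As a quick smoke test of the portal, you can write a single point in InfluxDB line protocol and query it back with the CLI. The bucket name metrics and the field values here are made-up examples, not part of the setup above.

```shell
# A line-protocol point, the same shape the Node app writes:
#   measurement,tag_key=tag_value field=value[,field=value]
LINE='system_metrics,host=testhost cpu_load=0.42,free_disk_space=1024i'
echo "$LINE"

# Write and read it back through the encrypted portal (requires the
# testconfig created above and an existing bucket named "metrics"):
#   influx write --bucket metrics "$LINE"
#   influx query 'from(bucket: "metrics") |> range(start: -1h)'
```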