
Cloud


In this hands-on example we send end-to-end encrypted messages through Warpstream Cloud.

Ockam encrypts messages from a Producer all the way to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Warpstream Cloud or the network where it is hosted. The operators of Warpstream Cloud can only see encrypted data in the network and in the service that they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.

To learn how end-to-end trust is established, please read: "How does Ockam work?"

Run

This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system. You will also need your Warpstream application key, which is passed to the example as an argument:

# Clone the Ockam repo from GitHub.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/kafka/warpstream/

# Run the example by calling the run.sh script, passing your Warpstream application key as an argument. Use Ctrl-C to exit at any point.
./run.sh _warpstream_application_key_

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script, that you ran above, and its accompanying files are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

The run.sh script calls the run function which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of this project, and get a project membership credential.
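
This first step can be reproduced with a single Ockam CLI command; a minimal sketch, with the exact invocation living in run.sh:

# Create a new identity, sign in to Ockam Orchestrator, set up a new
# project, and save a project membership credential locally.
ockam enroll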

The run function then creates a new Kafka cluster by using your Warpstream application key.

An Ockam relay is then started using the Ockam Kafka addon, which creates an encrypted relay that transmits Kafka messages over a secure portal.
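
The addon automates this step, but a hand-rolled equivalent might look roughly like the sketch below; the node name and the bootstrap address are illustrative assumptions, not values from the example:

# Run an Ockam node next to the Kafka server, expose the server as a
# Kafka outlet, and make the node reachable through the project relay.
ockam node create kafka-outlet-node
ockam kafka-outlet create --at kafka-outlet-node \
  --bootstrap-server your-cluster.warpstream.com:9092
ockam relay create kafka --at /project/default --to kafka-outlet-node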

We then generate two new enrollment tickets, each valid for 10 minutes, and each can be redeemed only once. The two tickets are meant for the Consumer and the Producer, in the Ockam nodes that will run in the Application Team's network.
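
A sketch of how such one-time tickets can be issued with the Ockam CLI; the attribute names and output files are illustrative:

# Each ticket can be redeemed exactly once and expires in 10 minutes.
ockam project ticket --usage-count 1 --expires-in 10m \
  --attribute role=consumer > consumer.ticket
ockam project ticket --usage-count 1 --expires-in 10m \
  --attribute role=producer > producer.ticket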

In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It creates a Kafka relay using a pre-baked Ockam Kafka addon which will host the Warpstream Kafka server, and starts the Application Team's network, passing its nodes their tickets using environment variables.

Application Teams

For the Application Team, the run function takes the enrollment tickets, sets them as the value of an environment variable, passes the Warpstream authentication variables, and invokes docker-compose to create the Application Team's network.
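
For illustration, that hand-off could look like the following; the variable names and the compose invocation are assumptions, not necessarily what run.sh does:

# Hand the tickets and Warpstream credentials to docker-compose through
# the environment; the compose file forwards them into the containers.
CONSUMER_ENROLLMENT_TICKET="$(cat consumer.ticket)" \
PRODUCER_ENROLLMENT_TICKET="$(cat producer.ticket)" \
WARPSTREAM_APPLICATION_KEY="$WARPSTREAM_APPLICATION_KEY" \
  docker compose up --detach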

The Application Team's docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team:

# Create a dedicated and isolated virtual network for application_team.
networks:
  application_team:
    driver: bridge

In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.

The Kafka Consumer container is created using this dockerfile and this entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via an environment variable.

When the Kafka Consumer container starts in the Application Team's network, it runs its entrypoint. The entrypoint enrolls with your project and then calls the Ockam kafka-consumer command, which starts a Kafka inlet that listens on localhost port 9092 and relays traffic through the Ockam relay.
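
Condensed, such an entrypoint might do something along these lines; the ticket variable name is an assumption:

# Redeem the one-time ticket to become a project member, then start a
# Kafka inlet so local clients reach the remote cluster through the
# encrypted portal on localhost:9092.
ockam project enroll "$ENROLLMENT_TICKET"
ockam kafka-inlet create --from 127.0.0.1:9092 --to /project/default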

Finally, the entrypoint executes the command present in the docker-compose configuration, which launches a Kafka consumer that waits for messages in the demo topic. Once messages are received, they are printed out.

In the producer container the process is analogous: once the Ockam kafka-producer inlet is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages.
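
Since both containers reach Kafka through their local inlets, these final commands can be ordinary Kafka console clients pointed at localhost. The demo topic comes from the example; the client invocations below are illustrative:

# Producer side: send a message into the demo topic through the inlet.
echo "Hello from the producer" | kafka-console-producer.sh \
  --topic demo --bootstrap-server 127.0.0.1:9092

# Consumer side: print every message that arrives on the demo topic.
kafka-console-consumer.sh --topic demo \
  --bootstrap-server 127.0.0.1:9092 --from-beginning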

Recap

We sent end-to-end encrypted messages through Warpstream Cloud.

Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Warpstream Cloud and other Consumers can only see encrypted messages.

All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.

Cleanup

To delete all containers and images:

./run.sh cleanup _warpstream_application_key_