Docker


In this hands-on example we send end-to-end encrypted messages through Apache Kafka.

Ockam encrypts messages from a Producer to a specific Consumer. Only that specific Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Kafka. Operators of the Kafka cluster see only end-to-end encrypted data. A compromise of an operator's infrastructure cannot compromise your business data.

To learn how end-to-end trust is established, please read: “How does Ockam work?”

Run

This example requires Bash, Git, Curl, Docker, and Docker Compose. Please set up these tools for your operating system, then run the following commands:

# Clone the Ockam repo from GitHub.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam

# Navigate to this example’s directory.
cd examples/command/portals/kafka/apache/docker/

# Run the example, use Ctrl-C to exit at any point.
./run.sh

If everything runs as expected, you'll see the message: The example run was successful 🥳

Walkthrough

The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup is only a few simple steps, so please take some time to read and explore.

Administrator

  • The run.sh script calls the run function, which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of that project, and get a project membership credential.

  • The run function then generates three new enrollment tickets, each valid for 10 minutes and redeemable only once (see the sketch after this list). The first ticket is meant for the Ockam node that will run in Kafka Operator's network. The second and third tickets are meant for the Ockam nodes that will run in Application Team’s network.

  • In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function is acting on your behalf as the administrator of the Ockam project. It provisions Ockam nodes in Kafka Operator’s network and Application Team’s network, passing them their tickets using environment variables.

  • The run function invokes docker-compose for both Kafka Operator's network and Application Team's network.
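For illustration, these administrator steps correspond roughly to the following Ockam commands. This is a hedged sketch, not the literal contents of run.sh: the ticket file names are made up here, and the exact flags the script uses may differ.

# Create an identity, sign in to Ockam Orchestrator, and set up a new project.
ockam enroll

# Generate one-time enrollment tickets, each valid for 10 minutes.
# The --relay flag permits the ticket's redeemer to create a relay named kafka.
ockam project ticket --expires-in 10m --usage-count 1 --relay kafka > operator.ticket
ockam project ticket --expires-in 10m --usage-count 1 > consumer.ticket
ockam project ticket --expires-in 10m --usage-count 1 > producer.ticket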

Kafka Operator

# Create a dedicated and isolated virtual network for kafka_operator.
networks:
  kafka_operator:
    driver: bridge
  • Kafka Operator’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Kafka Operator.

  • In this network, docker compose starts a container with an Apache Kafka server. This container becomes available at kafka:9092 in the Kafka Operator's network.

  • Once the Kafka container is ready, docker compose starts a companion Ockam node in its own container, described by the ockam.yaml embedded in the script. The node will automatically create an identity, enroll with your project using the ticket passed to the container, and set up a Kafka outlet.

  • The Ockam node then uses this identity and membership credential to authenticate and create a relay in the project, at the relay address kafka, that forwards traffic back to the node (see the sketch after this list). The run function gave the enrollment ticket permission to use this relay address.
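The declarative ockam.yaml configuration this node uses is roughly equivalent to running the following Ockam commands by hand. This is a sketch under the assumptions above: the broker is reachable at kafka:9092 and the ticket arrives in the ENROLLMENT_TICKET environment variable.

# Enroll this node into the project using its one-time ticket.
ockam project enroll "$ENROLLMENT_TICKET"

# Create a Kafka outlet that can reach the broker at kafka:9092.
ockam kafka-outlet create --bootstrap-server kafka:9092

# Create a relay in the project so that remote inlets can reach this node.
ockam relay create kafka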

Application Team

# Create a dedicated and isolated virtual network for application_team.
networks:
  application_team:
    driver: bridge
  • Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for Application Team. In this network, docker compose starts a Kafka Consumer container and a Kafka Producer container.

  • The Kafka consumer container is created using a Dockerfile and an entrypoint script. The consumer enrollment ticket from run.sh is passed to the container via an environment variable.

  • When the Kafka consumer container starts in the Application Team's network, it runs its entrypoint, creating the Ockam node described by the ockam.yaml embedded in the script. The node will automatically create an identity, enroll with your project, and set up a Kafka inlet.

  • Finally, the entrypoint executes the command present in the docker-compose configuration, which launches a Kafka consumer that waits for messages on the demo topic. Received messages are printed out.

  • In the producer container, the process is analogous. Once the Ockam node is set up, the command within the docker-compose configuration launches a Kafka producer that sends messages (see the sketch below).
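To make those last two steps concrete, the consumer and producer commands look roughly like the following, using Apache Kafka's standard console tools. This is a sketch: the inlet's listening address is assumed here to be 127.0.0.1:9092, and the exact commands live in the docker-compose configuration.

# In the consumer container: wait for messages on the demo topic,
# connecting to the local Ockam Kafka inlet instead of the remote broker.
kafka-console-consumer.sh --topic demo --bootstrap-server 127.0.0.1:9092

# In the producer container: send an end-to-end encrypted message.
echo "Hello from the producer" | kafka-console-producer.sh --topic demo --bootstrap-server 127.0.0.1:9092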

Recap

We sent end-to-end encrypted messages through Apache Kafka.

Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Kafka brokers and other Consumers can only see encrypted messages.

All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.

Cleanup

To delete all containers and images:

./run.sh cleanup