Quick setup guide for Event Streams for IBM Cloud

This tutorial guides you through the steps to quickly start using Event Streams by provisioning an instance, creating a topic and a credential, and then producing and consuming data. Additionally, you'll learn how to connect IBM Cloud® Monitoring and IBM Cloud® Activity Tracker, and optionally how to use Kafka Connect or ksqlDB. Finally, you'll also find out how to get help with Event Streams.

This tutorial provides steps for the console, the CLI, and the APIs. Follow the steps for your preferred interface.

Follow these steps to complete the tutorial:

Before you begin

Before you get started, we highly recommend that you read the following information to better understand Apache Kafka, which Event Streams is built on:

Step 1: Choose your plan

Event Streams offers the following plans. To help you decide which one best suits your needs, see Choosing your plan.

  • The Lite plan offers access to a single partition in a multi-tenant Event Streams cluster free of charge. Use the Lite plan to try out Event Streams or build a proof-of-concept.

  • The Standard plan offers pay-as-you-go shared access to the multi-tenant Event Streams service. This service seamlessly autoscales as you increase the number of partitions you are using for your workload. The Standard plan has a limit of 100 partitions per instance.

  • The Enterprise plan offers pay-as-you-go access to an isolated single-tenant Event Streams service. In addition to a selection of throughput and storage options, this plan also offers user-managed encryption, private endpoints, and schema registry support, and it meets a higher number of regulatory compliance standards. The Enterprise plan is the best choice if data isolation, guaranteed performance, and increased retention are important considerations.

  • The Satellite plan offers pay-as-you-go access to an Event Streams service by deploying functionality similar to the Enterprise plan into your chosen Satellite locations. You can create a hybrid environment that brings the scalability and on-demand flexibility of public cloud services to the applications and data that run in your secure private cloud.

Using APIs

You can use multiple APIs to work with Event Streams. This tutorial uses the following APIs:

Step 2: Provision an Event Streams instance by using the console

  1. Log in to the IBM Cloud console.

  2. Click the Event Streams service in the Catalog.

  3. Select the Lite plan, Standard plan, or Enterprise plan from the Select a pricing plan section.

  4. Enter a name for your service. You can use the default value.

  5. Click Create. The Event Streams Resource list page opens.

  6. When your instance has been created, click on the instance name to view more information.

  7. Optional. You can complete the steps in the Getting started tutorial to run a sample starter app.

Step 2: Provision an Event Streams instance by using the CLI

If it's the first time you've used the CLI, see Getting started with the CLI.

To provision an instance of Event Streams with the IBM Cloud CLI, complete the following steps:

  1. Install the IBM Cloud CLI by completing the steps in Getting started with the IBM Cloud CLI.

  2. Log in to IBM Cloud by running the following command:

    ibmcloud login -a cloud.ibm.com
    
  3. Create an Event Streams instance on IBM Cloud using the Lite, Standard, or Enterprise plans.

    Select one of the following methods:

    • To create an instance from the CLI on the Enterprise plan, run the following command:

      ibmcloud resource service-instance-create <INSTANCE_NAME> messagehub enterprise-3nodes-2tb <REGION>
      

      Because each Enterprise cluster has its own dedicated resources, provisioning takes more time; a new Enterprise instance might take up to 3 hours to become available.

    • To create an instance from the CLI on the Standard plan, run the following command:

      ibmcloud resource service-instance-create <INSTANCE_NAME> messagehub standard <REGION>
      

      Provisioning a new Standard plan instance is instantaneous because the underlying resources are already set up.
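
  4. Optional: Check that the instance is ready by displaying its details. Replace <INSTANCE_NAME> with the name that you chose:

    ibmcloud resource service-instance <INSTANCE_NAME>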

Step 2: Provision an Event Streams instance by using the resource controller API

The preferred method to provision an instance is to use the CLI.

Alternatively, you can use the resource controller API. First, retrieve an access token, and then run a resource controller API command with that token to create the instance.

Step 2a: Retrieve an access token with the resource controller API

You can retrieve your access token programmatically by first creating a service ID API key for your application, and then exchanging your API key for an IBM Cloud IAM token.

  1. Log in to IBM Cloud with the IBM Cloud CLI.

    ibmcloud login
    

    If the login fails, run the ibmcloud login --sso command to try again. The --sso parameter is required when you log in with a federated ID. If this option is used, go to the link listed in the CLI output to generate a one-time passcode.

  2. Select the account, region, and resource group that contain your provisioned instance of Event Streams.

  3. Create a service ID for your application.

    ibmcloud iam service-id-create SERVICE_ID_NAME
                [-d, --description DESCRIPTION]
    
  4. Refer to Managing access to resources for information about the service ID.

    You can assign access permissions for your service ID by using the IBM Cloud console. To learn how the Manager, Writer, and Reader access roles map to user access to Event Streams resources, see What can I secure?

  5. Create a service ID API key.

    ibmcloud iam service-api-key-create API_KEY_NAME SERVICE_ID_NAME
                [-d, --description DESCRIPTION]
                [--file FILE_NAME]
    

    Replace API_KEY_NAME with a name for the new API key and SERVICE_ID_NAME with the unique alias that you assigned to your service ID in the previous step. Save your API key by downloading it to a secure location.

  6. Call the IAM Identity Services API to retrieve your access token.

    $ curl -X POST \
        "https://iam.cloud.ibm.com/identity/token" \
        -H "content-type: application/x-www-form-urlencoded" \
        -H "accept: application/json" \
        -d 'grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey&apikey=<API_KEY>' > token.json
    

    In the request, replace <API_KEY> with the API key that you created in the previous step. The following truncated example shows the contents of the token.json file:

    {
        "access_token": "b3VyIGZhdGhlc...",
        "expiration": 1512161390,
        "expires_in": 3600,
        "refresh_token": "dGhpcyBjb250a...",
        "token_type": "Bearer"
    }
    

    Use the full access_token value, prefixed by the Bearer token type, to authenticate the resource controller API request that creates your instance in the next step.

    Access tokens are valid for 1 hour, but you can regenerate them as needed. To maintain access to the service, regenerate the access token for your API key on a regular basis by calling the IAM Identity Services API.

    • Use IBM Cloud Identity and Access Management (IAM) tokens to make authenticated requests to IBM Cloud services without embedding service credentials in every call.

    • IAM authentication uses access tokens for authentication, which you acquire by sending a request with an API key.
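
  7. Optional: Export the access token into an environment variable for use in the next step. The following is a minimal sketch that assumes the jq tool is installed:

    # Assumes jq is installed. Builds the value that is used as ${token} in Step 2b.
    token="Bearer $(jq -r .access_token token.json)"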

Step 2b: Create an instance

Run a command like the following to create an Enterprise instance in US South:

curl -X POST https://resource-controller.cloud.ibm.com/v2/resource_instances -H "Authorization: ${token}" -H "Content-Type: application/json" \
-d '{ "name": "JG-test-curl", "target": "us-south", "resource_group":"9eba3cff1b0540b9ab7fb93829911da0", "resource_plan_id": "ibm.message.hub.enterprise.3nodes.2tb", "parameters":{"service-endpoints":"public","throughput":"150"}}'

Step 3: Create a topic and select number of partitions by using the console

For guidance about the settings that you can modify when creating topics, see topic configuration.

  1. From your newly provisioned instance, navigate to Topics using the menu on the left.

  2. Click the Create topic button and enter a topic name. Click Next. Topic names are restricted to a maximum of 200 characters.

  3. Select the number of partitions.

    One or more partitions make up a topic. A partition is an ordered list of messages. 1 partition is sufficient for getting started, but production systems often have more.

    Partitions are distributed across the brokers to increase the scalability of your topic. You can also use them to distribute messages across the members of a consumer group.

    Click Next.

  4. Set the message retention period. This is how long messages are retained before they are deleted. If your messages are not read by a consumer within this period, they are lost. The default retention period for messages is 24 hours. The minimum is 1 hour and the maximum is 30 days. Specify this value as multiples of hours.

    Click Create topic.

Working with topics using the console

After you create topics, you can use the console to list topics.

List topics

From your Event Streams instance, navigate to Topics from the menu on the left.

From the Topics page, you can view the following information about your topics: Name, Partitions, Retention time, Retention size, Cleanup policy, and Stream landing.

Step 3: Create a topic and select number of partitions by using the CLI

For guidance about the settings that you can modify when creating topics, see topic configuration.

Use the following ibmcloud es topic-create command to create a new topic with your chosen number of partitions:

ibmcloud es topic-create [--name] TOPIC_NAME [--partitions PARTITIONS] [--config KEY=VALUE[;KEY=VALUE]* ]*

Prerequisites: None

Command options:

--name value, -n value

Topic name. Topic names are restricted to a maximum of 200 characters.

--partitions value, -p value

Set the number of partitions for the topic.

One or more partitions make up a topic. A partition is an ordered list of messages. 1 partition is sufficient for getting started, but production systems often have more.

Partitions are distributed across the brokers to increase the scalability of your topic. You can also use them to distribute messages across the members of a consumer group.

--config KEY=VALUE, -c KEY=VALUE (optional)

Set a configuration option for the topic as a KEY=VALUE pair.

You can specify multiple --config options. Each '--config' option can specify a semicolon-delimited list of assignments. The following list shows the valid configuration keys:

  • cleanup.policy
  • retention.ms
  • retention.bytes
  • segment.bytes
  • segment.ms
  • segment.index.bytes

The default retention period for messages as specified by the retention.ms key is 24 hours. The minimum is 1 hour and the maximum is 30 days. Specify this value as multiples of hours.
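
For example, a command like the following creates a topic named my-topic with 3 partitions and a 24-hour retention period (the topic name is illustrative):

ibmcloud es topic-create --name my-topic --partitions 3 --config retention.ms=86400000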

Working with topics

After you create topics, you can use the CLI to list topics and view details about your cluster.

List a topic using the ibmcloud es topics command

Run the ibmcloud es topics command to list your topics.

ibmcloud es topics [--filter FILTER] [--json]

Prerequisites: None

Command options:

--filter value, -f value (optional)
Topic name.
--json (optional)
Format output in JSON. Up to 1000 topics are returned.
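
For example, to list all of your topics in JSON format:

ibmcloud es topics --json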

Display cluster details using the ibmcloud es cluster command

Run the ibmcloud es cluster command to display the details of the cluster, including the Kafka version.

ibmcloud es cluster [--json]

Prerequisites: None

Command options:

--json (optional)
Format output in JSON.

For information about other Event Streams CLI commands for topics, see CLI reference.

Step 3: Create a topic and select number of partitions by using the Admin REST API

Event Streams provides a REST API for administration that you can use to create and list topics.

You can create a Kafka topic by issuing a POST request to the /admin/topics path. The body of the request must contain a JSON document. For example:

{
    "name": "topicname",
    "partitions": 1,
    "configs": {
        "retentionMs": 86400000,
        "cleanupPolicy": "delete"
    }
}

The JSON document must contain a name attribute, specifying the name of the Kafka topic to create. Topic names are restricted to a maximum of 200 characters. The JSON can also specify the number of partitions to assign to the topic (using the partitions property). If the number of partitions is not specified, the topic is created with a single partition.

One or more partitions make up a topic. A partition is an ordered list of messages. 1 partition is sufficient for getting started, but production systems often have more.

Partitions are distributed across the brokers to increase the scalability of your topic. You can also use them to distribute messages across the members of a consumer group.

You can also specify an optional configs object within the request. This allows you to specify the retentionMs property, which controls how long (in milliseconds) Kafka retains messages published to the topic. After this time elapses the messages are automatically deleted to free space. You must specify the value of the retentionMs property in a whole number of hours (for example, multiples of 3600000). The default retention period for messages is 24 hours. The minimum is 1 hour and the maximum is 30 days.

For guidance about the settings that you can modify when creating topics, see topic configuration.

The expected HTTP status codes are as follows:

  • 202: Topic creation request was accepted.
  • 400: Invalid request JSON.
  • 403: Not authorized to create topic.
  • 422: Semantically invalid request.

If the request to create a Kafka topic succeeds, HTTP status code 202 (Accepted) is returned. If the operation fails, an HTTP status code of 422 (Unprocessable Entity) is returned, and a JSON object containing additional information about the failure is returned as the body of the response.

Example

You can exercise the REST endpoint for creating a Kafka topic by using the following snippet of curl. You need to supply your own API key or token and the correct admin API endpoint (the ADMIN_URL variable in the example). For more information about obtaining a key or a token, see Retrieve an access token with the API.

curl -i -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' -H "Authorization: Bearer ${TOKEN}" --data '{ "name": "newtopic", "partitions": 1}' ${ADMIN_URL}/admin/topics
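
To also set topic configuration at creation time, include a configs object like the one in the earlier JSON example. The following variant is a sketch that sets a 24-hour retention period:

curl -i -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' -H "Authorization: Bearer ${TOKEN}" --data '{ "name": "newtopic", "partitions": 1, "configs": { "retentionMs": 86400000 } }' ${ADMIN_URL}/admin/topics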

Working with topics using the Admin REST API

After you create topics, you can use the Admin REST API to list topics. For information about other topic-related commands you can run, see Admin REST API methods.

List topics

You can list all your Kafka topics by issuing a GET request to the /admin/topics path.

The expected status code is:

  • 200: The topic list is returned as JSON in the following format:
[
  {
    "name": "topic1",
    "partitions": 1,
    "retentionMs": 86400000,
    "cleanupPolicy": "delete"
  },
  { "name": "topic2",
    "partitions": 2,
    "retentionMs": 86400000,
    "cleanupPolicy": "delete"
  }
]

A successful response will have HTTP status code 200 (OK) and contain an array of JSON objects, where each object represents a Kafka topic and has the following properties:

Event Streams topic properties:

  • name: The name of the Kafka topic.
  • partitions: The number of partitions assigned to the Kafka topic.
  • retentionMs: The retention period for messages on the topic (in milliseconds).
  • cleanupPolicy: The cleanup policy of the Kafka topic.

List topics example

You can use the following curl command to list all your Kafka topics:

curl -i -X GET -H 'Accept: application/json' -H "Authorization: Bearer ${TOKEN}" ${ADMIN_URL}/admin/topics
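
To work with the response programmatically, you can pipe it to a JSON processor. The following is a minimal sketch that assumes the jq tool is installed and prints only the topic names:

curl -s -X GET -H 'Accept: application/json' -H "Authorization: Bearer ${TOKEN}" ${ADMIN_URL}/admin/topics | jq -r '.[].name'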

Step 4: Create a service credential by using the console

To allow you to connect to your Event Streams instance, create a service key by using the IBM Cloud console:

  1. Locate your Event Streams service in the Resource list.
  2. Click your service tile.
  3. Click Service credentials.
  4. Click New credential.
  5. Complete the details for your new credential, such as a name and role, and click Add. A new credential appears in the credentials list.
  6. Expand the new credential's section to reveal the details in JSON format.

Step 4: Create a service credential by using the CLI

Create a service key by using the IBM Cloud CLI, so that you can connect to your Event Streams instance:

  1. Locate your service:

    ibmcloud resource service-instances
    
  2. Create a service key:

    ibmcloud resource service-key-create <key_name> <key_role> --instance-name <your_service_name>
    
  3. Print the service key:

    ibmcloud resource service-key <key_name>
    

    Each service key contains a single set of endpoint details. For service instances that are configured to connect to a single network type, either the IBM Cloud Public network (the default) or the IBM Cloud Private network, the service key contains the details for that network type. For instances that are configured to support both the public and private networks, details for the public network are returned. If you want details for the private network, you must add the --service-endpoint private parameter to the service-key-create command. For example:

    ibmcloud resource service-key-create <private-key-name> <role> --instance-name <instance-name> --service-endpoint private
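
    To use the credential programmatically, you can also print the service key in JSON format. This is a sketch that assumes your version of the IBM Cloud CLI supports the --output json option on resource commands:

    ibmcloud resource service-key <key_name> --output json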
    

Step 4: Create a service credential by using the CLI and the REST producer API

To connect to your Event Streams instance by using the REST producer API, authenticate with a bearer token. To obtain your token by using the IBM Cloud CLI, first log in to IBM Cloud and then run the following command:

ibmcloud iam oauth-tokens

Place this token in the Authorization header of the HTTP request in the form Bearer <token>. Both API keys and JWT tokens are supported.
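
To capture the token in a shell variable for later requests, you can parse the command output. The following is a minimal sketch; it assumes that the command prints a line in the form IAM token:  Bearer <token>, so check the output format of your CLI version first:

# Assumption: the CLI prints a line like "IAM token:  Bearer <token>".
# The sed command strips that prefix so that only the raw token remains.
token=$(ibmcloud iam oauth-tokens | sed -n 's/^IAM token:[[:space:]]*Bearer[[:space:]]*//p')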

Step 5: Produce data using the console

You cannot produce data by using the console. You can produce data using the command line, the REST Producer API, or the Kafka API.

However, you can complete the steps for the console in the Getting started tutorial to run a sample starter app and see messages flowing through a topic.

Step 5: Produce data using the command line

You can use the Event Streams Kafka console producer tool to produce data. The console tools are in the bin directory of your Kafka client download, which you can download from Apache Kafka downloads. We recommend that you download the latest available stable binary version. Kafka client versions are backwards compatible with the version of Kafka on the server.

You must provide a list of brokers (using the BOOTSTRAP_ENDPOINTS property) and SASL credentials.

To provide the SASL credentials to this tool, create a properties file based on the following example:

    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<user>" password="<api_key>";
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    ssl.protocol=TLSv1.2
    ssl.enabled.protocols=TLSv1.2
    ssl.endpoint.identification.algorithm=HTTPS

Use the <user> field from the service key as the username and the <api_key> field from the service key as the password. You can find these values in the Event Streams Service Credentials tab in the IBM Cloud console.

Event Streams provides example producer.properties and consumer.properties files for the Java client.

After you create the properties file, you can run the console producer in a terminal as follows:

   kafka-console-producer.sh --broker-list BOOTSTRAP_ENDPOINTS --producer.config CONFIG_FILE --topic TOPIC_NAME

Replace the following variables in the example with your own values:

  • BOOTSTRAP_ENDPOINTS with the value from your Event Streams Service Credentials tab in the IBM Cloud console.
  • CONFIG_FILE with the path of the configuration file.
  • Use the <bootstrap_endpoints> field from the service key as the bootstrap.servers property of your Kafka application.
  • Use the <user> field from the service key as the username and the <api_key> field from the service key as the password. Ensure that your application parses the details.

You can use many of the other options of this tool, except for those that require access to ZooKeeper. For more information, see Using Kafka console tools with Event Streams.
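
For example, the following sketch sends a single test message by piping it to the console producer (the properties file name and topic name are illustrative):

   echo "test message" | kafka-console-producer.sh --broker-list "$BOOTSTRAP_ENDPOINTS" --producer.config producer.properties --topic my-topic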

Producer configuration settings

For details of some of the most important settings that you can configure for the producer, see the following information:

Step 5: Produce data using the REST Producer API

Use the v2 endpoint of the REST Producer API to send messages of type text, binary, JSON, or avro to topics. With the v2 endpoint you can use the Event Streams schema registry by specifying the schema for the avro data type.

You can optionally include record headers in the request. Each header has a name and a value, and the value must be base64 encoded in the request body.

The following code shows an example of sending a message of text type by using curl:

curl -v -X POST \
-H "Authorization: Bearer $token" -H "Content-Type: application/json" -H "Accept: application/json" \
-d '{
  "headers": [
    {
      "name": "colour",
      "value": "YmxhY2s="
    }
  ],
  "key": {
    "type": "text",
    "data": "Test Key"
  },
  "value": {
    "type": "text",
    "data": "Test Value"
  }
}' \
"$kafka_http_url/v2/topics/$topic_name/records"

For more information, see the Event Streams REST Producer v2 endpoint API reference.

Producer configuration settings

For details of some of the most important settings that you can configure for the producer, see the following information:

Step 6: Consume data using the console

You cannot consume data by using the console. You can consume data using the command line or the Kafka API.

However, you can complete the steps in the Getting started tutorial to run a sample starter app and see messages flowing through a topic.

Step 6: Consume data using the command line

You can use the Event Streams Kafka console consumer tool to consume data.

The console tools are in the bin directory of your Kafka client download.

You must provide a list of brokers and SASL credentials. After you create the properties file as described in produce data using the command line, run the console consumer in a terminal as follows:

   kafka-console-consumer.sh --bootstrap-server BOOTSTRAP_ENDPOINTS --consumer.config CONFIG_FILE --topic TOPIC_NAME 

Replace the following variables in the example with your own values:

  • BOOTSTRAP_ENDPOINTS with the value from your Event Streams Service Credentials tab in the IBM Cloud console.
  • CONFIG_FILE with the path of the configuration file.

You can use many of the other options of this tool, except for those that require access to ZooKeeper. For more information, see Using Kafka console tools with Event Streams.
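
For example, to read all of the messages that are currently retained on the topic rather than only new ones, add the --from-beginning option (the properties file name and topic name are illustrative):

   kafka-console-consumer.sh --bootstrap-server "$BOOTSTRAP_ENDPOINTS" --consumer.config consumer.properties --topic my-topic --from-beginning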

Consumer configuration settings

For details of some of the most important settings that you can configure for the consumer, see the following information:

Step 6: Consume data using an API

You cannot consume data by using an Event Streams API. However, you can consume data from Kafka by using the native Kafka client libraries. For more information, see Kafka consumer API.

As an alternative, use the command line.

Step 7: Connect IBM Cloud Monitoring for operational visibility by using the console

You can use IBM Cloud Monitoring to get operational visibility into the performance and health of your applications, services, and platforms. IBM Cloud Monitoring provides administrators, DevOps teams, and developers full stack telemetry with advanced features to monitor and troubleshoot, define alerts, and design custom dashboards.

For more information about how to use Monitoring with Event Streams, see:

Step 7: Connect IBM Cloud Monitoring for operational visibility by using the CLI or command line

You cannot connect IBM Cloud Monitoring by using the CLI or command line. Use the console to complete this task.

Step 7: Connect IBM Cloud Monitoring for operational visibility by using an API

You cannot connect IBM Cloud Monitoring by using an API. Use the console to complete this task.

Step 8: Connect IBM Cloud® Activity Tracker to audit service activity

IBM Cloud Activity Tracker allows you to view, manage, and audit service activity to comply with corporate policies and industry regulations. Activity Tracker records user-initiated activities that change the state of a service in IBM Cloud. Use Activity Tracker to track how users and applications interact with the Event Streams service on the Standard and Enterprise plans.

To get up and running with Activity Tracker, see Getting Started with Activity Tracker.

Activity Tracker can have only one instance per location. To view events, you must access the web UI of the Activity Tracker service in the same location where your service instance is available. For more information, see Launch the web UI.

For more information about events specific to Event Streams, see:

Events are formatted according to the Cloud Auditing Data Federation (CADF) standard. For further details of the information they include, see CADF standard.

Step 8: Connect IBM Cloud® Activity Tracker using the CLI or command line to audit service activity

You cannot connect Activity Tracker using the CLI or command line. Use the console to complete this task.

Step 8: Connect IBM Cloud® Activity Tracker using an API to audit service activity

You cannot connect Activity Tracker using an API. Use the console to complete this task.

Step 9: (Optional) Use Kafka Connect or ksqlDB

Kafka Connect

Kafka Connect is part of the Apache Kafka project and allows you to connect external systems to Kafka. It consists of a runtime that can run connectors to copy data to and from a cluster.

For more information, see Using Kafka Connect with Event Streams.

Kafka Connect is not part of the managed Event Streams service.

ksqlDB

You can use ksqlDB with the Event Streams Enterprise plan for stream processing.

ksqlDB is a purpose-built database for event streaming. Use it to build end-to-end event streaming applications quickly with a purpose-built stream processing database for Apache Kafka.

First, complete these setup steps. Then, the quickest and easiest way to run ksqlDB with Event Streams is to use a Docker container, as described in the ksqlDB quickstart.

Step 10: Get help

For a general overview of how to get help with Event Streams and where to get support, see Getting help and support.

FAQs details answers to some of the common questions about Event Streams.

If you're experiencing a problem with Event Streams, see Reporting a problem to the Event Streams team - Standard and Enterprise plans for a list of the information that you need to gather before you open a case.