Managing authentication to your Event Streams instances

By default, Event Streams supports two SASL (Simple Authentication and Security Layer) mechanisms for authenticating to Event Streams instances: PLAIN and OAUTHBEARER.

A Kafka client that is configured with SASL PLAIN uses an IAM API key as a plain-text password in the authentication process, and Event Streams sends the API key to IAM for verification. After it is authenticated, the client stays connected and does not need to re-authenticate until it disconnects and reconnects.
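
The following snippet is a minimal sketch of the client properties for SASL PLAIN, assuming a Java Kafka client. The broker list and API key are placeholders that come from your service credentials; the token user name follows the convention that Event Streams documents for API key credentials.

import java.util.Properties;

// Sketch only: replace the placeholders with values from your service credentials.
Properties plainProps = new Properties();
plainProps.put("bootstrap.servers", "<kafka_brokers_sasl>");   // placeholder broker list
plainProps.put("security.protocol", "SASL_SSL");
plainProps.put("sasl.mechanism", "PLAIN");
plainProps.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"token\" password=\"<iam_api_key>\";");   // the API key is sent as the password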

A Kafka client that is configured with SASL OAUTHBEARER uses an IAM access token in the authentication process, and Event Streams verifies the token by using the IAM public key. Because an IAM access token has an expiration time (usually 1 hour), the Kafka client must generate a new token and authenticate again when the previous token approaches its expiration time. This approach provides better security than SASL PLAIN in two ways:

  1. The API key always stays on the client side, where it is used to generate the access token, and is not sent to Kafka brokers over the network, which removes the risk of API key exposure.
  2. The authentication process is repeated whenever the access token approaches its expiration time, which minimizes the risk of token exposure.

For more secure authentication, SASL OAUTHBEARER is the only recommended authentication method for Kafka clients. See Configuring your Kafka API client for how to configure SASL OAUTHBEARER in Kafka clients.
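
As a companion to the linked instructions, the following snippet is a minimal sketch of the client properties for SASL OAUTHBEARER, again assuming a Java Kafka client. The broker list is a placeholder, and the login callback handler class name is a placeholder as well: use the handler that is described in Configuring your Kafka API client, which exchanges the IAM API key for an access token and refreshes the token before it expires.

import java.util.Properties;

// Sketch only: the callback handler class name below is a placeholder, not a real class.
Properties oauthProps = new Properties();
oauthProps.put("bootstrap.servers", "<kafka_brokers_sasl>");   // placeholder broker list
oauthProps.put("security.protocol", "SASL_SSL");
oauthProps.put("sasl.mechanism", "OAUTHBEARER");
oauthProps.put("sasl.jaas.config",
    "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;");
oauthProps.put("sasl.login.callback.handler.class",
    "com.example.IamOAuthBearerLoginCallbackHandler");          // placeholder handler class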

Enterprise users have the option to disable SASL PLAIN in their Enterprise instances. Use the following command:

ibmcloud resource service-instance-update <instance-name> -p '{"iam_token_only":true}'

Connecting to Event Streams

For more information about how to get a security key credential for an external application, see Connecting to Event Streams.

Managing authorization to your Event Streams resources

You can secure your Event Streams resources in a fine-grained manner to manage the access that you want to grant each user to each resource.

When you change IAM policies and permissions, they can sometimes take several minutes to be reflected in the underlying service.

What can I secure?

Within Event Streams, you can secure access to the following resources:

  • Cluster (cluster): You can control which applications and users can connect to the service.
  • Topics (topic): You can control the ability of users and applications to create, delete, read, and write to a topic.
  • Consumer groups (group): You can control an application's ability to join a consumer group.
  • Producer transactions (txnid): You can control the ability to use the transactional producer capability in Kafka (that is, single, atomic writes across multiple partitions).

The levels of access (also known as roles) that you can assign to a user for each resource are as follows.

Example Event Streams user roles and actions
| Access role | Description of actions | Example actions |
| --- | --- | --- |
| Reader | Perform read-only actions within Event Streams, such as viewing resources. | Allow an app to connect to a cluster by assigning read access to the cluster resource type. |
| Writer | Writers have permissions beyond the Reader role, including editing Event Streams resources. | Allow an app to produce to topics by assigning write access to the topic resource type and topic name. |
| Manager | Managers have permissions beyond the Writer role to complete privileged actions. In addition, managers can create and edit Event Streams resources. | Allow full access to all resources by assigning manage access to the Event Streams instance. |

How do I assign access?

Cloud Identity and Access Management (IAM) policies are attached to the resources to be controlled. Each policy defines the level of access that a particular user has and the resource or set of resources to which that access applies. A policy consists of the following information:

  • The type of service the policy applies to. For example, Event Streams. You can scope a policy to include all service types.
  • The instance of the service to be secured. You can scope a policy to include all instances of a service type.
  • The type of resource to be secured. The valid values are cluster, topic, group, schema, or txnid. Specifying a type is optional. If you do not specify a type, the policy applies to all resources in the service instance. If you want to specify more than one type of resource, you must create one policy per resource type.
  • The resource to be secured. Specify for resources of type topic, group, schema, and txnid. If you do not specify the resource, the policy then applies to all resources of the type specified in the service instance.
  • The role that is assigned to the user. For example, Reader, Writer, or Manager.

For more information about IAM, see IBM Cloud Identity and Access Management.

For an example of how to set policies, see IBM Cloud IAM Service IDs and API Keys.

Wildcarding

You can take advantage of the IAM wildcarding facility to set policies for groups of resources on Event Streams. For example, if you give all your topics names like Dept1_Topic1 and Dept1_Topic2, you can set policies for topics that are called Dept1_* and these policies are applied to all topics with that prefix. For more information, see Assigning access by using wildcard policies.

What are the default security settings?

By default, when Event Streams is provisioned, the user who provisioned it is granted the manager role to all the instance's resources. Additionally, any user who has a manager role for either 'All' services or 'All' Event Streams service instances in the same account also has full access.

You can then apply more policies to extend access to other users. You can scope a policy either to Event Streams as a whole or to individual resources within Event Streams. For more information, see Common actions.

Only users with an administrator role for an account can assign policies to users. Assign policies either by using the IBM Cloud dashboard or by using ibmcloud CLI commands.

Common actions

The following tables summarize some common Event Streams actions and the access that you need to assign.

Cluster requirements

By controlling access to the cluster resource, you can determine which applications and users can connect to the service. In addition to the policies that are required for the resource types in the following sections, access to the cluster resource type with a role of Reader, Writer, or Manager is required.

Producer actions

The following table describes the role and resource requirements that are needed by a user or an application that produces messages to Event Streams. In addition to the policies that are required for these resource types, access to the cluster resource type with a role of Reader, Writer, or Manager is required. An example snippet of a transactional producer follows the table.

Producer actions
Producer actions Topic Group Txnid
Send a message to a topic. Writer Writer [1]
Allow an app to produce to a topic transactionally. Writer Reader Writer
Initialize a transaction. Writer
Commit a transaction. Writer Writer
Abort a transaction. Writer
Send offsets to a transaction. Reader Writer
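
The following Java snippet is a minimal sketch of a transactional producer that exercises the actions in the preceding table. The topic name, group ID, and transactional ID are placeholders, and the connection and SASL properties from the authentication section are assumed to be set in props.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();                 // plus the connection and SASL properties shown earlier
props.put("transactional.id", "example-txn-id");     // transactional calls need Writer access on this txnid
props.put("key.serializer", StringSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions();                         // initialize a transaction
try {
    producer.beginTransaction();
    producer.send(new ProducerRecord<>("example-topic", "key", "value"));  // needs Writer access on the topic
    producer.sendOffsetsToTransaction(               // also needs Reader access on the consumer group
        Collections.singletonMap(new TopicPartition("example-topic", 0), new OffsetAndMetadata(42L)),
        new ConsumerGroupMetadata("example-group"));
    producer.commitTransaction();                    // commit the transaction
} catch (RuntimeException e) {
    producer.abortTransaction();                     // abort the transaction on failure
} finally {
    producer.close();
}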

Consumer actions

The following table describes the role and resource requirements that are needed by a user or an application that consumes messages from Event Streams. In addition to the policies that are required for these resource types, access to the cluster resource type with a role of Reader, Writer, or Manager is required. An example consumer snippet follows the table.

Consumer actions
Consumer actions Topic Group Txnid
Allow an app to consume a topic (consumer group). Reader Reader [2]
Allow an app to connect and consume from a specific topic (no consumer group). Reader
Allow an app to connect and consume from any topic (no consumer group). Reader
Use Kafka Streams. Manager Reader
Delete consumer group. Manager
Assign. Reader
Commit async. Reader Reader
Commit sync. Reader Reader
Enforce rebalance. Reader
Poll. Reader
Subscribe. Reader
Unsubscribe. Reader Writer
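
The following Java snippet is a minimal sketch of a consumer that subscribes, polls, and commits offsets, corresponding to actions in the preceding table. The topic and group names are placeholders, and the connection and SASL properties from the authentication section are assumed to be set in props.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();                 // plus the connection and SASL properties shown earlier
props.put("group.id", "example-group");              // joining this group needs Reader access on the group
props.put("enable.auto.commit", "false");
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("example-topic"));  // reading the topic needs Reader access on the topic
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
        consumer.commitSync();                       // commit the consumed offsets (see the Commit sync row)
    }
}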

Administration actions

The following table describes the role and resource requirements for administration actions. In addition to the policies that are required for these resource types, access to the cluster resource type with a role of Reader, Writer, or Manager is required. An example snippet that uses the Kafka Admin client follows the table.

Administration actions
| Administration actions | Topic | Group | Txnid |
| --- | --- | --- | --- |
| Alter topic configurations. | Manager | | |
| Alter consumer group offsets. | Reader | Reader | |
| Create partitions. | Manager | | |
| Create topics. | Manager | | |
| Delete consumer group offsets. | Reader | Manager | |
| Delete consumer groups. | | Manager | |
| Delete records. | Manager | | |
| Delete topics. | Manager | | |
| Describe producers. | Reader | | |
| Fence producers. | | | Writer |
| Incrementally alter topic configurations. | Manager | | |
| Remove members from consumer group. | | Reader | |
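
The following Java snippet is a minimal sketch that uses the Kafka Admin client for a few of the actions in the preceding table. The topic and group names, partition counts, and replication factor are placeholders, and the connection and SASL properties from the authentication section are assumed to be set in props.

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewPartitions;
import org.apache.kafka.clients.admin.NewTopic;

Properties props = new Properties();                 // plus the connection and SASL properties shown earlier

try (Admin admin = Admin.create(props)) {
    // Create a topic with 3 partitions and a replication factor of 3 (needs Manager access on the topic).
    admin.createTopics(Collections.singleton(new NewTopic("example-topic", 3, (short) 3))).all().get();
    // Increase the partition count of the topic (needs Manager access on the topic).
    admin.createPartitions(Collections.singletonMap("example-topic", NewPartitions.increaseTo(6))).all().get();
    // Delete a consumer group (needs Manager access on the group).
    admin.deleteConsumerGroups(Collections.singleton("example-group")).all().get();
} catch (ExecutionException | InterruptedException e) {
    // Handle the failed administration request.
}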

Schema Registry actions

With Schema Registry actions, you can manage schemas and their versions, such as creating, updating, and deleting artifacts or artifact versions (Enterprise plan only). Artifact is the term that Event Streams uses to describe related schemas, which are often associated with and used by a particular Kafka topic. The term subject is often used to describe the same concept. For more information, see Using Event Streams Schema Registry. In addition to the policies that are required for this resource type, access to the cluster resource type with a role of Reader, Writer, or Manager is required.

Schema Registry actions
| Schema Registry actions | Schema |
| --- | --- |
| Get latest artifact. | Reader |
| List versions. | Reader |
| Get version. | Reader |
| Get metadata by content. | Reader |
| Get metadata. | Reader |
| Get version metadata. | Reader |
| Get the schema string identified by the input ID. | Reader |
| Retrieve only the schema identified by the input ID. | Reader |
| Get the subject-version pairs identified by the input ID. | Reader |
| Get a list of versions registered under the specified subject. | Reader |
| Get artifact compatibility rule. | Reader |
| Get a specific version of the schema registered under this subject. | Reader |
| Get the schema for the specified version of this subject. | Reader |
| Register a new schema under the specified subject (if version already exists). | Reader |
| Check if a schema has already been registered under the specified subject. | Reader |
| Get a list of IDs of schemas that reference the schema with the given subject and version. | Reader |
| Test input schema against a particular version of a subject’s schema for compatibility. | Reader |
| Perform a compatibility check on the schema against one or more versions in the subject. | Reader |
| Get compatibility level for a subject. | Reader |
| Register a new schema under the specified subject (if version is to be created). | Writer |
| Create artifact. | Writer |
| Update artifact. | Writer |
| Disable artifact. | Writer |
| Create version. | Writer |
| Delete version. | Manager |
| Update artifact state. | Manager |
| Update version state. | Manager |
| Delete artifact. | Manager |
| Create artifact compatibility rule. | Manager |
| Update artifact compatibility rule. | Manager |
| Update compatibility level for the specified subject. | Manager |
| Delete artifact compatibility rule. | Manager |
| Delete the specified subject and its associated compatibility level if registered. | Manager |
| Delete a specific version of the schema registered under this subject. | Manager |
| Delete the specified subject-level compatibility level config and revert to the global default. | Manager |
| Update the global compatibility rule. | [3] |
| Update the global compatibility level. | [4] |

Schema Registry compatibility actions

For interoperation with existing applications, the Event Streams Schema Registry supports a subset of the Confluent Schema Registry API v7.2. To perform these actions, you need the following resource-level access. An example of calling this API follows the table.

Compatibility actions table
| Schema Registry compatibility actions | Schema |
| --- | --- |
| Get the schema string identified by the input ID. | Reader |
| Retrieve only the schema identified by the input ID. | Reader |
| Get the schema types that are registered with Schema Registry. | |
| Get the subject-version pairs identified by the input ID. | Reader |
| Get a list of registered subjects. | |
| Get a list of versions registered under the specified subject. | Reader |
| Delete the specified subject and its associated compatibility level if registered. | Manager |
| Get a specific version of the schema registered under this subject. | Reader |
| Get the schema for the specified version of this subject. | Reader |
| Register a new schema under the specified subject. | Reader/Writer [5] |
| Check if a schema has already been registered under the specified subject. | Reader |
| Delete a specific version of the schema registered under this subject. | Manager |
| Get a list of IDs of schemas that reference the schema with the given subject and version. | Reader |
| Test input schema against a particular version of a subject’s schema for compatibility. | Reader |
| Perform a compatibility check on the schema against one or more versions in the subject. | Reader |
| Update global compatibility level. | [6] |
| Get global compatibility level. | |
| Update compatibility level for the specified subject. | Manager |
| Get compatibility level for a subject. | Reader |
| Delete the specified subject-level compatibility level config and revert to the global default. | Manager |
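
As an illustration of the Confluent-compatible API, the following Java snippet sketches registering a new schema version under a subject. The registry URL, request path, and credential format are assumptions based on the Confluent Schema Registry API conventions and are shown as placeholders; take the real endpoint and credentials from your Event Streams service credentials.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Placeholders: take the registry endpoint and credentials from your service credentials.
String registryUrl = "<schema_registry_url>";
String credentials = Base64.getEncoder().encodeToString("token:<iam_api_key>".getBytes());

// Register a new schema version under a subject. Per the preceding table, this needs Writer
// access on the schema resource when a new version is created, or Reader if it already exists.
String body = "{\"schema\": \"{\\\"type\\\": \\\"string\\\"}\"}";   // an Avro schema, JSON-encoded
HttpRequest request = HttpRequest.newBuilder()
    .uri(URI.create(registryUrl + "/subjects/example-subject/versions"))
    .header("Authorization", "Basic " + credentials)
    .header("Content-Type", "application/vnd.schemaregistry.v1+json")
    .POST(HttpRequest.BodyPublishers.ofString(body))
    .build();
try {
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
} catch (java.io.IOException | InterruptedException e) {
    // Handle the failed registry request.
}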

Managing access to the Schema Registry

The authorization model for the Schema Registry uses the same style of policies that are described in the Managing authorization to your Event Streams resources section of this document.

IAM resources

With the new schema IAM resource type, you can create policies that control access with varying degrees of granularity, as in the following examples:

  • A specific schema.
  • A set of schemas selected by a wildcard expression.
  • All of the schemas stored by an instance of IBM Event Streams.
  • All of the schemas stored by all of the instances of IBM Event Streams in an account.

Event Streams already has the concept of a cluster resource type. It is used to control all access to the service instance, with the minimum role of Reader being required to access any Kafka or HTTPS endpoint. This use of the cluster resource type is also applied to the Schema Registry whereby a minimum role of Reader is required to access the registry.

Example authorization scenarios

The following table describes some example scenarios for interacting with the Event Streams Schema Registry, together with the roles that are required by the actors involved. The process of managing schemas is handled separately from deploying applications, so policies are required both for the service ID that manages schemas in the registry and for the application that connects to the registry.

Examples of authorization scenarios
| Scenario | Person or process role | Person or process resource | Application role | Application resource |
| --- | --- | --- | --- | --- |
| New schema versions are placed into the registry by a person or process that is separate from the applications that use the schemas. | Reader, Writer | cluster, schema | Reader, Reader | cluster, schema |
| Adding a schema to the registry needs to specify a nondefault rule that controls how versions of the schema are allowed to evolve. | Reader, Manager | cluster, schema | Not applicable | Not applicable |
| Schemas are managed alongside the application code that uses the schema. New schema versions are created at the point that an application tries to use the new schema version. | Not applicable | Not applicable | Reader, Writer | cluster, schema |
| The global default rule that controls schema evolution is changed. | Manager | cluster | Not applicable | Not applicable |

  1. Writer on txnid is only required for transactional produce.

  2. Reader on group is only required if the assign causes the consumer to leave its current group.

  3. You do not need access to the schema resource; instead, Manager access on the cluster resource is required.

  4. You do not need access to the schema resource; instead, Manager access on the cluster resource is required.

  5. Reader if the version already exists; Writer if the version is to be created by the API call.

  6. You do not need access to the schema resource; instead, Manager access on the cluster resource is required.