Deployment values
The following deployment values can be used to configure the Spectrum LSF cluster instance on IBM Cloud®.
All of the permissions are mandatory. Ensure that all of your permissions are set; missing permissions will cause the deployment to fail. Contact your account administrator to obtain the required permissions.
Mandatory deployment values
The following are the mandatory deployment values used to configure the Spectrum LSF cluster instance on IBM Cloud®:
Value | Description | Is it required? | Default value |
---|---|---|---|
app_center_gui_password | Password required to access the IBM Spectrum LSF Application Center (App Center) GUI, which is enabled by default in both Fix Pack 15 and Fix Pack 14 with HTTPS. This is a mandatory value, and omitting it will result in deployment failure. The password must be at least 8 characters long and must include one uppercase letter, one lowercase letter, one number, and one special character. | Yes | "" |
existing_resource_group | Specify the name of the existing resource group in your IBM Cloud account where VPC resources will be deployed. By default, the resource group is set to 'Default'. In some older accounts, it may be 'default', so verify the resource group name before proceeding. If the value is set to "null", the automation creates two separate resource groups: workload-rg and service-rg. For more details, see Managing resource groups. | Yes | Default |
ibmcloud_api_key | Provide the IBM Cloud API key associated with the account to deploy the IBM Spectrum LSF cluster. This key is used to authenticate your deployment and grant the necessary access to create and manage resources in your IBM Cloud environment. For more information, see Managing user API keys. | Yes | None |
lsf_version | Select the version of IBM Spectrum LSF to deploy: either fixpack_15 or fixpack_14. By default, the solution uses the latest available version, which is Fix Pack 15. If you need to deploy an earlier version such as Fix Pack 14, update the lsf_version field to fixpack_14. When changing the LSF version, ensure that all the custom images used for management, compute, and login nodes correspond to the same version. This is essential to maintain compatibility across the cluster and to prevent deployment issues. | Yes | "fixpack_15" |
remote_allowed_ips | Comma-separated list of IP addresses that can access the IBM Spectrum LSF cluster instance through an SSH interface. For security purposes, provide the public IP addresses assigned to the devices that are authorized to establish SSH connections (for example, ["169.45.117.34"]). To fetch the IP address of the device, use https://ipv4.icanhazip.com/. | Yes | None |
ssh_keys | Provide the list of SSH key names already configured in your IBM Cloud account to establish a connection to the Spectrum LSF nodes. The solution does not create new SSH keys; provide existing keys. Make sure the SSH key exists in the same resource group and region where the cluster is being provisioned. To pass multiple SSH keys, use the format ["key-name-1", "key-name-2"]. If you don't have an SSH key in your IBM Cloud account, you can create one. For more information, see SSH Keys. | Yes | None |
zones | Specify the IBM Cloud zone within the chosen region where the IBM Spectrum LSF cluster will be deployed. A single zone input is required; the management nodes, file storage shares, and compute nodes will be provisioned in this zone. For more information, see Zones. | Yes | "us-east-1" |
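The mandatory values above are typically supplied together in a Terraform variables file. The following is a minimal sketch of such a file; every value shown is a placeholder, and the exact types (for example, whether zones takes a string or a list) should be confirmed against the solution's variable definitions:

```hcl
# terraform.tfvars (sketch; all values below are illustrative placeholders)
ibmcloud_api_key        = "<your-ibm-cloud-api-key>"  # see Managing user API keys
lsf_version             = "fixpack_15"                # or "fixpack_14"
app_center_gui_password = "Example1@"                 # 8+ chars: upper, lower, digit, special
existing_resource_group = "Default"                   # some older accounts use "default"
remote_allowed_ips      = ["169.45.117.34"]           # public IPs allowed to SSH in
ssh_keys                = ["my-existing-key"]         # existing key names in the account
zones                   = "us-east-1"                 # single zone, matching the documented default
```

Omitting any of these values, or providing an app_center_gui_password that does not meet the complexity rules, results in a failed deployment.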
Optional deployment values
The following are the optional deployment values used to configure the Spectrum LSF cluster instance on IBM Cloud®:
Value | Description | Is it required? | Default value |
---|---|---|---|
app_config_plan | Specify the IBM service pricing plan for the app configuration. Allowed values are 'basic', 'lite', 'standardv2', and 'enterprise'. | No | basic |
cspm_enabled | CSPM (Cloud Security Posture Management) is a set of tools and practices that continuously monitor and secure cloud infrastructure. When enabled, it creates a trusted profile with viewer access to the App Configuration and Enterprise services for the SCC Workload Protection instance. Make sure the required IAM permissions are in place, as missing permissions will cause the deployment to fail. If CSPM is disabled, dashboard data will not be available. Learn more. | No | true |
sccwp_service_plan | Specify the plan type for the Security and Compliance Center (SCC) Workload Protection instance. Valid values are free-trial and graduated-tier only. | No | free-trial |
sccwp_enable | Set this flag to true to create an instance of IBM Security and Compliance Center (SCC) Workload Protection. When enabled, it provides tools to discover and prioritize vulnerabilities, monitor for security threats, and enforce configuration, permission, and compliance policies across the full lifecycle of your workloads. To view data on the dashboard, set cspm_enabled to true to create the App Configuration instance and the required trusted profile policies. Learn more. | No | true |
bastion_instance | Configuration for the bastion node, including the image and instance profile. Only Ubuntu stock images are supported. | No | {image = "ibm-ubuntu-22-04-5-minimal-amd64-3" profile = "cx2-4x8"} |
deployer_instance | Configuration for the deployer node, including the custom image and instance profile. By default, it uses the fixpack_15 image and a bx2-8x32 profile. | No | [{ image = "hpc-lsf-fp15-deployer-rhel810-v1" profile = "bx2-8x32"}] |
vpc_name | Provide the name of an existing VPC in which the cluster resources will be deployed. If no value is given, the solution provisions a new VPC. Learn more. | No | None |
vpc_cidr | An address prefix is created for the new VPC when the vpc_name variable is set to null. This prefix is required to provision subnets within a single zone, and the subnets will be created using the specified CIDR blocks. For more information, see Setting IP ranges. | No | "10.241.0.0/18" |
vpc_cluster_login_private_subnets_cidr_blocks | Specify the CIDR block for the private subnet used by the login cluster. Only a single CIDR block is required. In hybrid environments, ensure the CIDR range does not overlap with any on-premises networks. Since this subnet is dedicated to login virtual server instances, a /28 CIDR range is recommended. | No | "10.241.16.0/28" |
vpc_cluster_private_subnets_cidr_blocks | Provide the CIDR block required for the creation of the compute cluster's private subnet. One CIDR block is required. If using a hybrid environment, modify the CIDR block to avoid conflicts with any on-premises CIDR blocks. Ensure the selected CIDR block size can accommodate the maximum number of management and dynamic compute nodes expected in your cluster. For more information on CIDR block size selection, see Choosing IP ranges for your VPC. | No | "10.241.0.0/20" |
login_subnet_id | Provide the ID of an existing subnet to deploy cluster resources; this is used only for provisioning bastion, deployer, and login nodes. If not provided, a new subnet will be created. Learn more. | No | None |
cluster_subnet_id | Provide the ID of an existing subnet to deploy cluster resources. This is used only for provisioning VPC file storage shares, management, and compute nodes. If not provided, a new subnet will be created. Ensure that a public gateway is attached to enable VPC API communication. For more information, see IBM Cloud VPC docs. | No | None |
cluster_prefix | This prefix uniquely identifies the IBM Cloud Spectrum LSF cluster and its resources; it must always be unique. The name must start with a lowercase letter and can include only lowercase letters, digits, and hyphens. Hyphens must be followed by a lowercase letter or digit, with no leading, trailing, or consecutive hyphens. The prefix must be less than 16 characters long. | No | "hpc-lsf" |
login_instance | Specify the list of login node configurations, including instance profile and image name. By default, login nodes are created using Fix Pack 15. If deploying with Fix Pack 14, set lsf_version to fixpack_14 and use the corresponding image hpc-lsf-fp14-compute-rhel810-v1. The selected image must align with the specified lsf_version; any mismatch may lead to deployment failures. | No | [{profile = "bx2-2x8" image = "hpc-lsf-fp15-compute-rhel810-v1"}] |
management_instances | Specify the list of management node configurations, including instance profile, image name, and count. By default, all management nodes are created using Fix Pack 15. If deploying with Fix Pack 14, set lsf_version to fixpack_14 and use the corresponding image hpc-lsf-fp14-rhel810-v1. The selected image must align with the specified lsf_version; any mismatch may lead to deployment failures. The solution allows customization of instance profiles and counts, but mixing custom images and IBM stock images across instances is not supported. If using IBM stock images, only Red Hat-based images are allowed. | No | [{profile = "bx2-16x64" count = 2 image = "hpc-lsf-fp15-rhel810-v1"}] |
static_compute_instances | Specify the list of static compute node configurations, including instance profile, image name, and count. By default, all compute nodes are created using Fix Pack 15. If deploying with Fix Pack 14, set lsf_version to fixpack_14 and use the corresponding image hpc-lsf-fp14-compute-rhel810-v1. The selected image must align with the specified lsf_version; any mismatch may lead to deployment failures. The solution allows customization of instance profiles and counts, but mixing custom images and IBM stock images across instances is not supported. If using IBM stock images, only Red Hat-based images are allowed. | No | [{profile = "bx2-4x16" count = 1 image = "hpc-lsf-fp15-compute-rhel810-v1"}] |
dynamic_compute_instances | Specify the list of dynamic compute node configurations, including instance profile, image name, and count. By default, all dynamic compute nodes are created using Fix Pack 15. When deploying with Fix Pack 14, set lsf_version to fixpack_14 and use the corresponding image hpc-lsf-fp14-compute-rhel810-v1. The selected image must align with the specified lsf_version; any mismatch may lead to deployment failures. Currently, only a single instance profile is supported for dynamic compute nodes; multiple profiles are not yet supported. | No | [{ profile = "bx2-4x16" count = 1024 image = "hpc-lsf-fp15-compute-rhel810-v1" }] |
storage_security_group_id | Provide the storage security group ID from the Spectrum Scale storage cluster when an nfs_share value is specified for a given mount_path in the cluster_file_share variable. This security group is necessary to enable network connectivity between the Spectrum LSF cluster nodes and the NFS mount point, ensuring successful access to the shared file system. | No | None |
custom_file_shares | Provide details for customizing your shared file storage layout, including mount points, sizes (in GB), and IOPS ranges for up to five file shares if using VPC file storage as the storage option. If using IBM Storage Scale as an NFS mount, update the appropriate mount path and nfs_share values created from the Storage Scale cluster. Note that VPC file storage supports attachment to a maximum of 256 nodes. Exceeding this limit may result in mount point failures due to attachment restrictions. For more information, see Storage options. | No | [{ mount_path = "/mnt/vpcstorage/tools", size = 100, iops = 2000 }, { mount_path = "/mnt/vpcstorage/data", size = 100, iops = 6000 }, { mount_path = "/mnt/scale/tools", nfs_share = "" }] |
enable_cos_integration | Set to true to create an additional Cloud Object Storage (COS) bucket to integrate with the HPC cluster deployment. | No | true |
cos_instance_name | Provide the name of the existing COS instance where the logs for the enabled functionalities will be stored. | No | None |
enable_vpc_flow_logs | This flag determines whether VPC flow logs are enabled. When set to true, a flow log collector will be created to capture and monitor network traffic data within the VPC. Enabling flow logs provides valuable insights for troubleshooting, performance monitoring, and security auditing by recording information about the traffic passing through your VPC. Consider enabling this feature to enhance visibility and maintain robust network management practices. | No | true |
vpn_enabled | Set the value to true to deploy a VPN gateway for the VPC in the cluster. | No | false |
enable_hyperthreading | Set this to true (the default) to enable hyper-threading on the worker nodes of the cluster; otherwise, hyper-threading is disabled. | No | true |
observability_atracker_enable | Configures Activity Tracker Event Routing to determine how audit events are routed. While multiple Activity Tracker instances can be created, only one tracker is needed to capture all events. Creating additional trackers is unnecessary if an existing Activity Tracker is already integrated with a COS bucket. In such cases, set the value to false, as all events can be monitored and accessed through the existing Activity Tracker. | No | true |
observability_atracker_target_type | Specify the target where Atracker events will be stored, either IBM Cloud Logs or a Cloud Object Storage (COS) bucket, based on the selected value. This allows the logs to be accessed or integrated with external systems. | No | "cloudlogs" |
observability_logs_enable_for_management | Set to false to disable IBM Cloud Logs integration. If enabled, infrastructure and LSF application logs from management nodes will be ingested. | No | false |
observability_logs_enable_for_compute | Set to false to disable IBM Cloud Logs integration. If enabled, infrastructure and LSF application logs from compute nodes will be ingested. | No | false |
observability_enable_platform_logs | Setting this to true will create a tenant in the same region that the Cloud Logs instance is provisioned to enable platform logs for that region. Note: You can have only one tenant per region in an account. | No | false |
observability_enable_metrics_routing | Enable metrics routing to manage metrics at the account level by configuring targets and routes that define where data points are routed. | No | false |
observability_logs_retention_period | The number of days IBM Cloud Logs will retain the logs data in priority insights. Allowed values: 7, 14, 30, 60, 90. | No | 7 |
observability_monitoring_enable | Enables or disables IBM Cloud Monitoring integration. When enabled, metrics from both the infrastructure and the LSF application running on management nodes will be collected. This must be set to true if monitoring is required on management nodes. | No | true |
observability_monitoring_on_compute_nodes_enable | Enables or disables IBM Cloud Monitoring integration. When enabled, metrics from both the infrastructure and the LSF application running on compute nodes will be collected. This must be set to true if monitoring is required on compute nodes. | No | false |
observability_monitoring_plan | Type of service plan for the IBM Cloud Monitoring instance. You can choose one of the following: lite, graduated-tier. For more information, see IBM Cloud Monitoring Service Plans. | No | "graduated-tier" |
dns_instance_id | Specify the ID of an existing IBM Cloud DNS service instance. When provided, domain names are created within the specified instance. If set to null, a new DNS service instance is created, and the required DNS zones are associated with it. | No | None |
dns_domain_name | IBM Cloud DNS service domain name to be used for the IBM Spectrum LSF cluster. | No | {compute = "comp.com"} |
dns_custom_resolver_id | Specify the ID of an existing IBM Cloud DNS custom resolver to avoid creating a new one. If set to null, a new custom resolver will be created and associated with the VPC. Note: A VPC can be associated with only one custom resolver. When using an existing VPC, if a custom resolver is already associated and this ID is not provided, the deployment will fail. | No | None |
enable_ldap | Set this option to true to enable LDAP for IBM Spectrum LSF, with the default value set to false. | No | false |
ldap_basedns | The DNS domain name used for configuring the LDAP server. If an LDAP server already exists, ensure that you provide the associated DNS domain name. | No | "lsf.com" |
ldap_server | Provide the IP address for the existing LDAP server. If no address is given, a new LDAP server will be created. | No | None |
ldap_server_cert | Provide the existing LDAP server certificate. This value is required if the 'ldap_server' variable is not set to null. If the certificate is not provided or is invalid, the LDAP configuration may fail. | No | None |
ldap_admin_password | The LDAP administrative password must be 8 to 20 characters long, with a mix of at least three alphabetic characters, including one uppercase and one lowercase letter. It must also include two numerical digits and at least one special character from (~@_+:). For enhanced security, avoid including the username in the password. [This value is ignored for an existing LDAP server.] | No | None |
ldap_user_name | Custom LDAP user for performing cluster operations. Note: The username must be between 4 and 32 characters (any combination of lowercase and uppercase letters). [This value is ignored for an existing LDAP server.] | No | "" |
ldap_user_password | The LDAP user password must be 8 to 20 characters long, with a mix of at least three alphabetic characters, including one uppercase and one lowercase letter. It must also include two numerical digits and at least one special character from (~@_+:). For enhanced security, avoid including the username in the password. [This value is ignored for an existing LDAP server.] | No | "" |
ldap_instance | Specify the compute instance profile and image to be used for deploying LDAP instances. Only Debian-based operating systems, such as Ubuntu, are supported for LDAP functionality. | No | [{profile = "cx2-2x4" image = "ibm-ubuntu-22-04-5-minimal-amd64-3"}] |
enable_dedicated_host | Set this option to true to enable dedicated hosts for the VSIs provisioned as workload servers. The default value is false. When dedicated hosts are enabled, multiple VSI instance profiles from the same or different families (for example, bx2, cx2, mx2) can be used. If you plan to deploy a static cluster with a third-generation profile, ensure that dedicated host support is available in the selected region, as not all regions support third-gen profiles on dedicated hosts. To learn more about dedicated hosts, see x86-64 dedicated host profiles. | No | false |
existing_bastion_instance_name | Provide the name of an existing bastion instance. If no name is given, a new bastion will be created. | No | None |
existing_bastion_instance_public_ip | Provide the public IP address of the existing bastion instance to establish the remote connection. This public IP address is also used to establish connections to the LSF cluster nodes. | No | None |
existing_bastion_security_group_id | Specify the security group ID for the bastion server. This ID will be added as an allowlist rule on the HPC cluster nodes to facilitate secure SSH connections through the bastion node. By restricting access through a bastion server, this setup enhances security by controlling and monitoring entry points into the cluster environment. Ensure that the specified security group is correctly configured to permit only authorized traffic for secure and efficient management of cluster resources. | No | None |
existing_bastion_ssh_private_key | Provide the private SSH key (named id_rsa) used during the creation and configuration of the bastion server to securely authenticate and connect to the bastion server. This allows access to internal network resources from a secure entry point. Note: The corresponding public SSH key (named id_rsa.pub) must already be available in the ~/.ssh/authorized_keys file on the bastion host to establish authentication. | No | None |
key_management | Set the value to key_protect to enable customer-managed encryption for the boot volume and file share. If key_management is set to null, IBM Cloud resources will always be encrypted with provider-managed encryption. | No | "key_protect" |
kms_instance_name | Provide the name of the existing Key Protect instance associated with the Key Management Service. Note: To use an existing kms_instance_name, set key_management to key_protect. The name can be found under the details of the KMS; see View key-protect ID. | No | None |
kms_key_name | Provide the existing KMS key name to be used for the IBM Cloud HPC cluster. Note: kms_key_name is considered only if the key_management value is set to key_protect. For example, kms_key_name: my-encryption-key. | No | None |
skip_iam_block_storage_authorization_policy | When using an existing KMS instance name, set this value to true if authorization is already enabled between the KMS instance and the block storage volume. Otherwise, the default is false. Ensuring proper authorization avoids access issues during deployment. For more information on how to create an authorization policy manually, see creating authorization policies for block storage volume. | No | false |
skip_flowlogs_s2s_auth_policy | When using an existing COS instance, set this value to true if authorization is already enabled between the COS instance and the flow logs service. Otherwise, the default is false. Ensuring proper authorization avoids access issues during deployment. | No | false |
skip_kms_s2s_auth_policy | When using an existing COS instance, set this value to true if authorization is already enabled between the COS instance and the KMS. Otherwise, the default is false. Ensuring proper authorization avoids access issues during deployment. | No | false |
skip_iam_share_authorization_policy | When using an existing KMS instance name, set this value to true if authorization is already enabled between the KMS instance and the VPC file share. Otherwise, the default is false. Ensuring proper authorization avoids access issues during deployment. For more information on how to create an authorization policy manually, see creating authorization policies for VPC file share. | No | false |
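Several of the optional values above interact: the lsf_version must match every custom node image, and custom_file_shares controls the shared storage layout. The following sketch shows how a few of these overrides might look together in a Terraform variables file; all names, sizes, and counts are illustrative placeholders, not recommended values:

```hcl
# Sketch of selected optional overrides (illustrative placeholders only)
cluster_prefix = "my-lsf"   # lowercase letters, digits, hyphens; under 16 characters

# Deploying Fix Pack 14: lsf_version and every custom node image must match.
lsf_version              = "fixpack_14"
management_instances     = [{ profile = "bx2-16x64", count = 2, image = "hpc-lsf-fp14-rhel810-v1" }]
static_compute_instances = [{ profile = "bx2-4x16", count = 1, image = "hpc-lsf-fp14-compute-rhel810-v1" }]
login_instance           = [{ profile = "bx2-2x8", image = "hpc-lsf-fp14-compute-rhel810-v1" }]

# Custom file share layout (up to five shares when using VPC file storage;
# each share attaches to at most 256 nodes)
custom_file_shares = [
  { mount_path = "/mnt/vpcstorage/tools", size = 100, iops = 2000 },
  { mount_path = "/mnt/vpcstorage/data",  size = 500, iops = 6000 },
]
```

Any optional value omitted from the file simply keeps the default listed in the table above.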