Introduction
You can use a collection of IBM DataStage REST APIs to process, compile, and run flows. DataStage flows are design-time assets that contain data integration logic in JSON-based schemas.
Process flows: Use the processing API to manipulate data that you have read from a data source before writing it to a data target.
Compile flows: Use the compile API to compile flows. All flows must be compiled before you run them.
Run flows: Use the run API to run flows. When you run a flow, the extraction, transformation, and loading tasks that are built into the flow design are executed.
You can use the DataStage REST APIs for both DataStage in Cloud Pak for Data as a service and DataStage in Cloud Pak for Data.
The following code examples use the client library that is provided for Java.
Maven
<dependency>
<groupId>com.ibm.cloud</groupId>
<artifactId>datastage</artifactId>
<version>0.0.1</version>
</dependency>
Gradle
compile 'com.ibm.cloud:datastage:0.0.1'
The following code examples use the client library that is provided for Node.js.
Installation
npm install datastage
The following code examples use the client library that is provided for Python.
Installation
pip install --upgrade "datastage>=0.0.1"
Authentication
Before you can call an IBM DataStage API, you must first create an IAM bearer token. Tokens support authenticated requests without embedding service credentials in every call. Each token is valid for one hour. After a token expires, you must create a new one if you want to continue using the API. The recommended method to retrieve a token programmatically is to create an API key for your IBM Cloud identity and then use the IAM token API to exchange that key for a token. For more information on authentication, see the following links:
- Cloud Pak for Data as a Service: Authenticating to Watson services
- Cloud Pak for Data (this information is applicable to DataStage even though the topic title refers to Watson Machine Learning):
- If IAM integration was disabled during installation (default setting): Getting a bearer token with IAM integration disabled
- If IAM integration was enabled during installation: Getting a bearer token with IAM integration enabled
Replace {apikey} and {url} with your service credentials.
curl -X {request_method} -u "apikey:{apikey}" "{url}/v3/{method}"
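If you want to create the bearer token yourself rather than letting a client library manage it, you can exchange your API key at the IAM token endpoint. A minimal Python sketch (the endpoint and grant type are the standard IBM Cloud IAM values; the get_iam_token helper is our own illustration, not part of the DataStage SDK):

import requests

def get_iam_token(api_key):
    # exchange an IBM Cloud API key for a bearer token at the IAM token endpoint
    response = requests.post(
        'https://iam.cloud.ibm.com/identity/token',
        headers={'Content-Type': 'application/x-www-form-urlencoded'},
        data={
            'grant_type': 'urn:ibm:params:oauth:grant-type:apikey',
            'apikey': api_key,
        },
    )
    response.raise_for_status()
    return response.json()['access_token']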
Setting client options through external configuration
To authenticate with IAM, the client libraries read four required properties from external configuration, such as a credentials file (for example, credentials.env) or environment variables. If you use a credentials file, set the IBM_CREDENTIALS_FILE environment variable to the path of that file.
Example environment variables, where <API_KEY> is your IAM API key:
DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token
Example of constructing the service client in Java:
import com.ibm.cloud.datastage.v3.Datastage;
Datastage service = Datastage.newInstance();
Example of constructing the service client in Node.js:
const DatastageV3 = require('datastage/datastage/v3');
const datastageService = DatastageV3.newInstance({});
Example of constructing the service client in Python:
import os
from datastage.datastage_v3 import DatastageV3

# define path to external credentials file
config_file = 'credentials.env'
# define a chosen service name
custom_service_name = 'DATASTAGE'
datastage_service = None
if os.path.exists(config_file):
    # set environment variable to point towards credentials file path
    os.environ['IBM_CREDENTIALS_FILE'] = config_file
    # create datastage instance using custom service name
    datastage_service = DatastageV3.new_instance(custom_service_name)
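Alternatively, the client can be constructed programmatically instead of from a credentials file. A minimal sketch, assuming the standard ibm_cloud_sdk_core authenticator pattern that the DataStage client library builds on:

from datastage.datastage_v3 import DatastageV3
from ibm_cloud_sdk_core.authenticators import IamAuthenticator

# build an IAM authenticator directly from an API key
authenticator = IamAuthenticator(apikey='<API_KEY>')
datastage_service = DatastageV3(authenticator=authenticator)
# point the client at the service endpoint
datastage_service.set_service_url('https://api.dataplatform.cloud.ibm.com/data_intg')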
IBM Cloud URLs
The base URLs come from the service instance. To find the URL, view the service credentials by clicking the name of the service in the Resource list. Use the value of the URL. Add the method to form the complete API endpoint for your request.
https://api.dataplatform.cloud.ibm.com/data_intg
Example API request
curl --request GET --header "Content-Type: application/json" --header "Accept: application/json" --header "Authorization: Bearer <IAM token>" --url "https://api.dataplatform.cloud.ibm.com/data_intg/v3/data_intg_flows?project_id=<Project ID>&limit=10"
Replace <IAM token> and <Project ID> in this example with the values for your particular API call.
Error handling
DataStage uses standard HTTP response codes to indicate whether a method completed successfully. HTTP response codes in the 2xx range indicate success. A response in the 4xx range indicates a failure with the request, and a response in the 5xx range usually indicates an internal system error that cannot be resolved by the user. Response codes are listed with each method.
ErrorResponse
| Name | Type | Description |
|---|---|---|
| error | string | Description of the problem. |
| code | integer | HTTP response code. |
| code_description | string | Response message. |
| warnings | string | Warnings associated with the error. |
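With the Python client, non-2xx responses surface as exceptions. A minimal sketch, assuming the standard ApiException type from ibm_cloud_sdk_core:

from ibm_cloud_sdk_core import ApiException

try:
    flows = datastage_service.list_datastage_flows(
        project_id='<PROJECT_ID>'
    ).get_result()
except ApiException as e:
    # e.code carries the HTTP response code, e.message the error description
    print('List flows failed with status {}: {}'.format(e.code, e.message))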
Methods
Delete DataStage flows
Deletes the specified data flows in a project or catalog (either project_id or catalog_id must be set).
If the deletion of the data flows and their runs takes some time to finish, a 202 response is returned and the deletion continues asynchronously. All data flow runs associated with the data flows are also deleted. If a data flow is still running, it is not deleted unless the force parameter is set to true. In that case, the call returns immediately with a 202 response and the data flows are deleted after their runs are stopped.
DELETE /v3/data_intg_flows
ServiceCall<Void> deleteDatastageFlows(DeleteDatastageFlowsOptions deleteDatastageFlowsOptions)
deleteDatastageFlows(params)
delete_datastage_flows(
    self,
    id: List[str],
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    force: Optional[bool] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the DeleteDatastageFlowsOptions.Builder to create a DeleteDatastageFlowsOptions object that contains the parameter values for the deleteDatastageFlows method.
Query Parameters
id: The list of DataStage flow IDs to delete.
catalog_id: The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
project_id: The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
space_id: The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
force: Whether to stop all running data flows. Running DataStage flows must be stopped before the DataStage flows can be deleted.
curl -X DELETE --location --header "Authorization: Bearer {iam_token}" "{base_url}/v3/data_intg_flows?id=[]&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
String[] ids = new String[] {flowID, cloneFlowID};
DeleteDatastageFlowsOptions deleteDatastageFlowsOptions = new DeleteDatastageFlowsOptions.Builder()
    .id(Arrays.asList(ids))
    .projectId(projectID)
    .build();
datastageService.deleteDatastageFlows(deleteDatastageFlowsOptions).execute();

const params = {
  id: [flowID, cloneFlowID],
  projectId: projectID,
};
const res = await datastageService.deleteDatastageFlows(params);

response = datastage_service.delete_datastage_flows(
    id=[createdFlowId],
    project_id=config['PROJECT_ID']
)
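If any of the flows might still be running, set the documented force parameter to stop their runs before deletion; the call then returns a 202 response while deletion continues in the background. A sketch of the same call with the Python client:

# stop any running flows first, then delete them (expect a 202 response)
response = datastage_service.delete_datastage_flows(
    id=[createdFlowId],
    project_id=config['PROJECT_ID'],
    force=True
)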
Response
Status Code
The requested operation is in progress.
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
Get metadata for DataStage flows
Lists the metadata and entity for DataStage flows that are contained in the specified project.
Use the following parameters to filter the results:
| Field | Match type | Example |
| ------------------------ | ------------ | --------------------------------------- |
| entity.name | Equals | entity.name=MyDataStageFlow |
| entity.name | Starts with | entity.name=starts:MyData |
| entity.description | Equals | entity.description=movement |
| entity.description | Starts with | entity.description=starts:data |
To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (i.e. returning the most recently created data flows first).
| Field | Example |
| ------------------------- | ----------------------------------- |
| sort | sort=+entity.name (sort by ascending name) |
| sort | sort=-metadata.create_time (sort by descending creation time) |
Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name use: sort=-metadata.create_time,+entity.name
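For example, combining a name filter with descending creation-time sorting through the Python client (a sketch assuming the entity_name parameter accepts the same starts: prefix as the raw query parameter):

# list flows whose name starts with 'MyData', newest first
data_flow_paged_collection = datastage_service.list_datastage_flows(
    project_id=config['PROJECT_ID'],
    entity_name='starts:MyData',
    sort='-metadata.create_time',
    limit=100
).get_result()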
GET /v3/data_intg_flows
ServiceCall<DataFlowPagedCollection> listDatastageFlows(ListDatastageFlowsOptions listDatastageFlowsOptions)
listDatastageFlows(params)
list_datastage_flows(
    self,
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    sort: Optional[str] = None,
    start: Optional[str] = None,
    limit: Optional[int] = None,
    entity_name: Optional[str] = None,
    entity_description: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the ListDatastageFlowsOptions.Builder to create a ListDatastageFlowsOptions object that contains the parameter values for the listDatastageFlows method.
Query Parameters
catalog_id: The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
project_id: The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
space_id: The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
sort: The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
start: The page token indicating where to start paging from. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e2
limit: The limit of the number of items to return for each page, for example limit=50. If not specified, a default of 100 is used. The maximum value of limit is 200. Example: 100
entity.name: Filter results based on the specified name.
entity.description: Filter results based on the specified description.
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&limit=100"
ListDatastageFlowsOptions listDatastageFlowsOptions = new ListDatastageFlowsOptions.Builder()
    .projectId(projectID)
    .limit(Long.valueOf("100"))
    .build();
Response<DataFlowPagedCollection> response = datastageService.listDatastageFlows(listDatastageFlowsOptions).execute();
DataFlowPagedCollection dataFlowPagedCollection = response.getResult();
System.out.println(dataFlowPagedCollection);

const params = {
  projectId: projectID,
  sort: 'name',
  limit: 100,
};
const res = await datastageService.listDatastageFlows(params);

data_flow_paged_collection = datastage_service.list_datastage_flows(
    project_id=config['PROJECT_ID'],
    limit=100
).get_result()
print(json.dumps(data_flow_paged_collection, indent=2))
Response
A page from a collection of DataStage flows.
- data_flows
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id, project_id, or space_id is required.
A unique string that identifies an asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
URI of a resource.
- first
URI of a resource.
URI of a resource.
- last
URI of a resource.
The number of data flows requested to be returned.
URI of a resource.
- next
URI of a resource.
URI of a resource.
- prev
URI of a resource.
The total number of DataStage flows available.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "data_flows": [ { "entity": { "description": " ", "name": "{job_name}", "rov": { "mode": 0 } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "create_time": "2021-04-03 15:32:55+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": " ", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?project_id={project_id}", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 5780, "usage": { "access_count": 0, "last_access_time": "2021-04-03 15:33:01.320000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx", "last_modification_time": "2021-04-03 15:33:01.320000+00:00", "last_modifier_id": "IBMid-xxxxxxxxx" } } } ], "first": { "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2" }, "next": { "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI" }, "total_count": 135 }{ "data_flows": [ { "entity": { "description": " ", "name": "{job_name}", "rov": { "mode": 0 } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "create_time": "2021-04-03 15:32:55+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": " ", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?project_id={project_id}", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 5780, "usage": { "access_count": 0, "last_access_time": "2021-04-03 15:33:01.320000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx", "last_modification_time": "2021-04-03 15:33:01.320000+00:00", "last_modifier_id": "IBMid-xxxxxxxxx" } } } ], "first": { "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2" }, "next": { "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI" }, "total_count": 135 }
Create DataStage flow
Creates a DataStage flow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
POST /v3/data_intg_flows
ServiceCall<DataIntgFlow> createDatastageFlows(CreateDatastageFlowsOptions createDatastageFlowsOptions)
createDatastageFlows(params)
create_datastage_flows(
    self,
    data_intg_flow_name: str,
    *,
    pipeline_flows: Optional['PipelineJson'] = None,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    directory_asset_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the CreateDatastageFlowsOptions.Builder to create a CreateDatastageFlowsOptions object that contains the parameter values for the createDatastageFlows method.
Query Parameters
data_intg_flow_name: The data flow name.
catalog_id: The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
project_id: The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
space_id: The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
directory_asset_id: The directory asset ID.
Request Body
Pipeline JSON to be attached.
- pipeline_flows
Pipeline flow to be stored.
Object containing app-specific data.
The document type. Examples: pipeline
Array of parameter set references. Examples: [ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended. Examples: 84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type. Examples: http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document. Examples: { "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
Array of pipeline.
- pipelines
Object containing app-specific data. Examples: { "ui_data": { "comments": [] } }
A brief description of the DataStage flow. Examples: A test DataStage flow.
Unique identifier. Examples: fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline. Examples: ContainerC1
Array of pipeline nodes. Examples: [ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type. Examples: pxOsh
Reference to the primary (main) pipeline flow within the document. Examples: fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow. Examples: [ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline. Examples: [ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version. Examples: 3.0
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows?data_intg_flow_name={data_intg_flow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleFlow = PipelineFlowHelper.buildPipelineFlow(flowJson);
CreateDatastageFlowsOptions createDatastageFlowsOptions = new CreateDatastageFlowsOptions.Builder()
    .dataIntgFlowName(flowName)
    .pipelineFlows(exampleFlow)
    .projectId(projectID)
    .build();
Response<DataIntgFlow> response = datastageService.createDatastageFlows(createDatastageFlowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);

const pipelineJsonFromFile = JSON.parse(fs.readFileSync('testInput/rowgen_peek.json', 'utf-8'));
const params = {
  dataIntgFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};
const res = await datastageService.createDatastageFlows(params);

data_intg_flow = datastage_service.create_datastage_flows(
    data_intg_flow_name='testFlowJob1',
    pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleFlow.json'),
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow, indent=2))
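For orientation, the pipeline document can also be assembled inline instead of read from a file. A minimal sketch based on the pipeline flow schema documented above; the IDs are arbitrary placeholders, the field names are assumed from the v3 schema examples in this section, and the resulting empty flow is not a useful job on its own:

# a skeleton pipeline-flow document with no nodes (field names assumed per the v3 schema examples)
minimal_pipeline = {
    'doc_type': 'pipeline',
    'version': '3.0',
    'json_schema': 'http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json',
    'id': '84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff',
    'primary_pipeline': 'fa1b859a-d592-474d-b56c-2137e4efa4bc',
    'pipelines': [{
        'id': 'fa1b859a-d592-474d-b56c-2137e4efa4bc',
        'runtime_ref': 'pxOsh',
        'nodes': [],
    }],
    'runtimes': [{'id': 'pxOsh', 'name': 'pxOsh'}],
    'schemas': [],
}

data_intg_flow = datastage_service.create_datastage_flows(
    data_intg_flow_name='emptyFlow',
    pipeline_flows=minimal_pipeline,
    project_id=config['PROJECT_ID']
).get_result()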
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id, project_id, or space_id is required.
A unique string that identifies an asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name}" } }{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name}" } }
Get DataStage flow
Gets the DataStage flow that is contained in the specified project. Attachments, metadata, and a limited number of attributes from the entity of the DataStage flow are returned.
GET /v3/data_intg_flows/{data_intg_flow_id}
ServiceCall<DataIntgFlowJson> getDatastageFlows(GetDatastageFlowsOptions getDatastageFlowsOptions)
getDatastageFlows(params)
get_datastage_flows(
    self,
    data_intg_flow_id: str,
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the GetDatastageFlowsOptions.Builder to create a GetDatastageFlowsOptions object that contains the parameter values for the getDatastageFlows method.
Path Parameters
data_intg_flow_id: The DataStage flow ID to use.
Query Parameters
catalog_id: The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
project_id: The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
space_id: The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/{data_intg_flow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
GetDatastageFlowsOptions getDatastageFlowsOptions = new GetDatastageFlowsOptions.Builder()
    .dataIntgFlowId(flowID)
    .projectId(projectID)
    .build();
Response<DataIntgFlowJson> response = datastageService.getDatastageFlows(getDatastageFlowsOptions).execute();
DataIntgFlowJson dataIntgFlowJson = response.getResult();
System.out.println(dataIntgFlowJson);

const params = {
  dataIntgFlowId: assetID,
  projectId: projectID,
};
const res = await datastageService.getDatastageFlows(params);

data_intg_flow_json = datastage_service.get_datastage_flows(
    data_intg_flow_id=createdFlowId,
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow_json, indent=2))
Response
A pipeline JSON containing operations to apply to source(s).
Pipeline flow to be stored.
- attachments
The attachments object uses the same pipeline flow schema, fields, and examples as the pipeline_flows request body that is described under Create DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset.
Either catalog_id, project_id, or space_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset.
Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
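Because rov.mode is numeric, a tiny helper can translate the documented values when inspecting responses. This is purely illustrative and not part of the SDK; it only encodes the three mode values listed above.

# Hypothetical helper: map the documented rov.mode values to names.
ROV_MODES = {0: 'public', 8: 'private', 16: 'hidden'}

def describe_rov_mode(mode):
    return ROV_MODES.get(mode, 'unknown ({})'.format(mode))

# e.g. with the flow fetched above; the example response has mode 0.
print(describe_rov_mode(flow_json['entity']['rov']['mode']))  # -> 'public'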
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Unexpected error.
{ "attachments": { "app_data": { "datastage": { "external_parameters": [] } }, "doc_type": "pipeline", "id": "98cc1fa0-0fd8-4d55-9b27-d477096b4b37", "json_schema": "{url}/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json", "pipelines": [ { "app_data": { "datastage": { "runtime_column_propagation": "false" }, "ui_data": { "comments": [] } }, "id": "287b2b30-95ff-4cc8-b18f-92e23c464134", "nodes": [ { "app_data": { "datastage": { "outputs_order": "46e18367-1820-4fe8-8c7c-d8badbc76aa3" }, "ui_data": { "image": "../graphics/palette/PxRowGenerator.svg", "label": "RowGen_1", "x_pos": 239, "y_pos": 236 } }, "id": "77e6d535-8312-4692-8850-c129dcf921ed", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "55b884a7-9cfb-4e02-802b-82444ee95bb5" }, "ui_data": { "label": "outPort" } }, "id": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "parameters": { "buf_free_run": 50, "disk_write_inc": 1048576, "max_mem_buf_size": 3145728, "queue_upper_size": 0, "records": 10 }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "datastage": { "inputs_order": "9e842525-7bbf-4a42-ae95-49ae325e0c87" }, "ui_data": { "image": "../graphics/palette/informix.svg", "label": "informixTgt", "x_pos": 690, "y_pos": 229 } }, "connection": { "project_ref": "{project_id}", "properties": { "create_statement": "CREATE TABLE custid(customer_num int)", "table_action": "append", "table_name": "custid", "write_mode": "insert" }, "ref": "85193161-aa63-4cc5-80e7-7bfcdd59c438" }, "id": "8b4933d9-32c0-4c40-9c47-d8791ab12baf", "inputs": [ { "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "id": "9e842525-7bbf-4a42-ae95-49ae325e0c87", "links": [ { "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_3", "label": "Link_3", "outline": true, "path": "", "position": "middle" } ] } }, "id": "55b884a7-9cfb-4e02-802b-82444ee95bb5", "link_name": "Link_3", "node_id_ref": "77e6d535-8312-4692-8850-c129dcf921ed", "port_id_ref": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "type_attr": "PRIMARY" } ], "parameters": { "part_coll": "part_type" }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "op": "informix", "parameters": { "input_count": 1, "output_count": 0 }, "type": "binding" } ], "runtime_ref": "pxOsh" } ], "primary_pipeline": "287b2b30-95ff-4cc8-b18f-92e23c464134", "schemas": [ { "fields": [ { "app_data": { "column_reference": "customer_num", "is_unicode_string": false, "odbc_type": "INTEGER", "table_def": "Saved\\\\Link_3\\\\ifx_customer", "type_code": "INT32" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": true, "item_index": 0, "max_length": 0, "min_length": 0 }, "name": "customer_num", "nullable": false, "type": "integer" } ], "id": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "version": "3.0" }, "entity": { "description": "", "name": "{job_name}", "rov": { "mode": 0 } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:14:10.193000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx", "last_modification_time": "2021-04-08 17:14:10.193000+00:00", "last_modifier_id": 
"IBMid-xxxxxxxxxx" } } }{ "attachments": { "app_data": { "datastage": { "external_parameters": [] } }, "doc_type": "pipeline", "id": "98cc1fa0-0fd8-4d55-9b27-d477096b4b37", "json_schema": "{url}/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json", "pipelines": [ { "app_data": { "datastage": { "runtime_column_propagation": "false" }, "ui_data": { "comments": [] } }, "id": "287b2b30-95ff-4cc8-b18f-92e23c464134", "nodes": [ { "app_data": { "datastage": { "outputs_order": "46e18367-1820-4fe8-8c7c-d8badbc76aa3" }, "ui_data": { "image": "../graphics/palette/PxRowGenerator.svg", "label": "RowGen_1", "x_pos": 239, "y_pos": 236 } }, "id": "77e6d535-8312-4692-8850-c129dcf921ed", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "55b884a7-9cfb-4e02-802b-82444ee95bb5" }, "ui_data": { "label": "outPort" } }, "id": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "parameters": { "buf_free_run": 50, "disk_write_inc": 1048576, "max_mem_buf_size": 3145728, "queue_upper_size": 0, "records": 10 }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "datastage": { "inputs_order": "9e842525-7bbf-4a42-ae95-49ae325e0c87" }, "ui_data": { "image": "../graphics/palette/informix.svg", "label": "informixTgt", "x_pos": 690, "y_pos": 229 } }, "connection": { "project_ref": "{project_id}", "properties": { "create_statement": "CREATE TABLE custid(customer_num int)", "table_action": "append", "table_name": "custid", "write_mode": "insert" }, "ref": "85193161-aa63-4cc5-80e7-7bfcdd59c438" }, "id": "8b4933d9-32c0-4c40-9c47-d8791ab12baf", "inputs": [ { "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "id": "9e842525-7bbf-4a42-ae95-49ae325e0c87", "links": [ { "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_3", "label": "Link_3", "outline": true, "path": "", "position": "middle" } ] } }, "id": "55b884a7-9cfb-4e02-802b-82444ee95bb5", "link_name": "Link_3", "node_id_ref": "77e6d535-8312-4692-8850-c129dcf921ed", "port_id_ref": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "type_attr": "PRIMARY" } ], "parameters": { "part_coll": "part_type" }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "op": "informix", "parameters": { "input_count": 1, "output_count": 0 }, "type": "binding" } ], "runtime_ref": "pxOsh" } ], "primary_pipeline": "287b2b30-95ff-4cc8-b18f-92e23c464134", "schemas": [ { "fields": [ { "app_data": { "column_reference": "customer_num", "is_unicode_string": false, "odbc_type": "INTEGER", "table_def": "Saved\\\\Link_3\\\\ifx_customer", "type_code": "INT32" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": true, "item_index": 0, "max_length": 0, "min_length": 0 }, "name": "customer_num", "nullable": false, "type": "integer" } ], "id": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "version": "3.0" }, "entity": { "description": "", "name": "{job_name}", "rov": { "mode": 0 } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:14:10.193000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx", "last_modification_time": "2021-04-08 17:14:10.193000+00:00", 
"last_modifier_id": "IBMid-xxxxxxxxxx" } } }
Update DataStage flow
Modifies a data flow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
PUT /v3/data_intg_flows/{data_intg_flow_id}
ServiceCall<DataIntgFlow> updateDatastageFlows(UpdateDatastageFlowsOptions updateDatastageFlowsOptions)
updateDatastageFlows(params)
update_datastage_flows(
self,
data_intg_flow_id: str,
data_intg_flow_name: str,
*,
pipeline_flows: Optional['PipelineJson'] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the UpdateDatastageFlowsOptions.Builder to create a UpdateDatastageFlowsOptions object that contains the parameter values for the updateDatastageFlows method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The data flow name.
The ID of the catalog to use.
Either catalog_id, project_id, or space_id is required.
The ID of the project to use.
Either catalog_id, project_id, or space_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
Either catalog_id, project_id, or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
Pipeline JSON to be attached.
Pipeline flow to be stored.
The updateDatastageFlows options.
The DataStage flow ID to use.
The data flow name.
Pipeline flow to be stored.
- pipelineFlows
Object containing app-specific data.
The document type.
Examples: pipeline
Array of parameter set references.
Examples: [ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended.
Examples: 84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type.
Examples: http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document.
Examples: { "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
Array of pipelines.
- pipelines
Object containing app-specific data.
Examples: { "ui_data": { "comments": [] } }
A brief description of the DataStage flow.
Examples: A test DataStage flow.
Unique identifier.
Examples: fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline.
Examples: ContainerC1
Array of pipeline nodes.
Examples: [ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type.
Examples: pxOsh
Reference to the primary (main) pipeline flow within the document.
Examples: fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow.
Examples: [ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline.
Examples: [ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version.
Examples: 3.0
The ID of the catalog to use.
Either catalog_id, project_id, or space_id is required.
The ID of the project to use.
Either catalog_id, project_id, or space_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
Either catalog_id, project_id, or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
curl -X PUT --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows/{data_intg_flow_id}?data_intg_flow_name={data_intg_flow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleFlowUpdated = PipelineFlowHelper.buildPipelineFlow(updatedFlowJson);

UpdateDatastageFlowsOptions updateDatastageFlowsOptions = new UpdateDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .dataIntgFlowName(flowName)
  .pipelineFlows(exampleFlowUpdated)
  .projectId(projectID)
  .build();

Response<DataIntgFlow> response = datastageService.updateDatastageFlows(updateDatastageFlowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);
const params = {
  dataIntgFlowId: assetID,
  dataIntgFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};

const res = await datastageService.updateDatastageFlows(params);
data_intg_flow = datastage_service.update_datastage_flows(
  data_intg_flow_id=createdFlowId,
  data_intg_flow_name='testFlowJob1Updated',
  pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleFlowUpdated.json'),
  project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))
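Because updateDatastageFlows replaces the stored pipeline and name, a simple sanity check is to re-fetch the flow afterwards. This sketch reuses the hypothetical IDs from the examples above and the metadata.name field from the example responses:

# Sketch: confirm the rename took effect by re-reading the flow.
updated = datastage_service.get_datastage_flows(
    data_intg_flow_id=createdFlowId,
    project_id=config['PROJECT_ID']
).get_result()

assert updated['metadata']['name'] == 'testFlowJob1Updated'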
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
System metadata about an asset.
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset.
Either catalog_id, project_id, or space_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset.
Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "attachments": [ { "asset_type": "data_intg_flow", "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d", "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}", "mime": "application/json", "name": "data_intg_flows", "object_key": "data_intg_flow/{project_id}{asset_id}", "object_key_is_read_only": false, "private_url": false } ], "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "catalog_id": "{catalog_id}", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?catalog_id={catalog_id}", "name": "{job_name}", "origin_country": "us", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:21:33.936000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx" } } }{ "attachments": [ { "asset_type": "data_intg_flow", "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d", "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}", "mime": "application/json", "name": "data_intg_flows", "object_key": "data_intg_flow/{project_id}{asset_id}", "object_key_is_read_only": false, "private_url": false } ], "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "catalog_id": "{catalog_id}", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?catalog_id={catalog_id}", "name": "{job_name}", "origin_country": "us", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:21:33.936000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx" } } }
Update DataStage flow attributes
Modifies attributes of a DataStage flow in the specified project or catalog (either project_id or catalog_id must be set).
PUT /v3/data_intg_flows/{data_intg_flow_id}/attributes
ServiceCall<DataIntgFlow> patchAttributesDatastageFlow(PatchAttributesDatastageFlowOptions patchAttributesDatastageFlowOptions)
patchAttributesDatastageFlow(params)
patch_attributes_datastage_flow(
self,
data_intg_flow_id: str,
*,
description: Optional[str] = None,
directory_asset_id: Optional[str] = None,
name: Optional[str] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the PatchAttributesDatastageFlowOptions.Builder to create a PatchAttributesDatastageFlowOptions object that contains the parameter values for the patchAttributesDatastageFlow method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The ID of the catalog to use.
Either catalog_id, project_id, or space_id is required.
The ID of the project to use.
Either catalog_id, project_id, or space_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
Either catalog_id, project_id, or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Attributes of the flow to modify.
Description of the asset.
The directory asset ID of the asset.
Name of the asset.
The patchAttributesDatastageFlow options.
The DataStage flow ID to use.
Description of the asset.
The directory asset ID of the asset.
Name of the asset.
The ID of the catalog to use.
Either catalog_id, project_id, or space_id is required.
The ID of the project to use.
Either catalog_id, project_id, or space_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
Either catalog_id, project_id, or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
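No code samples accompany this operation in this reference, so the following is a minimal Python sketch based directly on the patch_attributes_datastage_flow signature above; the flow and project IDs reuse the conventions of the earlier examples, and the new name and description are placeholders:

# Sketch: rename a flow and update its description in place.
data_intg_flow = datastage_service.patch_attributes_datastage_flow(
    data_intg_flow_id=createdFlowId,
    name='testFlowJob1Renamed',                         # placeholder new name
    description='Renamed via the attributes endpoint',  # placeholder text
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))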
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
System metadata about an asset.
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset.
Either catalog_id, project_id, or space_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset.
Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "attachments": [ { "asset_type": "data_intg_flow", "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d", "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}", "mime": "application/json", "name": "data_intg_flows", "object_key": "data_intg_flow/{project_id}{asset_id}", "object_key_is_read_only": false, "private_url": false } ], "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "catalog_id": "{catalog_id}", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?catalog_id={catalog_id}", "name": "{job_name}", "origin_country": "us", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:21:33.936000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx" } } }{ "attachments": [ { "asset_type": "data_intg_flow", "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d", "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}", "mime": "application/json", "name": "data_intg_flows", "object_key": "data_intg_flow/{project_id}{asset_id}", "object_key_is_read_only": false, "private_url": false } ], "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "catalog_id": "{catalog_id}", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?catalog_id={catalog_id}", "name": "{job_name}", "origin_country": "us", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:21:33.936000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx" } } }
Clone DataStage flow
Create a DataStage flow in the specified project or catalog or space based on an existing DataStage flow in the same project or catalog or space.
POST /v3/data_intg_flows/{data_intg_flow_id}/clone
ServiceCall<DataIntgFlow> cloneDatastageFlows(CloneDatastageFlowsOptions cloneDatastageFlowsOptions)
cloneDatastageFlows(params)
clone_datastage_flows(
self,
data_intg_flow_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
data_intg_flow_name: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CloneDatastageFlowsOptions.Builder to create a CloneDatastageFlowsOptions object that contains the parameter values for the cloneDatastageFlows method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The ID of the catalog to use.
Either catalog_id, project_id, or space_id is required.
The ID of the project to use.
Either catalog_id, project_id, or space_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
Either catalog_id, project_id, or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
The data flow name.
The cloneDatastageFlows options.
The DataStage flow ID to use.
The ID of the catalog to use.
Either catalog_id, project_id, or space_id is required.
The ID of the project to use.
Either catalog_id, project_id, or space_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
Either catalog_id, project_id, or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
The data flow name.
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/{data_intg_flow_id}/clone?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
CloneDatastageFlowsOptions cloneDatastageFlowsOptions = new CloneDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .projectId(projectID)
  .build();

Response<DataIntgFlow> response = datastageService.cloneDatastageFlows(cloneDatastageFlowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();

System.out.println(dataIntgFlow);
const params = {
  dataIntgFlowId: assetID,
  projectId: projectID,
};

const res = await datastageService.cloneDatastageFlows(params);
data_intg_flow = datastage_service.clone_datastage_flows(
    data_intg_flow_id=createdFlowId,
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))
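The preceding examples keep the server-generated name for the copy. Because the clone request also accepts the optional data_intg_flow_name and directory_asset_id query parameters documented above, the clone can be named explicitly. A minimal sketch, assuming the same datastage_service client and config dictionary as the Python example above, with created_flow_id as a hypothetical existing flow ID:

cloned_flow = datastage_service.clone_datastage_flows(
    data_intg_flow_id=created_flow_id,
    project_id=config['PROJECT_ID'],
    data_intg_flow_name='my_flow_clone'  # optional: name the copy explicitly
).get_result()

# The response metadata carries the asset ID of the newly created copy.
print(cloned_flow['metadata']['asset_id'])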
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
- entity
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission is given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
- metadata
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id, project_id, or space_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name_copy}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name_copy}" } }{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name_copy}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name_copy}" } }
Compile DataStage flow to generate runtime assets
Generate the runtime assets for a DataStage flow in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.
POST /v3/ds_codegen/compile/{data_intg_flow_id}
ServiceCall<FlowCompileResponse> compileDatastageFlows(CompileDatastageFlowsOptions compileDatastageFlowsOptions)
compileDatastageFlows(params)
compile_datastage_flows(
self,
data_intg_flow_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
runtime_type: Optional[str] = None,
enable_sql_pushdown: Optional[bool] = None,
enable_async_compile: Optional[bool] = None,
enable_pushdown_source: Optional[bool] = None,
enable_push_processing_to_source: Optional[bool] = None,
enable_push_join_to_source: Optional[bool] = None,
enable_pushdown_target: Optional[bool] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CompileDatastageFlowsOptions.Builder to create a CompileDatastageFlowsOptions object that contains the parameter values for the compileDatastageFlows method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from within the pipeline flow if available; otherwise the default of dspxosh is used.
Whether to enable SQL pushdown code generation. When this flag is set to true and enable_pushdown_source is not specified, enable_pushdown_source will be set to true. When this flag is set to true and enable_pushdown_target is not specified, enable_pushdown_target will be set to true.
Whether to compile the flow asynchronously. When set to true, the compile request is queued and then compiled, and the response is returned immediately as "Compiling". For the compile status, call the get-compile-status API.
Whether to enable pushing SQL to source connectors. Setting this flag to true automatically sets enable_sql_pushdown to true if the latter is not specified or is explicitly set to false. When this flag is set to true and enable_push_processing_to_source is not specified, enable_push_processing_to_source is automatically set to true as well. When this flag is set to true and enable_push_join_to_source is not specified, enable_push_join_to_source is automatically set to true as well.
Whether to enable pushing processing stages to source connectors. Setting this flag to true automatically sets enable_pushdown_source to true if the latter is not specified or is explicitly set to false.
Whether to enable pushing join/lookup stages to source connectors. Setting this flag to true automatically sets enable_pushdown_source to true if the latter is not specified or is explicitly set to false.
Whether to enable pushing SQL to target connectors. Setting this flag to true automatically sets enable_sql_pushdown to true if the latter is not specified or is explicitly set to false.
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/ds_codegen/compile/{data_intg_flow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
CompileDatastageFlowsOptions compileDatastageFlowsOptions = new CompileDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .projectId(projectID)
  .build();

Response<FlowCompileResponse> response = datastageService.compileDatastageFlows(compileDatastageFlowsOptions).execute();
FlowCompileResponse flowCompileResponse = response.getResult();

System.out.println(flowCompileResponse);
const params = {
  dataIntgFlowId: assetID,
  projectId: projectID,
};

const res = await datastageService.compileDatastageFlows(params);
flow_compile_response = datastage_service.compile_datastage_flows(
    data_intg_flow_id=createdFlowId,
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(flow_compile_response, indent=2))
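The pushdown flags described above cascade: setting enable_sql_pushdown implies enable_pushdown_source and enable_pushdown_target unless those are given explicitly, and enable_async_compile queues the request instead of blocking. A hedged sketch of a compile call that exercises these flags, assuming the same client setup as the Python example above:

flow_compile_response = datastage_service.compile_datastage_flows(
    data_intg_flow_id=createdFlowId,
    project_id=config['PROJECT_ID'],
    runtime_type='dspxosh',      # the documented default runtime
    enable_sql_pushdown=True,    # implicitly enables source and target pushdown
    enable_async_compile=True    # returns immediately as "Compiling"; poll the compile-status API for the outcome
).get_result()

print(json.dumps(flow_compile_response, indent=2))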
Response
Describes the compile response model.
- message
Compile result for DataStage flow.
Compile response type, for example ok or error.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The request object contains invalid information. The server is not able to process the request.
Unexpected error.
Delete DataStage subflows
Deletes the specified data subflows in a project or catalog (either project_id or catalog_id must be set).
If the deletion of the data subflows will take some time to finish, a 202 response is returned and the deletion continues asynchronously.
DELETE /v3/data_intg_flows/subflows
ServiceCall<Void> deleteDatastageSubflows(DeleteDatastageSubflowsOptions deleteDatastageSubflowsOptions)
deleteDatastageSubflows(params)
delete_datastage_subflows(
self,
id: List[str],
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the DeleteDatastageSubflowsOptions.Builder to create a DeleteDatastageSubflowsOptions object that contains the parameter values for the deleteDatastageSubflows method.
Query Parameters
The list of DataStage subflow IDs to delete.
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
curl -X DELETE --location --header "Authorization: Bearer {iam_token}" "{base_url}/v3/data_intg_flows/subflows?id=[]&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
String[] ids = new String[] { subflowID, cloneSubflowID };

DeleteDatastageSubflowsOptions deleteDatastageSubflowsOptions = new DeleteDatastageSubflowsOptions.Builder()
  .id(Arrays.asList(ids))
  .projectId(projectID)
  .build();

datastageService.deleteDatastageSubflows(deleteDatastageSubflowsOptions).execute();
const params = {
  id: [assetID, cloneID],
  projectId: projectID,
};

const res = await datastageService.deleteDatastageSubflows(params);
response = datastage_service.delete_datastage_subflows(
    id=[createdSubflowId],
    project_id=config['PROJECT_ID']
)
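Because deletion can continue asynchronously, the HTTP status code distinguishes an immediate deletion from one that is still in progress. A minimal sketch, assuming the DetailedResponse object returned by the Python client (from ibm-cloud-sdk-core):

response = datastage_service.delete_datastage_subflows(
    id=[createdSubflowId],
    project_id=config['PROJECT_ID']
)

# 202 means the request was accepted and the deletion continues in the background.
if response.get_status_code() == 202:
    print('Deletion in progress; it will finish asynchronously.')
else:
    print('Subflows deleted.')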
Response
Status Code
The requested operation is in progress.
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
Get metadata for DataStage subflows
Lists the metadata and entity for DataStage subflows that are contained in the specified project.
Use the following parameters to filter the results:
| Field | Match type | Example |
| ------------------------ | ------------ | --------------------------------------- |
| entity.name | Equals | entity.name=MyDataStageSubFlow |
| entity.name | Starts with | entity.name=starts:MyData |
| entity.description | Equals | entity.description=movement |
| entity.description | Starts with | entity.description=starts:data |
To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (i.e. returning the most recently created data flows first).
| Field | Example |
| ------------------------- | ----------------------------------- |
| sort | sort=+entity.name (sort by ascending name) |
| sort | sort=-metadata.create_time (sort by descending creation time) |
Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name use: sort=-metadata.create_time,+entity.name
GET /v3/data_intg_flows/subflows
ServiceCall<DataFlowPagedCollection> listDatastageSubflows(ListDatastageSubflowsOptions listDatastageSubflowsOptions)
listDatastageSubflows(params)
list_datastage_subflows(
self,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
sort: Optional[str] = None,
start: Optional[str] = None,
limit: Optional[int] = None,
entity_name: Optional[str] = None,
entity_description: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the ListDatastageSubflowsOptions.Builder to create a ListDatastageSubflowsOptions object that contains the parameter values for the listDatastageSubflows method.
Query Parameters
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e2
The limit of the number of items to return for each page, for example limit=50. If not specified, a default of 100 is used. The maximum value of limit is 200. Example: 100
Filter results based on the specified name.
Filter results based on the specified description.
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/subflows?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&limit=100"
ListDatastageSubflowsOptions listDatastageSubflowsOptions = new ListDatastageSubflowsOptions.Builder()
  .projectId(projectID)
  .limit(Long.valueOf("100"))
  .build();

Response<DataFlowPagedCollection> response = datastageService.listDatastageSubflows(listDatastageSubflowsOptions).execute();
DataFlowPagedCollection dataFlowPagedCollection = response.getResult();

System.out.println(dataFlowPagedCollection);
const params = {
  projectId: projectID,
  sort: 'name',
  limit: 100,
};

const res = await datastageService.listDatastageSubflows(params);
data_flow_paged_collection = datastage_service.list_datastage_subflows(
    project_id=config['PROJECT_ID'],
    limit=100
).get_result()

print(json.dumps(data_flow_paged_collection, indent=2))
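The filter and sort parameters from the tables above map onto the SDK arguments entity_name, entity_description, and sort. A hedged sketch that lists subflows whose name starts with "MyData", most recently created first, assuming the entity_name argument accepts the same starts: prefix as the query parameter:

data_flow_paged_collection = datastage_service.list_datastage_subflows(
    project_id=config['PROJECT_ID'],
    entity_name='starts:MyData',                # "Starts with" match from the filter table
    sort='-metadata.create_time,+entity.name',  # descending create time, then ascending name
    limit=50
).get_result()

for flow in data_flow_paged_collection.get('data_flows', []):
    print(flow['metadata']['name'], flow['metadata']['asset_id'])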
Response
A page from a collection of DataStage flows.
- data_flows
Metadata information for the DataStage flow.
- entity
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission is given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
- metadata
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id, project_id, or space_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
- first
URI of a resource.
- last
URI of a resource.
The number of data flows requested to be returned.
- next
URI of a resource.
- prev
URI of a resource.
The total number of DataStage flows available.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "data_flows": [ { "entity": { "data_intg_subflow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_subflow", "create_time": "2021-04-03 15:32:55+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": " ", "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}?project_id={project_id}", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_subflow/{job_name}", "size": 5780, "usage": { "access_count": 0, "last_access_time": "2021-04-03 15:33:01.320000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx", "last_modification_time": "2021-04-03 15:33:01.320000+00:00", "last_modifier_id": "IBMid-xxxxxxxxx" } } } ], "first": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2" }, "next": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI" }, "total_count": 1 }{ "data_flows": [ { "entity": { "data_intg_subflow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_subflow", "create_time": "2021-04-03 15:32:55+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": " ", "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}?project_id={project_id}", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_subflow/{job_name}", "size": 5780, "usage": { "access_count": 0, "last_access_time": "2021-04-03 15:33:01.320000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx", "last_modification_time": "2021-04-03 15:33:01.320000+00:00", "last_modifier_id": "IBMid-xxxxxxxxx" } } } ], "first": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2" }, "next": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI" }, "total_count": 1 }
Create DataStage subflow
Creates a DataStage subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
POST /v3/data_intg_flows/subflows
ServiceCall<DataIntgFlow> createDatastageSubflows(CreateDatastageSubflowsOptions createDatastageSubflowsOptions)
createDatastageSubflows(params)
create_datastage_subflows(
self,
data_intg_subflow_name: str,
*,
entity: Optional['SubFlowEntityJson'] = None,
pipeline_flows: Optional['PipelineJson'] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CreateDatastageSubflowsOptions.Builder to create a CreateDatastageSubflowsOptions object that contains the parameter values for the createDatastageSubflows method.
Query Parameters
The DataStage subflow name.
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
Request Body
Pipeline json to be attached.
Pipeline flow to be stored.
The createDatastageSubflows options.
The DataStage subflow name.
- entity
The JSON object details will be saved as the entity of the subflow.
The sub type of the subflow. Example: data_rule
- pipeline_flows
Pipeline flow to be stored.
Object containing app-specific data.
The document type. Example: pipeline
Array of parameter set references. Example: [ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended. Example: 84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type. Example: http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document. Example: { "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
- pipelines
Array of pipeline.
Object containing app-specific data. Example: { "ui_data": { "comments": [] } }
A brief description of the DataStage flow. Example: A test DataStage flow.
Unique identifier. Example: fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline. Example: ContainerC1
Array of pipeline nodes. Example: [ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type. Example: pxOsh
Reference to the primary (main) pipeline flow within the document. Example: fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow. Example: [ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline. Example: [ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version. Example: 3.0
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows/subflows?data_intg_subflow_name={data_intg_subflow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleSubFlow = PipelineFlowHelper.buildPipelineFlow(subFlowJson);

CreateDatastageSubflowsOptions createDatastageSubflowsOptions = new CreateDatastageSubflowsOptions.Builder()
  .dataIntgSubflowName(subflowName)
  .pipelineFlows(exampleSubFlow)
  .projectId(projectID)
  .build();

Response<DataIntgFlow> response = datastageService.createDatastageSubflows(createDatastageSubflowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();

System.out.println(dataIntgFlow);
const params = {
  dataIntgSubflowName: dataIntgSubFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};

const res = await datastageService.createDatastageSubflows(params);
data_intg_flow = datastage_service.create_datastage_subflows(
    data_intg_subflow_name='testSubflow1',
    pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleSubflow.json'),
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))
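The Java and Python examples above load a complete pipeline document from a file. For orientation, the sketch below assembles a skeletal document inline from the example values documented above (doc_type, version, json_schema, primary_pipeline, runtimes). It is a hypothetical minimal shape, not a canonical subflow; real subflows are normally designed on the canvas and exported:

# A hypothetical, minimal pipeline document built from the documented example values.
minimal_pipeline = {
    'doc_type': 'pipeline',
    'version': '3.0',
    'json_schema': 'http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json',
    'id': '84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff',
    'primary_pipeline': 'fa1b859a-d592-474d-b56c-2137e4efa4bc',
    'pipelines': [{
        'id': 'fa1b859a-d592-474d-b56c-2137e4efa4bc',
        'name': 'ContainerC1',
        'runtime_ref': 'pxOsh',
        'nodes': []  # stage nodes, shaped like the documented nodes example, would go here
    }],
    'runtimes': [{'id': 'pxOsh', 'name': 'pxOsh'}],
    'schemas': []
}

data_intg_flow = datastage_service.create_datastage_subflows(
    data_intg_subflow_name='testSubflow2',
    pipeline_flows=minimal_pipeline,
    project_id=config['PROJECT_ID']
).get_result()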
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for datastage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset.
Either catalog_id, project_id, or space_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset.
Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
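The create_time field above follows RFC 3339; note that the sample responses on this page render it with a space separator (for example "2021-05-10 19:11:04+00:00"). A small parsing sketch in Python; asset_created_at is a hypothetical helper, not part of the SDK:

from datetime import datetime

def asset_created_at(metadata: dict) -> datetime:
    # datetime.fromisoformat accepts either 'T' or a space as the separator;
    # a trailing 'Z' is only accepted from Python 3.11 on, so normalize it first.
    return datetime.fromisoformat(metadata['create_time'].replace('Z', '+00:00'))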
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
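In the SDKs, the non-2xx status codes listed above surface as exceptions rather than return values. A minimal sketch for the Python client, assuming it raises ibm_cloud_sdk_core.ApiException as IBM Cloud Python SDKs generally do:

from ibm_cloud_sdk_core import ApiException

try:
    data_intg_flow = datastage_service.create_datastage_subflows(
        data_intg_subflow_name='testSubflow1',
        pipeline_flows=pipeline_flows_dict,  # hypothetical dict prepared earlier
        project_id=config['PROJECT_ID']
    ).get_result()
except ApiException as e:
    # e.code carries the HTTP status (401, 403, ...); e.message the service detail
    print('Subflow creation failed: {0} {1}'.format(e.code, e.message))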
Get DataStage subflow
Lists the DataStage subflow that is contained in the specified project. Attachments, metadata, and a limited number of attributes from the entity of each DataStage flow are returned.
GET /v3/data_intg_flows/subflows/{data_intg_subflow_id}
ServiceCall<DataIntgFlowJson> getDatastageSubflows(GetDatastageSubflowsOptions getDatastageSubflowsOptions)
getDatastageSubflows(params)
get_datastage_subflows(
self,
data_intg_subflow_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GetDatastageSubflowsOptions.Builder to create a GetDatastageSubflowsOptions object that contains the parameter values for the getDatastageSubflows method.
Path Parameters
The DataStage subflow ID to use.
Query Parameters
The ID of the catalog to use.
Either catalog_id, project_id, or space_id is required.
The ID of the project to use.
Either catalog_id, project_id, or space_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
Either catalog_id, project_id, or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/subflows/{data_intg_subflow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
GetDatastageSubflowsOptions getDatastageSubflowsOptions = new GetDatastageSubflowsOptions.Builder()
  .dataIntgSubflowId(subflowID)
  .projectId(projectID)
  .build();
Response<DataIntgFlowJson> response = datastageService.getDatastageSubflows(getDatastageSubflowsOptions).execute();
DataIntgFlowJson dataIntgFlowJson = response.getResult();
System.out.println(dataIntgFlowJson);

const params = {
  dataIntgSubflowId: subflow_assetID,
  projectId: projectID,
};
const res = await datastageService.getDatastageSubflows(params);

data_intg_flow_json = datastage_service.get_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow_json, indent=2))
Response
A pipeline JSON containing operations to apply to source(s).
Pipeline flow to be stored.
- attachments
Object containing app-specific data.
The document type.
Examples: pipeline
Array of parameter set references.
Examples: [ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended.
Examples: 84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type.
Examples: http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document.
Examples: { "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
Array of pipeline.
- pipelines
Object containing app-specific data.
Examples: { "ui_data": { "comments": [] } }
A brief description of the DataStage flow.
Examples: A test DataStage flow.
Unique identifier.
Examples: fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline.
Examples: ContainerC1
Array of pipeline nodes.
Examples: [ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type.
Examples: pxOsh
Reference to the primary (main) pipeline flow within the document.
Examples: fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow.
Examples: [ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline.
Examples: [ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version.
Examples: 3.0
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset.
Either catalog_id, project_id, or space_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset.
Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Unexpected error.
{ "attachments": { "app_data": { "datastage": { "version": "3.0.2" } }, "doc_type": "pipeline", "id": "913abf38-fac2-4c56-815b-f6f21e140fa3", "json_schema": "https://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json", "pipelines": [ { "app_data": { "datastage": { "runtimecolumnpropagation": "true" }, "ui_data": { "comments": [] } }, "id": "abd53940-0ab2-4559-978e-864800ee875a", "nodes": [ { "app_data": { "datastage": { "outputs_order": "5e514391-fc64-4ad9-b7ef-d164783d1484" }, "ui_data": { "image": "", "label": "Entry node 1", "x_pos": 48, "y_pos": 48 } }, "id": "602a1843-4cb2-4a28-93f3-f6d08e9910b6", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "aaac7610-cf58-4b7c-9431-643afe952621" }, "ui_data": { "label": "outPort" } }, "id": "5e514391-fc64-4ad9-b7ef-d164783d1484", "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe" } ], "type": "binding" }, { "app_data": { "datastage": { "inputs_order": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff", "outputs_order": "c539d891-84a8-481e-82fa-a6c90e588e1d" }, "ui_data": { "expanded_height": 200, "expanded_width": 300, "image": "../graphics/palette/Standardize.svg", "is_expanded": false, "label": "ContainerC3", "x_pos": 192, "y_pos": 48 } }, "id": "a2fb41ad-5088-4849-a3cc-453a6416492c", "inputs": [ { "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "id": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff", "links": [ { "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "DSLink1E", "label": "DSLink1E", "outline": true, "path": "", "position": "middle" } ] } }, "id": "aaac7610-cf58-4b7c-9431-643afe952621", "link_name": "DSLink1E", "node_id_ref": "602a1843-4cb2-4a28-93f3-f6d08e9910b6", "port_id_ref": "5e514391-fc64-4ad9-b7ef-d164783d1484", "type_attr": "PRIMARY" } ], "parameters": { "runtime_column_propagation": 0 }, "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe" } ], "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2" }, "ui_data": { "label": "outPort" } }, "id": "c539d891-84a8-481e-82fa-a6c90e588e1d", "schema_ref": "d4ba6846-debd-47c5-90ec-dda663728a36" } ], "parameters": { "input_count": 1, "output_count": 1 }, "subflow_ref": { "pipeline_id_ref": "default_pipeline_id", "url": "app_defined" }, "type": "super_node" }, { "app_data": { "datastage": { "inputs_order": "2a52a02d-113c-4d9b-8f36-609414be8bf5" }, "ui_data": { "image": "", "label": "Exit node 1", "x_pos": 384, "y_pos": 48 } }, "id": "547dcda4-a052-432d-ae4b-06df14e8e5b3", "inputs": [ { "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "id": "2a52a02d-113c-4d9b-8f36-609414be8bf5", "links": [ { "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "DSLink2E", "label": "DSLink2E", "outline": true, "path": "", "position": "middle" } ] } }, "id": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2", "link_name": "DSLink2E", "node_id_ref": "a2fb41ad-5088-4849-a3cc-453a6416492c", "port_id_ref": "c539d891-84a8-481e-82fa-a6c90e588e1d", "type_attr": "PRIMARY" } ], "schema_ref": "d4ba6846-debd-47c5-90ec-dda663728a36" } ], "type": "binding" } ], "runtime_ref": "pxOsh" } ], "primary_pipeline": "abd53940-0ab2-4559-978e-864800ee875a", "runtimes": [ { "id": "pxOsh", "name": "pxOsh" } ], "schemas": [ { "fields": [ { "app_data": { "column_reference": "col1", "is_unicode_string": false, "odbc_type": "INTEGER", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "INT32" }, "metadata": { 
"decimal_precision": 0, "decimal_scale": 0, "is_key": true, "is_signed": true, "item_index": 0, "max_length": 0, "min_length": 0 }, "name": "col1", "nullable": false, "type": "integer" }, { "app_data": { "column_reference": "col2", "is_unicode_string": false, "odbc_type": "CHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "STRING" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 5, "min_length": 5 }, "name": "col2", "nullable": false, "type": "string" }, { "app_data": { "column_reference": "col3", "is_unicode_string": false, "odbc_type": "VARCHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "STRING" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 10, "min_length": 0 }, "name": "col3", "nullable": false, "type": "string" } ], "id": "a479344e-7835-42b8-a5f5-7d88bc490dfe" }, { "fields": [ { "app_data": { "column_reference": "col1", "is_unicode_string": false, "odbc_type": "INTEGER", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "INT32" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": true, "is_signed": true, "item_index": 0, "max_length": 0, "min_length": 0 }, "name": "col1", "nullable": false, "type": "integer" }, { "app_data": { "column_reference": "col2", "is_unicode_string": false, "odbc_type": "CHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "STRING" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 5, "min_length": 5 }, "name": "col2", "nullable": false, "type": "string" }, { "app_data": { "column_reference": "col3", "is_unicode_string": false, "odbc_type": "VARCHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "STRING" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 10, "min_length": 0 }, "name": "col3", "nullable": false, "type": "string" } ], "id": "d4ba6846-debd-47c5-90ec-dda663728a36" } ], "version": "3.0" }, "entity": { "data_intg_subflow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "7ad1e03c-5380-4bfa-8317-3604b95954c1", "asset_type": "data_intg_subflow", "catalog_id": "e35806c5-5314-4677-bb8a-416d3c628d41", "create_time": "2021-05-10 19:11:04+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": "", "name": "NSC2_Subflow", "origin_country": "us", "project_id": "{project_id}", "resource_key": "baa8b445-9bea-4c7b-9930-233f57f8c629/data_intg_subflow/NSC2_Subflow", "size": 5117, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-05-10 19:11:05.474000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx" } } }{ "attachments": { "app_data": { "datastage": { "version": "3.0.2" } }, "doc_type": "pipeline", "id": "913abf38-fac2-4c56-815b-f6f21e140fa3", "json_schema": "https://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json", "pipelines": [ { "app_data": { "datastage": { "runtimecolumnpropagation": "true" }, "ui_data": { "comments": [] } }, "id": "abd53940-0ab2-4559-978e-864800ee875a", "nodes": [ { "app_data": { "datastage": { "outputs_order": "5e514391-fc64-4ad9-b7ef-d164783d1484" }, "ui_data": { "image": "", "label": "Entry node 1", "x_pos": 48, "y_pos": 48 } }, "id": "602a1843-4cb2-4a28-93f3-f6d08e9910b6", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": 
"aaac7610-cf58-4b7c-9431-643afe952621" }, "ui_data": { "label": "outPort" } }, "id": "5e514391-fc64-4ad9-b7ef-d164783d1484", "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe" } ], "type": "binding" }, { "app_data": { "datastage": { "inputs_order": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff", "outputs_order": "c539d891-84a8-481e-82fa-a6c90e588e1d" }, "ui_data": { "expanded_height": 200, "expanded_width": 300, "image": "../graphics/palette/Standardize.svg", "is_expanded": false, "label": "ContainerC3", "x_pos": 192, "y_pos": 48 } }, "id": "a2fb41ad-5088-4849-a3cc-453a6416492c", "inputs": [ { "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "id": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff", "links": [ { "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "DSLink1E", "label": "DSLink1E", "outline": true, "path": "", "position": "middle" } ] } }, "id": "aaac7610-cf58-4b7c-9431-643afe952621", "link_name": "DSLink1E", "node_id_ref": "602a1843-4cb2-4a28-93f3-f6d08e9910b6", "port_id_ref": "5e514391-fc64-4ad9-b7ef-d164783d1484", "type_attr": "PRIMARY" } ], "parameters": { "runtime_column_propagation": 0 }, "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe" } ], "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2" }, "ui_data": { "label": "outPort" } }, "id": "c539d891-84a8-481e-82fa-a6c90e588e1d", "schema_ref": "d4ba6846-debd-47c5-90ec-dda663728a36" } ], "parameters": { "input_count": 1, "output_count": 1 }, "subflow_ref": { "pipeline_id_ref": "default_pipeline_id", "url": "app_defined" }, "type": "super_node" }, { "app_data": { "datastage": { "inputs_order": "2a52a02d-113c-4d9b-8f36-609414be8bf5" }, "ui_data": { "image": "", "label": "Exit node 1", "x_pos": 384, "y_pos": 48 } }, "id": "547dcda4-a052-432d-ae4b-06df14e8e5b3", "inputs": [ { "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "id": "2a52a02d-113c-4d9b-8f36-609414be8bf5", "links": [ { "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "DSLink2E", "label": "DSLink2E", "outline": true, "path": "", "position": "middle" } ] } }, "id": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2", "link_name": "DSLink2E", "node_id_ref": "a2fb41ad-5088-4849-a3cc-453a6416492c", "port_id_ref": "c539d891-84a8-481e-82fa-a6c90e588e1d", "type_attr": "PRIMARY" } ], "schema_ref": "d4ba6846-debd-47c5-90ec-dda663728a36" } ], "type": "binding" } ], "runtime_ref": "pxOsh" } ], "primary_pipeline": "abd53940-0ab2-4559-978e-864800ee875a", "runtimes": [ { "id": "pxOsh", "name": "pxOsh" } ], "schemas": [ { "fields": [ { "app_data": { "column_reference": "col1", "is_unicode_string": false, "odbc_type": "INTEGER", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "INT32" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": true, "is_signed": true, "item_index": 0, "max_length": 0, "min_length": 0 }, "name": "col1", "nullable": false, "type": "integer" }, { "app_data": { "column_reference": "col2", "is_unicode_string": false, "odbc_type": "CHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "STRING" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 5, "min_length": 5 }, "name": "col2", "nullable": false, "type": "string" }, { "app_data": { "column_reference": "col3", "is_unicode_string": false, "odbc_type": "VARCHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "STRING" }, 
"metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 10, "min_length": 0 }, "name": "col3", "nullable": false, "type": "string" } ], "id": "a479344e-7835-42b8-a5f5-7d88bc490dfe" }, { "fields": [ { "app_data": { "column_reference": "col1", "is_unicode_string": false, "odbc_type": "INTEGER", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "INT32" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": true, "is_signed": true, "item_index": 0, "max_length": 0, "min_length": 0 }, "name": "col1", "nullable": false, "type": "integer" }, { "app_data": { "column_reference": "col2", "is_unicode_string": false, "odbc_type": "CHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "STRING" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 5, "min_length": 5 }, "name": "col2", "nullable": false, "type": "string" }, { "app_data": { "column_reference": "col3", "is_unicode_string": false, "odbc_type": "VARCHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "type_code": "STRING" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 10, "min_length": 0 }, "name": "col3", "nullable": false, "type": "string" } ], "id": "d4ba6846-debd-47c5-90ec-dda663728a36" } ], "version": "3.0" }, "entity": { "data_intg_subflow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "7ad1e03c-5380-4bfa-8317-3604b95954c1", "asset_type": "data_intg_subflow", "catalog_id": "e35806c5-5314-4677-bb8a-416d3c628d41", "create_time": "2021-05-10 19:11:04+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": "", "name": "NSC2_Subflow", "origin_country": "us", "project_id": "{project_id}", "resource_key": "baa8b445-9bea-4c7b-9930-233f57f8c629/data_intg_subflow/NSC2_Subflow", "size": 5117, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-05-10 19:11:05.474000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx" } } }
Update DataStage subflow
Modifies a data subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
PUT /v3/data_intg_flows/subflows/{data_intg_subflow_id}
ServiceCall<DataIntgFlow> updateDatastageSubflows(UpdateDatastageSubflowsOptions updateDatastageSubflowsOptions)
updateDatastageSubflows(params)
update_datastage_subflows(
self,
data_intg_subflow_id: str,
data_intg_subflow_name: str,
*,
entity: Optional['SubFlowEntityJson'] = None,
pipeline_flows: Optional['PipelineJson'] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the UpdateDatastageSubflowsOptions.Builder to create a UpdateDatastageSubflowsOptions object that contains the parameter values for the updateDatastageSubflows method.
Path Parameters
The DataStage subflow ID to use.
Query Parameters
The DataStage subflow name.
The ID of the catalog to use.
Either catalog_id, project_id, or space_id is required.
The ID of the project to use.
Either catalog_id, project_id, or space_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
Either catalog_id, project_id, or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
Pipeline json to be attached.
Pipeline flow to be stored.
The updateDatastageSubflows options.
The DataStage subflow ID to use.
The DataStage subflow name.
- entity
The details of this JSON object are saved as the entity of the subflow.
The sub type of the subflow.
Examples: data_rule
Pipeline flow to be stored.
- pipelineFlows
Object containing app-specific data.
The document type.
Examples: pipeline
Array of parameter set references.
Examples: [ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended.
Examples: 84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type.
Examples: http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document.
Examples: { "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
Array of pipeline.
- pipelines
Object containing app-specific data.
Examples: { "ui_data": { "comments": [] } }
A brief description of the DataStage flow.
Examples: A test DataStage flow.
Unique identifier.
Examples: fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline.
Examples: ContainerC1
Array of pipeline nodes.
Examples: [ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type.
Examples: pxOsh
Reference to the primary (main) pipeline flow within the document.
Examples: fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow.
Examples: [ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline.
Examples: [ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version.
Examples: 3.0
The ID of the catalog to use.
Either catalog_id, project_id, or space_id is required.
The ID of the project to use.
Either catalog_id, project_id, or space_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
Either catalog_id, project_id, or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
curl -X PUT --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows/subflows/{data_intg_subflow_id}?data_intg_subflow_name={data_intg_subflow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleSubFlowUpdated = PipelineFlowHelper.buildPipelineFlow(updatedSubFlowJson);
UpdateDatastageSubflowsOptions updateDatastageSubflowsOptions = new UpdateDatastageSubflowsOptions.Builder()
  .dataIntgSubflowId(subflowID)
  .dataIntgSubflowName(subflowName)
  .pipelineFlows(exampleSubFlowUpdated)
  .projectId(projectID)
  .build();
Response<DataIntgFlow> response = datastageService.updateDatastageSubflows(updateDatastageSubflowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);

const params = {
  dataIntgSubflowId: subflow_assetID,
  dataIntgSubflowName: dataIntgSubFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};
const res = await datastageService.updateDatastageSubflows(params);

data_intg_flow = datastage_service.update_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    data_intg_subflow_name='testSubflow1Updated',
    pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleSubflowUpdated.json'),
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow, indent=2))
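Because an update replaces the stored pipeline, a common pattern is read-modify-write: fetch the current document, adjust it, and send it back. A Python sketch combining the two SDK calls shown on this page; it assumes the "attachments" document returned by the GET is accepted as the pipeline_flows payload, and the rename is illustrative:

# Read-modify-write against a subflow.
current = datastage_service.get_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    project_id=config['PROJECT_ID']
).get_result()

pipeline_doc = current['attachments']            # the stored pipeline flow document
new_name = current['metadata']['name'] + '_v2'   # hypothetical rename

updated = datastage_service.update_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    data_intg_subflow_name=new_name,
    pipeline_flows=pipeline_doc,
    project_id=config['PROJECT_ID']
).get_result()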
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for datastage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset.
Either catalog_id, project_id, or space_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset.
Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
Modify attributes of a DataStage subflow
Modifies attributes of a data subflow in the specified project or catalog (either project_id or catalog_id must be set).
PUT /v3/data_intg_flows/subflows/{data_intg_subflow_id}/attributes
ServiceCall<DataIntgFlow> patchAttributesDatastageSubflow(PatchAttributesDatastageSubflowOptions patchAttributesDatastageSubflowOptions)
patchAttributesDatastageSubflow(params)
patch_attributes_datastage_subflow(
self,
data_intg_subflow_id: str,
*,
description: Optional[str] = None,
directory_asset_id: Optional[str] = None,
name: Optional[str] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the PatchAttributesDatastageSubflowOptions.Builder to create a PatchAttributesDatastageSubflowOptions object that contains the parameter values for the patchAttributesDatastageSubflow method.
Path Parameters
The DataStage subflow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
attributes of subflows to modify.
description of the asset.
The directory asset ID of the asset.
name of the asset.
The patchAttributesDatastageSubflow options.
The DataStage subflow ID to use.
description of the asset.
The directory asset ID of the asset.
name of the asset.
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
parameters
The DataStage subflow ID to use.
description of the asset.
The directory asset ID of the asset.
name of the asset.
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required.
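No sample request is shown for this operation. The following is a minimal sketch that uses the Python method documented above; the subflow ID and the new name and description values are illustrative placeholders.

data_intg_flow = datastage_service.patch_attributes_datastage_subflow(
    data_intg_subflow_id=createdSubflowId,  # placeholder subflow ID
    name='testSubflow1Renamed',  # illustrative new name
    description='Renamed through the attributes endpoint',  # illustrative
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))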
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for datastage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
Either catalog_id, project_id, or space_id is required.
This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset.
Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
Clone DataStage subflow
Create a DataStage subflow in the specified project or catalog based on an existing DataStage subflow in the same project or catalog.
POST /v3/data_intg_flows/subflows/{data_intg_subflow_id}/clone
ServiceCall<DataIntgFlow> cloneDatastageSubflows(CloneDatastageSubflowsOptions cloneDatastageSubflowsOptions)
cloneDatastageSubflows(params)
clone_datastage_subflows(
self,
data_intg_subflow_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
data_intg_subflow_name: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CloneDatastageSubflowsOptions.Builder to create a CloneDatastageSubflowsOptions object that contains the parameter values for the cloneDatastageSubflows method.
Path Parameters
The DataStage subflow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
The data subflow name.
The cloneDatastageSubflows options.
The DataStage subflow ID to use.
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset ID.
The data subflow name.
parameters
The DataStage subflow ID to use.
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required.
The directory asset ID.
The data subflow name.
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/subflows/{data_intg_subflow_id}/clone?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
CloneDatastageSubflowsOptions cloneDatastageSubflowsOptions = new CloneDatastageSubflowsOptions.Builder()
  .dataIntgSubflowId(subflowID)
  .projectId(projectID)
  .build();

Response<DataIntgFlow> response = datastageService.cloneDatastageSubflows(cloneDatastageSubflowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();

System.out.println(dataIntgFlow);
const params = {
  dataIntgSubflowId: subflow_assetID,
  projectId: projectID,
};

const res = await datastageService.cloneDatastageSubflows(params);
data_intg_flow = datastage_service.clone_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for datastage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Asset type object for folder container.
The name of the DataStage flow.
Asset type object.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id, project_id, or space_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
Either catalog_id, project_id, or space_id is required.
This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset.
Either catalog_id, project_id, or space_id is required.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
Delete table definitions
Delete the specified table definitions from a project or catalog (either project_id or catalog_id must be set).
DELETE /v3/table_definitions
ServiceCall<Void> deleteTableDefinitions(DeleteTableDefinitionsOptions deleteTableDefinitionsOptions)
deleteTableDefinitions(params)
delete_table_definitions(
self,
id: List[str],
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the DeleteTableDefinitionsOptions.Builder to create a DeleteTableDefinitionsOptions object that contains the parameter values for the deleteTableDefinitions method.
Query Parameters
The list of table definition IDs to delete.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The deleteTableDefinitions options.
The list of table definition IDs to delete.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
parameters
The list of table definition IDs to delete.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
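No sample request is shown for this operation. The following is a minimal sketch that uses the Python method documented above; the table definition ID in the list is an illustrative placeholder.

response = datastage_service.delete_table_definitions(
    id=[tableDefinitionId],  # placeholder list of table definition IDs
    project_id=config['PROJECT_ID']
)

print(response.get_status_code())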
Response
Status Code
The requested operation completed successfully.
The requested operation is in progress.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
List table definitions
Lists the table definitions that are contained in the specified project or catalog (either project_id or catalog_id must be set).
Use the following parameters to filter the results:
| Field             | Match type  | Example                              |
| ----------------- | ----------- | ------------------------------------ |
| asset.name        | Starts with | ?asset.name=starts:MyTable           |
| asset.description | Equals      | ?asset.description=equals:profiling  |

To sort the returned results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time, returning the most recently created table definitions first.

| Field                | Example                     |
| -------------------- | --------------------------- |
| asset.name           | ?sort=+asset.name           |
| metadata.create_time | ?sort=-metadata.create_time |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name, use: `?sort=-metadata.create_time,+asset.name`.
GET /v3/table_definitions
ServiceCall<TableDefinitionPagedCollection> getTableDefinitions(GetTableDefinitionsOptions getTableDefinitionsOptions)
getTableDefinitions(params)
get_table_definitions(
self,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
sort: Optional[str] = None,
start: Optional[str] = None,
limit: Optional[int] = None,
asset_name: Optional[str] = None,
asset_description: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GetTableDefinitionsOptions.Builder to create a GetTableDefinitionsOptions object that contains the parameter values for the getTableDefinitions method.
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Example: 100
Filter results based on the specified name.
Filter results based on the specified description.
The getTableDefinitions options.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Examples: 100
Filter results based on the specified name.
Filter results based on the specified description.
parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Filter results based on the specified name.
Filter results based on the specified description.
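No sample request is shown for this operation. The following is a minimal sketch that uses the Python method documented above, sorting the results newest first and limiting the page size as described in the query parameters.

table_definitions = datastage_service.get_table_definitions(
    project_id=config['PROJECT_ID'],
    sort='-metadata.create_time',  # most recently created first
    limit=50  # return at most 50 table definitions per page
).get_result()

print(json.dumps(table_definitions, indent=2))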
Response
A page from a collection of table definitions.
- first
URI of a resource.
- last
URI of a resource.
The number of table definitions requested to be returned.
- next
URI of a resource.
- prev
URI of a resource.
A page from a collection of table definitions.
- tableDefinitions
The underlying table definition.
- entity
column definitions and table properties.
- dataAsset
table properties.
column name, type & properties.
- columns
- directoryAsset
The directory asset id.
data type defaults and format properties.
- dsInfo
definitions for custom data type.
default properties for data types.
- typeDefaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
The total number of table definitions available.
A page from a collection of table definitions.
- first
URI of a resource.
- last
URI of a resource.
The number of table definitions requested to be returned.
- next
URI of a resource.
- prev
URI of a resource.
A page from a collection of table definitions.
- table_definitions
The underlying table definition.
- entity
column definitions and table properties.
- data_asset
table properties.
column name, type & properties.
- columns
- directory_asset
The directory asset id.
data type defaults and format properties.
- ds_info
definitions for custom data type.
default properties for data types.
- type_defaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
The total number of table definitions available.
Status Code
The requested operation completed successfully.
You are not permitted to perform this action. See response for more information.
Not authorized.
An error occurred. See response for more information.
Create table definition
Creates a table definition in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the table definition must specify the project or catalog ID the table definition was created in.
POST /v3/table_definitions
ServiceCall<TableDefinition> createTableDefinition(CreateTableDefinitionOptions createTableDefinitionOptions)
createTableDefinition(params)
create_table_definition(
self,
entity: 'TableDefinitionEntity',
metadata: 'TableDefinitionMetadata',
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
asset_category: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CreateTableDefinitionOptions.Builder to create a CreateTableDefinitionOptions object that contains the parameter values for the createTableDefinition method.
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset ID to create the asset in or move to.
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [SYSTEM, USER]
The table definition to be created.
The underlying table definition.
System metadata about a table definition.
The createTableDefinition options.
The underlying table definition.
- entity
column definitions and table properties.
- dataAsset
table properties.
column name, type & properties.
- columns
- directoryAsset
The directory asset id.
data type defaults and format properties.
- dsInfo
definitions for custom data type.
default properties for data types.
- typeDefaults
System metadata about a table definition.
- metadata
table definition description.
table definition name.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [SYSTEM, USER]
parameters
The underlying table definition.
- entity
column definitions and table properties.
- data_asset
table properties.
column name, type & properties.
- columns
- directory_asset
The directory asset id.
data type defaults and format properties.
- ds_info
definitions for custom data type.
default properties for data types.
- type_defaults
System metadata about a table definition.
- metadata
table definition description.
table definition name.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [SYSTEM, USER]
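No sample request is shown for this operation. The following is a minimal sketch that uses the Python method documented above and passes the entity and metadata models as plain dictionaries; the layout follows the schema outline above (data_asset with columns, metadata with name and description), but the column names and types are illustrative assumptions.

table_definition = datastage_service.create_table_definition(
    entity={
        'data_asset': {
            # column definitions and table properties; the names and types
            # below are illustrative assumptions, not values from this reference
            'columns': [
                {'name': 'CUSTOMER_ID', 'type': 'INTEGER'},
                {'name': 'CUSTOMER_NAME', 'type': 'VARCHAR'}
            ]
        }
    },
    metadata={
        'name': 'exampleTableDefinition',
        'description': 'Table definition created through the API'
    },
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(table_definition, indent=2))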
Response
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- dataAsset
table properties.
column name, type & properties.
- columns
- directoryAsset
The directory asset id.
data type defaults and format properties.
- dsInfo
definitions for custom data type.
default properties for data types.
- typeDefaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- data_asset
table properties.
column name, type & properties.
- columns
- directory_asset
The directory asset id.
data type defaults and format properties.
- ds_info
definitions for custom data type.
default properties for data types.
- type_defaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An error occurred. See response for more information.
Delete a table definition
Delete the specified table definition from a project or catalog (either project_id or catalog_id must be set).
DELETE /v3/table_definitions/{table_id}
Request
Path Parameters
Table definition ID
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
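No sample request and no client methods are shown for this operation, so the sketch below follows the same curl pattern as the clone example earlier in this reference; the project ID is the placeholder value used throughout.

curl -X DELETE --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/table_definitions/{table_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"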
Response
Status Code
The requested operation completed successfully.
Bad request. See response for more information.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
Gone. See response for more information.
An error occurred. See response for more information.
Get table definition
Get the specified table definition.
GET /v3/table_definitions/{table_id}
ServiceCall<TableDefinition> getTableDefinition(GetTableDefinitionOptions getTableDefinitionOptions)
getTableDefinition(params)
get_table_definition(
self,
table_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GetTableDefinitionOptions.Builder to create a GetTableDefinitionOptions object that contains the parameter values for the getTableDefinition method.
Path Parameters
Table definition ID
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The getTableDefinition options.
Table definition ID.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
parameters
Table definition ID.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
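No sample request is shown for this operation. The following is a minimal sketch that uses the Python method documented above; the table definition ID is an illustrative placeholder.

table_definition = datastage_service.get_table_definition(
    table_id=tableDefinitionId,  # placeholder table definition ID
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(table_definition, indent=2))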
Response
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- dataAsset
table properties.
column name, type & properties.
- columns
- directoryAsset
The directory asset id.
data type defaults and format properties.
- dsInfo
definitions for custom data type.
default properties for data types.
- typeDefaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- data_asset
table properties.
column name, type & properties.
- columns
- directory_asset
The directory asset id.
data type defaults and format properties.
- ds_info
definitions for custom data type.
default properties for data types.
- type_defaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
Table definition found.
Bad request. See response for more information.
You are not authorized to retrieve the table definition.
You are not permitted to perform this action.
The data source type details cannot be found.
The service is currently receiving more requests than it can process in a timely fashion. Please retry submitting your request later.
An error occurred. No table definitions were retrieved.
A timeout occurred when processing your request. Please retry later.
No Sample Response
Patch a table definition
Patch a table definition in the specified project or catalog (either project_id or catalog_id must be set).
PATCH /v3/table_definitions/{table_id}
ServiceCall<TableDefinition> patchTableDefinition(PatchTableDefinitionOptions patchTableDefinitionOptions)
patchTableDefinition(params)
patch_table_definition(
self,
table_id: str,
json_patch: List['PatchDocument'],
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the PatchTableDefinitionOptions.Builder to create a PatchTableDefinitionOptions object that contains the parameter values for the patchTableDefinition method.
Path Parameters
Table definition ID
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The patch operations to apply.
The operation to be performed.
Allowable values: [add, remove, replace]
A JSON-Pointer
The value to be used within the operations.
The patchTableDefinition options.
Table definition ID.
The patch operations to apply.
Examples:{ "op": "replace", "path": "/metadata/name", "value": "NewAssetName" }- jsonPatch
The operation to be performed.
Allowable values: [
add,remove,replace]A JSON-Pointer.
The value to be used within the operations.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples:bd0dbbfd-810d-4f0e-b0a9-228c328a8e23The ID of the space to use. catalog_id, space_id, or project_id is required.
parameters
Table definition ID.
The patch operations to apply.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
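As a minimal sketch of how these parameters fit together in the Python client, the following call renames a table definition with a single JSON Patch operation. It assumes the import path datastage.datastage_v3 and that credentials are supplied through the external DATASTAGE_* configuration described earlier; the table definition ID is a placeholder.
from datastage.datastage_v3 import DatastageV3

# Reads DATASTAGE_* settings from the environment or a credentials file.
service = DatastageV3.new_instance()

# Apply one JSON Patch operation that renames the table definition.
response = service.patch_table_definition(
    table_id='<TABLE_DEFINITION_ID>',  # placeholder
    json_patch=[{
        'op': 'replace',
        'path': '/metadata/name',
        'value': 'NewAssetName',
    }],
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # example project ID
)
table_definition = response.get_result()
print(table_definition)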
Response
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- dataAsset
table properties.
column name, type & properties.
- columns
- directoryAsset
The directory asset id.
data type defaults and format properties.
- dsInfo
definitions for custom data type.
default properties for data types.
- typeDefaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- data_asset
table properties.
column name, type & properties.
- columns
- directory_asset
The directory asset id.
data type defaults and format properties.
- ds_info
definitions for custom data type.
default properties for data types.
- type_defaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
Bad request. See response for more information.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An error occurred. See response for more information.
No Sample Response
Update a table definition with a replacement
Replace the contents of a table definition in the specified project or catalog (either project_id or catalog_id must be set).
PUT /v3/table_definitions/{table_id}
ServiceCall<TableDefinition> updateTableDefinition(UpdateTableDefinitionOptions updateTableDefinitionOptions)
updateTableDefinition(params)
update_table_definition(
self,
table_id: str,
entity: 'TableDefinitionEntity',
metadata: 'TableDefinitionMetadata',
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the UpdateTableDefinitionOptions.Builder to create a UpdateTableDefinitionOptions object that contains the parameter values for the updateTableDefinition method.
Path Parameters
Table definition ID
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to
The table definition to be updated.
The underlying table definition.
System metadata about a table definition.
The updateTableDefinition options.
Table definition ID.
The underlying table definition.
- entity
column definitions and table properties.
- dataAsset
table properties.
column name, type & properties.
- columns
- directoryAsset
The directory asset id.
data type defaults and format properties.
- dsInfo
definitions for custom data type.
default properties for data types.
- typeDefaults
System metadata about a table definition.
- metadata
table definition description.
table definition name.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
parameters
Table definition ID.
The underlying table definition.
- entity
column definitions and table properties.
- data_asset
table properties.
column name, type & properties.
- columns
- directory_asset
The directory asset id.
data type defaults and format properties.
- ds_info
definitions for custom data type.
default properties for data types.
- type_defaults
System metadata about a table definition.
- metadata
table definition description.
table definition name.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
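A hedged Python sketch of a full replacement follows: update_table_definition replaces the entire definition, so entity and metadata must carry everything you want to keep. The import path, IDs, and the column shape inside data_asset are illustrative assumptions, not values taken from this reference.
from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

# Replace the whole table definition; fields omitted here are overwritten.
response = service.update_table_definition(
    table_id='<TABLE_DEFINITION_ID>',  # placeholder
    entity={
        'data_asset': {
            # Illustrative column layout; the real schema depends on your source.
            'columns': [{'name': 'CUSTOMER_ID', 'type': 'INTEGER'}],
        },
    },
    metadata={
        'name': 'updatedTableDefinition',       # table definition name
        'description': 'replaced via the API',  # table definition description
    },
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())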
Response
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- dataAsset
table properties.
column name, type & properties.
- columns
- directoryAsset
The directory asset id.
data type defaults and format properties.
- dsInfo
definitions for custom data type.
default properties for data types.
- typeDefaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- data_asset
table properties.
column name, type & properties.
- columns
- directory_asset
The directory asset id.
data type defaults and format properties.
- ds_info
definitions for custom data type.
default properties for data types.
- type_defaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
Bad request. See response for more information.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An error occurred. See response for more information.
No Sample Response
Clone table definition
Clone table definition.
POST /v3/table_definitions/{table_id}/clone
ServiceCall<TableDefinition> cloneTableDefinition(CloneTableDefinitionOptions cloneTableDefinitionOptions)
cloneTableDefinition(params)
clone_table_definition(
self,
table_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
name: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CloneTableDefinitionOptions.Builder to create a CloneTableDefinitionOptions object that contains the parameter values for the cloneTableDefinition method.
Path Parameters
Table definition ID
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to
The new name.
The cloneTableDefinition options.
Table definition ID.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The new name.
parameters
Table definition ID.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The new name.
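A minimal Python sketch of cloning, under the same client-setup assumptions as the earlier examples; the source table definition ID is a placeholder.
from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

# Clone an existing table definition under a new name.
response = service.clone_table_definition(
    table_id='<TABLE_DEFINITION_ID>',  # placeholder source ID
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    name='clonedTableDefinition',      # the new name
)
print(response.get_result())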
Response
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- dataAsset
table properties.
column name, type & properties.
- columns
- directoryAsset
The directory asset id.
data type defaults and format properties.
- dsInfo
definitions for custom data type.
default properties for data types.
- typeDefaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
A table definition model that defines a set of parameters that can be referenced at runtime.
The underlying table definition.
- entity
column definitions and table properties.
- data_asset
table properties.
column name, type & properties.
- columns
- directory_asset
The directory asset id.
data type defaults and format properties.
- ds_info
definitions for custom data type.
default properties for data types.
- type_defaults
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
Bad request. See response for more information.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
Internal server error. See response for more information.
An error occurred. See response for more information.
No Sample Response
Delete assets and their attachments
Delete assets and their attachments.
DELETE /v3/assets
ServiceCall<Void> deleteAssets(DeleteAssetsOptions deleteAssetsOptions)
deleteAssets(params)
delete_assets(
self,
asset_ids: List[str],
*,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
asset_type: Optional[str] = None,
purge_test_data: Optional[bool] = None,
**kwargs,
) -> DetailedResponse
Request
Use the DeleteAssetsOptions.Builder to create a DeleteAssetsOptions object that contains the parameter values for the deleteAssets method.
Query Parameters
A list of asset IDs.
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The type of asset.
Takes effect only when deleting data_intg_test_case assets. By default, it is false, which means data files are kept.
The deleteAssets options.
A list of asset IDs.
The ID of the project to use. Either space_id or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The type of asset.
Takes effect only when deleting data_intg_test_case assets. By default, it is false, which means data files are kept.
parameters
A list of asset IDs.
The ID of the project to use. Either space_id or project_id is required.
The ID of the space to use. Either space_id or project_id is required.
The type of asset.
Takes effect only when deleting data_intg_test_case assets. By default, it is false, which means data files are kept.
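A hedged Python sketch of a bulk delete, assuming the same client setup as above; the asset IDs are placeholders. Because the Java binding returns ServiceCall<Void>, there is no response body to inspect, so the status code is what signals the outcome.
from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

# Delete two assets (and their attachments) from a project.
response = service.delete_assets(
    asset_ids=['<ASSET_ID_1>', '<ASSET_ID_2>'],  # placeholder asset IDs
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
# A 2xx status indicates the deletion completed or is still in progress.
print(response.status_code)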
Response
Status Code
The requested operation completed successfully.
The requested operation is in progress.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
List assets
Lists the assets that are contained in the specified project.
Use the following parameters to filter the results:
| Field             | Match type  | Example                             |
| ----------------- | ----------- | ----------------------------------- |
| asset.name        | Starts with | ?asset.name=starts:assetName        |
| asset.description | Equals      | ?asset.description=starts:assetDesc |
To sort the returned results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time, returning the most recently created data flows first.
| Field                | Example                     |
| -------------------- | --------------------------- |
| asset.name           | ?sort=+asset.name           |
| metadata.create_time | ?sort=-metadata.create_time |
Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name use: ?sort=-metadata.create_time,+asset.name
GET /v3/assets
ServiceCall<DSAssetPagedCollection> findAssets(FindAssetsOptions findAssetsOptions)
findAssets(params)
find_assets(
self,
*,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
asset_type: Optional[str] = None,
sort: Optional[str] = None,
start: Optional[str] = None,
limit: Optional[int] = None,
asset_name: Optional[str] = None,
asset_description: Optional[str] = None,
asset_resource_key: Optional[str] = None,
tags: Optional[List[str]] = None,
**kwargs,
) -> DetailedResponse
Request
Use the FindAssetsOptions.Builder to create a FindAssetsOptions object that contains the parameter values for the findAssets method.
Query Parameters
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The type of asset.
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Example: 100
Filter results based on the specified name.
Filter results based on the specified description.
Filter results based on the specified resource_key.
A list of tags.
The findAssets options.
The ID of the project to use. Either space_id or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The type of asset.
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Examples: 100
Filter results based on the specified name.
Filter results based on the specified description.
Filter results based on the specified resource_key.
A list of tags.
parameters
The ID of the project to use. Either space_id or project_id is required.
The ID of the space to use. Either space_id or project_id is required.
The type of asset.
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Filter results based on the specified name.
Filter results based on the specified description.
Filter results based on the specified resource_key.
A list of tags.
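To show the filter, sort, and paging parameters working together, here is a hedged Python sketch under the same client-setup assumptions as the earlier examples. The filter and sort strings reuse the starts: and +/- syntax documented above.
from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

# List assets whose name starts with "assetName", newest first and
# then by ascending name, 50 per page.
response = service.find_assets(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    asset_name='starts:assetName',
    sort='-metadata.create_time,+asset.name',
    limit=50,
)
page = response.get_result()
print(page)  # the page includes the assets plus paging and total-count fields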
Response
A page from a collection of assets.
A page from a collection of assets.
- assets
- attachments
- Examples: { "key": "value" }
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
The number of assets requested to be returned.
- next
URI of a resource.
The total number of assets available.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
Create asset
Create asset.
POST /v3/assets
ServiceCall<DSAsset> createAsset(CreateAssetOptions createAssetOptions)
createAsset(params)
create_asset(
self,
asset_type: str,
entity: dict,
name: str,
*,
attachments: Optional[List[dict]] = None,
description: Optional[str] = None,
tags: Optional[List[str]] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CreateAssetOptions.Builder to create a CreateAssetOptions object that contains the parameter values for the createAsset method.
Query Parameters
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The directory asset id to create the asset in or move to.
Asset definition
asset type
asset entity definition
Examples: { "key": "value" }
asset name
asset description
List of tags to identify the asset
The createAsset options.
asset type.
asset entity definition.
Examples: { "key": "value" }
asset name.
asset description.
List of tags to identify the asset.
The ID of the project to use. Either space_id or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The directory asset id to create the asset in or move to.
parameters
asset type.
asset entity definition.
Examples: { "key": "value" }
asset name.
asset description.
List of tags to identify the asset.
The ID of the project to use. Either space_id or project_id is required.
The ID of the space to use. Either space_id or project_id is required.
The directory asset id to create the asset in or move to.
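A hedged Python sketch of creating an asset, under the same client-setup assumptions as above. The asset type value reuses data_intg_test_case, which this reference mentions for the delete endpoint, and the entity document mirrors the { "key": "value" } example; both are illustrative rather than required values.
from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

# Create a new asset from a type, a free-form entity document, and a name.
response = service.create_asset(
    asset_type='data_intg_test_case',   # asset type mentioned elsewhere in this reference
    entity={'key': 'value'},            # example entity document
    name='myAsset',
    description='created via the REST API',
    tags=['datastage', 'example'],
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())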
Response
DataStage asset definition.
- attachments
- Examples: { "key": "value" }
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An unexpected error occurred. See response for more information.
No Sample Response
Create/update asset based on the given zip file.
Create or update an asset based on the given zip file. The body is unzipped, the fixed metadata file (${asset_type}_metadata) is used to create or update the asset, and the remaining files are then uploaded one by one as attachments.
PUT /v3/assets
ServiceCall<DSAsset> zipImport(ZipImportOptions zipImportOptions)
zipImport(params)
zip_import(
self,
body: BinaryIO,
*,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
asset_type: Optional[str] = None,
directory_asset_id: Optional[str] = None,
override: Optional[bool] = None,
**kwargs,
) -> DetailedResponse
Request
Use the ZipImportOptions.Builder to create a ZipImportOptions object that contains the parameter values for the zipImport method.
Query Parameters
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The type of asset.
The directory asset id to create the asset in or move to.
Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set only.
The zip file to import.
The zipImport options.
The zip file to import.
The ID of the project to use. Either space_id or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The type of asset.
The directory asset id to create the asset in or move to.
Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set only.
parameters
The zip file to import.
The ID of the project to use. Either space_id or project_id is required.
The ID of the space to use. Either space_id or project_id is required.
The type of asset.
The directory asset id to create the asset in or move to.
Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set only.
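A hedged Python sketch of a zip import under the same client-setup assumptions; the archive name is a placeholder, and the archive is expected to contain the ${asset_type}_metadata file described above.
from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

# Import an asset from a zip archive; entries other than the
# ${asset_type}_metadata file are uploaded as attachments.
with open('asset_export.zip', 'rb') as zip_file:  # placeholder file name
    response = service.zip_import(
        body=zip_file,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
        asset_type='data_set',  # override applies to data_set and file_set only
        override=False,
    )
print(response.get_result())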
Response
DataStage asset definition.
- attachments
- Examples: { "key": "value" }
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An unexpected error occurred. See response for more information.
No Sample Response
Get asset
Get asset.
GET /v3/assets/{asset_id}
ServiceCall<DSAsset> getAsset(GetAssetOptions getAssetOptions)
getAsset(params)
get_asset(
    self,
    asset_id: str,
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the GetAssetOptions.Builder to create a GetAssetOptions object that contains the parameter values for the getAsset method.
Path Parameters
The ID of the asset
Query Parameters
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
Response
DataStage asset definition.
- attachments
- Examples: { "key": "value" }
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
Description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
A unique string that identifies the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
Asset definition.
Bad request. See response for more information.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An error occurred.
An unexpected error occurred. See response for more information.
No Sample Response
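Since no sample response is shown, the following is a minimal Python sketch of this call. The module path, the client construction from the external credentials file described in the authentication section, and all IDs are illustrative assumptions rather than values taken from this reference.

from datastage.datastage_v3 import DatastageV3

# Client options (endpoint, API key) are read from external configuration.
service = DatastageV3.new_instance()

response = service.get_asset(
    asset_id='example-asset-id',  # hypothetical placeholder
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
asset = response.get_result()
# Field access follows the response schema above.
print(asset['metadata']['name'])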
Update asset with a replacement
Update asset with a replacement. If a property is not provided in the request body, its corresponding asset field will not be updated.
PUT /v3/assets/{asset_id}
ServiceCall<DSAsset> updateAsset(UpdateAssetOptions updateAssetOptions)
updateAsset(params)
update_asset(
    self,
    asset_id: str,
    *,
    description: Optional[str] = None,
    entity: Optional[dict] = None,
    name: Optional[str] = None,
    tags: Optional[List[str]] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    directory_asset_id: Optional[str] = None,
    override: Optional[bool] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the UpdateAssetOptions.Builder to create a UpdateAssetOptions object that contains the parameter values for the updateAsset method.
Path Parameters
The ID of the asset
Query Parameters
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The directory asset ID to create the asset in or move it to.
Whether to override the header file if it uses a common path and already exists in the cluster. Defaults to false. Applies to data_set and file_set assets only.
New asset definition
Asset description.
Asset entity definition.
Examples: { "key": "value" }
Asset name.
Response
DataStage asset definition.
- attachments
- Examples: { "key": "value" }
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
Description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
A unique string that identifies the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
Bad request. See response for more information.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An unexpected error occurred. See response for more information.
No Sample Response
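A minimal Python sketch of this call, under the same assumptions as the earlier example (module path, client construction, placeholder IDs):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()  # options read from external configuration

# Rename the asset and update its description; properties that are not
# provided are left unchanged. The new name below is hypothetical.
response = service.update_asset(
    asset_id='example-asset-id',
    name='customer_feed_flow_v2',
    description='Updated through the REST API',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result()['metadata']['name'])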
Delete attachments
Delete attachments.
DELETE /v3/assets/{asset_id}/attachments
ServiceCall<Void> deleteAttachments(DeleteAttachmentsOptions deleteAttachmentsOptions)
deleteAttachments(params)
delete_attachments(
    self,
    asset_id: str,
    attachment_ids: List[str],
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the DeleteAttachmentsOptions.Builder to create a DeleteAttachmentsOptions object that contains the parameter values for the deleteAttachments method.
Path Parameters
The ID of the asset
Query Parameters
A list of attachment GUIDs or names.
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
Response
Status Code
The requested operation completed successfully.
The requested operation is in progress.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
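A minimal Python sketch, with placeholder IDs:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

# attachment_ids accepts attachment GUIDs or names.
response = service.delete_attachments(
    asset_id='example-asset-id',
    attachment_ids=['example-attachment-id'],
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
# A 2xx code indicates success; the operation may also report "in progress".
print(response.get_status_code())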
Create attachment
Create attachment.
POST /v3/assets/{asset_id}/attachments
ServiceCall<DSAttachment> createAttachment(CreateAttachmentOptions createAttachmentOptions)
createAttachment(params)
create_attachment(
    self,
    asset_id: str,
    attachment_name: str,
    attachment_type: str,
    body: BinaryIO,
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the CreateAttachmentOptions.Builder to create a CreateAttachmentOptions object that contains the parameter values for the createAttachment method.
Path Parameters
The ID of the asset
Query Parameters
The name of the new attachment.
The MIME type of the new attachment.
Allowable values: [application/octet-stream, application/json]
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
Attachment content.
Response
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An unexpected error occurred. See response for more information.
No Sample Response
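A minimal Python sketch of uploading a local file as a new attachment; the file name and IDs are placeholders:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

with open('schema.json', 'rb') as data:
    response = service.create_attachment(
        asset_id='example-asset-id',
        attachment_name='schema.json',
        attachment_type='application/json',
        body=data,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    )
print(response.get_status_code())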
Get attachment
Get attachment.
GET /v3/assets/{asset_id}/attachments/{attachment_id}
ServiceCall<InputStream> getAttachment(GetAttachmentOptions getAttachmentOptions)
getAttachment(params)
get_attachment(
    self,
    asset_id: str,
    attachment_id: str,
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the GetAttachmentOptions.Builder to create a GetAttachmentOptions object that contains the parameter values for the getAttachment method.
Path Parameters
The ID of the asset
The GUID or name of the attachment
Query Parameters
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
Response
Response type: InputStream (Java), NodeJS.ReadableStream (Node.js), BinaryIO (Python)
Status Code
Attachment content.
Bad request. See response for more information.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An error occurred.
An unexpected error occurred. See response for more information.
No Sample Response
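A minimal Python sketch of downloading the attachment content; the IDs and output file name are placeholders:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.get_attachment(
    asset_id='example-asset-id',
    attachment_id='example-attachment-id',  # GUID or name
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
# The documented result type is BinaryIO; depending on the SDK version the
# bytes may instead be available as response.get_result().content.
with open('attachment.bin', 'wb') as out:
    out.write(response.get_result().read())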
Patch attachment
Patch attachment.
PATCH /v3/assets/{asset_id}/attachments/{attachment_id}
ServiceCall<DSAttachment> patchAttachment(PatchAttachmentOptions patchAttachmentOptions)
patchAttachment(params)
patch_attachment(
    self,
    asset_id: str,
    attachment_id: str,
    request_body: dict,
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the PatchAttachmentOptions.Builder to create a PatchAttachmentOptions object that contains the parameter values for the patchAttachment method.
Path Parameters
The ID of the asset
The GUID or name of the attachment
Query Parameters
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The patch JSON.
Response
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An unexpected error occurred. See response for more information.
No Sample Response
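A minimal Python sketch; the shape of the patch document is purely illustrative, since this reference does not define it:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.patch_attachment(
    asset_id='example-asset-id',
    attachment_id='example-attachment-id',
    request_body={'name': 'renamed_attachment'},  # hypothetical patch JSON
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_status_code())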
Replace attachment
Replace attachment.
PUT /v3/assets/{asset_id}/attachments/{attachment_id}
ServiceCall<DSAttachment> replaceAttachment(ReplaceAttachmentOptions replaceAttachmentOptions)
replaceAttachment(params)
replace_attachment(
    self,
    asset_id: str,
    attachment_id: str,
    attachment_name: str,
    attachment_type: str,
    body: BinaryIO,
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the ReplaceAttachmentOptions.Builder to create a ReplaceAttachmentOptions object that contains the parameter values for the replaceAttachment method.
Path Parameters
The ID of the asset
The GUID or name of the attachment
Query Parameters
The name of the new attachment.
The MIME type of the new attachment.
Allowable values: [application/octet-stream, application/json]
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
Attachment content.
Response
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An unexpected error occurred. See response for more information.
No Sample Response
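A minimal Python sketch of overwriting an existing attachment with new content; names and IDs are placeholders:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

with open('schema_v2.json', 'rb') as data:
    response = service.replace_attachment(
        asset_id='example-asset-id',
        attachment_id='example-attachment-id',
        attachment_name='schema_v2.json',
        attachment_type='application/json',
        body=data,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    )
print(response.get_status_code())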
Clone asset and its attachments
Clone asset and its attachments.
POST /v3/assets/{asset_id}/clone
ServiceCall<DSAsset> cloneAsset(CloneAssetOptions cloneAssetOptions)
cloneAsset(params)
clone_asset(
    self,
    asset_id: str,
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    directory_asset_id: Optional[str] = None,
    name: Optional[str] = None,
    override: Optional[bool] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the CloneAssetOptions.Builder to create a CloneAssetOptions object that contains the parameter values for the cloneAsset method.
Path Parameters
The ID of the asset
Query Parameters
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
The directory asset ID to create the asset in or move it to.
The new name.
Whether to override the header file if it uses a common path and already exists in the cluster. Defaults to false. Applies to data_set and file_set assets only.
Response
DataStage asset definition.
- attachments
- Examples: { "key": "value" }
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
Description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
A unique string that identifies the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
Bad request. See response for more information.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An unexpected error occurred. See response for more information.
No Sample Response
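A minimal Python sketch, with placeholder IDs and a hypothetical name for the clone:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.clone_asset(
    asset_id='example-asset-id',
    name='copy_of_example_asset',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
clone = response.get_result()
# The clone's ID is in the metadata of the returned asset definition.
print(clone['metadata']['asset_id'])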
Export the asset with all its attachments as a zip
Export the asset and all of its attachments as a zip file, which includes the fixed metadata file (${asset_type}_metadata) and all attachments (excluding the entity attachment).
GET /v3/assets/{asset_id}/export
ServiceCall<InputStream> zipExport(ZipExportOptions zipExportOptions)
zipExport(params)
zip_export(
    self,
    asset_id: str,
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    max_allowed_data_size: Optional[int] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the ZipExportOptions.Builder to create a ZipExportOptions object that contains the parameter values for the zipExport method.
Path Parameters
The ID of the asset
Query Parameters
The ID of the project to use. Either space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either space_id or project_id is required.
(Optional) Maximum size of the data to export, in megabytes. Takes effect only for data_intg_data_set, data_intg_file_set, and data_intg_test_case assets. Defaults to 500 if omitted or set to a negative value; if set to 0, data is skipped; values greater than 1000 are capped at 1000.
Response
Response type: InputStream (Java), NodeJS.ReadableStream (Node.js), BinaryIO (Python)
Status Code
The requested operation completed successfully.
Bad request. See response for more information.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The data source type details cannot be found.
An error occurred.
An unexpected error occurred. See response for more information.
No Sample Response
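A minimal Python sketch of exporting an asset to a local zip file; the IDs and output file name are placeholders:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.zip_export(
    asset_id='example-asset-id',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    max_allowed_data_size=100,  # skip data larger than 100 MB
)
# The documented result type is BinaryIO; depending on the SDK version the
# bytes may instead be available as response.get_result().content.
with open('asset_export.zip', 'wb') as out:
    out.write(response.get_result().read())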
List DataStage XML schema libraries
List existing DataStage XML schema libraries in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).
GET /v3/schema_libraries
ServiceCall<LibraryList> listDatastageLibraries(ListDatastageLibrariesOptions listDatastageLibrariesOptions)
listDatastageLibraries(params)
list_datastage_libraries(
    self,
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the ListDatastageLibrariesOptions.Builder to create a ListDatastageLibrariesOptions object that contains the parameter values for the listDatastageLibraries method.
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Response
The list of libraries.
Library info.
- libraries
Pseudo attachment.
- attachments
The message.
Error Bean.
- errors
The folder.
Map to asset ID.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
Description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
A unique string that identifies the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
The status of the library: EMPTY, OK, ERROR, or OUT_OF_SYNC.
Pseudo attachment.
- skips
Error Bean.
- warnings
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
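A minimal Python sketch of listing the libraries in a project; the field access follows the response schema above, and the project ID is a placeholder:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.list_datastage_libraries(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
library_list = response.get_result()
for library in library_list.get('libraries', []):
    print(library['metadata']['name'], library.get('status'))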
Create a new DataStage XML schema library
Creates a new DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).
POST /v3/schema_libraries
ServiceCall<Library> createDatastageLibrary(CreateDatastageLibraryOptions createDatastageLibraryOptions)
createDatastageLibrary(params)
create_datastage_library(
    self,
    name: str,
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    directory_asset_id: Optional[str] = None,
    folder: Optional[str] = None,
    description: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the CreateDatastageLibraryOptions.Builder to create a CreateDatastageLibraryOptions object that contains the parameter values for the createDatastageLibrary method.
Query Parameters
The name of the new XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset ID to create the asset in or move it to.
The folder that the new XML schema library belongs to.
The description of the new XML schema library.
Response
Library info.
Pseudo attachment.
- attachments
The message.
Error Bean.
- errors
The folder.
Map to asset ID.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
Description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
A unique string that identifies the asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
The status of the library: EMPTY, OK, ERROR, or OUT_OF_SYNC.
Pseudo attachment.
- skips
Error Bean.
- warnings
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
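A minimal Python sketch, with a hypothetical library name and description:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.create_datastage_library(
    name='customer_xml_schemas',
    description='XML schemas for customer feeds',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
library = response.get_result()
print(library['metadata']['asset_id'])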
Delete a DataStage XML schema library
Delete a DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).
DELETE /v3/schema_libraries/{library_id}
ServiceCall<Void> deleteDatastageLibrary(DeleteDatastageLibraryOptions deleteDatastageLibraryOptions)
deleteDatastageLibrary(params)
delete_datastage_library(
    self,
    library_id: str,
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the DeleteDatastageLibraryOptions.Builder to create a DeleteDatastageLibraryOptions object that contains the parameter values for the deleteDatastageLibrary method.
Path Parameters
The ID of the XML Schema Library.
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Response
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
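A minimal Python sketch, with placeholder IDs:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.delete_datastage_library(
    library_id='example-library-id',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_status_code())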
Get the specified DataStage XML schema library
Get the specified DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).
GET /v3/schema_libraries/{library_id}
ServiceCall<Library> getDatastageLibrary(GetDatastageLibraryOptions getDatastageLibraryOptions)
getDatastageLibrary(params)
get_datastage_library(
    self,
    library_id: str,
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the GetDatastageLibraryOptions.Builder to create a GetDatastageLibraryOptions object that contains the parameter values for the getDatastageLibrary method.
Path Parameters
The ID of the XML Schema Library.
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The getDatastageLibrary options.
The ID of the XML Schema Library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples:bd0dbbfd-810d-4f0e-b0a9-228c328a8e23The ID of the space to use. catalog_id, space_id, or project_id is required.
parameters
The ID of the XML Schema Library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Response
library info.
Pseudo attachment.
- attachments
the message.
Error Bean.
- errors
the folder.
map to asset id.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
the status of the library, EMPTY/OK/ERROR/OUT_OF_SYNC.
Pseudo attachment.
- skips
Error Bean.
- warnings
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred.
No Sample Response
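Since no sample is provided, here is a minimal sketch of calling this operation with the Python client, matching the get_datastage_library signature above. The module path datastage.datastage_v3, the class name DatastageV3, and new_instance() reading the external configuration are assumptions; the IDs are placeholders.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()  # assumed to pick up the external configuration

response = service.get_datastage_library(
    library_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder
    project_id='<PROJECT_ID>',  # exactly one of catalog_id / project_id / space_id
)
library = response.get_result()  # Library: attachments, errors, metadata, status
print(library)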
Update a DataStage XML schema library
Update a DataStage XML schema library. Only the name and directory_id can be updated.
POST /v3/schema_libraries/{library_id}
ServiceCall<Library> updateDatastageLibrary(UpdateDatastageLibraryOptions updateDatastageLibraryOptions)
updateDatastageLibrary(params)
update_datastage_library(
self,
library_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
name: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the UpdateDatastageLibraryOptions.Builder to create an UpdateDatastageLibraryOptions object that contains the parameter values for the updateDatastageLibrary method.
Path Parameters
The ID of the XML Schema Library.
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The new name of the XML schema library.
The updateDatastageLibrary options.
The ID of the XML Schema Library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The new name of the XML schema library.
parameters
The ID of the XML Schema Library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The new name of the XML schema library.
Response
library info.
Pseudo attachment.
- attachments
the message.
Error Bean.
- errors
the folder.
map to asset id.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
the status of the library, EMPTY/OK/ERROR/OUT_OF_SYNC.
Pseudo attachment.
- skips
Error Bean.
- warnings
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred.
No Sample Response
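Since no sample is provided, here is a minimal sketch of calling this operation with the Python client, matching the update_datastage_library signature above. The module path, class name, and new_instance() behavior are assumptions; the IDs and the new name are placeholders.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()  # assumed to pick up the external configuration

# Only the name and the directory asset can be updated through this endpoint.
response = service.update_datastage_library(
    library_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder
    project_id='<PROJECT_ID>',
    name='customer_schemas_v2',                 # hypothetical new name
    directory_asset_id='<DIRECTORY_ASSET_ID>',  # hypothetical target directory
)
print(response.get_result())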
Upload a file to an existing DataStage XML schema library
Upload a file to an existing DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set). This operation is not thread safe.
PUT /v3/schema_libraries/{library_id}
ServiceCall<Library> uploadDatastageLibraryFile(UploadDatastageLibraryFileOptions uploadDatastageLibraryFileOptions)
uploadDatastageLibraryFile(params)
upload_datastage_library_file(
self,
library_id: str,
body: BinaryIO,
*,
file_name: Optional[str] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
max_tree_size: Optional[int] = None,
output_global_type: Optional[bool] = None,
**kwargs,
) -> DetailedResponse
Request
Use the UploadDatastageLibraryFileOptions.Builder to create an UploadDatastageLibraryFileOptions object that contains the parameter values for the uploadDatastageLibraryFile method.
Path Parameters
The ID of the XML Schema Library.
Query Parameters
The file name you want to upload to the specified XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Max tree size for one type in schema library.
Output the global type.
The content of the file to upload.
The uploadDatastageLibraryFile options.
The ID of the XML Schema Library.
The content of the file to upload.
The file name you want to upload to the specified XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Max tree size for one type in schema library.
Output the global type.
parameters
The ID of the XML Schema Library.
The content of the file to upload.
The file name you want to upload to the specified XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Max tree size for one type in schema library.
Output the global type.
Response
library info.
Pseudo attachment.
- attachments
the message.
Error Bean.
- errors
the folder.
map to asset id.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
the status of the library, EMPTY/OK/ERROR/OUT_OF_SYNC.
Pseudo attachment.
- skips
Error Bean.
- warnings
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
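Since no sample is provided, here is a minimal sketch of calling this operation with the Python client, matching the upload_datastage_library_file signature above. The module path, class name, and new_instance() behavior are assumptions; the file name and IDs are placeholders.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()  # assumed to pick up the external configuration

# 'customer.xsd' is a hypothetical local schema file.
with open('customer.xsd', 'rb') as schema_file:
    response = service.upload_datastage_library_file(
        library_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder
        body=schema_file,
        file_name='customer.xsd',
        project_id='<PROJECT_ID>',
    )
print(response.get_result())  # updated Library, including any errors or warnings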
Clone a DataStage XML schema library
Clone a DataStage XML schema library based on the specified library ID in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).
POST /v3/schema_libraries/{library_id}/clone
ServiceCall<Library> cloneDatastageLibrary(CloneDatastageLibraryOptions cloneDatastageLibraryOptions)
cloneDatastageLibrary(params)
clone_datastage_library(
self,
library_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
name: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CloneDatastageLibraryOptions.Builder to create a CloneDatastageLibraryOptions object that contains the parameter values for the cloneDatastageLibrary method.
Path Parameters
The ID of the XML Schema Library.
Query Parameters
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The new name of the XML schema library.
The cloneDatastageLibrary options.
The ID of the XML Schema Library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The new name of the XML schema library.
parameters
The ID of the XML Schema Library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The new name of the XML schema library.
Response
library info.
Pseudo attachment.
- attachments
the message.
Error Bean.
- errors
the folder.
map to asset id.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
the status of the library, EMPTY/OK/ERROR/OUT_OF_SYNC.
Pseudo attachment.
- skips
Error Bean.
- warnings
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred.
No Sample Response
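Since no sample is provided, here is a minimal sketch of calling this operation with the Python client, matching the clone_datastage_library signature above. The module path, class name, and new_instance() behavior are assumptions; the IDs and the clone name are placeholders.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()  # assumed to pick up the external configuration

response = service.clone_datastage_library(
    library_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # source library (placeholder)
    project_id='<PROJECT_ID>',
    name='customer_schemas_copy',  # hypothetical name for the clone
)
print(response.get_result())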
Delete files from a DataStage XML schema library
Delete files from a DataStage XML schema library based on the file_names in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set). This operation is not thread safe.
DELETE /v3/schema_libraries/{library_id}/file
ServiceCall<Void> deleteDatastageLibraryFiles(DeleteDatastageLibraryFilesOptions deleteDatastageLibraryFilesOptions)
deleteDatastageLibraryFiles(params)
delete_datastage_library_files(
self,
file_names: str,
library_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
max_tree_size: Optional[int] = None,
output_global_type: Optional[bool] = None,
**kwargs,
) -> DetailedResponse
Request
Use the DeleteDatastageLibraryFilesOptions.Builder to create a DeleteDatastageLibraryFilesOptions object that contains the parameter values for the deleteDatastageLibraryFiles method.
Path Parameters
The ID of the XML Schema Library.
Query Parameters
The file names (path-dependent) you want to delete from the specified XML schema library. Multiple files can be specified by delimiting them with a comma. Files that do not exist in this library are skipped.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Max tree size for one type in schema library.
Output the global type.
The deleteDatastageLibraryFiles options.
The file names (path-dependent) you want to delete from the specified XML schema library. Multiple files can be specified by delimiting them with a comma. Files that do not exist in this library are skipped.
The ID of the XML Schema Library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Max tree size for one type in schema library.
Output the global type.
parameters
The file names (path-dependent) you want to delete from the specified XML schema library. Multiple files can be specified by delimiting them with a comma. Files that do not exist in this library are skipped.
The ID of the XML Schema Library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Max tree size for one type in schema library.
Output the global type.
Response
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
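Since no sample is provided, here is a minimal sketch of calling this operation with the Python client, matching the delete_datastage_library_files signature above. The module path, class name, and new_instance() behavior are assumptions; the file names and IDs are placeholders.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()  # assumed to pick up the external configuration

# file_names is comma-delimited; names that do not exist in the library are skipped.
response = service.delete_datastage_library_files(
    file_names='customer.xsd,orders.xsd',  # hypothetical file names
    library_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder
    project_id='<PROJECT_ID>',
)
print(response.get_status_code())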
Download a file from a DataStage XML schema library
Download a file from a DataStage XML schema library based on the file_name in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).
GET /v3/schema_libraries/{library_id}/file
ServiceCall<InputStream> downloadDatastageLibraryFile(DownloadDatastageLibraryFileOptions downloadDatastageLibraryFileOptions)
downloadDatastageLibraryFile(params)
download_datastage_library_file(
self,
library_id: str,
*,
file_name: Optional[str] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the DownloadDatastageLibraryFileOptions.Builder to create a DownloadDatastageLibraryFileOptions object that contains the parameter values for the downloadDatastageLibraryFile method.
Path Parameters
The ID of the XML Schema Library.
Query Parameters
The file name (path-dependent) you want to download from the specified XML schema library. If specified, only that file is downloaded. If not specified, all files are downloaded as a zip file that maintains the original directory structure.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The downloadDatastageLibraryFile options.
The ID of the XML Schema Library.
The file name (path-dependent) you want to download from the specified XML schema library. If specified, only that file is downloaded. If not specified, all files are downloaded as a zip file that maintains the original directory structure.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
parameters
The ID of the XML Schema Library.
The file name (path-dependent) you want to download from the specified XML schema library. If specified, only that file is downloaded. If not specified, all files are downloaded as a zip file that maintains the original directory structure.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Response
Response type: InputStream
Response type: NodeJS.ReadableStream
Response type: BinaryIO
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
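Since no sample is provided (the payload is the file content itself), here is a minimal sketch of calling this operation with the Python client, matching the download_datastage_library_file signature above. The module path, class name, new_instance() behavior, and the result exposing the raw bytes on a content attribute are all assumptions.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()  # assumed to pick up the external configuration

# With file_name set, only that file is returned; without it, a zip of all files.
response = service.download_datastage_library_file(
    library_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder
    file_name='customer.xsd',  # hypothetical file name
    project_id='<PROJECT_ID>',
)
# The raw bytes are assumed to be available on the result's `content` attribute.
with open('customer.xsd', 'wb') as out:
    out.write(response.get_result().content)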
Rename a DataStage XML schema library
Rename a DataStage XML schema library based on the specified library ID in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).
POST /v3/schema_libraries/{library_id}/rename
ServiceCall<Library> renameDatastageLibrary(RenameDatastageLibraryOptions renameDatastageLibraryOptions)
renameDatastageLibrary(params)
rename_datastage_library(
self,
library_id: str,
name: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the RenameDatastageLibraryOptions.Builder to create a RenameDatastageLibraryOptions object that contains the parameter values for the renameDatastageLibrary method.
Path Parameters
The ID of the XML Schema Library.
Query Parameters
The new name of the XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The renameDatastageLibrary options.
The ID of the XML Schema Library.
The new name of the XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
parameters
The ID of the XML Schema Library.
The new name of the XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Response
library info.
Pseudo attachment.
- attachments
the message.
Error Bean.
- errors
the folder.
map to asset id.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
the status of the library, EMPTY/OK/ERROR/OUT_OF_SYNC.
Pseudo attachment.
- skips
Error Bean.
- warnings
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred.
No Sample Response
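Since no sample is provided, here is a minimal sketch of calling this operation with the Python client, matching the rename_datastage_library signature above. The module path, class name, and new_instance() behavior are assumptions; the IDs and the new name are placeholders.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()  # assumed to pick up the external configuration

response = service.rename_datastage_library(
    library_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder
    name='customer_schemas_renamed',  # hypothetical new name
    project_id='<PROJECT_ID>',
)
print(response.get_result())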
Export an XML Schema Library in zip format
Export an XML Schema Library in zip format.
GET /v3/schema_libraries/{library_name}/zip
ServiceCall<InputStream> exportDatastageLibraryZip(ExportDatastageLibraryZipOptions exportDatastageLibraryZipOptions)
exportDatastageLibraryZip(params)
export_datastage_library_zip(
self,
library_name: str,
*,
folder: Optional[str] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the ExportDatastageLibraryZipOptions.Builder to create an ExportDatastageLibraryZipOptions object that contains the parameter values for the exportDatastageLibraryZip method.
Path Parameters
The name or id of the XML schema library.
Query Parameters
The folder of the XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The exportDatastageLibraryZip options.
The name or id of the XML schema library.
The folder of the XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
parameters
The name or id of the XML schema library.
The folder of the XML schema library.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
Response
Response type: InputStream
Response type: NodeJS.ReadableStream
Response type: BinaryIO
Status Code
Success.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
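Since no sample is provided (the payload is a zip stream), here is a minimal sketch of calling this operation with the Python client, matching the export_datastage_library_zip signature above. The module path, class name, new_instance() behavior, and the result exposing the zip bytes on a content attribute are all assumptions.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()  # assumed to pick up the external configuration

response = service.export_datastage_library_zip(
    library_name='customer_schemas',  # hypothetical library name (or ID)
    project_id='<PROJECT_ID>',
)
# The zip bytes are assumed to be available on the result's `content` attribute.
with open('customer_schemas.zip', 'wb') as out:
    out.write(response.get_result().content)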
Import/create an XML Schema Library from a zip stream
Import or create an XML Schema Library from a zip stream.
PUT /v3/schema_libraries/{library_name}/zip
ServiceCall<Library> importDatastageLibraryZip(ImportDatastageLibraryZipOptions importDatastageLibraryZipOptions)
importDatastageLibraryZip(params)
import_datastage_library_zip(
self,
library_name: str,
body: BinaryIO,
*,
folder: Optional[str] = None,
conflict_option: Optional[str] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the ImportDatastageLibraryZipOptions.Builder to create an ImportDatastageLibraryZipOptions object that contains the parameter values for the importDatastageLibraryZip method.
Path Parameters
The name of the XML schema library.
Query Parameters
The folder of the XML schema library.
The conflict_option. The default is skip.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
The importDatastageLibraryZip options.
The name of the XML schema library.
The folder of the XML schema library.
The conflict_option. The default is skip.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
parameters
The name of the XML schema library.
The folder of the XML schema library.
The conflict_option. The default is skip.
The ID of the catalog to use. catalog_id, space_id, or project_id is required.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
Response
library info.
Pseudo attachment.
- attachments
the message.
Error Bean.
- errors
the folder.
map to asset id.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. catalog_id, space_id, or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The IAM ID of the user that created the asset.
The ID of the project which contains the asset. catalog_id, space_id, or project_id is required.
This is a unique string that uniquely identifies an asset.
Custom data to be associated with a given object.
The ID of the space which contains the asset. catalog_id, space_id, or project_id is required.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
the status of the library, EMPTY/OK/ERROR/OUT_OF_SYNC.
Pseudo attachment.
- skips
Error Bean.
- warnings
Status Code
Success.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
List all Standardization Rules in a project.
GET /v3/quality_stage/rules
ServiceCall<QualityFolder> listRules(ListRulesOptions listRulesOptions)
listRules(params)
list_rules(
self,
*,
project_id: Optional[str] = None,
catalog_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the ListRulesOptions.Builder to create a ListRulesOptions object that contains the parameter values for the listRules method.
Query Parameters
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Response
A folder.
A Base file for Rule.
A folder.
A folder.
- children
A folder.
A Base file for Rule.
- files
A Base file for Rule.
- files
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
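Because no sample is provided, the following is a minimal, illustrative sketch of calling this operation with the Python client installed earlier. The module path (datastage.datastage_v3) and the new_instance constructor are assumptions based on standard IBM Python SDK conventions; the project ID is the example value used throughout this reference.
from datastage.datastage_v3 import DatastageV3

# Credentials are read from external configuration (e.g. credentials.env),
# as described in the Authentication section.
service = DatastageV3.new_instance()

response = service.list_rules(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # example project ID
)
print(response.get_result())  # folder tree of Standardization Rules in the project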
Removes the Standardization Rule from a project.
Removes the Standardization Rule from a project. For a built-in rule, this restores it to the defaults rather than actually deleting it from the system.
DELETE /v3/quality_stage/rules/{rule_name}
ServiceCall<Void> deleteRule(DeleteRuleOptions deleteRuleOptions)
deleteRule(params)
delete_rule(
self,
location: str,
rule_name: str,
*,
project_id: Optional[str] = None,
catalog_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the DeleteRuleOptions.Builder to create a DeleteRuleOptions object that contains the parameter values for the deleteRule method.
Path Parameters
The rule name or the asset id.
Query Parameters
The location of the rule set. Required only when rule_name is not an asset id.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Response
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
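No sample is shown above; a sketch of the equivalent Python call follows, reusing the service client constructed in the listRules example. The rule name and location are hypothetical.
service.delete_rule(
    location='Customized Standardization Rules',  # hypothetical location
    rule_name='MyAddressRule',                    # hypothetical rule name
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
# For a built-in rule this restores the defaults instead of deleting it.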
Get basic information for the Standardization Rule.
Get basic information for the Standardization Rule in a project.
GET /v3/quality_stage/rules/{rule_name}
ServiceCall<QualityFolder> getRule(GetRuleOptions getRuleOptions)
getRule(params)
get_rule(
self,
location: str,
rule_name: str,
*,
project_id: Optional[str] = None,
catalog_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GetRuleOptions.Builder to create a GetRuleOptions object that contains the parameter values for the getRule method.
Path Parameters
The rule name or the asset id.
Query Parameters
The location of the rule set. Required only when rule_name is not an asset id.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Response
A folder.
A Base file for Rule.
A folder.
A folder.
- children
A folder.
A Base file for Rule.
- files
A Base file for Rule.
- files
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
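As above, no sample response is provided; here is a hedged Python sketch, reusing the service client from the listRules example with a hypothetical rule name and location.
response = service.get_rule(
    location='Customized Standardization Rules',  # hypothetical location
    rule_name='MyAddressRule',                    # hypothetical rule name
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())  # folder structure describing the rule's files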
Create a new Standardization Rule.
Create a new Standardization Rule in a given project.
POST /v3/quality_stage/rules/{rule_name}
ServiceCall<QualityFolder> createRule(CreateRuleOptions createRuleOptions)
createRule(params)
create_rule(
self,
location: str,
rule_name: str,
*,
project_id: Optional[str] = None,
catalog_id: Optional[str] = None,
space_id: Optional[str] = None,
description: Optional[str] = None,
directory_asset_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CreateRuleOptions.Builder to create a CreateRuleOptions object that contains the parameter values for the createRule method.
Path Parameters
The rule name or the asset id.
Query Parameters
The location of the rule set. Required only when rule_name is not an asset id.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset id to create the asset in or move to.
Response
A folder.
A Base file for Rule.
A folder.
A folder.
- children
A folder.
A Base file for Rule.
- files
A Base file for Rule.
- files
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
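A hedged Python sketch of creating a rule, reusing the earlier service client; the location, rule name, and description are hypothetical.
response = service.create_rule(
    location='Customized Standardization Rules',   # hypothetical location
    rule_name='MyNewRule',                         # hypothetical rule name
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    description='Standardizes street addresses',   # optional
)
print(response.get_result())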
Copy the Standardization Rule from from_location to to_location.
POST /v3/quality_stage/rules/{rule_name}/copy
ServiceCall<QualityFolder> cloneRule(CloneRuleOptions cloneRuleOptions)
cloneRule(params)
clone_rule(
self,
rule_name: str,
from_location: str,
*,
project_id: Optional[str] = None,
catalog_id: Optional[str] = None,
space_id: Optional[str] = None,
to_location: Optional[str] = None,
new_name: Optional[str] = None,
description: Optional[str] = None,
directory_asset_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CloneRuleOptions.Builder to create a CloneRuleOptions object that contains the parameter values for the cloneRule method.
Path Parameters
The rule name or the asset id.
Query Parameters
The from_location: the current location of the rule set.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The to_location. If not specified, "Customized Standardization Rules/" + from_location is used.
The new_name: the new name of the Standardization Rule. If not specified, "CopyOf" + rule_name + a number is used.
The directory asset id to create the asset in or move to.
Response
A folder.
A Base file for Rule.
A folder.
A folder.
- children
A folder.
A Base file for Rule.
- files
A Base file for Rule.
- files
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
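A hedged Python sketch of copying a rule, reusing the earlier service client; the rule name and locations are hypothetical. Omitting to_location exercises the default described above.
response = service.clone_rule(
    rule_name='USADDR',                     # hypothetical source rule
    from_location='Standardization Rules',  # hypothetical source location
    new_name='USADDR_COPY',                 # optional; default is CopyOf + rule_name + number
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
# With to_location omitted, the copy lands under
# "Customized Standardization Rules/" + from_location.
print(response.get_result())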
Test the Standardization Rule against a one-line test string (a single record) before you run it against an entire file.
POST /v3/quality_stage/rules/{rule_name}/test
ServiceCall<RulePropertiesList> testRule(TestRuleOptions testRuleOptions)
testRule(params)
test_rule(
self,
location: str,
rule_name: str,
*,
file_content: Optional[str] = None,
file_name: Optional[str] = None,
input: Optional[str] = None,
engine_type: Optional[str] = None,
project_id: Optional[str] = None,
catalog_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the TestRuleOptions.Builder to create a TestRuleOptions object that contains the parameter values for the testRule method.
Path Parameters
The rule name or the asset id.
Query Parameters
The location of the rule set. Required only when rule_name is not an asset id.
The type of engine used for testing the ruleset. Allowable values: [JNI, JAVA]
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
JSON format for the input string to be tested.
The file content, base64 encoded.
The changed file name.
Input string for testing.
Response
Delimiters of one ruleset.
Info of one column for output columns of one ruleset.
Delimiters of one ruleset.
Info of one column for output columns of one ruleset.
- rows
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
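A hedged Python sketch of testing a rule against a single record, reusing the earlier service client; the rule name, location, and input string are hypothetical.
response = service.test_rule(
    location='Standardization Rules',  # hypothetical location
    rule_name='USADDR',                # hypothetical rule name
    input='123 Main Street Apt 4',     # one-line test record
    engine_type='JAVA',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())  # rows of standardized output columns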
Export the Standardization Rule in zip format.
GET /v3/quality_stage/rules/{rule_name}/zip
ServiceCall<InputStream> exportZip(ExportZipOptions exportZipOptions)
exportZip(params)
export_zip(
self,
location: str,
rule_name: str,
*,
project_id: Optional[str] = None,
catalog_id: Optional[str] = None,
space_id: Optional[str] = None,
force: Optional[bool] = None,
**kwargs,
) -> DetailedResponse
Request
Use the ExportZipOptions.Builder to create an ExportZipOptions object that contains the parameter values for the exportZip method.
Path Parameters
The rule name or the asset id.
Query Parameters
The location of the rule set. Required only when rule_name is not an asset id.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Template rule only. Force download even without any changes; default is true.
Response
Response type: InputStream
Response type: NodeJS.ReadableStream
Response type: BinaryIO
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
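A hedged Python sketch of downloading a rule as a zip, reusing the earlier service client. The rule name and location are hypothetical, and the assumption that the binary payload is exposed via the underlying response's content attribute follows common IBM Python SDK behavior.
response = service.export_zip(
    location='Customized Standardization Rules',  # hypothetical location
    rule_name='MyAddressRule',                    # hypothetical rule name
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
with open('MyAddressRule.zip', 'wb') as f:
    f.write(response.get_result().content)  # assumed binary accessor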
Upload/update the Standardization Rule from the given zip file.
PUT /v3/quality_stage/rules/{rule_name}/zip
ServiceCall<Map<String, Object>> importZip(ImportZipOptions importZipOptions)
importZip(params)
import_zip(
self,
location: str,
rule_name: str,
body: BinaryIO,
*,
project_id: Optional[str] = None,
catalog_id: Optional[str] = None,
space_id: Optional[str] = None,
directory_asset_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the ImportZipOptions.Builder to create an ImportZipOptions object that contains the parameter values for the importZip method.
Path Parameters
The rule name or the asset id.
Query Parameters
The location of the rule set. Required only when rule_name is not an asset id.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The directory asset id to create the asset in or move to.
Response
Response type: Map<String, Object>
Response type: JsonObject
Response type: dict
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
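A hedged Python sketch of uploading a rule zip (for example, one produced by the export endpoint above), reusing the earlier service client; the rule name and location are hypothetical.
with open('MyAddressRule.zip', 'rb') as f:
    response = service.import_zip(
        location='Customized Standardization Rules',  # hypothetical location
        rule_name='MyAddressRule',                    # hypothetical rule name
        body=f,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    )
print(response.get_result())  # dict describing the imported rule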
Get the status of a previous git commit request.
Gets the status of a git commit request. The status field in the response object indicates if the given commit is completed, in progress, or failed. Detailed status information about the commit progress is also contained in the response object.
GET /v3/migration/git_commit
ServiceCall<GitCommitResponse> getGitCommit(GetGitCommitOptions getGitCommitOptions)
getGitCommit(params)
get_git_commit(
self,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GetGitCommitOptions.Builder to create a GetGitCommitOptions object that contains the parameter values for the getGitCommit method.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either project_id or catalog_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
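A hedged Python sketch of polling the commit status, reusing the earlier service client; the project ID is the example value from this reference.
response = service.get_git_commit(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
status = response.get_result()
# The status fields in the response indicate whether the commit is
# completed, in progress, or failed; see the schema below.
print(status)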
Response
Response object of a git commit request.
Export the response entity.
Export the response metadata.
Show git commit information.
Response object of a git commit request.
Export the response entity.
- entity
Elapsed time in seconds.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Export error.
All data flows that failed to export.
- failedFlows
Conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
Additional error text.
Error object name.
Error stage type.
Error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created. Examples: ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created. Examples: Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [px_job, server_job, connection, table_def] Examples: px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
Type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]Examples:px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
Additional warning text.
Warning object name.
Warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Export status.
Possible values: [in_progress, started, queued, completed, failed, cancelled] Examples: in_progress
Export statistics. total = exported + failed.
- tally
Total number of build stages.
Total number of cff schemas.
Number of flows completed.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data quality rules.
Total number of data quality rules.
Total number of data quality specifications.
Total number of datastage datasets.
Total number of data flows that failed to export.
Total number of datastage filesets.
Total number of function libraries.
Total number of java libraries.
Total number of match specifications.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel flows completed.
Total number of parallel flows that failed to export.
Total number of parallel flows to be exported.
Total number of parameter sets.
Total number of routines.
Total number of rule sets.
Total number of sequence flows completed.
Total number of sequence flows that failed to export.
Total number of sequence flows to be exported.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be exported.
Total number of wrapped stages.
Total number of xml schema libraries.
Show git commit information.
- gitcommit
Name of the branch used for the commit.
Examples: my_branch
The commit SHA of this commit. Examples: <8f8d6bc3ce901ea56877922496dda9f5f7c81593>
Shows which commit message we are committing. Examples: This is test commit
The errors array reports all the problems preventing the data flow from being successfully committed.
Whether data is committed to git successfully.
Possible values: [in_progress, cancelled, timeout, invalid, started, completed] Examples: in_progress
Name of the folder used for the commit.
Examples: my_folder
Name of the repo used for the commit.
Examples: my_repo
Export the response metadata.
- metadata
Catalog id.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The timestamp when the export status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Project id.
Project name.
Space id.
The URL which can be used to get the status of the export request right after it is submitted.
Response object of a git commit request.
Export the response entity.
- entity
Elapsed time in seconds.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Export error.
All data flows that failed to export.
- failed_flows
Conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array report all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [
px_job,server_job,connection,table_def]Examples:px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]Examples:px_job
The warnings array report all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The timestamp when the import opearton started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
export status.
Possible values: [
in_progress,started,queued,completed,failed,cancelled]Examples:in_progress
Export statistics. total = exported + failed.
- tally
Total number of build stages.
Total number of cff schemas.
number of flows completed.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data quality rules.
Total number of data quality rules.
Total number of data quality specifications.
Total number of datastage datasets.
Total number of data flows that failed to export.
Total number of datastage filesets.
Total number of function libraries.
Total number of java libraries.
Total number of match specifications.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel flows completed.
Total number of parallel flows that failed to export.
Total number of parallel flows to be exported.
Total number of parameter sets.
Total number of routines.
Total number of rule sets.
Total number of sequence flows completed.
Total number of sequence flows that failed to export.
Total number of sequence flows to be exported.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be exported.
Total number of wrapped stages.
Total number of xml schema libraries.
Show git commit information.
- gitcommit
name of the branch used for the commit.
Examples:my_branch
the commit sha of this commit.
Examples:<8f8d6bc3ce901ea56877922496dda9f5f7c81593>shows which commit message we are committing.
Examples:This is test commit
The errors array report all the problems preventing the data flow from being successfully committed.
whether data is committed to git successfully.
Possible values: [
in_progress,cancelled,timeout,invalid,started,completed]Examples:in_progress
name of the folder used for the commit.
Examples:my_folder
name of the repo used for the commit.
Examples:my_repo
Export the response metadata.
- metadata
Catalog id.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The timestamp when the export status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Project id.
Project name.
Space id.
The URL which can be used to get the status of the export request right after it is submitted.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Status of the git commit request cannot be found. This can occur if the given project is not valid, or if the git commit completed long ago and its status information is no longer available.
An error occurred. See response for more information.
No Sample Response
Import DataStage components from git.
Creates DataStage components from git. This is an asynchronous call. The API call returns almost immediately, which does not necessarily imply the completion of the import request; it only means that the import request has been accepted. The status field of the import request is included in the import response object. A status of "completed", "in_progress", or "failed" indicates that the import request has completed, is in progress, or has failed, respectively. The job export file for an import request may contain one or more data flows. Unless the on_failure option is set to "stop", a completed import request may contain not only successfully imported data flows but also data flows that could not be imported.
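Because the call is asynchronous, a typical client submits the pull and then tracks the status that comes back in the response. The sketch below uses the Python client library from the Authentication section; the module path, project ID, and repository names are illustrative assumptions rather than values defined by this reference.
# Minimal sketch: submit a git pull and read the initial status.
# Assumes the module path datastage.datastage_v3 and external configuration
# (credentials.env or DATASTAGE_* environment variables) are in place.
from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.git_pull(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder project ID
    git_repo='git-dsjob',            # placeholder repository name
    git_branch='main',               # placeholder branch name
    git_folder='my-project-folder',  # placeholder git folder
    on_failure='continue',
    conflict_resolution='rename',
).get_result()

# The request has only been accepted at this point; the status typically
# starts as "queued", "started", or "in_progress".
print(response['entity']['status'])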
POST /v3/migration/git_pull
ServiceCall<GitPullResponse> gitPull(GitPullOptions gitPullOptions)
gitPull(params)
git_pull(
self,
*,
assets: Optional[List['GitPullTree']] = None,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
git_repo: Optional[str] = None,
git_branch: Optional[str] = None,
git_folder: Optional[str] = None,
git_tag: Optional[str] = None,
on_failure: Optional[str] = None,
conflict_resolution: Optional[str] = None,
enable_notification: Optional[bool] = None,
import_only: Optional[bool] = None,
include_dependencies: Optional[bool] = None,
asset_type: Optional[str] = None,
skip_dependencies: Optional[str] = None,
replace_mode: Optional[str] = None,
x_migration_enc_key: Optional[str] = None,
import_binaries: Optional[bool] = None,
project_folder: Optional[str] = None,
project_folder_recursive: Optional[bool] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GitPullOptions.Builder to create a GitPullOptions object that contains the parameter values for the gitPull method.
Custom Headers
The encryption key to encrypt credentials on export or to decrypt them on import.
Query Parameters
The ID of the catalog to use. catalog_id or project_id is required.
The ID of the project to use. project_id or catalog_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The name of the git repo to use.
Example: git-dsjob
The name of the git branch to use.
The name of the git folder where the project contents are committed/fetched.
Example: my-project-folder
The name of the git tag to use.
Action when the first import failure occurs. The default action is "continue", which continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.
Allowable values: [continue, stop]
Example: continue
Resolution when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is already in use, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already in use, the job is not created and an error is raised.
Allowable values: [skip, rename, replace, rename_replace]
Example: rename
Enable/disable notification. Default value is true.
Skip flow compilation.
If set to false, skip dependencies and only import the flow. If not specified or set to true, import the flow and its dependencies.
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Example: data_intg_flow,parameter_set
Dependencies to skip, separated by commas (','); used only with the replace option. If skip dependencies are specified, only assets of these types will be skipped when they already exist.
Example: connection,parameter_set,subflow
This parameter takes effect when conflict_resolution is set to replace or skip. soft: merge the parameter set, adding new parameters only and keeping the old value sets. hard: replace all.
Allowable values: [soft, hard]
Example: hard
Import binaries.
The name of the project folder you would like to pull assets from.
If true, all subfolders of the project being pulled will be pulled.
asset list
list of assets
The gitPull options.
list of assets.
- assets
asset name.
Type of asset.
Allowable values: [
connection,custom_stage_library,data_asset,data_definition,data_intg_build_stage,data_intg_cff_schema,data_intg_custom_stage,data_intg_data_set,data_intg_file_set,data_intg_ilogjrule,data_intg_java_library,data_intg_message_handler,data_intg_parallel_function,data_intg_subflow,data_intg_test_case,data_intg_wrapped_stage,data_rule,data_rule_definition,ds_match_specification,ds_xml_schema_library,function_library,job,orchestration_flow,parameter_set,standardization_rule,data_intg_flow]
The ID of the catalog to use. catalog_id or project_id is required.
The ID of the project to use. project_id or catalog_id is required.
Examples:bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Examples:4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The name of the git repo to use.
Examples:git-dsjob
The name of the git branch to use.
The name of the git folder where the project contents are committed/fetched.
Examples:my-project-folder
The name of the git tag to use.
Action when the first import failure occurs. The default action is "continue", which continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.
Allowable values: [continue, stop]
Examples:continue
Resolution when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is already in use, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already in use, the job is not created and an error is raised.
Allowable values: [skip, rename, replace, rename_replace]
Examples:rename
Enable/disable notification. Default value is true.
Examples:false
Skip flow compilation.
Examples:false
If set to false, skip dependencies and only import the flow. If not specified or set to true, import the flow and its dependencies.
Examples:false
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Examples:data_intg_flow,parameter_set
Dependencies to skip, separated by commas (','); used only with the replace option. If skip dependencies are specified, only assets of these types will be skipped when they already exist.
Examples:connection,parameter_set,subflow
This parameter takes effect when conflict_resolution is set to replace or skip. soft: merge the parameter set, adding new parameters only and keeping the old value sets. hard: replace all.
Allowable values: [soft, hard]
Examples:hard
The encryption key to encrypt credentials on export or to decrypt them on import.
Import binaries.
Examples:false
The name of the project folder you would like to pull assets from.
If true, all subfolders of the project being pulled will be pulled.
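To pull a subset of assets rather than everything under a folder, the assets body parameter takes a list of name/type entries, with types drawn from the allowable values listed above. A hedged sketch that reuses the service client constructed in the earlier example; the asset names are placeholders, and passing plain dictionaries for the GitPullTree entries is an assumption:
# Selective pull: fetch one flow and one parameter set, replacing existing
# copies. replace_mode='soft' merges parameter sets: new parameters are
# added and existing value sets are kept.
response = service.git_pull(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder project ID
    git_repo='git-dsjob',  # placeholder repository name
    git_branch='main',     # placeholder branch name
    assets=[
        {'name': 'cancel-reservation-job', 'type': 'data_intg_flow'},
        {'name': 'reservation-params', 'type': 'parameter_set'},
    ],
    conflict_resolution='replace',
    replace_mode='soft',
).get_result()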
Response
Response object of an import request.
Import the response entity.
Show git pull information.
Import the response metadata.
Response object of an import request.
Import the response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- importDataFlows
conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]
Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [
px_job,server_job,connection,table_def]
Examples:px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]
Examples:completed
type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]
Examples:px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missingAssets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [
in_progress,cancelled,queued,started,completed]
Examples:in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending. A sketch that checks this identity follows the list below.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rules.
Total number of data quality specifications.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
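For a response consumer, the tally identity documented above can be checked directly. A small sketch, under the assumption that the tally field names follow the descriptions in this list (total, imported, skipped, failed, deprecated, unsupported, pending):
# Assumed field names; renamed and replaced flows are already counted
# inside "imported", so they are not added again.
tally = response['entity']['tally']
accounted = sum(tally.get(key, 0) for key in
                ('imported', 'skipped', 'failed', 'deprecated',
                 'unsupported', 'pending'))
assert tally.get('total', 0) == accounted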
Show git pull information.
- gitpull
name of the branch used for the pull.
Examples:my_branch
whether data is fetched from git successfully.
Possible values: [
fetched_git_content,import_ready,in_progress,cancelled,timeout,invalid,started,completed]
Examples:in_progress
name of the folder used for the pull.
Examples:my_folder
name of the repo used for the pull.
Examples:my_repo
name of the tag used for the pull.
Examples:v1.0.0
Import the response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
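Since this URL identifies the status of the import request, completion can be detected by polling it until the status reaches a terminal value. A hedged sketch using the third-party requests package (not part of the DataStage SDK); obtaining an IAM bearer token is covered in the Authentication section, and the placeholder below stands in for one:
# Poll the status URL returned in metadata.url until the import finishes.
import time

import requests  # third-party HTTP client, used here for illustration

token = '<IAM_ACCESS_TOKEN>'  # see the Authentication section
status_url = response['metadata']['url']
headers = {'Authorization': 'Bearer ' + token}

while True:
    entity = requests.get(status_url, headers=headers).json()['entity']
    if entity['status'] in ('completed', 'cancelled'):
        break
    time.sleep(10)  # modest poll interval

print(entity['status'])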
Response object of an import request.
Import the response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import opearton completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- import_data_flows
conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array report all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [
px_job,server_job,connection,table_def]Examples:px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]Examples:px_job
The warnings array report all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missing_assets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import opearton started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [
in_progress,cancelled,queued,started,completed]Examples:in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rules.
Total number of data quality spec.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
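The tally identity above can be verified mechanically. The following is a minimal sketch in Python, assuming a tally dictionary parsed from the JSON status response; the key names are assumptions inferred from the field descriptions above, not confirmed field names, so check them against a real response.

# Minimal sketch: check the documented identity
#   total = imported + skipped + failed + deprecated + unsupported + pending
# Key names are assumed from the field descriptions above.
def tally_is_consistent(tally: dict) -> bool:
    parts = ("imported", "skipped", "failed", "deprecated", "unsupported", "pending")
    # renamed and replaced are already included in the imported count,
    # so they are deliberately not added again here.
    return tally.get("total", 0) == sum(tally.get(p, 0) for p in parts)

example = {"total": 10, "imported": 7, "skipped": 1, "failed": 1,
           "deprecated": 0, "unsupported": 1, "pending": 0}
assert tally_is_consistent(example)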
Show git pull information.
- gitpull
name of the branch used for the pull.
Examples:my_branch
whether data is fetched from git successfully.
Possible values: [
fetched_git_content,import_ready,in_progress,cancelled,timeout,invalid,started,completed]
Examples:in_progress
name of the folder used for the pull.
Examples:my_folder
name of the repo used for the pull.
Examples:my_repo
name of the tag used for the pull.
Examples:v1.0.0
Import the response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
Status Code
The requested import operation has been accepted. However, the import operation might not yet be complete. The status field in the import response object describes the current status of the import. The response "Location" header provides a convenient URL for retrieving the status with a GET request.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
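Because the import request is accepted before the work finishes, clients typically poll the URL from the "Location" header until the status field reaches a terminal value. The following is a minimal polling sketch in Python using plain HTTP rather than a generated client; the location URL, token, project ID, and the choice of terminal states are assumptions to adapt to your environment.

import time
import requests

# Placeholders: substitute the Location header value returned by your import
# request and a current IAM bearer token.
location_url = "<LOCATION_HEADER_URL>"
headers = {"Authorization": "Bearer <IAM_TOKEN>"}

# Treated as terminal here, per the status enum documented above.
TERMINAL = {"completed", "cancelled"}

status = None
while status not in TERMINAL:
    resp = requests.get(location_url, headers=headers, params={"project_id": "<PROJECT_ID>"})
    resp.raise_for_status()
    # "entity" and "status" are assumed from the response schema above.
    status = resp.json()["entity"]["status"]
    if status not in TERMINAL:
        time.sleep(10)  # the entity also reports an estimated remaining time in seconds
print("import finished with status:", status)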
Cancel a previous import request
Cancel a previous import request.
DELETE /v3/migration/git_pull/{import_id}
ServiceCall<Void> deleteGitPull(DeleteGitPullOptions deleteGitPullOptions)
deleteGitPull(params)
delete_git_pull(
    self,
    import_id: str,
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the DeleteGitPullOptions.Builder to create a DeleteGitPullOptions object that contains the parameter values for the deleteGitPull method.
Path Parameters
Unique ID of the import request.
Example:
cc6dbbfd-810d-4f0e-b0a9-228c328aff29
Query Parameters
The ID of the catalog to use. catalog_id or project_id is required.
The ID of the project to use. project_id or catalog_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The deleteGitPull options.
Unique ID of the import request.
Examples:cc6dbbfd-810d-4f0e-b0a9-228c328aff29
The ID of the catalog to use. catalog_id or project_id is required.
The ID of the project to use. project_id or catalog_id is required.
Examples:bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Examples:4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Response
Status Code
The import cancellation request was accepted.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Status of the git pull request cannot be found. This can occur if the given import_id is not valid, or if the git pull completed long ago and its status information is no longer available.
An error occurred. See response for more information.
No Sample Response
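As a concrete illustration of the cancel call, the following is a minimal sketch using plain HTTP; the base URL, import ID, project ID, and token are placeholders taken from the examples above.

import requests

base_url = "https://api.dataplatform.cloud.ibm.com/data_intg"
import_id = "cc6dbbfd-810d-4f0e-b0a9-228c328aff29"  # example ID from above

resp = requests.delete(
    f"{base_url}/v3/migration/git_pull/{import_id}",
    headers={"Authorization": "Bearer <IAM_TOKEN>"},
    params={"project_id": "<PROJECT_ID>"},  # or catalog_id / space_id
)
# Acceptance returns no body ("No Sample Response" above); a 404 means the
# import_id is unknown or the pull completed long ago and its status was discarded.
resp.raise_for_status()
print("cancellation accepted with status code", resp.status_code)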
Get the status of a previous git pull request
Gets the status of a git pull request. The status field in the response object indicates if the given import is completed, in progress, or failed. Detailed status information about each imported data flow is also contained in the response object.
GET /v3/migration/git_pull/{import_id}
ServiceCall<GitPullResponse> getGitPull(GetGitPullOptions getGitPullOptions)
getGitPull(params)
get_git_pull(
    self,
    import_id: str,
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    format: Optional[str] = None,
    **kwargs,
) -> DetailedResponse

Request
Use the GetGitPullOptions.Builder to create a GetGitPullOptions object that contains the parameter values for the getGitPull method.
Path Parameters
Unique ID of the pull request.
Query Parameters
The ID of the catalog to use. catalog_id or project_id is required.
The ID of the project to use. project_id or catalog_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Format of the ISX import report.
Allowable values: [
json,csv]
Example:
json
The getGitPull options.
Unique ID of the pull request.
The ID of the catalog to use. catalog_id or project_id is required.
The ID of the project to use. project_id or catalog_id is required.
Examples:bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Examples:4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Format of the ISX import report.
Allowable values: [
json,csv]
Examples:json
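Before the response schema, here is a minimal sketch of the status call using plain HTTP; the import ID, project ID, and token are placeholders, and format=json selects the JSON variant of the import report per the allowable values above.

import requests

base_url = "https://api.dataplatform.cloud.ibm.com/data_intg"
import_id = "<IMPORT_ID>"  # ID returned when the git pull was submitted

resp = requests.get(
    f"{base_url}/v3/migration/git_pull/{import_id}",
    headers={"Authorization": "Bearer <IAM_TOKEN>"},
    params={"project_id": "<PROJECT_ID>", "format": "json"},
)
resp.raise_for_status()
body = resp.json()
# Per-flow results are nested under entity.import_data_flows (field names
# assumed from the schema documented below), each with its own status,
# errors, and warnings.
for flow in body.get("entity", {}).get("import_data_flows", []):
    print(flow.get("name"), "->", flow.get("status"))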
Response
Response object of an import request.
Import the response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- import_data_flows
conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]
Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples: ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [px_job, server_job, connection, table_def]
Examples: px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
type of the job or data connection in the import file.
Possible values: [px_job, server_job, connection, table_def, parameter_set, routine, subflow, sequence_job, data_quality_spec, server_container, message_handler, build_stage, wrapped_stage, custom_stage, xml_schema_library, custom_stage_library, function_library, data_intg_parallel_function, data_intg_java_library, odm_ruleset, job, data_quality_rule, data_quality_definition, data_asset, data_intg_data_set, data_intg_file_set, odbc_configuration, data_intg_cff_schema, data_intg_test_case, data_element, ims_database, ims_viewset, mainframe_job, machine_profile, mainframe_routine, parallel_routine, transform, jdbc_driver]
Examples: px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missing_assets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [in_progress, cancelled, queued, started, completed]
Examples: in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rule definitions.
Total number of data quality specs.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
Git pull information.
- gitpull
name of the branch used for the pull.
Examples:my_branch
whether data is fetched from git successfully.
Possible values: [fetched_git_content, import_ready, in_progress, cancelled, timeout, invalid, started, completed]
Examples: in_progress
name of the folder used for the pull.
Examples:my_folder
name of the repo used for the pull.
Examples:my_repo
name of the tag used for the pull.
Examples:v1.0.0
Import response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Status of the git pull request cannot be found. This can occur when the given import_id is not valid, or when the git pull completed long ago and its status information is no longer available.
An error occurred. See response for more information.
Get the status of a git repo
Gets the status of objects changed in the git repo with respect to a project, including detailed status information about changes committed in the project as well as in git.
GET /v3/migration/git_status
ServiceCall<GitStatusResponse> gitStatus(GitStatusOptions gitStatusOptions)
gitStatus(params)
git_status(
self,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
git_repo: Optional[str] = None,
git_branch: Optional[str] = None,
git_tag: Optional[str] = None,
git_folder: Optional[str] = None,
format: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GitStatusOptions.Builder to create a GitStatusOptions object that contains the parameter values for the gitStatus method.
Query Parameters
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either project_id or catalog_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. One of catalog_id, project_id, or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The name of the git repo to use.
Example: git-dsjob
The name of the git branch to use.
The name of the git tag to use.
The name of the git folder where the project contents are committed/fetched.
Example: my-project-folder
Format of the ISX import report.
Allowable values: [json, csv]
Example: json
The gitStatus options.
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either project_id or catalog_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. One of catalog_id, project_id, or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The name of the git repo to use.
Examples: git-dsjob
The name of the git branch to use.
The name of the git tag to use.
The name of the git folder where the project contents are committed/fetched.
Examples: my-project-folder
Format of the ISX import report.
Allowable values: [json, csv]
Examples: json
parameters
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either project_id or catalog_id is required.
The ID of the space to use. One of catalog_id, project_id, or space_id is required.
The name of the git repo to use.
The name of the git branch to use.
The name of the git tag to use.
The name of the git folder where the project contents are committed/fetched.
Format of the ISX import report.
Allowable values: [json, csv]
parameters
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either project_id or catalog_id is required.
The ID of the space to use. One of catalog_id, project_id, or space_id is required.
The name of the git repo to use.
The name of the git branch to use.
The name of the git tag to use.
The name of the git folder where the project contents are committed/fetched.
Format of the ISX import report.
Allowable values: [json, csv]
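No request example is shipped for this operation, so the following is a minimal Python sketch based on the signature above. The import path and new_instance() constructor follow the usual IBM Python SDK conventions and are assumptions here; the project ID, repo name, and branch are placeholder values.

# Minimal sketch (not an official sample): request the git status report
# for a project in JSON format. All IDs and names below are placeholders.
import json
from datastage.datastage_v3 import DatastageV3

# Assumes credentials are supplied through external configuration,
# as with the other examples in this reference.
datastage_service = DatastageV3.new_instance()

git_status_response = datastage_service.git_status(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder project ID
    git_repo='git-dsjob',    # placeholder repo name
    git_branch='main',       # placeholder branch
    format='json',
).get_result()

print(json.dumps(git_status_response, indent=2))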
Response
Changes between Project and Git
The errors array reports all the problems preventing the status report generation.
Object that is either added, modified or deleted
Object that is either added, modified or deleted
Changes between Project and Git.
The errors array reports all the problems preventing the status report generation.
Object that is either added, modified or deleted.
- gitChanges
The timestamp when the resource was last updated.
The timestamp when the resource was updated before the last git commit.
The user who updated the resource.
The user who updated the resource in the last git commit.
The timestamp when the resource was committed to git.
The user who committed the resource to git.
object id.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
object name.
Examples: RowGenToPeek
Whether the object was added, modified, deleted, renamed, or identical.
Possible values: [modified, added, deleted, renamed, identical]
Examples: modified
object type.
Examples:data_intg_flow
Object that is either added, modified or deleted.
- projectChanges
The timestamp when the resource was last updated.
The timestamp when the resource was updated before the last git commit.
The user who updated the resource.
The user who updated the resource in the last git commit.
The timestamp when the resource was committed to git.
The user who committed the resource to git.
object id.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
object name.
Examples: RowGenToPeek
Whether the object was added, modified, deleted, renamed, or identical.
Possible values: [modified, added, deleted, renamed, identical]
Examples: modified
object type.
Examples:data_intg_flow
Changes between Project and Git.
The errors array reports all the problems preventing the status report generation.
Object that is either added, modified or deleted.
- git_changes
The timestamp when the resource was last updated.
The timestamp when the resource was updated before the last git commit.
The user who updated the resource.
The user who updated the resource in the last git commit.
The timestamp when the resource was committed to git.
The user who committed the resource to git.
object id.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
object name.
Examples: RowGenToPeek
Whether the object was added, modified, deleted, renamed, or identical.
Possible values: [modified, added, deleted, renamed, identical]
Examples: modified
object type.
Examples:data_intg_flow
Object that is either added, modified or deleted.
- project_changes
The timestamp when the resource was last updated.
The timestamp when the resource was updated before the last git commit.
The user who updated the resource.
The user who updated the resource in the last git commit.
The timestamp when the resource was committed to git.
The user who committed the resource to git.
object id.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
object name.
Examples: RowGenToPeek
Whether the object was added, modified, deleted, renamed, or identical.
Possible values: [modified, added, deleted, renamed, identical]
Examples: modified
object type.
Examples:data_intg_flow
Changes between Project and Git.
The errors array reports all the problems preventing the status report generation.
Object that is either added, modified or deleted.
- git_changes
The timestamp when the resource was last updated.
The timestamp when the resource was updated before the last git commit.
The user who updated the resource.
The user who updated the resource in the last git commit.
The timestamp when the resource was committed to git.
The user who committed the resource to git.
object id.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
object name.
Examples: RowGenToPeek
Whether the object was added, modified, deleted, renamed, or identical.
Possible values: [modified, added, deleted, renamed, identical]
Examples: modified
object type.
Examples:data_intg_flow
Object that is either added, modified or deleted.
- project_changes
The timestamp when the resource was last updated.
The timestamp when the resource was updated before the last git commit.
The user who updated the resource.
The user who updated the resource in the last git commit.
The timestamp when the resource was committed to git.
The user who committed the resource to git.
object id.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
object name.
Examples: RowGenToPeek
Whether the object was added, modified, deleted, renamed, or identical.
Possible values: [modified, added, deleted, renamed, identical]
Examples: modified
object type.
Examples:data_intg_flow
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Status of the differences between a project and git repo.
An error occurred. See response for more information.
No Sample Response
Create V3 data flows from the attached job export file
Creates data flows from the attached job export file. This is an asynchronous call: the API returns almost immediately, which does not necessarily mean that the import request is complete, only that it has been accepted. The status field of the import request is included in the import response object; a status of "completed" ("in_progress" or "failed", respectively) indicates that the import request has completed (is in progress, or has failed, respectively). The job export file for an import request may contain one or more data flows. Unless the on_failure option is set to "stop", a completed import request may contain not only successfully imported data flows but also data flows that could not be imported.
POST /v3/migration/isx_imports
ServiceCall<ImportResponse> createMigration(CreateMigrationOptions createMigrationOptions)
createMigration(params)
create_migration(
self,
body: BinaryIO,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
on_failure: Optional[str] = None,
conflict_resolution: Optional[str] = None,
attachment_type: Optional[str] = None,
file_name: Optional[str] = None,
enable_notification: Optional[bool] = None,
import_only: Optional[bool] = None,
create_missing_parameters: Optional[bool] = None,
enable_rulestage_integration: Optional[bool] = None,
enable_local_connection: Optional[bool] = None,
asset_type: Optional[str] = None,
create_connection_parametersets: Optional[bool] = None,
storage_path: Optional[str] = None,
replace_mode: Optional[str] = None,
migrate_to_platform_connection: Optional[bool] = None,
use_dsn_name: Optional[bool] = None,
migrate_to_send_email: Optional[bool] = None,
enable_folder: Optional[bool] = None,
migrate_hive_impala: Optional[bool] = None,
from_: Optional[str] = None,
to: Optional[str] = None,
job_name_with_invocation_id: Optional[bool] = None,
annotation_styling: Optional[str] = None,
migrate_to_datastage_division: Optional[bool] = None,
run_job_by_name: Optional[bool] = None,
enable_optimized_pipeline: Optional[bool] = None,
use_jinja_template: Optional[bool] = None,
enable_flow_autosave: Optional[bool] = None,
migrate_jdbc_impala: Optional[bool] = None,
enable_inline_pipeline: Optional[bool] = None,
optimized_job_name_suffix: Optional[str] = None,
migrate_bash_param: Optional[bool] = None,
is_define_parameter_toggled: Optional[bool] = None,
migrate_stp_plugin: Optional[bool] = None,
migrate_userstatus: Optional[bool] = None,
skip_connections: Optional[bool] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CreateMigrationOptions.Builder to create a CreateMigrationOptions object that contains the parameter values for the createMigration method.
Query Parameters
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either project_id or catalog_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. One of catalog_id, project_id, or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Action when the first import failure occurs. The default action is "continue", which continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.
Allowable values: [continue, stop]
Example: continue
Resolution when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used; if the name is not currently used, the imported flow is created with this name; if the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used; if the new job name is not currently used, the job is created with this name; if the new job name is already used, the job is not created and an error is raised.
Allowable values: [skip, rename, replace, rename_replace]
Example: rename
Type of attachment. The default attachment type is "isx".
Allowable values: [isx]
Example: isx
Name of the input file, if it exists.
Example: myFlows.isx
Enable/disable notification. Default value is true.
Skip flow compilation.
Create missing parameter sets and job parameters.
Enable/disable WKC rule stage migration. Default value is false.
Enable local connection migration. Default value is false.
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Example: data_intg_flow,parameter_set
Create generic parameter sets for default connection values migration. Default value is true.
Folder path of the storage volume for routine scripts and other data assets.
Example: /mnts/my-script-storage
This parameter takes effect when conflict_resolution is set to replace or skip. "soft" merges the parameter set, adding new parameters only and keeping the old value sets; "hard" replaces all.
Allowable values: [soft, hard]
Example: hard
Migrates all ISX connections to available platform connections instead of their DataStage optimized versions.
Enables private cloud migration of the ODBC connector 'datasource' to 'dsn_name' instead of generating parameter references. Default is false.
Migrates all notification activity stages in sequence jobs to send-email task nodes.
Enable folder support.
Migrates Hive ISX connections to Impala.
Migrate from which stage.
Example: Db2ConnectorPX
Migrate to which stage.
Example: Db2zos
Whether to migrate $JobName to ${jobName}.${DSInvocationJobId} if an invocation id is present. Default value is false.
Format of annotation styling.
Allowable values: [markdown, html]
Example: markdown
Migrate the division operator in any expression to the DataStage division function ds.DIV.
Example: true
Run the DataStage/pipeline job by job name; the default is by id.
Example: true
Enable optimized pipeline. Default value is false.
Use a Jinja template in the bash script; all environment variables are invoked as "{{env_var}}" instead of "${env_var}" in the bash script. Default value is false.
Enable flow autosave or not. Default value is false.
Migrates JDBC Impala ISX connections to Impala.
Enable inline mode for the pipeline compilation.
Optimized sequence job name suffix; the project default setting takes effect if not specified.
Whether to migrate the bash script parameter for the pipeline.
Prompt for runtime parameter settings before running a flow.
Migrates STPPX Teradata to the Teradata plugin.
Migrates setuserstatus calls to the DSSetUserStatus bash function.
Applies ONLY when conflict_resolution is set to replace. When this flag is set, new connections are NOT created for existing flows that are being remigrated; the connection links already present in the existing flows are kept.
The ISX file to import. The maximum file size is 1GB.
The createMigration options.
The ISX file to import. The maximum file size is 1GB.
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either project_id or catalog_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. One of catalog_id, project_id, or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Action when the first import failure occurs. The default action is "continue", which continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.
Allowable values: [continue, stop]
Examples: continue
Resolution when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used; if the name is not currently used, the imported flow is created with this name; if the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used; if the new job name is not currently used, the job is created with this name; if the new job name is already used, the job is not created and an error is raised.
Allowable values: [skip, rename, replace, rename_replace]
Examples: rename
Type of attachment. The default attachment type is "isx".
Allowable values: [isx]
Examples: isx
Name of the input file, if it exists.
Examples: myFlows.isx
Enable/disable notification. Default value is true.
Examples: false
Skip flow compilation.
Examples: false
Create missing parameter sets and job parameters.
Examples: false
Enable/disable WKC rule stage migration. Default value is false.
Examples: false
Enable local connection migration. Default value is false.
Examples: false
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Examples: data_intg_flow,parameter_set
Create generic parameter sets for default connection values migration. Default value is true.
Examples: false
Folder path of the storage volume for routine scripts and other data assets.
Examples: /mnts/my-script-storage
This parameter takes effect when conflict_resolution is set to replace or skip. "soft" merges the parameter set, adding new parameters only and keeping the old value sets; "hard" replaces all.
Allowable values: [soft, hard]
Examples: hard
Migrates all ISX connections to available platform connections instead of their DataStage optimized versions.
Examples: false
Enables private cloud migration of the ODBC connector 'datasource' to 'dsn_name' instead of generating parameter references. Default is false.
Examples: false
Migrates all notification activity stages in sequence jobs to send-email task nodes.
Examples: false
Enable folder support.
Examples: false
Migrates Hive ISX connections to Impala.
Examples: false
Migrate from which stage.
Examples: Db2ConnectorPX
Migrate to which stage.
Examples: Db2zos
Whether to migrate $JobName to ${jobName}.${DSInvocationJobId} if an invocation id is present. Default value is false.
Examples: false
Format of annotation styling.
Allowable values: [markdown, html]
Examples: markdown
Migrate the division operator in any expression to the DataStage division function ds.DIV.
Examples: true
Run the DataStage/pipeline job by job name; the default is by id.
Examples: true
Enable optimized pipeline. Default value is false.
Examples: false
Use a Jinja template in the bash script; all environment variables are invoked as "{{env_var}}" instead of "${env_var}" in the bash script. Default value is false.
Examples: false
Enable flow autosave or not. Default value is false.
Examples: false
Migrates JDBC Impala ISX connections to Impala.
Examples: false
Enable inline mode for the pipeline compilation.
Examples: false
Optimized sequence job name suffix; the project default setting takes effect if not specified.
Whether to migrate the bash script parameter for the pipeline.
Examples: false
Prompt for runtime parameter settings before running a flow.
Examples: false
Migrates STPPX Teradata to the Teradata plugin.
Examples: false
Migrates setuserstatus calls to the DSSetUserStatus bash function.
Examples: false
Applies ONLY when conflict_resolution is set to replace. When this flag is set, new connections are NOT created for existing flows that are being remigrated; the connection links already present in the existing flows are kept.
Examples: false
parameters
The ISX file to import. The maximum file size is 1GB.
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either project_id or catalog_id is required.
The ID of the space to use. One of catalog_id, project_id, or space_id is required.
Action when the first import failure occurs. The default action is "continue", which continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.
Allowable values: [continue, stop]
Resolution when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used; if the name is not currently used, the imported flow is created with this name; if the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used; if the new job name is not currently used, the job is created with this name; if the new job name is already used, the job is not created and an error is raised.
Allowable values: [skip, rename, replace, rename_replace]
Type of attachment. The default attachment type is "isx".
Allowable values: [isx]
Name of the input file, if it exists.
Enable/disable notification. Default value is true.
Skip flow compilation.
Create missing parameter sets and job parameters.
Enable/disable WKC rule stage migration. Default value is false.
Enable local connection migration. Default value is false.
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Create generic parameter sets for default connection values migration. Default value is true.
Folder path of the storage volume for routine scripts and other data assets.
This parameter takes effect when conflict_resolution is set to replace or skip. "soft" merges the parameter set, adding new parameters only and keeping the old value sets; "hard" replaces all.
Allowable values: [soft, hard]
Migrates all ISX connections to available platform connections instead of their DataStage optimized versions.
Enables private cloud migration of the ODBC connector 'datasource' to 'dsn_name' instead of generating parameter references. Default is false.
Migrates all notification activity stages in sequence jobs to send-email task nodes.
Enable folder support.
Migrates Hive ISX connections to Impala.
Migrate from which stage.
Migrate to which stage.
Whether to migrate $JobName to ${jobName}.${DSInvocationJobId} if an invocation id is present. Default value is false.
Format of annotation styling.
Allowable values: [markdown, html]
Migrate the division operator in any expression to the DataStage division function ds.DIV.
Run the DataStage/pipeline job by job name; the default is by id.
Enable optimized pipeline. Default value is false.
Use a Jinja template in the bash script; all environment variables are invoked as "{{env_var}}" instead of "${env_var}" in the bash script. Default value is false.
Enable flow autosave or not. Default value is false.
Migrates JDBC Impala ISX connections to Impala.
Enable inline mode for the pipeline compilation.
Optimized sequence job name suffix; the project default setting takes effect if not specified.
Whether to migrate the bash script parameter for the pipeline.
Prompt for runtime parameter settings before running a flow.
Migrates STPPX Teradata to the Teradata plugin.
Migrates setuserstatus calls to the DSSetUserStatus bash function.
Applies ONLY when conflict_resolution is set to replace. When this flag is set, new connections are NOT created for existing flows that are being remigrated; the connection links already present in the existing flows are kept.
parameters
The ISX file to import. The maximum file size is 1GB.
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either project_id or catalog_id is required.
The ID of the space to use. One of catalog_id, project_id, or space_id is required.
Action when the first import failure occurs. The default action is "continue", which continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.
Allowable values: [continue, stop]
Resolution when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used; if the name is not currently used, the imported flow is created with this name; if the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used; if the new job name is not currently used, the job is created with this name; if the new job name is already used, the job is not created and an error is raised.
Allowable values: [skip, rename, replace, rename_replace]
Type of attachment. The default attachment type is "isx".
Allowable values: [isx]
Name of the input file, if it exists.
Enable/disable notification. Default value is true.
Skip flow compilation.
Create missing parameter sets and job parameters.
Enable/disable WKC rule stage migration. Default value is false.
Enable local connection migration. Default value is false.
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Create generic parameter sets for default connection values migration. Default value is true.
Folder path of the storage volume for routine scripts and other data assets.
This parameter takes effect when conflict_resolution is set to replace or skip. "soft" merges the parameter set, adding new parameters only and keeping the old value sets; "hard" replaces all.
Allowable values: [soft, hard]
Migrates all ISX connections to available platform connections instead of their DataStage optimized versions.
Enables private cloud migration of the ODBC connector 'datasource' to 'dsn_name' instead of generating parameter references. Default is false.
Migrates all notification activity stages in sequence jobs to send-email task nodes.
Enable folder support.
Migrates Hive ISX connections to Impala.
Migrate from which stage.
Migrate to which stage.
Whether to migrate $JobName to ${jobName}.${DSInvocationJobId} if an invocation id is present. Default value is false.
Format of annotation styling.
Allowable values: [markdown, html]
Migrate the division operator in any expression to the DataStage division function ds.DIV.
Run the DataStage/pipeline job by job name; the default is by id.
Enable optimized pipeline. Default value is false.
Use a Jinja template in the bash script; all environment variables are invoked as "{{env_var}}" instead of "${env_var}" in the bash script. Default value is false.
Enable flow autosave or not. Default value is false.
Migrates JDBC Impala ISX connections to Impala.
Enable inline mode for the pipeline compilation.
Optimized sequence job name suffix; the project default setting takes effect if not specified.
Whether to migrate the bash script parameter for the pipeline.
Prompt for runtime parameter settings before running a flow.
Migrates STPPX Teradata to the Teradata plugin.
Migrates setuserstatus calls to the DSSetUserStatus bash function.
Applies ONLY when conflict_resolution is set to replace. When this flag is set, new connections are NOT created for existing flows that are being remigrated; the connection links already present in the existing flows are kept.
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/octet-stream" --data-binary "@myFlows.isx" "{base_url}/v3/migration/isx_imports?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&on_failure=continue&conflict_resolution=rename&attachment_type=isx&file_name=myFlows.isx"
CreateMigrationOptions createMigrationOptions = new CreateMigrationOptions.Builder()
  .body(rowGenIsx)
  .projectId(projectID)
  .onFailure("continue")
  .conflictResolution("rename")
  .attachmentType("isx")
  .fileName("rowgen_peek.isx")
  .build();
Response<ImportResponse> response = datastageService.createMigration(createMigrationOptions).execute();
ImportResponse importResponse = response.getResult();
System.out.println(importResponse);
const params = {
  body: Buffer.from(fs.readFileSync('testInput/rowgen_peek.isx')),
  projectId: projectID,
  onFailure: 'continue',
  conflictResolution: 'rename',
  attachmentType: 'isx',
  fileName: 'rowgen_peek.isx',
};
const res = await datastageService.createMigration(params);
import_response = datastage_service.create_migration(
    body=open(Path(__file__).parent / 'inputFiles/rowgen_peek.isx', 'rb').read(),
    project_id=config['PROJECT_ID'],
    on_failure='continue',
    conflict_resolution='rename',
    attachment_type='isx',
    file_name='rowgen_peek.isx',
).get_result()
print(json.dumps(import_response, indent=2))
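Because the call is asynchronous, the response returned by create_migration typically reports a status of "in_progress" rather than the final result. A minimal polling sketch in Python follows; wait_for_import is a hypothetical helper, and it assumes that the metadata.url field described below can be fetched with the same IAM bearer token used for the original request.

# Hypothetical helper (not an official sample): poll the status URL from
# the import response until the import leaves its transient states.
import time
import requests

def wait_for_import(import_response, bearer_token, timeout=600, interval=10):
    status_url = import_response['metadata']['url']
    headers = {'Authorization': 'Bearer ' + bearer_token}
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(status_url, headers=headers)
        resp.raise_for_status()
        body = resp.json()
        # Documented status values: in_progress, cancelled, queued, started, completed.
        if body['entity']['status'] not in ('in_progress', 'queued', 'started'):
            return body
        time.sleep(interval)
    raise TimeoutError('import did not complete within the timeout')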
Response
Response object of an import request.
Import response entity.
Import response metadata.
Response object of an import request.
Import response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- importDataFlows
conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples: ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [px_job, server_job, connection, table_def]
Examples: px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
type of the job or data connection in the import file.
Possible values: [px_job, server_job, connection, table_def, parameter_set, routine, subflow, sequence_job, data_quality_spec, server_container, message_handler, build_stage, wrapped_stage, custom_stage, xml_schema_library, custom_stage_library, function_library, data_intg_parallel_function, data_intg_java_library, odm_ruleset, job, data_quality_rule, data_quality_definition, data_asset, data_intg_data_set, data_intg_file_set, odbc_configuration, data_intg_cff_schema, data_intg_test_case, data_element, ims_database, ims_viewset, mainframe_job, machine_profile, mainframe_routine, parallel_routine, transform, jdbc_driver]
Examples: px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missingAssets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [in_progress, cancelled, queued, started, completed]
Examples: in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rule definitions.
Total number of data quality specs.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
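A minimal consistency check for the identity above, assuming the tally arrives as a decoded JSON dict whose keys are named total, imported, skipped, failed, deprecated, unsupported, and pending (the key names are an assumption inferred from the field descriptions, not confirmed by this reference):

def check_tally(tally: dict) -> bool:
    # Assumed key names; adjust to the actual ImportTally schema if they differ.
    parts = ("imported", "skipped", "failed", "deprecated", "unsupported", "pending")
    # "imported" already includes the renamed and replaced counts.
    return tally.get("total", 0) == sum(tally.get(key, 0) for key in parts)

For example, {"total": 5, "imported": 3, "skipped": 1, "failed": 1} passes the check because 3 + 1 + 1 = 5.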
The import response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
Status Code
The requested import operation has been accepted; however, it may not yet be complete. The status field in the import response object describes the current status of the import. The response "Location" header provides a convenient URL for retrieving the status with a GET request.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
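Because the import runs asynchronously, clients typically poll the status URL until a terminal state is reached. The following sketch uses the Python client's get_migration method, documented later in this section; the ten-second interval and the treatment of in_progress, queued, and started as non-terminal states (taken from the status enumeration above) are illustrative choices, not prescribed behavior.

import time

def wait_for_import(datastage_service, import_id, project_id, interval=10):
    # Poll the import status until it leaves the non-terminal states.
    while True:
        response = datastage_service.get_migration(
            import_id=import_id,
            project_id=project_id
        ).get_result()
        status = response['entity']['status']
        if status not in ('in_progress', 'queued', 'started'):
            return response  # e.g. completed or cancelled
        time.sleep(interval)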
Cancel a previous import request
Cancel a previous import request. Use GET /v3/migration/imports/{import_id} to obtain the current status of the import, including whether it has been cancelled.
DELETE /v3/migration/isx_imports/{import_id}
ServiceCall<Void> deleteMigration(DeleteMigrationOptions deleteMigrationOptions)
deleteMigration(params)
delete_migration(
self,
import_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the DeleteMigrationOptions.Builder to create a DeleteMigrationOptions object that contains the parameter values for the deleteMigration method.
Path Parameters
Unique ID of the import request.
Example:
cc6dbbfd-810d-4f0e-b0a9-228c328aff29
Query Parameters
The ID of the catalog to use. catalog_id or project_id is required.
The ID of the project to use. project_id or catalog_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
curl -X DELETE --location --header "Authorization: Bearer {iam_token}" "{base_url}/v3/migration/isx_imports/{import_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
DeleteMigrationOptions deleteMigrationOptions = new DeleteMigrationOptions.Builder()
  .importId(importID)
  .projectId(projectID)
  .build();
datastageService.deleteMigration(deleteMigrationOptions).execute();
const params = {
  importId: importID,
  projectId: projectID,
};
const res = await datastageService.deleteMigration(params);
response = datastage_service.delete_migration(
  import_id=importId,
  project_id=config['PROJECT_ID']
)
Response
Status Code
The import cancellation request was accepted.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Status of the import request cannot be found. This can occur if the given import_id is not valid, or if the import completed long ago and its status information is no longer available.
An error occurred. See response for more information.
No Sample Response
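Because cancellation is accepted asynchronously, a client that needs confirmation can follow the DELETE with a status call. A minimal sketch using the Python client from the examples above, where import_id and project_id are placeholders for your own values:

datastage_service.delete_migration(
    import_id=import_id,
    project_id=project_id
)
# The DELETE only accepts the cancellation; confirm the outcome separately.
status = datastage_service.get_migration(
    import_id=import_id,
    project_id=project_id
).get_result()['entity']['status']
print('Import status after cancellation request:', status)  # e.g. "cancelled"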
Get the status of a previous import request
Gets the status of an import request. The status field in the response object indicates if the given import is completed, in progress, or failed. Detailed status information about each imported data flow is also contained in the response object.
GET /v3/migration/isx_imports/{import_id}
ServiceCall<ImportResponse> getMigration(GetMigrationOptions getMigrationOptions)
getMigration(params)
get_migration(
self,
import_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
format: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GetMigrationOptions.Builder to create a GetMigrationOptions object that contains the parameter values for the getMigration method.
Path Parameters
Unique ID of the import request.
Query Parameters
The ID of the catalog to use. catalog_id or project_id is required.
The ID of the project to use. project_id or catalog_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The format of the ISX import report.
Allowable values: [json,csv]
Example:
json
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/migration/isx_imports/{import_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
GetMigrationOptions getMigrationOptions = new GetMigrationOptions.Builder()
  .importId(importID)
  .projectId(projectID)
  .build();
Response<ImportResponse> response = datastageService.getMigration(getMigrationOptions).execute();
ImportResponse importResponse = response.getResult();
System.out.println(importResponse);
const params = {
  importId: importID,
  projectId: projectID,
};
const res = await datastageService.getMigration(params);
import_response = datastage_service.get_migration(
  import_id=importId,
  project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(import_response, indent=2))
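The response documented below reports a per-flow status along with errors and warnings for each flow. A short sketch for summarizing anything that did not import cleanly, assuming each error and warning object exposes name, type, and description keys (an assumption inferred from the field descriptions in the schema that follows):

import_response = datastage_service.get_migration(
    import_id=importId,
    project_id=config['PROJECT_ID']
).get_result()
for flow in import_response['entity'].get('import_data_flows', []):
    if flow.get('status') != 'completed':
        print(flow.get('name'), '->', flow.get('status'))
        for err in flow.get('errors', []):
            print('  error:', err.get('type'), err.get('description', ''))
        for warn in flow.get('warnings', []):
            print('  warning:', warn.get('type'), warn.get('description', ''))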
Response
Response object of an import request.
The import response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- import_data_flows
conflict resolution status.
Possible values: [flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]
Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [
px_job,server_job,connection,table_def]Examples:px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]Examples:px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missing_assets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [
in_progress,cancelled,queued,started,completed]Examples:in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rule definitions.
Total number of data quality specs.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
The import response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
Response object of an import request.
The import response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- import_data_flows
conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [
px_job,server_job,connection,table_def]Examples:px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]Examples:px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missing_assets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [
in_progress,cancelled,queued,started,completed]Examples:in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rule definitions.
Total number of data quality specs.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
The import response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Status of the import request cannot be found. This can occur if the given import_id is not valid, or if the import completed long ago and its status information is no longer available.
An error occurred. See response for more information.
Get project and cache it to reduce rate-limit issues.
Gets the project ID from the project name. This call should leverage ds-common-services to cache more information.
GET /v3/migration/project_info
ServiceCall<List<ProjectInfoResponseItem>> projectInfo(ProjectInfoOptions projectInfoOptions)
projectInfo(params)
project_info(
self,
project_name: str,
*,
is_space: Optional[bool] = None,
**kwargs,
) -> DetailedResponseRequest
Use the ProjectInfoOptions.Builder to create a ProjectInfoOptions object that contains the parameter values for the projectInfo method.
Query Parameters
Name of the project/space.
Whether the name refers to a space rather than a project.
The projectInfo options.
Name of the project/space.
Whether the name refers to a space rather than a project.
parameters
Name of the project/space.
Whether the name refers to a space rather than a project.
parameters
Name of the project/space.
Whether the name refers to a space rather than a project.
Response
Response type: List<ProjectInfoResponseItem>
Response type: ProjectInfoResponseItem[]
Response type: List[ProjectInfoResponseItem]
ID of the project.
Name of the project.
Status Code
Success.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
No Sample Response
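Since no sample response is published, the following minimal sketch shows one plausible way to call this endpoint from the Python client. The module path, the new_instance() construction from external configuration, and the id/name result keys are assumptions that mirror the Java and Node.js client examples and the response schema above; "my-project" is a placeholder name.
from datastage.datastage_v3 import DatastageV3  # assumed module path for the Python SDK

# Client options (URL, API key) are read from external configuration,
# for example a credentials.env file, as described under Authentication.
datastage_service = DatastageV3.new_instance()

# Look up the ID of a project by name; pass is_space=True to look up a space instead.
response = datastage_service.project_info(
    project_name='my-project',  # placeholder name
    is_space=False,
)

# The result is a list of ProjectInfoResponseItem objects; the 'id' and 'name'
# keys are assumed from the response schema above.
for item in response.get_result():
    print(item['id'], item['name'])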
Export flows with dependencies as a zip file
Exports the specified flows and their dependencies as a zip file.
POST /v3/migration/zip_exports
ServiceCall<InputStream> exportFlowsWithDependencies(ExportFlowsWithDependenciesOptions exportFlowsWithDependenciesOptions)
exportFlowsWithDependencies(params)
export_flows_with_dependencies(
self,
flows: List['FlowDependencyTree'],
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
remove_secrets: Optional[bool] = None,
include_dependencies: Optional[bool] = None,
id: Optional[List[str]] = None,
type: Optional[str] = None,
include_data_assets: Optional[bool] = None,
exclude_data_files: Optional[bool] = None,
asset_type_filter: Optional[List[str]] = None,
x_migration_enc_key: Optional[str] = None,
export_binaries: Optional[bool] = None,
**kwargs,
) -> DetailedResponseRequest
Use the ExportFlowsWithDependenciesOptions.Builder to create an ExportFlowsWithDependenciesOptions object that contains the parameter values for the exportFlowsWithDependencies method.
Custom Headers
The encryption key to encrypt credentials on export or to decrypt them on import
Query Parameters
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
catalog_id, project_id, or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
remove secrets from exported flows, default value is false.
include dependencies. If no dependencies are specified in the payload, all dependencies will be included.
The list of flow IDs to export.
Type of flow. The default flow type is "data_intg_flow". It is only used with 'id' parameter.
include data_assets. If set as true, all referenced data_assets will be included.
skip the actual data files for datastage dataset and fileset when exporting flows as zip.
Filter assets to be exported by asset type
Export binaries
flows and dependencies metadata
list of flows and their dependencies
The exportFlowsWithDependencies options.
list of flows and their dependencies.
- flows
list of flow dependencies.
- dependencies
dependency id.
dependency name.
dependency type.
flow id.
flow name.
Type of flow. The default flow type is "data_intg_flow".
Allowable values: [
data_intg_flow,sequence_job,job,data_intg_subflow,data_intg_test_case]
This option applies to sequence jobs only and controls whether the retrieved version should be volatile. If set to false, the last non-volatile version is retrieved; if set to true, the volatile version is retrieved.
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
Examples:bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
catalog_id, project_id, or space_id is required.
Examples:4c9adbb4-28ef-4a7d-b273-1cee0c38021f
remove secrets from exported flows, default value is false.
Examples:false
include dependencies. If no dependencies are specified in the payload, all dependencies will be included.
Examples:false
The list of flow IDs to export.
Type of flow. The default flow type is "data_intg_flow". It is only used with 'id' parameter.
include data_assets. If set as true, all referenced data_assets will be included.
Examples:false
skip the actual data files for datastage dataset and fileset when exporting flows as zip.
Examples:false
Filter assets to be exported by asset type.
The encryption key to encrypt credentials on export or to decrypt them on import.
Export binaries.
Examples:false
parameters
list of flows and their dependencies.
- flows
list of flow dependencies.
- dependencies
dependency id.
dependency name.
dependency type.
flow id.
flow name.
Type of flow. The default flow type is "data_intg_flow".
Allowable values: [
data_intg_flow,sequence_job,job,data_intg_subflow,data_intg_test_case]
This option applies to sequence jobs only and controls whether the retrieved version should be volatile. If set to false, the last non-volatile version is retrieved; if set to true, the volatile version is retrieved.
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
The ID of the space to use.
catalog_id, project_id, or space_id is required.
remove secrets from exported flows, default value is false.
include dependencies. If no dependencies are specified in the payload, all dependencies will be included.
The list of flow IDs to export.
Type of flow. The default flow type is "data_intg_flow". It is only used with 'id' parameter.
include data_assets. If set as true, all referenced data_assets will be included.
skip the actual data files for datastage dataset and fileset when exporting flows as zip.
Filter assets to be exported by asset type.
The encryption key to encrypt credentials on export or to decrypt them on import.
Export binaries.
parameters
list of flows and their dependencies.
- flows
list of flow dependencies.
- dependencies
dependency id.
dependency name.
dependency type.
flow id.
flow name.
Type of flow. The default flow type is "data_intg_flow".
Allowable values: [
data_intg_flow,sequence_job,job,data_intg_subflow,data_intg_test_case]
This option applies to sequence jobs only and controls whether the retrieved version should be volatile. If set to false, the last non-volatile version is retrieved; if set to true, the volatile version is retrieved.
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
The ID of the space to use.
catalog_id, project_id, or space_id is required.
remove secrets from exported flows, default value is false.
include dependencies. If no dependencies are specified in the payload, all dependencies will be included.
The list of flow IDs to export.
Type of flow. The default flow type is "data_intg_flow". It is only used with 'id' parameter.
include data_assets. If set as true, all referenced data_assets will be included.
skip the actual data files for datastage dataset and fileset when exporting flows as zip.
Filter assets to be exported by asset type.
The encryption key to encrypt credentials on export or to decrypt them on import.
Export binaries.
Response
Response type: InputStream
Response type: NodeJS.ReadableStream
Response type: BinaryIO
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
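No sample response is shown here either, so below is a minimal sketch, under the same assumed Python setup as the earlier example, of exporting one flow and its dependencies to a local zip file. The flow and project IDs are the placeholder examples from the parameter tables, and passing plain dictionaries in place of FlowDependencyTree model objects is an assumption.
from datastage.datastage_v3 import DatastageV3  # assumed module path

datastage_service = DatastageV3.new_instance()

# Ask for one flow by ID; include_dependencies pulls in connections,
# parameter sets, subflows, and other referenced assets.
response = datastage_service.export_flows_with_dependencies(
    flows=[{'id': 'ccfdbbfd-810d-4f0e-b0a9-228c328a0136',  # placeholder flow ID
            'type': 'data_intg_flow'}],
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',     # placeholder project ID
    include_dependencies=True,
    remove_secrets=True,
)

# The response body is a binary stream; write the zip archive to disk.
with open('exported-flows.zip', 'wb') as f:
    f.write(response.get_result().content)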
Create data flows from zip file
Creates data flows from the attached job export file. This is an asynchronous call: the API returns almost immediately, which does not necessarily mean that the import request has completed, only that it has been accepted. The status field of the import request is included in the import response object; a status of "completed", "in_progress", or "failed" indicates that the import request has completed, is in progress, or has failed, respectively. The job export file for an import request may contain one or more data flows. Unless the on_failure option is set to "stop", a completed import request may contain not only successfully imported data flows but also data flows that could not be imported.
POST /v3/migration/zip_imports
ServiceCall<ImportResponse> createFromZipMigration(CreateFromZipMigrationOptions createFromZipMigrationOptions)
createFromZipMigration(params)
create_from_zip_migration(
self,
body: BinaryIO,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
on_failure: Optional[str] = None,
conflict_resolution: Optional[str] = None,
file_name: Optional[str] = None,
enable_notification: Optional[bool] = None,
import_only: Optional[bool] = None,
include_dependencies: Optional[bool] = None,
asset_type: Optional[str] = None,
skip_dependencies: Optional[str] = None,
replace_mode: Optional[str] = None,
x_migration_enc_key: Optional[str] = None,
import_binaries: Optional[bool] = None,
run_job_by_name: Optional[bool] = None,
enable_optimized_pipeline: Optional[bool] = None,
enable_inline_pipeline: Optional[bool] = None,
optimized_job_name_suffix: Optional[str] = None,
is_define_parameter_toggled: Optional[bool] = None,
override_dataset: Optional[bool] = None,
**kwargs,
) -> DetailedResponseRequest
Use the CreateFromZipMigrationOptions.Builder to create a CreateFromZipMigrationOptions object that contains the parameter values for the createFromZipMigration method.
Custom Headers
The encryption key to encrypt credentials on export or to decrypt them on import
Query Parameters
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
catalog_id, project_id, or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Action when the first import failure occurs. The default action is "continue" which will continue importing the remaining data flows. The "stop" action will stop the import operation upon the first error.
Allowable values: [
continue,stop]
Example:
continue
Resolution when the data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job is not created and an error is raised.
Allowable values: [
skip,rename,replace,rename_replace]
Example:
rename
Name of the input file, if it exists.
Example:
myFlows.isx
enable/disable notification. Default value is true.
Skip flow compilation.
If set to false, skip dependencies and only import the flow. If not specified or set to true, import the flow and its dependencies.
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Example:
data_intg_flow,parameter_set
Skip dependencies, separated by commas (','). This is only used with the replace option. If skip dependencies are specified, assets of these types will be skipped if they already exist.
Example:
connection,parameter_set,subflow
This parameter takes effect when conflict_resolution is set to replace or skip. soft: merge the parameter set, adding new parameters only and keeping the old value sets. hard: replace all.
Allowable values: [
soft,hard]
Example:
hard
Import binaries
run the datastage/pipeline job by job name, default is by id.
Example:
true
Enable optimized pipeline. Default value is false.
enable inline mode for the pipeline compilation.
Optimized sequence job name suffix; the project default setting takes effect if not specified.
Prompt for runtime parameter settings before running a flow.
This parameter takes effect when conflict_resolution is set to replace for data_set and file_set. By default it is false: importing stops if the header file is a common path and already exists in the cluster; otherwise, it is forcibly overridden.
The zip file to import. The maximum file size is 1GB.
The createFromZipMigration options.
The zip file to import. The maximum file size is 1GB.
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
Examples:bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
catalog_id, project_id, or space_id is required.
Examples:4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Action when the first import failure occurs. The default action is "continue" which will continue importing the remaining data flows. The "stop" action will stop the import operation upon the first error.
Allowable values: [
continue,stop]
Examples:continue
Resolution when the data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job is not created and an error is raised.
Allowable values: [
skip,rename,replace,rename_replace]
Examples:rename
Name of the input file, if it exists.
Examples:myFlows.isx
enable/disable notification. Default value is true.
Examples:false
Skip flow compilation.
Examples:false
If set to false, skip dependencies and only import the flow. If not specified or set to true, import the flow and its dependencies.
Examples:false
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Examples:data_intg_flow,parameter_set
Skip dependencies, separated by commas (','). This is only used with the replace option. If skip dependencies are specified, assets of these types will be skipped if they already exist.
Examples:connection,parameter_set,subflow
This parameter takes effect when conflict_resolution is set to replace or skip. soft: merge the parameter set, adding new parameters only and keeping the old value sets. hard: replace all.
Allowable values: [
soft,hard]
Examples:hard
The encryption key to encrypt credentials on export or to decrypt them on import.
Import binaries.
Examples:false
run the datastage/pipeline job by job name, default is by id.
Examples:true
Enable optimized pipeline. Default value is false.
Examples:false
enable inline mode for the pipeline compilation.
Examples:false
Optimized sequence job name suffix; the project default setting takes effect if not specified.
Prompt for runtime parameter settings before running a flow.
Examples:false
This parameter takes effect when conflict_resolution is set to replace for data_set and file_set. By default it is false: importing stops if the header file is a common path and already exists in the cluster; otherwise, it is forcibly overridden.
parameters
The zip file to import. The maximum file size is 1GB.
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
The ID of the space to use.
catalog_id, project_id, or space_id is required.
Action when the first import failure occurs. The default action is "continue" which will continue importing the remaining data flows. The "stop" action will stop the import operation upon the first error.
Allowable values: [
continue,stop]
Resolution when the data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job is not created and an error is raised.
Allowable values: [
skip,rename,replace,rename_replace]
Name of the input file, if it exists.
enable/disable notification. Default value is true.
Skip flow compilation.
If set to false, skip dependencies and only import the flow. If not specified or set to true, import the flow and its dependencies.
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Skip dependencies, separated by commas (','). This is only used with the replace option. If skip dependencies are specified, assets of these types will be skipped if they already exist.
This parameter takes effect when conflict_resolution is set to replace or skip. soft: merge the parameter set, adding new parameters only and keeping the old value sets. hard: replace all.
Allowable values: [
soft,hard]
The encryption key to encrypt credentials on export or to decrypt them on import.
Import binaries.
run the datastage/pipeline job by job name, default is by id.
Enable optimized pipeline. Default value is false.
enable inline mode for the pipeline compilation.
Optimized sequence job name suffix; the project default setting takes effect if not specified.
Prompt for runtime parameter settings before running a flow.
This parameter takes effect when conflict_resolution is set to replace for data_set and file_set. By default it is false: importing stops if the header file is a common path and already exists in the cluster; otherwise, it is forcibly overridden.
parameters
The zip file to import. The maximum file size is 1GB.
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
The ID of the space to use.
catalog_id, project_id, or space_id is required.
Action when the first import failure occurs. The default action is "continue" which will continue importing the remaining data flows. The "stop" action will stop the import operation upon the first error.
Allowable values: [
continue,stop]
Resolution when the data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job is not created and an error is raised.
Allowable values: [
skip,rename,replace,rename_replace]
Name of the input file, if it exists.
enable/disable notification. Default value is true.
Skip flow compilation.
If set to false, skip dependencies and only import the flow. If not specified or set to true, import the flow and its dependencies.
Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.
Skip dependencies, separated by commas (','). This is only used with the replace option. If skip dependencies are specified, assets of these types will be skipped if they already exist.
This parameter takes effect when conflict_resolution is set to replace or skip. soft: merge the parameter set, adding new parameters only and keeping the old value sets. hard: replace all.
Allowable values: [
soft,hard]
The encryption key to encrypt credentials on export or to decrypt them on import.
Import binaries.
run the datastage/pipeline job by job name, default is by id.
Enable optimized pipeline. Default value is false.
enable inline mode for the pipeline compilation.
Optimized sequence job name suffix; the project default setting takes effect if not specified.
Prompt for runtime parameter settings before running a flow.
This parameter takes effect when conflict_resolution is set to replace for data_set and file_set. By default it is false: importing stops if the header file is a common path and already exists in the cluster; otherwise, it is forcibly overridden.
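To tie these parameters together, here is a minimal sketch, under the same assumed Python setup as the earlier examples, of submitting a zip import and reading the asynchronous status from the response. The file name and project ID are placeholders, and the entity/metadata dictionary keys (including the status URL field name) are assumptions based on the ImportResponse schema described below.
from datastage.datastage_v3 import DatastageV3  # assumed module path

datastage_service = DatastageV3.new_instance()

# Submit the import; the call returns once the request is accepted,
# not when the import itself has finished.
with open('myFlows.zip', 'rb') as zip_file:
    response = datastage_service.create_from_zip_migration(
        body=zip_file,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder project ID
        on_failure='continue',
        conflict_resolution='rename',
        file_name='myFlows.zip',
    )

import_response = response.get_result()

# entity.status starts at "in_progress" and moves to "completed" or "failed";
# the URL in the metadata (field name assumed) can be polled for the final result.
print(import_response['entity']['status'])
print(import_response['metadata']['url'])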
Response
Response object of an import request.
The import response entity.
The import response metadata.
Response object of an import request.
The import response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- importDataFlows
conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [
px_job,server_job,connection,table_def]Examples:px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]
Examples:px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missingAssets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [
in_progress,cancelled,queued,started,completed]
Examples:in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rule definitions.
Total number of data quality specs.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
Import the response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
Status Code
The requested import operation has been accepted. However, the import operation might not yet be complete. The status field in the import response object describes the current status of the import. The response "Location" header provides a convenient URL for retrieving the status with a GET request.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
Cancel a previous import request
Cancel a previous import request.
DELETE /v3/migration/zip_imports/{import_id}
ServiceCall<Void> deleteZipMigration(DeleteZipMigrationOptions deleteZipMigrationOptions)
deleteZipMigration(params)
delete_zip_migration(
self,
import_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the DeleteZipMigrationOptions.Builder to create a DeleteZipMigrationOptions object that contains the parameter values for the deleteZipMigration method.
Path Parameters
Unique ID of the import request.
Example:
cc6dbbfd-810d-4f0e-b0a9-228c328aff29
Query Parameters
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
catalog_id or project_id or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The deleteZipMigration options.
Unique ID of the import request.
Examples:cc6dbbfd-810d-4f0e-b0a9-228c328aff29
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
Examples:bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
catalog_id or project_id or space_id is required.
Examples:4c9adbb4-28ef-4a7d-b273-1cee0c38021f
parameters
Unique ID of the import request.
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
The ID of the space to use.
catalog_id or project_id or space_id is required.
Response
Status Code
The import cancellation request was accepted.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Status of the import request cannot be found. This can occur when the given import_id is not valid, or when the import completed long ago and its status information is no longer available.
An error occurred. See response for more information.
No Sample Response
Get the status of a previous import request
Gets the status of an import request. The status field in the response object indicates if the given import is completed, in progress, or failed. Detailed status information about each imported data flow is also contained in the response object.
GET /v3/migration/zip_imports/{import_id}
ServiceCall<ImportResponse> getZipMigration(GetZipMigrationOptions getZipMigrationOptions)
getZipMigration(params)
get_zip_migration(
self,
import_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
format: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GetZipMigrationOptions.Builder to create a GetZipMigrationOptions object that contains the parameter values for the getZipMigration method.
Path Parameters
Unique ID of the import request.
Query Parameters
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
Example:
bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
catalog_id or project_id or space_id is required.
Example:
4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Format of the ISX import report.
Allowable values: [
json,csv]
Example:
json
The getZipMigration options.
Unique ID of the import request.
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
Examples:bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use.
catalog_id or project_id or space_id is required.
Examples:4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Format of the ISX import report.
Allowable values: [
json,csv]
Examples:json
parameters
Unique ID of the import request.
The ID of the catalog to use.
catalog_id or project_id is required.
The ID of the project to use.
project_id or catalog_id is required.
The ID of the space to use.
catalog_id or project_id or space_id is required.
Format of the ISX import report.
Allowable values: [
json,csv]
Response
Response object of an import request.
Import the response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- importDataFlows
conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]
Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [
px_job,server_job,connection,table_def]
Examples:px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]
Examples:completed
type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]
Examples:px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missingAssets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [
in_progress,cancelled,queued,started,completed]
Examples:in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rule definitions.
Total number of data quality specs.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
Import the response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
Response object of an import request.
Import the response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- import_data_flows
conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array report all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [
px_job,server_job,connection,table_def]Examples:px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]Examples:px_job
The warnings array report all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missing_assets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import opearton started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [
in_progress,cancelled,queued,started,completed]Examples:in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rules.
Total number of data quality spec.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
Import the response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
Response object of an import request.
Import the response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The duration of import processing time.
The timestamp when the import opearton completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- import_data_flows
conflict resolution status.
Possible values: [
flow_replacement_succeeded,flow_replacement_failed,import_flow_renamed,import_flow_skipped,connection_replacement_succeeded,connection_replacement_failed,connection_renamed,connection_skipped,parameter_set_replacement_succeeded,parameter_set_replacement_failed,parameter_set_renamed,parameter_set_skipped,table_definition_replacement_succeeded,table_definition_replacement_failed,table_definition_renamed,table_definition_skipped,sequence_job_replacement_succeeded,sequence_job_replacement_failed,sequence_job_renamed,sequence_job_skipped,subflow_replacement_succeeded,subflow_replacement_failed,subflow_renamed,subflow_skipped,message_handler_replacement_succeeded,message_handler_replacement_failed,message_handler_renamed,message_handler_skipped,build_stage_replacement_succeeded,build_stage_replacement_failed,build_stage_renamed,build_stage_skipped,wrapped_stage_replacement_succeeded,wrapped_stage_replacement_failed,wrapped_stage_renamed,wrapped_stage_skipped,custom_stage_replacement_succeeded,custom_stage_replacement_failed,custom_stage_renamed,custom_stage_skipped,data_quality_rule_replacement_succeeded,data_quality_rule_replacement_failed,data_quality_rule_renamed,data_quality_rule_skipped,data_quality_rule_definition_replacement_succeeded,data_quality_rule_definition_failed,data_quality_rule_definition_renamed,data_quality_rule_definition_skipped,cff_schema_replacement_succeeded,cff_schema_replacement_failed,cff_schema_renamed,cff_schema_skipped,test_case_replacement_succeeded,test_case_replacement_failed,test_case_renamed,test_case_skipped]Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array report all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type,unsupported_feature,empty_json,isx_conversion_error,model_conversion_error,invalid_input_type,invalid_json_format,json_conversion_error,flow_deletion_error,flow_creation_error,flow_response_parsing_error,auth_token_error,flow_compilation_error,empty_stage_list,empty_stage_node,missing_stage_type_class_name,dummy_stage,missing_stage_type,missing_repos_id,stage_conversion_error,unimplemented_stage_type,job_creation_error,job_run_error,flow_search_error,unsupported_job_type,internal_error,connection_creation_error,flow_rename_error,duplicate_job_error,parameter_set_creation_error,distributed_lock_error,duplicate_object_error,unbound_object_reference,table_def_creation_error,connection_creation_api_error,connection_patch_api_error,connection_deletion_api_error,sequence_job_creation_error,unsupported_stage_type_in_subflow,job_update_error,job_deletion_error,sequence_job_deletion_error,parameter_set_deletion_error,invalid_property_name,invalid_property_value,unsupported_subflow_with_multiple_parents,message_handler_creation_error,message_handler_deletion_error,unsupported_server_container,build_stage_creation_error,build_stage_deletion_error,build_stage_generation_error,data_quality_spec_creation_error,xml_schema_library_creation_error,wrapped_stage_creation_error,wrapped_stage_deletion_error,wrapped_stage_generation_error,insufficient_memory_error,out_of_memory_error,routine_conversion_error,routine_creation_error,routine_deletion_error,custom_stage_creation_error,custom_stage_deletion_error,invalid_file_type,sequence_job_exceeding_max_capacity_error,flow_update_error,parameter_set_update_error,invalid_job_file_type,incorrectset_xmlinputpx,missing_column_definition,cp4d_only_stage_type,unsupported_sybase_lookup,job_get_error,flow_export_error,odm_library_creation_error,generated_parameter_set_error,data_quality_rule_creation_error,xml_schema_library_empty_error,data_asset_creation_error,data_asset_deletion_error,test_case_creation_error,test_case_deletion_error,custom_function_creation_error,sequence_job_update_error,cp4d_only_stage_type_in_subflow,data_set_creation_error,file_set_creation_error,missing_job_referred_flow,custom_stage_name_conflict_error,data_rule_definition_creation_error,CFF_SCHEMA_CREATION_ERROR,build_op_registration_error]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(deprecated) original type of the job or data flow in the import file.
Possible values: [
px_job,server_job,connection,table_def]Examples:px_job
Name of the imported data flow.
Examples:cancel-reservation-job
Name of the data flow to be imported.
Examples:cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136data import status.
Possible values: [
completed,in_progress,failed,skipped,deprecated,unsupported,flow_conversion_failed,flow_creation_failed,flow_compilation_failed,job_creation_failed,job_run_failed,connection_conversion_failed,connection_creation_failed,parameter_set_conversion_failed,parameter_set_creation_failed,data_quality_spec_conversion_failed,data_quality_spec_creation_failed,table_definition_conversion_failed,table_definition_creation_failed,job_update_failed,job_deletion_failed,sequence_job_conversion_failed,sequence_job_creation_failed,message_handler_conversion_failed,message_handler_creation_failed,build_stage_conversion_failed,build_stage_creation_failed,build_stage_generation_failed,xml_schema_library_creation_failed,wrapped_stage_conversion_failed,wrapped_stage_creation_failed,wrapped_stage_generation_failed,routine_conversion_failed,routine_creation_failed,custom_stage_conversion_failed,custom_stage_creation_failed,flow_update_failed,parameter_set_update_failed,completed_with_error,data_quality_rule_creation_failed,data_asset_creation_failed,custom_function_creation_failed,data_set_creation_failed,file_set_creation_failed,data_rule_definition_creation_failed,cff_schema_creation_failed,cff_schema_conversion_failed,test_case_creation_failed,test_case_conversion_failed]Examples:completed
type of the job or data connection in the import file.
Possible values: [
px_job,server_job,connection,table_def,parameter_set,routine,subflow,sequence_job,data_quality_spec,server_container,message_handler,build_stage,wrapped_stage,custom_stage,xml_schema_library,custom_stage_library,function_library,data_intg_parallel_function,data_intg_java_library,odm_ruleset,job,data_quality_rule,data_quality_definition,data_asset,data_intg_data_set,data_intg_file_set,odbc_configuration,data_intg_cff_schema,data_intg_test_case,data_element,ims_database,ims_viewset,mainframe_job,machine_profile,mainframe_routine,parallel_routine,transform,jdbc_driver]Examples:px_job
The warnings array report all the warnings in the data flow import operation.
- warnings
additional warning text.
warning object name.
warning type.
Possible values: [
unreleased_stage_type,unreleased_feature,credentials_file_warning,transformer_trigger_unsupported,transformer_buildtab_unsupported,unsupported_secure_gateway,placeholder_connection_parameters,description_truncated,empty_stage_list,missing_parameter_set,missing_subflow,missing_table_definition,missing_data_connection,duplicate_job,unsupported_connection,unsupported_activity_trigger,unsupported_expression,unsupported_function,unsupported_operator,unsupported_variable,unsupported_activity_variable,default_connection_parameter,unmatched_for_var_warning,unsupported_routine_statement,unsupported_before_after_routine,attachment_creation_failed,unsupported_job_control,unsupported_drs_rdbms_parameter,dataconnection_conflict,placeholder_parameter_set,missing_routine,unsupported_stored_procedure_transform,unattached_links,unreleased_datasource_in_storedprocedure,unsupported_parquet_compression_type_lzo,unsupported_stppx_dbvendor_parameter,unsupported_sappack_connector_property,undefined_variable,unsupported_xmloutputpx_features,xml_transform_missed_schema,unsupported_stage_type,unbound_reference,missing_stage_type,unsupported_dependency,missing_data_asset,unsupported_svtransformer_before_after_subroutine,deprecated_load_control_properties,missing_parameter,unsupported_stage_type_in_subflow,unsupported_xmlinputpx_features,export_request_in_progress,unsupported_stppx_for_db2i_db2z,deprecated_connection_property,hard_coded_file_path,unsupported_property_value,unsupported_property,unsupported_dcn_data_connection,xmlstage_unsupported_function,attachment_update_failed,unchanged_parameter_set,connection_parameter_set_created,connection_parameter_set_patched,skipped_duplicate_connection,unchanged_connection,unsupported_xpath,parameters_in_existing_parameter_set,different_parameter_type_in_parameter_set,unsupported_dataquality_rename,unsupported_data_connection_secure_gateway,missing_default_parameter_value,missing_referenced_connection,unsupported_routine_activity,unsupported_execution_action_validate,unsupported_execution_action_reset,runtime_jis_generation_failed,missing_job,missing_dataset,missing_fileset,identical_parameter_set,missing_runtime_environment,missing_match_spec,unsupported_before_after_subroutine,deprecated_stage_type,missing_storage_volume,unsupported_websphereMQ_server_mode,unsupported_parameter_name,unsupported_property_on_cloud,invalid_job_schedule,missing_cff_schema,missing_data_intg_flow,job_import_skipped,unsupported_spparam_type,unsupported_multiple_reject_links,binaries_import_failed,missing_binaries,deprecated_job_environment_input,unclear_numeric_type,duplicate_parameter_name,user_status_migrated_to_global_user_variable,user_status_migrated_to_dssetuserstatus_function,subroutine_unsupported_on_cloud]
import error.
import type.
list of all missing assets.
- missing_assets
name of asset.
type of asset.
Name of the import request.
Examples:seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import/Export status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import opearton started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import status.
Possible values: [
in_progress,cancelled,queued,started,completed]Examples:in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of build stages.
Total number of cff schemas.
Total number of connection creation failures.
Total number of data connections.
Total number of custom stages.
Total number of data assets.
Total number of data elements.
Total number of data quality rules.
Total number of data quality rules.
Total number of data quality spec.
Total number of datastage datasets.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of datastage filesets.
Total number of data flows that failed to compile.
Total number of flow creation failures.
Total number of function libraries.
Total number of data flows successfully imported.
Total number of IMS databases.
Total number of IMS viewsets.
Total number of java libraries.
Total number of job creation failures.
Total number of machine profiles.
Total number of mainframe jobs.
Total number of mainframe routines.
Total number of message handlers.
Total number of ilogjrule/odm libraries.
Total number of parallel jobs.
Total number of parallel routines.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of routines.
Total number of sequence job creation failures.
Total number of sequence jobs.
Total number of server containers.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of test cases.
Total number of data flows to be imported.
Total number of transforms.
Total number of unsupported resources in the import file.
Total number of wrapped stages.
Total number of xml schema libraries.
Import the response metadata.
- metadata
Catalog id.
Catalog name.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples:user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
import file name.
Project id.
Project name.
Space id.
Space name.
The URL which can be used to get the status of the import request right after it is submitted.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Status of the import request cannot be found. This can occur if the given import_id is not valid, or if the import completed long ago and its status information is no longer available.
An error occurred. See response for more information.
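Example of polling the import status until it completes. This is a minimal sketch: the import-status method name (get_migration), the Python import path, and the tally field names are assumptions inferred from this reference rather than confirmed API details; adjust them to match your SDK version.
import time
from datastage.datastage_v3 import DatastageV3  # import path assumed from the Python SDK

service = DatastageV3.new_instance()  # reads the DATASTAGE_* variables from external configuration

import_id = "ccfdbbfd-810d-4f0e-b0a9-228c328a0136"    # hypothetical import id
project_id = "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"   # hypothetical project id

# Poll until the import leaves the queued/started/in_progress states.
while True:
    status_response = service.get_migration(import_id=import_id, project_id=project_id).get_result()
    status = status_response["entity"]["status"]
    if status not in ("queued", "started", "in_progress"):
        break
    time.sleep(10)

# Sanity-check the documented identity:
# total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
tally = status_response["entity"]["tally"]  # field names assumed from the tally descriptions above
parts = ("imported", "skipped", "failed", "deprecated", "unsupported", "pending")
assert tally["total"] == sum(tally.get(part, 0) for part in parts)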
Analyze a Flow for Unit Testing
Analyze a DataStage flow to determine the stages and links that need to be stubbed during unit testing.
GET /v3/assets/test_cases/{flow_id}
ServiceCall<TestCaseAnalysis> testCaseAnalysis(TestCaseAnalysisOptions testCaseAnalysisOptions)
testCaseAnalysis(params)
test_case_analysis(
    self,
    flow_id: str,
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse
Request
Use the TestCaseAnalysisOptions.Builder to create a TestCaseAnalysisOptions object that contains the parameter values for the testCaseAnalysis method.
Path Parameters
The id of the flow to analyze
Query Parameters
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The testCaseAnalysis options.
The id of the flow to analyze.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
parameters
The id of the flow to analyze.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
Response
Possible values: number of items ≥ 1
- parameters
Possible values: number of items ≥ 1
- destinations
- fields
- sparseLookup
- sources
- fields
- sparseLookup
Possible values: number of items ≥ 1
- destinations
- fields
- sparse_lookup
- sources
- fields
- sparse_lookup
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
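Since no sample response is shown, here is a minimal usage sketch of the test case analysis call, using the Python method documented above. The client import path is an assumption, and all IDs are hypothetical placeholders; the response keys are assumed from the response schema above.
from datastage.datastage_v3 import DatastageV3  # import path assumed from the Python SDK

service = DatastageV3.new_instance()  # configured via DATASTAGE_* environment variables

# Analyze a flow to find the stages and links to stub during unit testing.
analysis = service.test_case_analysis(
    flow_id="ccfdbbfd-810d-4f0e-b0a9-228c328a0136",      # hypothetical flow id
    project_id="bd0dbbfd-810d-4f0e-b0a9-228c328a8e23",   # hypothetical project id
).get_result()

# Top-level keys assumed from the response schema above.
print(analysis.get("parameters"))
print(analysis.get("sources"), analysis.get("destinations"))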
Create Test Case specification from Flow Analysis
Create Test Case specification from Flow Analysis.
POST /v3/assets/test_cases/{flow_id}
ServiceCall<Void> createTestCaseFromAnalysis(CreateTestCaseFromAnalysisOptions createTestCaseFromAnalysisOptions)
createTestCaseFromAnalysis(params)
create_test_case_from_analysis(
    self,
    flow_id: str,
    analysis: 'TestCaseAnalysis',
    name: str,
    *,
    description: Optional[str] = None,
    tags: Optional[List[str]] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    directory_asset_id: Optional[str] = None,
    **kwargs,
) -> DetailedResponse
Request
Use the CreateTestCaseFromAnalysisOptions.Builder to create a CreateTestCaseFromAnalysisOptions object that contains the parameter values for the createTestCaseFromAnalysis method.
Path Parameters
The id of the flow to analyze
Query Parameters
The ID of the project to use. catalog_id, space_id, or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset ID to create the asset in or move to.
Flow Analysis
asset name
asset description
List of tags to identify the asset
The createTestCaseFromAnalysis options.
The id of the flow to analyze.
- analysis
Possible values: number of items ≥ 1
Default: []
- destinations
- fields
- sparseLookup
Default: {}
Default: []
- sources
- fields
- sparseLookup
asset name.
asset description.
List of tags to identify the asset.
The ID of the project to use. catalog_id, space_id, or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
parameters
The id of the flow to analyze.
- analysis
Possible values: number of items ≥ 1
Default: []
- destinations
- fields
- sparseLookup
Default: {}
Default: []
- sources
- fields
- sparseLookup
asset name.
asset description.
List of tags to identify the asset.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
parameters
The id of the flow to analyze.
- analysis
Possible values: number of items ≥ 1
Default: []
- destinations
- fields
- sparse_lookup
Default: {}
Default: []
- sources
- fields
- sparse_lookup
asset name.
asset description.
List of tags to identify the asset.
The ID of the project to use. catalog_id, space_id, or project_id is required.
The ID of the space to use. catalog_id, space_id, or project_id is required.
The directory asset id to create the asset in or move to.
Response
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
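A minimal sketch that chains the two calls above: analyze the flow, then create a test case specification from the analysis. The import path and IDs are assumptions/placeholders; depending on your SDK version you may need to convert the analysis dict into a TestCaseAnalysis model before passing it.
from datastage.datastage_v3 import DatastageV3  # import path assumed from the Python SDK

service = DatastageV3.new_instance()

flow_id = "ccfdbbfd-810d-4f0e-b0a9-228c328a0136"      # hypothetical flow id
project_id = "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"   # hypothetical project id

# Analyze the flow, then turn the analysis into a test case specification.
analysis = service.test_case_analysis(flow_id=flow_id, project_id=project_id).get_result()

service.create_test_case_from_analysis(
    flow_id=flow_id,
    analysis=analysis,
    name="my-flow-unit-test",                    # hypothetical asset name
    description="Generated from flow analysis",  # hypothetical description
    tags=["unit-test"],
    project_id=project_id,
)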
Delete an asset and its attachments
Delete an asset and its attachments.
DELETE /v3/assets/{asset_id}
ServiceCall<Void> deleteAsset(DeleteAssetOptions deleteAssetOptions)
deleteAsset(params)
delete_asset(
    self,
    asset_id: str,
    *,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    purge_test_data: Optional[bool] = None,
    **kwargs,
) -> DetailedResponse
Request
Use the DeleteAssetOptions.Builder to create a DeleteAssetOptions object that contains the parameter values for the deleteAsset method.
Path Parameters
The ID of the asset
Query Parameters
The ID of the project to use. space_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. space_id or project_id is required.
Only takes effect when deleting a data_intg_test_case asset. By default it is false, which means data files are kept.
The deleteAsset options.
The ID of the asset.
The ID of the project to use. space_id or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. space_id or project_id is required.
Only takes effect when deleting a data_intg_test_case asset. By default it is false, which means data files are kept.
parameters
The ID of the asset.
The ID of the project to use. space_id or project_id is required.
The ID of the space to use. space_id or project_id is required.
Only takes effect when deleting a data_intg_test_case asset. By default it is false, which means data files are kept.
Response
Status Code
The requested operation completed successfully.
The requested operation is in progress.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An unexpected error occurred. See response for more information.
No Sample Response
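A minimal deletion sketch using the Python method documented above; the asset and project IDs are hypothetical placeholders. Note that purge_test_data only applies when the asset being deleted is a data_intg_test_case.
from datastage.datastage_v3 import DatastageV3  # import path assumed from the Python SDK

service = DatastageV3.new_instance()

# Delete a test case asset and purge its data files rather than keeping them.
service.delete_asset(
    asset_id="ccfaaafd-810d-4f0e-b0a9-228c328a0136",     # hypothetical asset id
    project_id="bd0dbbfd-810d-4f0e-b0a9-228c328a8e23",   # hypothetical project id
    purge_test_data=True,
)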
Generate OPD-code for DataStage buildop
Generate the runtime assets for a DataStage buildop in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.
POST /v3/ds_codegen/generateBuildOp/{data_intg_bldop_id}
ServiceCall<GenerateBuildOpResponse> generateDatastageBuildop(GenerateDatastageBuildopOptions generateDatastageBuildopOptions)
generateDatastageBuildop(params)
generate_datastage_buildop(
    self,
    data_intg_bldop_id: str,
    *,
    build: Optional['BuildopBuild'] = None,
    creator: Optional['BuildopCreator'] = None,
    directory_asset: Optional[dict] = None,
    general: Optional['BuildopGeneral'] = None,
    properties: Optional[List['BuildopPropertiesItem']] = None,
    schemas: Optional[List[dict]] = None,
    type: Optional[str] = None,
    ui_data: Optional[dict] = None,
    wrapped: Optional['BuildopWrapped'] = None,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    runtime_type: Optional[str] = None,
    enable_async_compile: Optional[bool] = None,
    **kwargs,
) -> DetailedResponse
Request
Use the GenerateDatastageBuildopOptions.Builder to create a GenerateDatastageBuildopOptions object that contains the parameter values for the generateDatastageBuildop method.
Path Parameters
The DataStage BuildOp-Asset-ID to use.
Query Parameters
The ID of the catalog to use. catalog_id or project_id or space_id is required.
The ID of the project to use. catalog_id or project_id or space_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from within the pipeline flow if available; otherwise the default of dspxosh is used.
Whether to compile the flow asynchronously. When set to true, the compile request is queued and then compiled, and the response is returned immediately as "Compiling". For the compile status, call the get compile status API.
BuildOp json to be attached.
Build info
- build
null
- interfaces
Input port
- input
Alias
Example: alias
Auto read
Example: true
inputID
Example: inpGUID
Name of input port
Example: input_port
Runtime column propagation
Table name
Example: table_name
Inputs-Order
Example: GUID|GUID|...
Output port
- output
Alias
Example: alias
Auto write
Example: true
outputID
Example: outpGUID
Name of output port
Example: output_port
Runtime column propagation
Table name
Example: table_name
Outputs-Order
Example: GUID|GUID|...
input-output column mapping
- transfer
Auto transfer
Example: true
Input
Example: input1
Output
Example: output1
Separate
Operator business logic
- logic
Definitions
Example: variable-definitions
Logic for each record
Example: logic-for-each-record
Post-loop logic
Example: post-loop-logic
Pre-loop logic
Example: pre-loop-logic
Creator information.
- creator
Author name
Example: IBM
Vendor name
Example: IBM Corporation
Version
Example: 1.0
directory information.
General information
- general
Class name
Example: TestBld01
Command name
Example: sort
Exec Mode
Example: default_par
Node type name
Example: nodename
Operator name
Example: OpBld01
Wrapped name
Example: SortValues
List of stage properties
- properties
Category
Example: Category-string
Conditions
Example: Condition-string
Conversion
Example: Value-string
Data type
Example: Integer
Default value
Example: 9
Description
Example: DESCR
hidden
Example: false
list values
Example: list-values
Parents
Example: Parents-string
Prompt
Example: prompt
Name of property
Example: stagePropName
Repeats
Example: false
Required
Example: false
Template
Example: Template-string
use Quoting
Example: false
Array of data record schemas used in the buildop.
Examples: [ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
The operator type.
Example: buildop
UI data.
Examples: [ { "description": "This is develop description", "show_on_palette": false } ]
Wrapped info
- wrapped
Environment information
- environment
Exit codes
- exit_codes
All exit codes successful
Example: true
Failure codes
Success codes
Each element is JSONObject containing name and value
- name_value
Environment Name
Example: name1
Environment Value
Example: value1
Interfaces
- interfaces
Input link
- input
Command line argument or Environment variable name
Example: arg1
File descriptor
Example: stdin
inputID
Example: inpGUID
Is command line argument or environment variable
Example: true
Name of input link
Example: input_link
Named pipe
Example: test_pipe
Table name
Example: table_name
Use stream or not
Example: true
Inputs-Order
Example: GUID|GUID|...
Output link
- output
Command line argument or Environment variable name
Example: arg1
File descriptor
Example: stdin
outputID
Example: outpGUID
Is command line argument or environment variable
Example: true
Name of output link
Example: output_link
Named pipe
Example: test_pipe
Table name
Example: table_name
Use stream or not
Example: true
Outputs-Order
Example: GUID|GUID|...
The generateDatastageBuildop options.
The DataStage BuildOp-Asset-ID to use.
Build info.
- build
null.
- interfaces
Input port.
- input
Alias.
Examples: alias
Auto read.
Examples: true
inputID.
Examples: inpGUID
Name of input port.
Examples: input_port
Runtime column propagation.
Examples: false
Table name.
Examples: table_name
Inputs-Order.
Examples: GUID|GUID|...
Output port.
- output
Alias.
Examples: alias
Auto write.
Examples: true
outputID.
Examples: outpGUID
Name of output port.
Examples: output_port
Runtime column propagation.
Examples: false
Table name.
Examples: table_name
Outputs-Order.
Examples: GUID|GUID|...
input-output column mapping.
- transfer
Auto transfer.
Examples: true
Input.
Examples: input1
Output.
Examples: output1
Separate.
Examples: false
Operator business logic.
- logic
Definitions.
Examples: variable-definitions
Logic for each record.
Examples: logic-for-each-record
Post-loop logic.
Examples: post-loop-logic
Pre-loop logic.
Examples: pre-loop-logic
Creator information.
- creator
Author name.
Examples: IBM
Vendor name.
Examples: IBM Corporation
Version.
Examples: 1.0
directory information.
General information.
- general
Class name.
Examples: TestBld01
Command name.
Examples: sort
Exec Mode.
Examples: default_par
Node type name.
Examples: nodename
Operator name.
Examples: OpBld01
Wrapped name.
Examples: SortValues
List of stage properties.
- xProperties
Category.
Examples: Category-string
Conditions.
Examples: Condition-string
Conversion.
Examples: Value-string
Data type.
Examples: Integer
Default value.
Examples: 9
Description.
Examples: DESCR
hidden.
Examples: false
list values.
Examples: list-values
Parents.
Examples: Parents-string
Prompt.
Examples: prompt
Name of property.
Examples: stagePropName
Repeats.
Examples: false
Required.
Examples: false
Template.
Examples: Template-string
use Quoting.
Examples: false
Array of data record schemas used in the buildop.
Examples: [ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
The operator type.
Examples: buildop
UI data.
Examples: [ { "description": "This is develop description", "show_on_palette": false } ]
Wrapped info.
- wrapped
Environment information.
- environment
Exit codes.
- exitCodes
All exit codes successful.
Examples: true
Each element is JSONObject containing name and value.
- nameValue
Environment Name.
Examples: name1
Environment Value.
Examples: value1
Interfaces.
- interfaces
Input link.
- input
Command line argument or Environment variable name.
Examples: arg1
File descriptor.
Examples: stdin
inputID.
Examples: inpGUID
Is command line argument or environment variable.
Examples: true
Name of input link.
Examples: input_link
Named pipe.
Examples: test_pipe
Table name.
Examples: table_name
Use stream or not.
Examples: true
Inputs-Order.
Examples: GUID|GUID|...
Output link.
- output
Command line argument or Environment variable name.
Examples: arg1
File descriptor.
Examples: stdin
outputID.
Examples: outpGUID
Is command line argument or environment variable.
Examples: true
Name of output link.
Examples: output_link
Named pipe.
Examples: test_pipe
Table name.
Examples: table_name
Use stream or not.
Examples: true
Outputs-Order.
Examples: GUID|GUID|...
The ID of the catalog to use. catalog_id or project_id or space_id is required.
The ID of the project to use. catalog_id or project_id or space_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from within the pipeline flow if available; otherwise the default of dspxosh is used.
Whether to compile the flow asynchronously. When set to true, the compile request is queued and then compiled, and the response is returned immediately as "Compiling". For the compile status, call the get compile status API.
parameters
The DataStage BuildOp-Asset-ID to use.
Build info.
- build
null.
- interfaces
Input port.
- input
Alias.
Examples: alias
Auto read.
Examples: true
inputID.
Examples: inpGUID
Name of input port.
Examples: input_port
Runtime column propagation.
Examples: false
Table name.
Examples: table_name
Inputs-Order.
Examples: GUID|GUID|...
Output port.
- output
Alias.
Examples: alias
Auto write.
Examples: true
outputID.
Examples: outpGUID
Name of output port.
Examples: output_port
Runtime column propagation.
Examples: false
Table name.
Examples: table_name
Outputs-Order.
Examples: GUID|GUID|...
input-output column mapping.
- transfer
Auto transfer.
Examples: true
Input.
Examples: input1
Output.
Examples: output1
Separate.
Examples: false
Operator business logic.
- logic
Definitions.
Examples: variable-definitions
Logic for each record.
Examples: logic-for-each-record
Post-loop logic.
Examples: post-loop-logic
Pre-loop logic.
Examples: pre-loop-logic
Creator information.
- creator
Author name.
Examples: IBM
Vendor name.
Examples: IBM Corporation
Version.
Examples: 1.0
directory information.
General information.
- general
Class name.
Examples: TestBld01
Command name.
Examples: sort
Exec Mode.
Examples: default_par
Node type name.
Examples: nodename
Operator name.
Examples: OpBld01
Wrapped name.
Examples: SortValues
List of stage properties.
- properties
Category.
Examples: Category-string
Conditions.
Examples: Condition-string
Conversion.
Examples: Value-string
Data type.
Examples: Integer
Default value.
Examples: 9
Description.
Examples: DESCR
hidden.
Examples: false
list values.
Examples: list-values
Parents.
Examples: Parents-string
Prompt.
Examples: prompt
Name of property.
Examples: stagePropName
Repeats.
Examples: false
Required.
Examples: false
Template.
Examples: Template-string
use Quoting.
Examples: false
Array of data record schemas used in the buildop.
The operator type.
UI data.
Wrapped info.
- wrapped
Environment information.
- environment
Exit codes.
- exit_codes
All exit codes successful.
Examples: true
Each element is JSONObject containing name and value.
- name_value
Environment Name.
Examples: name1
Environment Value.
Examples: value1
Interfaces.
- interfaces
Input link.
- input
Command line argument or Environment variable name.
Examples: arg1
File descriptor.
Examples: stdin
inputID.
Examples: inpGUID
Is command line argument or environment variable.
Examples: true
Name of input link.
Examples: input_link
Named pipe.
Examples: test_pipe
Table name.
Examples: table_name
Use stream or not.
Examples: true
Inputs-Order.
Examples: GUID|GUID|...
Output link.
- output
Command line argument or Environment variable name.
Examples: arg1
File descriptor.
Examples: stdin
outputID.
Examples: outpGUID
Is command line argument or environment variable.
Examples: true
Name of output link.
Examples: output_link
Named pipe.
Examples: test_pipe
Table name.
Examples: table_name
Use stream or not.
Examples: true
Outputs-Order.
Examples: GUID|GUID|...
The ID of the catalog to use. catalog_id or project_id or space_id is required.
The ID of the project to use. catalog_id or project_id or space_id is required.
The ID of the space to use. catalog_id or project_id or space_id is required.
The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from within the pipeline flow if available; otherwise the default of dspxosh is used.
Whether to compile the flow asynchronously. When set to true, the compile request is queued and then compiled, and the response is returned immediately as "Compiling". For the compile status, call the get compile status API.
Response
Describes the generateBuildOp response model.
generateBuildOp result for DataStage BuildOp.
- message
generateBuildop response type. For example ok or error.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Request object contains invalid information. Server is not able to process the request object.
Unexpected error.
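A minimal sketch of generating buildop runtime assets with the Python method documented above. The buildop asset ID and project ID are hypothetical placeholders, the client import path is an assumption, and the response keys are assumed from the generateBuildOp response model.
from datastage.datastage_v3 import DatastageV3  # import path assumed from the Python SDK

service = DatastageV3.new_instance()

# Generate runtime assets for an existing buildop, compiling asynchronously;
# the response is returned immediately as "Compiling" and the compile status
# can be fetched later through the get compile status API.
response = service.generate_datastage_buildop(
    data_intg_bldop_id="4c9adbb4-28ef-4a7d-b273-1cee0c38021f",  # hypothetical buildop asset id
    project_id="bd0dbbfd-810d-4f0e-b0a9-228c328a8e23",          # hypothetical project id
    runtime_type="dspxosh",       # the documented default runtime type
    enable_async_compile=True,
).get_result()

print(response.get("type"), response.get("message"))  # keys assumed from the response model above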
Delete pipeline cache
Permanently remove all optimized runner cache (if any) for a Watson pipeline in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.
DELETE /v3/ds_codegen/pipeline/cache/{pipeline_id}
ServiceCall<Void> deletePipelineCache(DeletePipelineCacheOptions deletePipelineCacheOptions)
deletePipelineCache(params)
delete_pipeline_cache(
    self,
    pipeline_id: str,
    *,
    catalog_id: Optional[str] = None,
    project_id: Optional[str] = None,
    space_id: Optional[str] = None,
    cascade: Optional[bool] = None,
    **kwargs,
) -> DetailedResponse
Request
Use the DeletePipelineCacheOptions.Builder to create a DeletePipelineCacheOptions object that contains the parameter values for the deletePipelineCache method.
Path Parameters
The Watson Pipeline ID to use.
Query Parameters
The ID of the catalog to use. catalog_id or project_id or space_id is required.
The ID of the project to use. catalog_id or project_id or space_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
For the Compile Watson pipeline API, whether to compile the primary (main) pipeline and all its nested pipelines. When the flag is set to false, only the primary pipeline is compiled. The flag is set to true by default. For the Delete pipeline cache API, whether to delete the pipeline cache for all nested pipelines referenced by the pipeline. When this flag is set to true, the entire cache for the pipeline and all nested pipelines referenced in this pipeline is permanently removed. The flag is set to false by default.
The deletePipelineCache options.
The Watson Pipeline ID to use.
The ID of the catalog to use. catalog_id or project_id or space_id is required.
The ID of the project to use. catalog_id or project_id or space_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. catalog_id or project_id or space_id is required.
Examples: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
For the Compile Watson pipeline API, whether to compile the primary (main) pipeline and all its nested pipelines. When the flag is set to false, only the primary pipeline is compiled. The flag is set to true by default. For the Delete pipeline cache API, whether to delete the pipeline cache for all nested pipelines referenced by the pipeline. When this flag is set to true, the entire cache for the pipeline and all nested pipelines referenced in this pipeline is permanently removed. The flag is set to false by default.
parameters
The Watson Pipeline ID to use.
The ID of the catalog to use.
catalog_idorproject_idorspace_idis required.The ID of the project to use.
catalog_idorproject_idorspace_idis required.Examples:The ID of the space to use.
catalog_idorproject_idorspace_idis required.Examples:For Compile Watson pipeline API, whether or not to compile the primary(main) pipeline and all its nested pipelines. When the flag is set to false, only the primary pipeline will be compiled. The flag is set to true by default. For Delete pipeline cache API, whether or not to delete pipeline cache for all nested pipelines referenced by the pipeline. When this flag is set to true , the entire cache for the pipeline and all nested pipelines referenced in this pipeline will be permanently removed. The flag is set to false by default.
parameters
The Watson Pipeline ID to use.
The ID of the catalog to use.
catalog_idorproject_idorspace_idis required.The ID of the project to use.
catalog_idorproject_idorspace_idis required.Examples:The ID of the space to use.
catalog_idorproject_idorspace_idis required.Examples:For Compile Watson pipeline API, whether or not to compile the primary(main) pipeline and all its nested pipelines. When the flag is set to false, only the primary pipeline will be compiled. The flag is set to true by default. For Delete pipeline cache API, whether or not to delete pipeline cache for all nested pipelines referenced by the pipeline. When this flag is set to true , the entire cache for the pipeline and all nested pipelines referenced in this pipeline will be permanently removed. The flag is set to false by default.
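The following is a minimal sketch of calling this operation through the Python client library, using the delete_pipeline_cache signature shown above. The IDs are placeholders, and the import path and new_instance() constructor assume the standard layout of the datastage Python SDK with credentials supplied through the external configuration described in the Authentication section.
from datastage.datastage_v3 import DatastageV3

# Reads DATASTAGE_* properties from the environment or a credentials file.
datastage_service = DatastageV3.new_instance()

# Permanently remove the optimized runner cache for one pipeline.
response = datastage_service.delete_pipeline_cache(
    pipeline_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder pipeline ID
    project_id='4c9adbb4-28ef-4a7d-b273-1cee0c38021f',   # one of catalog_id, project_id, or space_id is required
    cascade=True,  # also remove the cache of all nested pipelines; defaults to False
)
print(response.get_status_code())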
Compile Watson pipeline to generate runtime code
Generate Runtime code for a Watson pipeline in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.
POST /v3/ds_codegen/pipeline/compile/{pipeline_id}
ServiceCall<FlowCompileResponse> compileWatsonPipeline(CompileWatsonPipelineOptions compileWatsonPipelineOptions)
compileWatsonPipeline(params)
compile_watson_pipeline(
self,
pipeline_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
enable_inline_pipeline: Optional[bool] = None,
runtime_type: Optional[str] = None,
job_name_suffix: Optional[str] = None,
enable_caching: Optional[bool] = None,
enable_versioning: Optional[bool] = None,
cascade: Optional[bool] = None,
**kwargs,
) -> DetailedResponse
Request
Use the CompileWatsonPipelineOptions.Builder to create a CompileWatsonPipelineOptions object that contains the parameter values for the compileWatsonPipeline method.
Path Parameters
The Watson Pipeline ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Whether to enable inline pipeline execution. When this flag is set to true, no individual job runs are created for nested pipelines. The flag is set to false by default.
The type of the runtime to use, for example dspxosh or Spark. If not provided, the runtime type is read from within the pipeline flow if available; otherwise the default of dspxosh is used.
The name suffix for the created job. By default, the pipeline name suffix configured in the DataStage project settings is used.
Whether to enable pipeline caching. Caching is disabled by default.
Whether to enable pipeline versioning. Versioning is disabled by default.
For the Compile Watson pipeline API, whether to compile the primary (main) pipeline and all its nested pipelines; when the flag is set to false, only the primary pipeline is compiled. The flag is set to true by default. For the Delete pipeline cache API, whether to delete the pipeline cache for all nested pipelines referenced by the pipeline; when this flag is set to true, the entire cache for the pipeline and all nested pipelines referenced in this pipeline is permanently removed. The flag is set to false by default.
Response
Describes the compile response model.
Compile result for DataStage flow.
- message
Compile response type, for example ok or error.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Request object contains invalid information. Server is not able to process the request object.
Unexpected error.
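The following is a minimal sketch of compiling a pipeline through the Python client library, using the compile_watson_pipeline signature shown above and reading the type and message fields of the compile response model. The IDs are placeholders, and the client construction assumes the same external configuration as the previous example.
from datastage.datastage_v3 import DatastageV3

datastage_service = DatastageV3.new_instance()

# Compile the pipeline; cascade defaults to True, so nested pipelines
# are compiled as well.
response = datastage_service.compile_watson_pipeline(
    pipeline_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',  # placeholder
    project_id='4c9adbb4-28ef-4a7d-b273-1cee0c38021f',   # placeholder
    runtime_type='dspxosh',  # the default runtime when omitted
    enable_caching=True,     # caching is disabled by default
)
compile_result = response.get_result()
# The compile response type is ok or error, as described above.
print(compile_result.get('type'), compile_result.get('message'))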
Retrieve optimized pipeline info
Retrieve pipeline info including generated python code, version, last compile timestamp, and cached results.
GET /v3/ds_codegen/pipeline/info/{pipeline_id}
ServiceCall<OptimizedPipelineInfo> getPipelineInfo(GetPipelineInfoOptions getPipelineInfoOptions)
getPipelineInfo(params)
get_pipeline_info(
self,
pipeline_id: str,
*,
catalog_id: Optional[str] = None,
project_id: Optional[str] = None,
space_id: Optional[str] = None,
**kwargs,
) -> DetailedResponse
Request
Use the GetPipelineInfoOptions.Builder to create a GetPipelineInfoOptions object that contains the parameter values for the getPipelineInfo method.
Path Parameters
The Watson Pipeline ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.
The ID of the project to use. Either catalog_id, project_id, or space_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The ID of the space to use. Either catalog_id, project_id, or space_id is required. Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
Response
Information about the optimized pipeline.
- cache
Information about the optimized pipeline cache.
- cache_last_modified_time: timestamp of the last cache update.
- has_cached_results: whether cached results exist.
- compilation
Information about the optimized pipeline compilation.
- has_parameters: whether the pipeline has job parameters.
- is_caching_enabled: whether caching is enabled.
- is_inline_mode: whether inline mode is enabled.
- is_pipeline_compiled: whether the pipeline has ever been compiled.
- whether the pipeline has been updated after the last compilation.
- job_name_suffix: job name suffix.
- pipeline_last_compiled_time: timestamp of the last compilation.
- name
Pipeline name.
- version
Build version of the optimized runner compiler.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "cache": { "cache_last_modified_time": 1741718218210, "has_cached_results": true }, "compilation": { "has_parameters": false, "is_caching_enabled": true, "is_inline_mode": true, "is_pipeline_compiled": true, "job_name_suffix": "_eltjob", "pipeline_last_compiled_time": 1741718217973 }, "name": "pipeline_test", "version": "1.0.0" }{ "cache": { "cache_last_modified_time": 1741718218210, "has_cached_results": true }, "compilation": { "has_parameters": false, "is_caching_enabled": true, "is_inline_mode": true, "is_pipeline_compiled": true, "job_name_suffix": "_eltjob", "pipeline_last_compiled_time": 1741718217973 }, "name": "pipeline_test", "version": "1.0.0" }