Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.datapipelines/v1.Pipeline
Creates a pipeline. For a batch pipeline, you can pass scheduler information. Data Pipelines uses the scheduler information to create an internal scheduler that runs jobs periodically. If the internal scheduler is not configured, you can use RunPipeline to run jobs.
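For example, a minimal scheduled batch pipeline might look like the following TypeScript sketch; all identifiers (project, bucket, and template path) are placeholders rather than values required by the API:
import * as google_native from "@pulumi/google-native";

// A minimal sketch of a scheduled batch pipeline. The project, location, and
// Cloud Storage paths are illustrative placeholders.
const nightly = new google_native.datapipelines.v1.Pipeline("nightly", {
    project: "my-project",
    location: "us-central1", // Data Pipelines is only available in App Engine regions
    name: "projects/my-project/locations/us-central1/pipelines/nightly",
    displayName: "nightly",
    type: google_native.datapipelines.v1.PipelineType.PipelineTypeBatch,
    scheduleInfo: {
        schedule: "0 6 * * *",        // unix-cron: every day at 06:00
        timeZone: "America/New_York",
    },
    workload: {
        dataflowFlexTemplateRequest: {
            project: "my-project",
            location: "us-central1",
            launchParameter: {
                jobName: "nightly-run",
                containerSpecGcsPath: "gs://my-bucket/templates/nightly.json", // placeholder template spec
            },
        },
    },
});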
Create Pipeline Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Pipeline(name: string, args: PipelineArgs, opts?: CustomResourceOptions);
@overload
def Pipeline(resource_name: str,
             args: PipelineArgs,
             opts: Optional[ResourceOptions] = None)
@overload
def Pipeline(resource_name: str,
             opts: Optional[ResourceOptions] = None,
             display_name: Optional[str] = None,
             state: Optional[PipelineState] = None,
             type: Optional[PipelineType] = None,
             location: Optional[str] = None,
             name: Optional[str] = None,
             pipeline_sources: Optional[Mapping[str, str]] = None,
             project: Optional[str] = None,
             schedule_info: Optional[GoogleCloudDatapipelinesV1ScheduleSpecArgs] = None,
             scheduler_service_account_email: Optional[str] = None,
             workload: Optional[GoogleCloudDatapipelinesV1WorkloadArgs] = None)
func NewPipeline(ctx *Context, name string, args PipelineArgs, opts ...ResourceOption) (*Pipeline, error)
public Pipeline(string name, PipelineArgs args, CustomResourceOptions? opts = null)
public Pipeline(String name, PipelineArgs args)
public Pipeline(String name, PipelineArgs args, CustomResourceOptions options)
type: google-native:datapipelines/v1:Pipeline
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args PipelineArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var pipelineResource = new GoogleNative.Datapipelines.V1.Pipeline("pipelineResource", new()
{
    DisplayName = "string",
    State = GoogleNative.Datapipelines.V1.PipelineState.StateUnspecified,
    Type = GoogleNative.Datapipelines.V1.PipelineType.PipelineTypeUnspecified,
    Location = "string",
    Name = "string",
    PipelineSources = 
    {
        { "string", "string" },
    },
    Project = "string",
    ScheduleInfo = new GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1ScheduleSpecArgs
    {
        Schedule = "string",
        TimeZone = "string",
    },
    SchedulerServiceAccountEmail = "string",
    Workload = new GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1WorkloadArgs
    {
        DataflowFlexTemplateRequest = new GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestArgs
        {
            LaunchParameter = new GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterArgs
            {
                JobName = "string",
                ContainerSpecGcsPath = "string",
                Environment = new GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentArgs
                {
                    AdditionalExperiments = new[]
                    {
                        "string",
                    },
                    AdditionalUserLabels = 
                    {
                        { "string", "string" },
                    },
                    EnableStreamingEngine = false,
                    FlexrsGoal = GoogleNative.Datapipelines.V1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal.FlexrsUnspecified,
                    IpConfiguration = GoogleNative.Datapipelines.V1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration.WorkerIpUnspecified,
                    KmsKeyName = "string",
                    MachineType = "string",
                    MaxWorkers = 0,
                    Network = "string",
                    NumWorkers = 0,
                    ServiceAccountEmail = "string",
                    Subnetwork = "string",
                    TempLocation = "string",
                    WorkerRegion = "string",
                    WorkerZone = "string",
                    Zone = "string",
                },
                LaunchOptions = 
                {
                    { "string", "string" },
                },
                Parameters = 
                {
                    { "string", "string" },
                },
                TransformNameMappings = 
                {
                    { "string", "string" },
                },
                Update = false,
            },
            Location = "string",
            Project = "string",
            ValidateOnly = false,
        },
        DataflowLaunchTemplateRequest = new GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateRequestArgs
        {
            Project = "string",
            GcsPath = "string",
            LaunchParameters = new GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateParametersArgs
            {
                JobName = "string",
                Environment = new GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1RuntimeEnvironmentArgs
                {
                    AdditionalExperiments = new[]
                    {
                        "string",
                    },
                    AdditionalUserLabels = 
                    {
                        { "string", "string" },
                    },
                    BypassTempDirValidation = false,
                    EnableStreamingEngine = false,
                    IpConfiguration = GoogleNative.Datapipelines.V1.GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration.WorkerIpUnspecified,
                    KmsKeyName = "string",
                    MachineType = "string",
                    MaxWorkers = 0,
                    Network = "string",
                    NumWorkers = 0,
                    ServiceAccountEmail = "string",
                    Subnetwork = "string",
                    TempLocation = "string",
                    WorkerRegion = "string",
                    WorkerZone = "string",
                    Zone = "string",
                },
                Parameters = 
                {
                    { "string", "string" },
                },
                TransformNameMapping = 
                {
                    { "string", "string" },
                },
                Update = false,
            },
            Location = "string",
            ValidateOnly = false,
        },
    },
});
example, err := datapipelines.NewPipeline(ctx, "pipelineResource", &datapipelines.PipelineArgs{
	DisplayName: pulumi.String("string"),
	State:       datapipelines.PipelineStateStateUnspecified,
	Type:        datapipelines.PipelineTypePipelineTypeUnspecified,
	Location:    pulumi.String("string"),
	Name:        pulumi.String("string"),
	PipelineSources: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Project: pulumi.String("string"),
	ScheduleInfo: &datapipelines.GoogleCloudDatapipelinesV1ScheduleSpecArgs{
		Schedule: pulumi.String("string"),
		TimeZone: pulumi.String("string"),
	},
	SchedulerServiceAccountEmail: pulumi.String("string"),
	Workload: &datapipelines.GoogleCloudDatapipelinesV1WorkloadArgs{
		DataflowFlexTemplateRequest: &datapipelines.GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestArgs{
			LaunchParameter: &datapipelines.GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterArgs{
				JobName:              pulumi.String("string"),
				ContainerSpecGcsPath: pulumi.String("string"),
				Environment: &datapipelines.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentArgs{
					AdditionalExperiments: pulumi.StringArray{
						pulumi.String("string"),
					},
					AdditionalUserLabels: pulumi.StringMap{
						"string": pulumi.String("string"),
					},
					EnableStreamingEngine: pulumi.Bool(false),
					FlexrsGoal:            datapipelines.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoalFlexrsUnspecified,
					IpConfiguration:       datapipelines.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfigurationWorkerIpUnspecified,
					KmsKeyName:            pulumi.String("string"),
					MachineType:           pulumi.String("string"),
					MaxWorkers:            pulumi.Int(0),
					Network:               pulumi.String("string"),
					NumWorkers:            pulumi.Int(0),
					ServiceAccountEmail:   pulumi.String("string"),
					Subnetwork:            pulumi.String("string"),
					TempLocation:          pulumi.String("string"),
					WorkerRegion:          pulumi.String("string"),
					WorkerZone:            pulumi.String("string"),
					Zone:                  pulumi.String("string"),
				},
				LaunchOptions: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				Parameters: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				TransformNameMappings: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				Update: pulumi.Bool(false),
			},
			Location:     pulumi.String("string"),
			Project:      pulumi.String("string"),
			ValidateOnly: pulumi.Bool(false),
		},
		DataflowLaunchTemplateRequest: &datapipelines.GoogleCloudDatapipelinesV1LaunchTemplateRequestArgs{
			Project: pulumi.String("string"),
			GcsPath: pulumi.String("string"),
			LaunchParameters: &datapipelines.GoogleCloudDatapipelinesV1LaunchTemplateParametersArgs{
				JobName: pulumi.String("string"),
				Environment: &datapipelines.GoogleCloudDatapipelinesV1RuntimeEnvironmentArgs{
					AdditionalExperiments: pulumi.StringArray{
						pulumi.String("string"),
					},
					AdditionalUserLabels: pulumi.StringMap{
						"string": pulumi.String("string"),
					},
					BypassTempDirValidation: pulumi.Bool(false),
					EnableStreamingEngine:   pulumi.Bool(false),
					IpConfiguration:         datapipelines.GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfigurationWorkerIpUnspecified,
					KmsKeyName:              pulumi.String("string"),
					MachineType:             pulumi.String("string"),
					MaxWorkers:              pulumi.Int(0),
					Network:                 pulumi.String("string"),
					NumWorkers:              pulumi.Int(0),
					ServiceAccountEmail:     pulumi.String("string"),
					Subnetwork:              pulumi.String("string"),
					TempLocation:            pulumi.String("string"),
					WorkerRegion:            pulumi.String("string"),
					WorkerZone:              pulumi.String("string"),
					Zone:                    pulumi.String("string"),
				},
				Parameters: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				TransformNameMapping: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				Update: pulumi.Bool(false),
			},
			Location:     pulumi.String("string"),
			ValidateOnly: pulumi.Bool(false),
		},
	},
})
var pipelineResource = new Pipeline("pipelineResource", PipelineArgs.builder()
    .displayName("string")
    .state("STATE_UNSPECIFIED")
    .type("PIPELINE_TYPE_UNSPECIFIED")
    .location("string")
    .name("string")
    .pipelineSources(Map.of("string", "string"))
    .project("string")
    .scheduleInfo(GoogleCloudDatapipelinesV1ScheduleSpecArgs.builder()
        .schedule("string")
        .timeZone("string")
        .build())
    .schedulerServiceAccountEmail("string")
    .workload(GoogleCloudDatapipelinesV1WorkloadArgs.builder()
        .dataflowFlexTemplateRequest(GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestArgs.builder()
            .launchParameter(GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterArgs.builder()
                .jobName("string")
                .containerSpecGcsPath("string")
                .environment(GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentArgs.builder()
                    .additionalExperiments("string")
                    .additionalUserLabels(Map.of("string", "string"))
                    .enableStreamingEngine(false)
                    .flexrsGoal("FLEXRS_UNSPECIFIED")
                    .ipConfiguration("WORKER_IP_UNSPECIFIED")
                    .kmsKeyName("string")
                    .machineType("string")
                    .maxWorkers(0)
                    .network("string")
                    .numWorkers(0)
                    .serviceAccountEmail("string")
                    .subnetwork("string")
                    .tempLocation("string")
                    .workerRegion("string")
                    .workerZone("string")
                    .zone("string")
                    .build())
                .launchOptions(Map.of("string", "string"))
                .parameters(Map.of("string", "string"))
                .transformNameMappings(Map.of("string", "string"))
                .update(false)
                .build())
            .location("string")
            .project("string")
            .validateOnly(false)
            .build())
        .dataflowLaunchTemplateRequest(GoogleCloudDatapipelinesV1LaunchTemplateRequestArgs.builder()
            .project("string")
            .gcsPath("string")
            .launchParameters(GoogleCloudDatapipelinesV1LaunchTemplateParametersArgs.builder()
                .jobName("string")
                .environment(GoogleCloudDatapipelinesV1RuntimeEnvironmentArgs.builder()
                    .additionalExperiments("string")
                    .additionalUserLabels(Map.of("string", "string"))
                    .bypassTempDirValidation(false)
                    .enableStreamingEngine(false)
                    .ipConfiguration("WORKER_IP_UNSPECIFIED")
                    .kmsKeyName("string")
                    .machineType("string")
                    .maxWorkers(0)
                    .network("string")
                    .numWorkers(0)
                    .serviceAccountEmail("string")
                    .subnetwork("string")
                    .tempLocation("string")
                    .workerRegion("string")
                    .workerZone("string")
                    .zone("string")
                    .build())
                .parameters(Map.of("string", "string"))
                .transformNameMapping(Map.of("string", "string"))
                .update(false)
                .build())
            .location("string")
            .validateOnly(false)
            .build())
        .build())
    .build());
pipeline_resource = google_native.datapipelines.v1.Pipeline("pipelineResource",
    display_name="string",
    state=google_native.datapipelines.v1.PipelineState.STATE_UNSPECIFIED,
    type=google_native.datapipelines.v1.PipelineType.PIPELINE_TYPE_UNSPECIFIED,
    location="string",
    name="string",
    pipeline_sources={
        "string": "string",
    },
    project="string",
    schedule_info={
        "schedule": "string",
        "time_zone": "string",
    },
    scheduler_service_account_email="string",
    workload={
        "dataflow_flex_template_request": {
            "launch_parameter": {
                "job_name": "string",
                "container_spec_gcs_path": "string",
                "environment": {
                    "additional_experiments": ["string"],
                    "additional_user_labels": {
                        "string": "string",
                    },
                    "enable_streaming_engine": False,
                    "flexrs_goal": google_native.datapipelines.v1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal.FLEXRS_UNSPECIFIED,
                    "ip_configuration": google_native.datapipelines.v1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration.WORKER_IP_UNSPECIFIED,
                    "kms_key_name": "string",
                    "machine_type": "string",
                    "max_workers": 0,
                    "network": "string",
                    "num_workers": 0,
                    "service_account_email": "string",
                    "subnetwork": "string",
                    "temp_location": "string",
                    "worker_region": "string",
                    "worker_zone": "string",
                    "zone": "string",
                },
                "launch_options": {
                    "string": "string",
                },
                "parameters": {
                    "string": "string",
                },
                "transform_name_mappings": {
                    "string": "string",
                },
                "update": False,
            },
            "location": "string",
            "project": "string",
            "validate_only": False,
        },
        "dataflow_launch_template_request": {
            "project": "string",
            "gcs_path": "string",
            "launch_parameters": {
                "job_name": "string",
                "environment": {
                    "additional_experiments": ["string"],
                    "additional_user_labels": {
                        "string": "string",
                    },
                    "bypass_temp_dir_validation": False,
                    "enable_streaming_engine": False,
                    "ip_configuration": google_native.datapipelines.v1.GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration.WORKER_IP_UNSPECIFIED,
                    "kms_key_name": "string",
                    "machine_type": "string",
                    "max_workers": 0,
                    "network": "string",
                    "num_workers": 0,
                    "service_account_email": "string",
                    "subnetwork": "string",
                    "temp_location": "string",
                    "worker_region": "string",
                    "worker_zone": "string",
                    "zone": "string",
                },
                "parameters": {
                    "string": "string",
                },
                "transform_name_mapping": {
                    "string": "string",
                },
                "update": False,
            },
            "location": "string",
            "validate_only": False,
        },
    })
const pipelineResource = new google_native.datapipelines.v1.Pipeline("pipelineResource", {
    displayName: "string",
    state: google_native.datapipelines.v1.PipelineState.StateUnspecified,
    type: google_native.datapipelines.v1.PipelineType.PipelineTypeUnspecified,
    location: "string",
    name: "string",
    pipelineSources: {
        string: "string",
    },
    project: "string",
    scheduleInfo: {
        schedule: "string",
        timeZone: "string",
    },
    schedulerServiceAccountEmail: "string",
    workload: {
        dataflowFlexTemplateRequest: {
            launchParameter: {
                jobName: "string",
                containerSpecGcsPath: "string",
                environment: {
                    additionalExperiments: ["string"],
                    additionalUserLabels: {
                        string: "string",
                    },
                    enableStreamingEngine: false,
                    flexrsGoal: google_native.datapipelines.v1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal.FlexrsUnspecified,
                    ipConfiguration: google_native.datapipelines.v1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration.WorkerIpUnspecified,
                    kmsKeyName: "string",
                    machineType: "string",
                    maxWorkers: 0,
                    network: "string",
                    numWorkers: 0,
                    serviceAccountEmail: "string",
                    subnetwork: "string",
                    tempLocation: "string",
                    workerRegion: "string",
                    workerZone: "string",
                    zone: "string",
                },
                launchOptions: {
                    string: "string",
                },
                parameters: {
                    string: "string",
                },
                transformNameMappings: {
                    string: "string",
                },
                update: false,
            },
            location: "string",
            project: "string",
            validateOnly: false,
        },
        dataflowLaunchTemplateRequest: {
            project: "string",
            gcsPath: "string",
            launchParameters: {
                jobName: "string",
                environment: {
                    additionalExperiments: ["string"],
                    additionalUserLabels: {
                        string: "string",
                    },
                    bypassTempDirValidation: false,
                    enableStreamingEngine: false,
                    ipConfiguration: google_native.datapipelines.v1.GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration.WorkerIpUnspecified,
                    kmsKeyName: "string",
                    machineType: "string",
                    maxWorkers: 0,
                    network: "string",
                    numWorkers: 0,
                    serviceAccountEmail: "string",
                    subnetwork: "string",
                    tempLocation: "string",
                    workerRegion: "string",
                    workerZone: "string",
                    zone: "string",
                },
                parameters: {
                    string: "string",
                },
                transformNameMapping: {
                    string: "string",
                },
                update: false,
            },
            location: "string",
            validateOnly: false,
        },
    },
});
type: google-native:datapipelines/v1:Pipeline
properties:
    displayName: string
    location: string
    name: string
    pipelineSources:
        string: string
    project: string
    scheduleInfo:
        schedule: string
        timeZone: string
    schedulerServiceAccountEmail: string
    state: STATE_UNSPECIFIED
    type: PIPELINE_TYPE_UNSPECIFIED
    workload:
        dataflowFlexTemplateRequest:
            launchParameter:
                containerSpecGcsPath: string
                environment:
                    additionalExperiments:
                        - string
                    additionalUserLabels:
                        string: string
                    enableStreamingEngine: false
                    flexrsGoal: FLEXRS_UNSPECIFIED
                    ipConfiguration: WORKER_IP_UNSPECIFIED
                    kmsKeyName: string
                    machineType: string
                    maxWorkers: 0
                    network: string
                    numWorkers: 0
                    serviceAccountEmail: string
                    subnetwork: string
                    tempLocation: string
                    workerRegion: string
                    workerZone: string
                    zone: string
                jobName: string
                launchOptions:
                    string: string
                parameters:
                    string: string
                transformNameMappings:
                    string: string
                update: false
            location: string
            project: string
            validateOnly: false
        dataflowLaunchTemplateRequest:
            gcsPath: string
            launchParameters:
                environment:
                    additionalExperiments:
                        - string
                    additionalUserLabels:
                        string: string
                    bypassTempDirValidation: false
                    enableStreamingEngine: false
                    ipConfiguration: WORKER_IP_UNSPECIFIED
                    kmsKeyName: string
                    machineType: string
                    maxWorkers: 0
                    network: string
                    numWorkers: 0
                    serviceAccountEmail: string
                    subnetwork: string
                    tempLocation: string
                    workerRegion: string
                    workerZone: string
                    zone: string
                jobName: string
                parameters:
                    string: string
                transformNameMapping:
                    string: string
                update: false
            location: string
            project: string
            validateOnly: false
Pipeline Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The Pipeline resource accepts the following input properties:
- DisplayName string
- The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- State
Pulumi.GoogleNative.Datapipelines.V1.PipelineState
- The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- Type
Pulumi.GoogleNative.Datapipelines.V1.PipelineType
- The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- Location string
- Name string
- The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- PipelineSources Dictionary<string, string>
- Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- Project string
- ScheduleInfo Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1ScheduleSpec
- Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- SchedulerServiceAccountEmail string
- Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- Workload
Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1Workload
- Workload information for creating new jobs.
- DisplayName string
- The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- State
PipelineStateEnum
- The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- Type
PipelineType 
- The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- Location string
- Name string
- The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- PipelineSources map[string]string
- Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- Project string
- ScheduleInfo GoogleCloudDatapipelinesV1ScheduleSpecArgs
- Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- SchedulerServiceAccountEmail string
- Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- Workload
GoogleCloudDatapipelinesV1WorkloadArgs
- Workload information for creating new jobs.
- displayName String
- The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- state
PipelineState 
- The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- type
PipelineType 
- The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- location String
- name String
- The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- pipelineSources Map<String,String>
- Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- project String
- scheduleInfo GoogleCloudDatapipelinesV1ScheduleSpec
- Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- schedulerServiceAccountEmail String
- Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- workload
GoogleCloudDatapipelinesV1Workload
- Workload information for creating new jobs.
- displayName string
- The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- state
PipelineState 
- The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- type
PipelineType 
- The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- location string
- name string
- The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- pipelineSources {[key: string]: string}
- Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- project string
- scheduleInfo GoogleCloudDatapipelinesV1ScheduleSpec
- Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- schedulerServiceAccountEmail string
- Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- workload
GoogleCloudDatapipelinesV1Workload
- Workload information for creating new jobs.
- display_name str
- The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- state
PipelineState 
- The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- type
PipelineType 
- The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- location str
- name str
- The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- pipeline_sources Mapping[str, str]
- Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- project str
- schedule_info GoogleCloudDatapipelinesV1ScheduleSpecArgs
- Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- scheduler_service_account_email str
- Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- workload
GoogleCloudDatapipelinesV1WorkloadArgs
- Workload information for creating new jobs.
- displayName String
- The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- state "STATE_UNSPECIFIED" | "STATE_RESUMING" | "STATE_ACTIVE" | "STATE_STOPPING" | "STATE_ARCHIVED" | "STATE_PAUSED"
- The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- type "PIPELINE_TYPE_UNSPECIFIED" | "PIPELINE_TYPE_BATCH" | "PIPELINE_TYPE_STREAMING"
- The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- location String
- name String
- The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. * PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. * LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. * PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- pipelineSources Map<String>
- Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- project String
- scheduleInfo Property Map
- Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- schedulerServiceAccountEmail String
- Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
- workload Property Map
- Workload information for creating new jobs.
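The enum-typed inputs above (state, type) accept either the generated SDK constants or the raw string values they stand for; in the TypeScript SDK the enum types are string unions, so both declarations in this sketch are equivalent:
import * as google_native from "@pulumi/google-native";

// Equivalent ways to express the pipeline type: the generated constant and
// the raw enum string it maps to.
const byConstant = google_native.datapipelines.v1.PipelineType.PipelineTypeStreaming;
const byString: google_native.datapipelines.v1.PipelineType = "PIPELINE_TYPE_STREAMING";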
Outputs
All input properties are implicitly available as output properties. Additionally, the Pipeline resource produces the following output properties:
- CreateTime string
- Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- Id string
- The provider-assigned unique ID for this managed resource.
- JobCount int
- Number of jobs.
- LastUpdateTime string
- Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- CreateTime string
- Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- Id string
- The provider-assigned unique ID for this managed resource.
- JobCount int
- Number of jobs.
- LastUpdateTime string
- Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- createTime String
- Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- id String
- The provider-assigned unique ID for this managed resource.
- jobCount Integer
- Number of jobs.
- lastUpdateTime String
- Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- createTime string
- Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- id string
- The provider-assigned unique ID for this managed resource.
- jobCount number
- Number of jobs.
- lastUpdateTime string
- Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- create_time str
- Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- id str
- The provider-assigned unique ID for this managed resource.
- job_count int
- Number of jobs.
- last_update_time str
- Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- createTime String
- Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- id String
- The provider-assigned unique ID for this managed resource.
- jobCount Number
- Number of jobs.
- lastUpdateTime String
- Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
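Once the pipeline has been created, these server-set properties can be read off the resource, for example as stack exports. A short TypeScript sketch, reusing the pipelineResource declared in the constructor example above:
// Server-generated metadata, available after creation.
export const createTime = pipelineResource.createTime;         // creation timestamp
export const jobCount = pipelineResource.jobCount;             // number of jobs started for the pipeline
export const lastUpdateTime = pipelineResource.lastUpdateTime; // last modification timestamp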
Supporting Types
GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment, GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentArgs              
- AdditionalExperiments List<string>
- Additional experiment flags for the job.
- AdditionalUserLabels Dictionary<string, string>
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- EnableStreamingEngine bool
- Whether to enable Streaming Engine for the job.
- FlexrsGoal Pulumi.GoogleNative.Datapipelines.V1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- IpConfiguration Pulumi.GoogleNative.Datapipelines.V1.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- KmsKeyName string
- Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- MachineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
- The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
- The email address of the service account to run the job as.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- AdditionalExperiments []string
- Additional experiment flags for the job.
- AdditionalUserLabels map[string]string
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- EnableStreamingEngine bool
- Whether to enable Streaming Engine for the job.
- FlexrsGoal GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- IpConfiguration GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- KmsKeyName string
- Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- MachineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
- The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
- The email address of the service account to run the job as.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
- Additional experiment flags for the job.
- additionalUserLabels Map<String,String>
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine Boolean
- Whether to enable Streaming Engine for the job.
- flexrsGoal GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- kmsKeyName String
- Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Integer
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Integer
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
- The email address of the service account to run the job as.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments string[]
- Additional experiment flags for the job.
- additionalUserLabels {[key: string]: string}
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine boolean
- Whether to enable Streaming Engine for the job.
- flexrsGoal GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- kmsKeyName string
- Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers number
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers number
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail string
- The email address of the service account to run the job as.
- subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional_experiments Sequence[str]
- Additional experiment flags for the job.
- additional_user_labels Mapping[str, str]
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enable_streaming_engine bool
- Whether to enable Streaming Engine for the job.
- flexrs_goal GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ip_configuration GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- kms_key_name str
- Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machine_type str
- The machine type to use for the job. Defaults to the value from the template if not specified.
- max_workers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network str
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_workers int
- The initial number of Compute Engine instances for the job.
- service_account_email str
- The email address of the service account to run the job as.
- subnetwork str
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp_location str
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- worker_region str
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- worker_zone str
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone str
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
- Additional experiment flags for the job.
- additionalUserLabels Map<String>
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine Boolean
- Whether to enable Streaming Engine for the job.
- flexrsGoal "FLEXRS_UNSPECIFIED" | "FLEXRS_SPEED_OPTIMIZED" | "FLEXRS_COST_OPTIMIZED"
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration "WORKER_IP_UNSPECIFIED" | "WORKER_IP_PUBLIC" | "WORKER_IP_PRIVATE"
- Configuration for VM IPs.
- kmsKeyName String
- Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Number
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Number
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
- The email address of the service account to run the job as.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
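Taken together, these fields control where and how the workers for a Flex Template job run. The TypeScript sketch below shows one plausible combination; every concrete value (project, bucket, subnetwork, service account) is an illustrative placeholder, not a default.

```typescript
// A minimal sketch of a Flex Template runtime environment. All names are
// placeholders; fields left unset fall back to service defaults.
const flexTemplateEnvironment = {
    numWorkers: 2,                        // initial Compute Engine instance count
    maxWorkers: 10,                       // upper bound; must be within 1 to 1000
    machineType: "n1-standard-2",
    tempLocation: "gs://my-bucket/temp",  // must be a valid gs:// URL
    serviceAccountEmail: "dataflow-runner@my-project.iam.gserviceaccount.com",
    workerRegion: "us-central1",          // mutually exclusive with workerZone
    subnetwork: "regions/us-central1/subnetworks/my-subnet", // abbreviated path form
    additionalUserLabels: { team: "data-eng" },
};
```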
GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoal, GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoalArgs                  
- FlexrsUnspecified
- FLEXRS_UNSPECIFIED: Run in the default mode.
- FlexrsSpeedOptimized
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- FlexrsCostOptimized
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoalFlexrsUnspecified
- FLEXRS_UNSPECIFIED: Run in the default mode.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoalFlexrsSpeedOptimized
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentFlexrsGoalFlexrsCostOptimized
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- FlexrsUnspecified
- FLEXRS_UNSPECIFIED: Run in the default mode.
- FlexrsSpeedOptimized
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- FlexrsCostOptimized
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- FlexrsUnspecified
- FLEXRS_UNSPECIFIED: Run in the default mode.
- FlexrsSpeedOptimized
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- FlexrsCostOptimized
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- FLEXRS_UNSPECIFIED
- FLEXRS_UNSPECIFIED: Run in the default mode.
- FLEXRS_SPEED_OPTIMIZED
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- FLEXRS_COST_OPTIMIZED
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- "FLEXRS_UNSPECIFIED"
- FLEXRS_UNSPECIFIED: Run in the default mode.
- "FLEXRS_SPEED_OPTIMIZED"
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- "FLEXRS_COST_OPTIMIZED"
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfiguration, GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfigurationArgs                  
- WorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfigurationWorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfigurationWorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentIpConfigurationWorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WORKER_IP_UNSPECIFIED
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WORKER_IP_PUBLIC
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WORKER_IP_PRIVATE
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- "WORKER_IP_UNSPECIFIED"
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- "WORKER_IP_PUBLIC"
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- "WORKER_IP_PRIVATE"
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
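Written as plain strings (the Node.js form shown above), the two enum-typed fields take these wire values directly. A small sketch, with a placeholder bucket:

```typescript
// Opting a Flex Template environment into FlexRS and private worker IPs.
// FLEXRS_COST_OPTIMIZED trades scheduling latency for cost; WORKER_IP_PRIVATE
// assumes the job's network lets workers reach Google APIs without public IPs.
const environment = {
    tempLocation: "gs://my-bucket/temp",  // placeholder bucket
    flexrsGoal: "FLEXRS_COST_OPTIMIZED",
    ipConfiguration: "WORKER_IP_PRIVATE",
};
```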
GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse, GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponseArgs                
- AdditionalExperiments List<string>
- Additional experiment flags for the job.
- AdditionalUserLabels Dictionary<string, string>
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- EnableStreamingEngine bool
- Whether to enable Streaming Engine for the job.
- FlexrsGoal string
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- IpConfiguration string
- Configuration for VM IPs.
- KmsKeyName string
- Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- MachineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
- The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
- The email address of the service account to run the job as.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- AdditionalExperiments []string
- Additional experiment flags for the job.
- AdditionalUserLabels map[string]string
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- EnableStreamingEngine bool
- Whether to enable Streaming Engine for the job.
- FlexrsGoal string
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- IpConfiguration string
- Configuration for VM IPs.
- KmsKeyName string
- Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- MachineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
- The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
- The email address of the service account to run the job as.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
- Additional experiment flags for the job.
- additionalUserLabels Map<String,String>
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine Boolean
- Whether to enable Streaming Engine for the job.
- flexrsGoal String
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration String
- Configuration for VM IPs.
- kmsKeyName String
- Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- machineType String
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Integer
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Integer
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
- The email address of the service account to run the job as.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments string[]
- Additional experiment flags for the job.
- additionalUserLabels {[key: string]: string}
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine boolean
- Whether to enable Streaming Engine for the job.
- flexrsGoal string
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration string
- Configuration for VM IPs.
- kmsKeyName string
- Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- machineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers number
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers number
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail string
- The email address of the service account to run the job as.
- subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional_experiments Sequence[str]
- Additional experiment flags for the job.
- additional_user_labels Mapping[str, str]
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enable_streaming_engine bool
- Whether to enable Streaming Engine for the job.
- flexrs_goal str
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ip_configuration str
- Configuration for VM IPs.
- kms_key_name str
- Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- machine_type str
- The machine type to use for the job. Defaults to the value from the template if not specified.
- max_workers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network str
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_workers int
- The initial number of Compute Engine instances for the job.
- service_account_email str
- The email address of the service account to run the job as.
- subnetwork str
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp_location str
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- worker_region str
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- worker_zone str
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone str
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
- Additional experiment flags for the job.
- additionalUserLabels Map<String>
- Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine Boolean
- Whether to enable Streaming Engine for the job.
- flexrsGoal String
- Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- ipConfiguration String
- Configuration for VM IPs.
- kmsKeyName String
- Name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- machineType String
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Number
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Number
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
- The email address of the service account to run the job as.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter, GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterArgs              
- JobName string
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- ContainerSpecGcsPath string
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment
- The runtime environment for the Flex Template job.
- LaunchOptions Dictionary<string, string>
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- Parameters Dictionary<string, string>
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- TransformNameMappings Dictionary<string, string>
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- Update bool
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- JobName string
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- ContainerSpecGcsPath string
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- Environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment
- The runtime environment for the Flex Template job.
- LaunchOptions map[string]string
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- Parameters map[string]string
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- TransformNameMappings map[string]string
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- Update bool
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- jobName String
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- containerSpecGcsPath String
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment
- The runtime environment for the Flex Template job.
- launchOptions Map<String,String>
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Map<String,String>
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings Map<String,String>
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update Boolean
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- jobName string
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- containerSpecGcsPath string
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment
- The runtime environment for the Flex Template job.
- launchOptions {[key: string]: string}
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters {[key: string]: string}
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings {[key: string]: string}
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update boolean
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- job_name str
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- container_spec_gcs_path str
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironment
- The runtime environment for the Flex Template job.
- launch_options Mapping[str, str]
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Mapping[str, str]
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- transform_name_mappings Mapping[str, str]
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update bool
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- jobName String
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- containerSpecGcsPath String
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment Property Map
- The runtime environment for the Flex Template job.
- launchOptions Map<String>
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Map<String>
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings Map<String>
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update Boolean
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
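To make the shape concrete, the sketch below fills in a launch parameter for a new (non-update) Flex Template job; the job name, container spec path, and parameter keys are placeholders.

```typescript
// Launch parameter for a Flex Template job. `parameters` carries the
// template-defined runtime values; `launchOptions` must not be used for them.
const launchParameter = {
    jobName: "nightly-batch-job",
    containerSpecGcsPath: "gs://my-bucket/templates/my-template.json",
    parameters: { num_workers: "5" },
    environment: { tempLocation: "gs://my-bucket/temp" },
    update: false,  // true only when updating a running streaming job of the same name
};
```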
GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse, GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponseArgs                
- ContainerSpecGcsPath string
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
- The runtime environment for the Flex Template job.
- JobName string
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- LaunchOptions Dictionary<string, string>
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- Parameters Dictionary<string, string>
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- TransformNameMappings Dictionary<string, string>
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- Update bool
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- ContainerSpecGcsPath string
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- Environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
- The runtime environment for the Flex Template job.
- JobName string
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- LaunchOptions map[string]string
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- Parameters map[string]string
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- TransformNameMappings map[string]string
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- Update bool
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- containerSpecGcsPath String
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
- The runtime environment for the Flex Template job.
- jobName String
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- launchOptions Map<String,String>
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Map<String,String>
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings Map<String,String>
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update Boolean
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- containerSpecGcsPath string
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
- The runtime environment for the Flex Template job.
- jobName string
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- launchOptions {[key: string]: string}
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters {[key: string]: string}
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings {[key: string]: string}
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update boolean
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- container_spec_gcs_path str
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
- The runtime environment for the Flex Template job.
- job_name str
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- launch_options Mapping[str, str]
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Mapping[str, str]
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- transform_name_mappings Mapping[str, str]
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update bool
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- containerSpecGcsPath String
- Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment Property Map
- The runtime environment for the Flex Template job.
- jobName String
- The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- launchOptions Map<String>
- Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters Map<String>
- The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings Map<String>
- Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update Boolean
- Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest, GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestArgs              
- LaunchParameter Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
- Parameter to launch a job from a Flex Template.
- Location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
- If true, the request is validated but not actually executed. Defaults to false.
- LaunchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
- Parameter to launch a job from a Flex Template.
- Location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
- If true, the request is validated but not actually executed. Defaults to false.
- launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
- Parameter to launch a job from a Flex Template.
- location String
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
- If true, the request is validated but not actually executed. Defaults to false.
- launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
- Parameter to launch a job from a Flex Template.
- location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project string
- The ID of the Cloud Platform project that the job belongs to.
- validateOnly boolean
- If true, the request is validated but not actually executed. Defaults to false.
- launch_parameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameter
- Parameter to launch a job from a Flex Template.
- location str
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project str
- The ID of the Cloud Platform project that the job belongs to.
- validate_only bool
- If true, the request is validated but not actually executed. Defaults to false.
- launchParameter Property Map
- Parameter to launch a job from a Flex Template.
- location String
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
- If true, the request is validated but not actually executed. Defaults to false.
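Putting the pieces together, a complete Flex Template launch request might look like the sketch below (project ID, region, and paths are placeholders). In a pipeline definition, a request of this shape is what the workload carries for Dataflow Flex Template jobs.

```typescript
// An illustrative Flex Template launch request; all identifiers are placeholders.
const flexTemplateRequest = {
    project: "my-project",       // project the Dataflow job belongs to
    location: "us-central1",     // regional endpoint the request is sent to
    launchParameter: {
        jobName: "nightly-batch-job",
        containerSpecGcsPath: "gs://my-bucket/templates/my-template.json",
        parameters: { num_workers: "5" },
    },
    validateOnly: false,         // set true to validate without launching
};
```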
GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse, GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponseArgs                
- LaunchParameter Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
- Parameter to launch a job from a Flex Template.
- Location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
- If true, the request is validated but not actually executed. Defaults to false.
- LaunchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
- Parameter to launch a job from a Flex Template.
- Location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
- If true, the request is validated but not actually executed. Defaults to false.
- launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
- Parameter to launch a job from a Flex Template.
- location String
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
- If true, the request is validated but not actually executed. Defaults to false.
- launchParameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
- Parameter to launch a job from a Flex Template.
- location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project string
- The ID of the Cloud Platform project that the job belongs to.
- validateOnly boolean
- If true, the request is validated but not actually executed. Defaults to false.
- launch_parameter GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
- Parameter to launch a job from a Flex Template.
- location str
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project str
- The ID of the Cloud Platform project that the job belongs to.
- validate_only bool
- If true, the request is validated but not actually executed. Defaults to false.
- launchParameter Property Map
- Parameter to launch a job from a Flex Template.
- location String
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
- If true, the request is validated but not actually executed. Defaults to false.
GoogleCloudDatapipelinesV1LaunchTemplateParameters, GoogleCloudDatapipelinesV1LaunchTemplateParametersArgs            
- JobName string
- The job name to use for the created job.
- Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1RuntimeEnvironment
- The runtime environment for the job.
- Parameters Dictionary<string, string>
- The runtime parameters to pass to the job.
- TransformNameMapping Dictionary<string, string>
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- Update bool
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- JobName string
- The job name to use for the created job.
- Environment GoogleCloudDatapipelinesV1RuntimeEnvironment
- The runtime environment for the job.
- Parameters map[string]string
- The runtime parameters to pass to the job.
- TransformNameMapping map[string]string
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- Update bool
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- jobName String
- The job name to use for the created job.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironment
- The runtime environment for the job.
- parameters Map<String,String>
- The runtime parameters to pass to the job.
- transformNameMapping Map<String,String>
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update Boolean
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- jobName string
- The job name to use for the created job.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironment
- The runtime environment for the job.
- parameters {[key: string]: string}
- The runtime parameters to pass to the job.
- transformNameMapping {[key: string]: string}
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update boolean
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- job_name str
- The job name to use for the created job.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironment
- The runtime environment for the job.
- parameters Mapping[str, str]
- The runtime parameters to pass to the job.
- transform_name_mapping Mapping[str, str]
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update bool
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- jobName String
- The job name to use for the created job.
- environment Property Map
- The runtime environment for the job.
- parameters Map<String>
- The runtime parameters to pass to the job.
- transformNameMapping Map<String>
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update Boolean
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
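For comparison with the Flex Template variant above, here is a sketch of classic template launch parameters; the job name, parameter keys, and bucket are placeholders.

```typescript
// Launch parameters for a classic (non-Flex) Dataflow template. When `update`
// is true, the running pipeline named by `jobName` is replaced in place, and
// `transformNameMapping` can remap renamed transform prefixes.
const launchTemplateParameters = {
    jobName: "wordcount-nightly",
    parameters: { inputFile: "gs://my-bucket/input/*.txt" },  // template-defined
    environment: { tempLocation: "gs://my-bucket/temp", maxWorkers: 5 },
    update: false,
};
```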
GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse, GoogleCloudDatapipelinesV1LaunchTemplateParametersResponseArgs              
- Environment Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
- The runtime environment for the job.
- JobName string
- The job name to use for the created job.
- Parameters Dictionary<string, string>
- The runtime parameters to pass to the job.
- TransformNameMapping Dictionary<string, string>
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- Update bool
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- Environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
- The runtime environment for the job.
- JobName string
- The job name to use for the created job.
- Parameters map[string]string
- The runtime parameters to pass to the job.
- TransformNameMapping map[string]string
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- Update bool
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
- The runtime environment for the job.
- jobName String
- The job name to use for the created job.
- parameters Map<String,String>
- The runtime parameters to pass to the job.
- transformNameMapping Map<String,String>
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update Boolean
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
- The runtime environment for the job.
- jobName string
- The job name to use for the created job.
- parameters {[key: string]: string}
- The runtime parameters to pass to the job.
- transformNameMapping {[key: string]: string}
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update boolean
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- environment GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
- The runtime environment for the job.
- job_name str
- The job name to use for the created job.
- parameters Mapping[str, str]
- The runtime parameters to pass to the job.
- transform_name_mapping Mapping[str, str]
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update bool
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- environment Property Map
- The runtime environment for the job.
- jobName String
- The job name to use for the created job.
- parameters Map<String>
- The runtime parameters to pass to the job.
- transformNameMapping Map<String>
- Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update Boolean
- If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
GoogleCloudDatapipelinesV1LaunchTemplateRequest, GoogleCloudDatapipelinesV1LaunchTemplateRequestArgs            
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- GcsPath string
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- LaunchParameters Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateParameters
- The parameters of the template to launch. This should be part of the body of the POST request.
- Location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- ValidateOnly bool
- If true, the request is validated but not actually executed. Defaults to false.
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- GcsPath string
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- LaunchParameters GoogleCloudDatapipelinesV1LaunchTemplateParameters
- The parameters of the template to launch. This should be part of the body of the POST request.
- Location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- ValidateOnly bool
- If true, the request is validated but not actually executed. Defaults to false.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- gcsPath String
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParameters
- The parameters of the template to launch. This should be part of the body of the POST request.
- location String
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- validateOnly Boolean
- If true, the request is validated but not actually executed. Defaults to false.
- project string
- The ID of the Cloud Platform project that the job belongs to.
- gcsPath string
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParameters
- The parameters of the template to launch. This should be part of the body of the POST request.
- location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- validateOnly boolean
- If true, the request is validated but not actually executed. Defaults to false.
- project str
- The ID of the Cloud Platform project that the job belongs to.
- gcs_path str
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launch_parameters GoogleCloudDatapipelinesV1LaunchTemplateParameters
- The parameters of the template to launch. This should be part of the body of the POST request.
- location str
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- validate_only bool
- If true, the request is validated but not actually executed. Defaults to false.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- gcsPath String
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters Property Map
- The parameters of the template to launch. This should be part of the body of the POST request.
- location String
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- validateOnly Boolean
- If true, the request is validated but not actually executed. Defaults to false.
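A classic template launch request pairs the stored template's Cloud Storage path with the launch parameters above; every identifier in the sketch is a placeholder.

```typescript
// An illustrative classic-template launch request. `gcsPath` must point at a
// stored template and begin with gs://; `launchParameters` forms the POST body.
const launchTemplateRequest = {
    project: "my-project",
    location: "us-central1",
    gcsPath: "gs://my-bucket/templates/wordcount",
    launchParameters: {
        jobName: "wordcount-nightly",
        parameters: { inputFile: "gs://my-bucket/input/*.txt" },
    },
    validateOnly: false,
};
```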
GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse, GoogleCloudDatapipelinesV1LaunchTemplateRequestResponseArgs              
- GcsPath string
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- LaunchParameters Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
- The parameters of the template to launch. This should be part of the body of the POST request.
- Location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
- If true, the request is validated but not actually executed. Defaults to false.
- GcsPath string
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- LaunchParameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
- The parameters of the template to launch. This should be part of the body of the POST request.
- Location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- ValidateOnly bool
- If true, the request is validated but not actually executed. Defaults to false.
- gcsPath String
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
- The parameters of the template to launch. This should be part of the body of the POST request.
- location String
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
- If true, the request is validated but not actually executed. Defaults to false.
- gcsPath string
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
- The parameters of the template to launch. This should be part of the body of the POST request.
- location string
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- project string
- The ID of the Cloud Platform project that the job belongs to.
- validateOnly boolean
- If true, the request is validated but not actually executed. Defaults to false.
- gcs_path str
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launch_parameters GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
- The parameters of the template to launch. This should be part of the body of the POST request.
- location str
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- project str
- The ID of the Cloud Platform project that the job belongs to.
- validate_only bool
- If true, the request is validated but not actually executed. Defaults to false.
- gcsPath String
- A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters Property Map
- The parameters of the template to launch. This should be part of the body of the POST request.
- location String
- The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- validateOnly Boolean
- If true, the request is validated but not actually executed. Defaults to false.
GoogleCloudDatapipelinesV1RuntimeEnvironment, GoogleCloudDatapipelinesV1RuntimeEnvironmentArgs          
- AdditionalExperiments List<string>
- Additional experiment flags for the job.
- AdditionalUserLabels Dictionary<string, string>
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- BypassTempDirValidation bool
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- EnableStreamingEngine bool
- Whether to enable Streaming Engine for the job.
- IpConfiguration Pulumi.GoogleNative.Datapipelines.V1.GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- KmsKeyName string
- Name for the Cloud KMS key for the job. The key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- MachineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
- The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
- The email address of the service account to run the job as.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zoneandzoneare set,worker_zonetakes precedence.
- Zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- AdditionalExperiments []string
- Additional experiment flags for the job.
- AdditionalUserLabels map[string]string
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- BypassTempDirValidation bool
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- EnableStreamingEngine bool
- Whether to enable Streaming Engine for the job.
- IpConfiguration GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- KmsKeyName string
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- MachineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
- The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
- The email address of the service account to run the job as.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
- Additional experiment flags for the job.
- additionalUserLabels Map<String,String>
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation Boolean
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine Boolean
- Whether to enable Streaming Engine for the job.
- ipConfiguration GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- kmsKeyName String
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Integer
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Integer
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
- The email address of the service account to run the job as.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments string[]
- Additional experiment flags for the job.
- additionalUserLabels {[key: string]: string}
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation boolean
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine boolean
- Whether to enable Streaming Engine for the job.
- ipConfiguration GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- kmsKeyName string
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers number
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers number
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail string
- The email address of the service account to run the job as.
- subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional_experiments Sequence[str]
- Additional experiment flags for the job.
- additional_user_labels Mapping[str, str]
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypass_temp_dir_validation bool
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enable_streaming_engine bool
- Whether to enable Streaming Engine for the job.
- ip_configuration GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration
- Configuration for VM IPs.
- kms_key_name str
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machine_type str
- The machine type to use for the job. Defaults to the value from the template if not specified.
- max_workers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network str
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_workers int
- The initial number of Compute Engine instances for the job.
- service_account_email str
- The email address of the service account to run the job as.
- subnetwork str
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp_location str
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- worker_region str
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- worker_zone str
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone str
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
- Additional experiment flags for the job.
- additionalUserLabels Map<String>
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation Boolean
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine Boolean
- Whether to enable Streaming Engine for the job.
- ipConfiguration "WORKER_IP_UNSPECIFIED" | "WORKER_IP_PUBLIC" | "WORKER_IP_PRIVATE"
- Configuration for VM IPs.
- kmsKeyName String
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Number
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Number
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
- The email address of the service account to run the job as.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
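A runtime environment is not set on the Pipeline directly; it is nested as the environment field of the launch parameters inside a workload. A minimal TypeScript sketch, assuming placeholder bucket, service account, and sizing values:
// Sketch of a runtime environment object; every value below is a placeholder assumption.
const runtimeEnvironment = {
    tempLocation: "gs://my-bucket/tmp",   // temporary files; must be a valid gs:// URL
    machineType: "n1-standard-2",         // falls back to the template's value if omitted
    maxWorkers: 10,                       // allowed range is 1 to 1000
    serviceAccountEmail: "dataflow-runner@my-project.iam.gserviceaccount.com",
    workerRegion: "us-central1",          // mutually exclusive with workerZone
    additionalUserLabels: { team: "data-eng" },
};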
GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfiguration, GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfigurationArgs              
- WorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfigurationWorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfigurationWorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- GoogleCloudDatapipelinesV1RuntimeEnvironmentIpConfigurationWorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WORKER_IP_UNSPECIFIED
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WORKER_IP_PUBLIC
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WORKER_IP_PRIVATE
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- "WORKER_IP_UNSPECIFIED"
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- "WORKER_IP_PUBLIC"
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- "WORKER_IP_PRIVATE"
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
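In the Node.js SDK the value can be passed as one of the string literals above. A sketch; the subnetwork path is a placeholder, and pairing private IPs with an explicit subnetwork reflects a common setup, not a requirement stated on this page:
// Keep Dataflow workers off public IPs; string-literal enum form.
const privateIpEnvironment = {
    ipConfiguration: "WORKER_IP_PRIVATE",
    subnetwork: "regions/us-central1/subnetworks/my-subnet", // placeholder abbreviated subnetwork path
};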
GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse, GoogleCloudDatapipelinesV1RuntimeEnvironmentResponseArgs            
- AdditionalExperiments List<string>
- Additional experiment flags for the job.
- AdditionalUserLabels Dictionary<string, string>
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- BypassTempDirValidation bool
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- EnableStreamingEngine bool
- Whether to enable Streaming Engine for the job.
- IpConfiguration string
- Configuration for VM IPs.
- KmsKeyName string
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- MachineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
- The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
- The email address of the service account to run the job as.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- AdditionalExperiments []string
- Additional experiment flags for the job.
- AdditionalUserLabels map[string]string
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- BypassTempDirValidation bool
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- EnableStreamingEngine bool
- Whether to enable Streaming Engine for the job.
- IpConfiguration string
- Configuration for VM IPs.
- KmsKeyName string
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- MachineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers int
- The initial number of Compute Engine instances for the job.
- ServiceAccountEmail string
- The email address of the service account to run the job as.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
- Additional experiment flags for the job.
- additionalUserLabels Map<String,String>
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation Boolean
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine Boolean
- Whether to enable Streaming Engine for the job.
- ipConfiguration String
- Configuration for VM IPs.
- kmsKeyName String
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Integer
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Integer
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
- The email address of the service account to run the job as.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments string[]
- Additional experiment flags for the job.
- additionalUserLabels {[key: string]: string}
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation boolean
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine boolean
- Whether to enable Streaming Engine for the job.
- ipConfiguration string
- Configuration for VM IPs.
- kmsKeyName string
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType string
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers number
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers number
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail string
- The email address of the service account to run the job as.
- subnetwork string
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation string
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone string
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional_experiments Sequence[str]
- Additional experiment flags for the job.
- additional_user_labels Mapping[str, str]
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypass_temp_dir_validation bool
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enable_streaming_engine bool
- Whether to enable Streaming Engine for the job.
- ip_configuration str
- Configuration for VM IPs.
- kms_key_name str
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machine_type str
- The machine type to use for the job. Defaults to the value from the template if not specified.
- max_workers int
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network str
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_workers int
- The initial number of Compute Engine instances for the job.
- service_account_email str
- The email address of the service account to run the job as.
- subnetwork str
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp_location str
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- worker_region str
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- worker_zone str
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone str
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments List<String>
- Additional experiment flags for the job.
- additionalUserLabels Map<String>
- Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation Boolean
- Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine Boolean
- Whether to enable Streaming Engine for the job.
- ipConfiguration String
- Configuration for VM IPs.
- kmsKeyName String
- Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType String
- The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers Number
- The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers Number
- The initial number of Compute Engine instances for the job.
- serviceAccountEmail String
- The email address of the service account to run the job as.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation String
- The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone String
- The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
GoogleCloudDatapipelinesV1ScheduleSpec, GoogleCloudDatapipelinesV1ScheduleSpecArgs
- Schedule string
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- TimeZone string
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- Schedule string
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- TimeZone string
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- schedule String
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone String
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- schedule string
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone string
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- schedule str
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- time_zone str
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- schedule String
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone String
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
GoogleCloudDatapipelinesV1ScheduleSpecResponse, GoogleCloudDatapipelinesV1ScheduleSpecResponseArgs
- NextJobTime string
- When the next Scheduler job is going to run.
- Schedule string
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- TimeZone string
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- NextJobTime string
- When the next Scheduler job is going to run.
- Schedule string
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- TimeZone string
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- nextJobTime String
- When the next Scheduler job is going to run.
- schedule String
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone String
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- nextJobTime string
- When the next Scheduler job is going to run.
- schedule string
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone string
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- next_job_time str
- When the next Scheduler job is going to run.
- schedule str
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- time_zone str
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- nextJobTime String
- When the next Scheduler job is going to run.
- schedule String
- Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone String
- Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
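For a batch pipeline, the schedule spec is what the internal scheduler runs on. A small TypeScript sketch, assuming a nightly cadence (both values are placeholders):
// Run every day at 02:00 in the given timezone.
const scheduleInfo = {
    schedule: "0 2 * * *",        // Unix-cron: minute hour day-of-month month day-of-week
    timeZone: "America/New_York", // Cloud Scheduler timezone ID; UTC is assumed if empty
};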
GoogleCloudDatapipelinesV1Workload, GoogleCloudDatapipelinesV1WorkloadArgs        
- DataflowFlexTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- DataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflow_flex_template_request GoogleCloudDatapipelinesV1LaunchFlexTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflow_launch_template_request GoogleCloudDatapipelinesV1LaunchTemplateRequest
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest Property Map
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest Property Map
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
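A workload wraps one of the two launch requests: flex templates go through dataflowFlexTemplateRequest, classic templates through dataflowLaunchTemplateRequest. A TypeScript sketch of the classic form; project, template path, and parameters are placeholders, and the launchParameters field names (jobName, parameters, environment) follow the Dataflow launch-template parameters type:
// Workload sketch using the standard (non-flex) launch API; all values are placeholders.
const workload = {
    dataflowLaunchTemplateRequest: {
        project: "my-project",
        location: "us-central1",
        gcsPath: "gs://dataflow-templates/latest/Word_Count", // must begin with gs://
        launchParameters: {
            jobName: "nightly-word-count",
            parameters: { inputFile: "gs://my-bucket/in.txt", output: "gs://my-bucket/out" },
            environment: { tempLocation: "gs://my-bucket/tmp", maxWorkers: 5 },
        },
    },
};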
GoogleCloudDatapipelinesV1WorkloadResponse, GoogleCloudDatapipelinesV1WorkloadResponseArgs          
- DataflowFlexTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- DataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflow_flex_template_request GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflow_launch_template_request GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest Property Map
- Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest Property Map
- Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
PipelineState, PipelineStateArgs    
- StateUnspecified
- STATE_UNSPECIFIED: The pipeline state isn't specified.
- StateResuming
- STATE_RESUMING: The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- StateActive
- STATE_ACTIVE: The pipeline is actively running.
- StateStopping
- STATE_STOPPING: The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- StateArchived
- STATE_ARCHIVED: The pipeline has been stopped. This is a terminal state and cannot be undone.
- StatePaused
- STATE_PAUSED: The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- PipelineStateStateUnspecified
- STATE_UNSPECIFIED: The pipeline state isn't specified.
- PipelineStateStateResuming
- STATE_RESUMING: The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- PipelineStateStateActive
- STATE_ACTIVE: The pipeline is actively running.
- PipelineStateStateStopping
- STATE_STOPPING: The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- PipelineStateStateArchived
- STATE_ARCHIVED: The pipeline has been stopped. This is a terminal state and cannot be undone.
- PipelineStateStatePaused
- STATE_PAUSED: The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- StateUnspecified
- STATE_UNSPECIFIED: The pipeline state isn't specified.
- StateResuming
- STATE_RESUMING: The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- StateActive
- STATE_ACTIVE: The pipeline is actively running.
- StateStopping
- STATE_STOPPING: The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- StateArchived
- STATE_ARCHIVED: The pipeline has been stopped. This is a terminal state and cannot be undone.
- StatePaused
- STATE_PAUSED: The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- StateUnspecified
- STATE_UNSPECIFIED: The pipeline state isn't specified.
- StateResuming
- STATE_RESUMING: The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- StateActive
- STATE_ACTIVE: The pipeline is actively running.
- StateStopping
- STATE_STOPPING: The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- StateArchived
- STATE_ARCHIVED: The pipeline has been stopped. This is a terminal state and cannot be undone.
- StatePaused
- STATE_PAUSED: The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- STATE_UNSPECIFIED
- STATE_UNSPECIFIED: The pipeline state isn't specified.
- STATE_RESUMING
- STATE_RESUMING: The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- STATE_ACTIVE
- STATE_ACTIVE: The pipeline is actively running.
- STATE_STOPPING
- STATE_STOPPING: The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- STATE_ARCHIVED
- STATE_ARCHIVED: The pipeline has been stopped. This is a terminal state and cannot be undone.
- STATE_PAUSED
- STATE_PAUSED: The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
- "STATE_UNSPECIFIED"
- STATE_UNSPECIFIED: The pipeline state isn't specified.
- "STATE_RESUMING"
- STATE_RESUMING: The pipeline is getting started or resumed. When finished, the pipeline state will be 'PIPELINE_STATE_ACTIVE'.
- "STATE_ACTIVE"
- STATE_ACTIVE: The pipeline is actively running.
- "STATE_STOPPING"
- STATE_STOPPING: The pipeline is in the process of stopping. When finished, the pipeline state will be 'PIPELINE_STATE_ARCHIVED'.
- "STATE_ARCHIVED"
- STATE_ARCHIVED: The pipeline has been stopped. This is a terminal state and cannot be undone.
- "STATE_PAUSED"
- STATE_PAUSED: The pipeline is paused. This is a non-terminal state. When the pipeline is paused, it will hold processing jobs, but can be resumed later. For a batch pipeline, this means pausing the scheduler job. For a streaming pipeline, creating a job snapshot to resume from will give the same effect.
PipelineType, PipelineTypeArgs    
- PipelineTypeUnspecified
- PIPELINE_TYPE_UNSPECIFIED: The pipeline type isn't specified.
- PipelineTypeBatch
- PIPELINE_TYPE_BATCH: A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PipelineTypeStreaming
- PIPELINE_TYPE_STREAMING: A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- PipelineTypePipelineTypeUnspecified
- PIPELINE_TYPE_UNSPECIFIED: The pipeline type isn't specified.
- PipelineTypePipelineTypeBatch
- PIPELINE_TYPE_BATCH: A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PipelineTypePipelineTypeStreaming
- PIPELINE_TYPE_STREAMING: A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- PipelineTypeUnspecified
- PIPELINE_TYPE_UNSPECIFIED: The pipeline type isn't specified.
- PipelineTypeBatch
- PIPELINE_TYPE_BATCH: A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PipelineTypeStreaming
- PIPELINE_TYPE_STREAMING: A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- PipelineTypeUnspecified
- PIPELINE_TYPE_UNSPECIFIED: The pipeline type isn't specified.
- PipelineTypeBatch
- PIPELINE_TYPE_BATCH: A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PipelineTypeStreaming
- PIPELINE_TYPE_STREAMING: A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- PIPELINE_TYPE_UNSPECIFIED
- PIPELINE_TYPE_UNSPECIFIED: The pipeline type isn't specified.
- PIPELINE_TYPE_BATCH
- PIPELINE_TYPE_BATCH: A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- PIPELINE_TYPE_STREAMING
- PIPELINE_TYPE_STREAMING: A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
- "PIPELINE_TYPE_UNSPECIFIED"
- PIPELINE_TYPE_UNSPECIFIED: The pipeline type isn't specified.
- "PIPELINE_TYPE_BATCH"
- PIPELINE_TYPE_BATCH: A batch pipeline. It runs jobs on a specific schedule, and each job will automatically terminate once execution is finished.
- "PIPELINE_TYPE_STREAMING"
- PIPELINE_TYPE_STREAMING: A streaming pipeline. The underlying job is continuously running until it is manually terminated by the user. This type of pipeline doesn't have a schedule to run on, and the linked job gets created when the pipeline is created.
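Putting the enum values to work, here is a complete, minimal batch pipeline sketch in TypeScript using the string-literal enum forms listed above. Project, bucket, and template values are placeholder assumptions, not values from this page.
import * as google_native from "@pulumi/google-native";

// A nightly batch pipeline; every name and path below is a placeholder.
const nightly = new google_native.datapipelines.v1.Pipeline("nightly-word-count", {
    displayName: "nightly-word-count",
    type: "PIPELINE_TYPE_BATCH", // batch pipelines run on the schedule below
    state: "STATE_ACTIVE",
    location: "us-central1",
    scheduleInfo: {
        schedule: "0 2 * * *",
        timeZone: "America/New_York",
    },
    workload: {
        dataflowLaunchTemplateRequest: {
            project: "my-project",
            location: "us-central1",
            gcsPath: "gs://dataflow-templates/latest/Word_Count",
            launchParameters: {
                jobName: "nightly-word-count",
                parameters: { inputFile: "gs://my-bucket/in.txt", output: "gs://my-bucket/out" },
                environment: { tempLocation: "gs://my-bucket/tmp", maxWorkers: 5 },
            },
        },
    },
});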
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0