Cloud Dataproc V1 API - Class Google::Cloud::Dataproc::V1::Job (v1.4.0)

Reference documentation and code samples for the Cloud Dataproc V1 API class Google::Cloud::Dataproc::V1::Job.

A Dataproc job resource.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#done

def done() -> ::Boolean
Returns
  • (::Boolean) — Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
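Because #done distinguishes "still running" from "terminal", a common pattern is to re-fetch the job until it reports done and only then read the status. The sketch below models this in plain Ruby: `JobStub` and `fetch_job` are illustrative stand-ins for a Job message and a real get_job call, not part of the gem.

```ruby
# Hypothetical stand-in for a fetched Job message; only the #done flag
# and a status state string are modeled here.
JobStub = Struct.new(:done, :state)

# Re-fetch the job until #done is true, then return the terminal state.
# `fetch_job` is a placeholder for a real jobs-client call.
def wait_until_done(fetch_job, interval: 0)
  loop do
    job = fetch_job.call
    return job.state if job.done
    sleep interval
  end
end

# Simulate two polls: first still running, then completed.
states = [JobStub.new(false, "RUNNING"), JobStub.new(true, "DONE")]
final = wait_until_done(-> { states.shift }, interval: 0)
```

With a real client you would replace the lambda with a call that re-reads the job resource on each iteration and use a sensible polling interval.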

#driver_control_files_uri

def driver_control_files_uri() -> ::String
Returns
  • (::String) — Output only. If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as driver_output_uri.

#driver_output_resource_uri

def driver_output_resource_uri() -> ::String
Returns
  • (::String) — Output only. A URI pointing to the location of the stdout of the job's driver program.

#driver_scheduling_config

def driver_scheduling_config() -> ::Google::Cloud::Dataproc::V1::DriverSchedulingConfig
Returns
  • (::Google::Cloud::Dataproc::V1::DriverSchedulingConfig)

#driver_scheduling_config=

def driver_scheduling_config=(value) -> ::Google::Cloud::Dataproc::V1::DriverSchedulingConfig
Parameter
  • value (::Google::Cloud::Dataproc::V1::DriverSchedulingConfig)
Returns
  • (::Google::Cloud::Dataproc::V1::DriverSchedulingConfig)

#flink_job

def flink_job() -> ::Google::Cloud::Dataproc::V1::FlinkJob
Returns
  • (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.

    Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#flink_job=

def flink_job=(value) -> ::Google::Cloud::Dataproc::V1::FlinkJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.

    Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.

    Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
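The job-type fields described above behave like a protobuf oneof: writing any one of them clears whichever other member was set. The following plain-Ruby sketch illustrates that clearing behavior; `JOB_TYPE_FIELDS` and `JobOneofSketch` are illustrative names, not part of the gem.

```ruby
# The mutually exclusive job-type fields, as listed in this reference.
JOB_TYPE_FIELDS = %i[
  hadoop_job spark_job pyspark_job hive_job pig_job
  spark_r_job spark_sql_job presto_job trino_job flink_job
].freeze

# Minimal model of protobuf oneof semantics: at most one field of the
# set holds a value at any time.
class JobOneofSketch
  def initialize
    @fields = {}
  end

  # Setting any job-type field discards the previously set member.
  def set(field, value)
    raise ArgumentError, "unknown field #{field}" unless JOB_TYPE_FIELDS.include?(field)
    @fields = { field => value }
  end

  def get(field)
    @fields[field]
  end
end

job = JobOneofSketch.new
job.set(:spark_job, "spark config")
job.set(:flink_job, "flink config")  # clears :spark_job
```

With the real message class, assigning `job.flink_job = ...` after `job.spark_job = ...` has the same effect: the Spark field reads back as unset.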

#hadoop_job

def hadoop_job() -> ::Google::Cloud::Dataproc::V1::HadoopJob
Returns
  • (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.

    Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#hadoop_job=

def hadoop_job=(value) -> ::Google::Cloud::Dataproc::V1::HadoopJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.

    Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.

    Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#hive_job

def hive_job() -> ::Google::Cloud::Dataproc::V1::HiveJob
Returns
  • (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.

    Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#hive_job=

def hive_job=(value) -> ::Google::Cloud::Dataproc::V1::HiveJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.

    Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.

    Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#job_uuid

def job_uuid() -> ::String
Returns
  • (::String) — Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time.

#labels

def labels() -> ::Google::Protobuf::Map{::String => ::String}
Returns
  • (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.

#labels=

def labels=(value) -> ::Google::Protobuf::Map{::String => ::String}
Parameter
  • value (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.
Returns
  • (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.
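The label constraints above (keys of 1 to 63 characters conforming to RFC 1035, optionally empty values, at most 32 labels) can be checked client-side before submitting a job. This validator is a sketch using a simplified RFC 1035 label pattern (lowercase letter first, then letters, digits, or hyphens, ending in a letter or digit); it is not part of the gem.

```ruby
# Simplified RFC 1035 label pattern: a lowercase letter, then letters,
# digits, or hyphens, ending in a letter or digit.
RFC1035 = /\A[a-z]([-a-z0-9]*[a-z0-9])?\z/

# Returns true when the hash satisfies the documented job-label rules:
# no more than 32 entries, keys 1-63 chars matching RFC 1035, and
# values either empty or 1-63 chars matching RFC 1035.
def valid_labels?(labels)
  return false if labels.size > 32
  labels.all? do |key, value|
    key.length.between?(1, 63) && key.match?(RFC1035) &&
      (value.empty? || (value.length.between?(1, 63) && value.match?(RFC1035)))
  end
end
```

Validating up front gives a clearer error than waiting for the service to reject the submission.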

#pig_job

def pig_job() -> ::Google::Cloud::Dataproc::V1::PigJob
Returns
  • (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.

    Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#pig_job=

def pig_job=(value) -> ::Google::Cloud::Dataproc::V1::PigJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.

    Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.

    Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#placement

def placement() -> ::Google::Cloud::Dataproc::V1::JobPlacement
Returns
  • (::Google::Cloud::Dataproc::V1::JobPlacement)

#placement=

def placement=(value) -> ::Google::Cloud::Dataproc::V1::JobPlacement
Parameter
  • value (::Google::Cloud::Dataproc::V1::JobPlacement)
Returns
  • (::Google::Cloud::Dataproc::V1::JobPlacement)

#presto_job

def presto_job() -> ::Google::Cloud::Dataproc::V1::PrestoJob
Returns
  • (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.

    Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#presto_job=

def presto_job=(value) -> ::Google::Cloud::Dataproc::V1::PrestoJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.

    Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.

    Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#pyspark_job

def pyspark_job() -> ::Google::Cloud::Dataproc::V1::PySparkJob
Returns
  • (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.

    Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#pyspark_job=

def pyspark_job=(value) -> ::Google::Cloud::Dataproc::V1::PySparkJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.

    Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.

    Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#reference

def reference() -> ::Google::Cloud::Dataproc::V1::JobReference
Returns
  • (::Google::Cloud::Dataproc::V1::JobReference) — Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.

#reference=

def reference=(value) -> ::Google::Cloud::Dataproc::V1::JobReference
Parameter
  • value (::Google::Cloud::Dataproc::V1::JobReference) — Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
Returns
  • (::Google::Cloud::Dataproc::V1::JobReference) — Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
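As noted above, the reference can be used to derive the job's REST path. The helper below assumes the standard Dataproc resource layout `projects/{project_id}/regions/{region}/jobs/{job_id}`; since JobReference carries the project_id and job_id but not the region, the region is passed in separately, and the sample values are hypothetical.

```ruby
# Builds the REST path of a job resource, assuming the standard
# projects/{project_id}/regions/{region}/jobs/{job_id} layout.
# project_id and job_id would come from the job's reference; the
# region comes from the request context.
def job_rest_path(project_id:, region:, job_id:)
  "projects/#{project_id}/regions/#{region}/jobs/#{job_id}"
end

# Hypothetical sample values for illustration only.
path = job_rest_path(project_id: "my-project", region: "us-central1", job_id: "job-123")
```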

#scheduling

def scheduling() -> ::Google::Cloud::Dataproc::V1::JobScheduling
Returns
  • (::Google::Cloud::Dataproc::V1::JobScheduling)

#scheduling=

def scheduling=(value) -> ::Google::Cloud::Dataproc::V1::JobScheduling
Parameter
  • value (::Google::Cloud::Dataproc::V1::JobScheduling)
Returns
  • (::Google::Cloud::Dataproc::V1::JobScheduling)

#spark_job

def spark_job() -> ::Google::Cloud::Dataproc::V1::SparkJob
Returns
  • (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.

    Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_job=

def spark_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.

    Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.

    Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_r_job

def spark_r_job() -> ::Google::Cloud::Dataproc::V1::SparkRJob
Returns
  • (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.

    Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_r_job=

def spark_r_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkRJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.

    Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.

    Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_sql_job

def spark_sql_job() -> ::Google::Cloud::Dataproc::V1::SparkSqlJob
Returns
  • (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.

    Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_sql_job=

def spark_sql_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkSqlJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.

    Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.

    Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#status

def status() -> ::Google::Cloud::Dataproc::V1::JobStatus
Returns
  • (::Google::Cloud::Dataproc::V1::JobStatus)

#status_history

def status_history() -> ::Array<::Google::Cloud::Dataproc::V1::JobStatus>
Returns
  • (::Array<::Google::Cloud::Dataproc::V1::JobStatus>)

#trino_job

def trino_job() -> ::Google::Cloud::Dataproc::V1::TrinoJob
Returns
  • (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.

    Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#trino_job=

def trino_job=(value) -> ::Google::Cloud::Dataproc::V1::TrinoJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.

    Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.

    Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#yarn_applications

def yarn_applications() -> ::Array<::Google::Cloud::Dataproc::V1::YarnApplication>
Returns
  • (::Array<::Google::Cloud::Dataproc::V1::YarnApplication>) — Output only. The collection of YARN applications spun up by this job.

    Beta Feature: This report is available for testing purposes only. It might be changed before final release.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025年10月30日 UTC.