Cloud Dataproc V1 API - Class Google::Cloud::Dataproc::V1::SparkJob (v1.6.0)
Reference documentation and code samples for the Cloud Dataproc V1 API class Google::Cloud::Dataproc::V1::SparkJob.
A Dataproc job for running Apache Spark applications on YARN.
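For orientation, here is a minimal sketch of building this message with the generated Ruby constructor; the bucket and jar names are hypothetical placeholders. When submitting through the JobController client, this message is typically attached to a Google::Cloud::Dataproc::V1::Job via its spark_job field.

```ruby
require "google/cloud/dataproc/v1"

# Minimal sketch: a SparkJob whose driver is the main class packaged in a jar
# staged on Cloud Storage. The URI below is a hypothetical placeholder.
spark_job = Google::Cloud::Dataproc::V1::SparkJob.new(
  main_jar_file_uri: "gs://my-bucket/jars/my-spark-app.jar"
)
```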
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#archive_uris
def archive_uris() -> ::Array<::String>
Returns
- (::Array<::String>) — Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
#archive_uris=
def archive_uris=(value) -> ::Array<::String>
Parameter
- value (::Array<::String>) — Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Returns
- (::Array<::String>) — Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
#args
def args() -> ::Array<::String>
Returns
- (::Array<::String>) — Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
#args=
def args=(value) -> ::Array<::String>
Parameter
- value (::Array<::String>) — Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
Returns
- (::Array<::String>) — Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
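Because --conf-style flags can collide with job properties, application inputs belong in #args while engine settings belong in #properties. A sketch of that split, with hypothetical class and bucket names:

```ruby
require "google/cloud/dataproc/v1"

# Sketch with hypothetical names: pass only application arguments through
# #args; put engine settings such as spark.executor.memory in #properties
# instead of sending "--conf" flags through #args.
job = Google::Cloud::Dataproc::V1::SparkJob.new(
  main_class:    "org.example.WordCount",
  jar_file_uris: ["gs://my-bucket/jars/wordcount.jar"],
  args:          ["gs://my-bucket/input/", "gs://my-bucket/output/"],
  properties:    { "spark.executor.memory" => "4g" }
)
```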
#file_uris
def file_uris() -> ::Array<::String>
Returns
- (::Array<::String>) — Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
#file_uris=
def file_uris=(value) -> ::Array<::String>
Parameter
- value (::Array<::String>) — Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
Returns
- (::Array<::String>) — Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
#jar_file_uris
def jar_file_uris() -> ::Array<::String>
Returns
- (::Array<::String>) — Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
#jar_file_uris=
def jar_file_uris=(value) -> ::Array<::String>
Parameter
- value (::Array<::String>) — Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
Returns
- (::Array<::String>) — Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
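The three URI list fields (#jar_file_uris, #file_uris, #archive_uris) play distinct staging roles. A sketch, with all URIs hypothetical:

```ruby
require "google/cloud/dataproc/v1"

job = Google::Cloud::Dataproc::V1::SparkJob.new(
  main_jar_file_uri: "gs://my-bucket/jars/app.jar" # hypothetical URI
)

# Extra jars go on the driver/task CLASSPATHs; plain files land in each
# executor's working directory; archives are extracted there as well.
job.jar_file_uris = ["gs://my-bucket/libs/extra-lib.jar"]
job.file_uris     = ["gs://my-bucket/conf/lookup.csv"]
job.archive_uris  = ["gs://my-bucket/bundles/resources.tar.gz"]
```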
#logging_config
def logging_config() -> ::Google::Cloud::Dataproc::V1::LoggingConfig
Returns
- (::Google::Cloud::Dataproc::V1::LoggingConfig) — Optional. The runtime log config for job execution.
#logging_config=
def logging_config=(value) -> ::Google::Cloud::Dataproc::V1::LoggingConfig
Parameter
- value (::Google::Cloud::Dataproc::V1::LoggingConfig) — Optional. The runtime log config for job execution.
Returns
- (::Google::Cloud::Dataproc::V1::LoggingConfig) — Optional. The runtime log config for job execution.
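LoggingConfig carries per-package driver log levels. A sketch, assuming the usual Ruby protobuf conventions of hash-valued map fields and symbol-valued enums; the package names chosen are illustrative:

```ruby
require "google/cloud/dataproc/v1"

job = Google::Cloud::Dataproc::V1::SparkJob.new

# driver_log_levels maps logger/package names to LoggingConfig::Level values;
# the packages and levels here are illustrative.
job.logging_config = Google::Cloud::Dataproc::V1::LoggingConfig.new(
  driver_log_levels: { "root" => :INFO, "org.apache.spark" => :DEBUG }
)
```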
#main_class
def main_class() -> ::String
Returns
- (::String) — The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris. Note: The following fields are mutually exclusive: main_class, main_jar_file_uri. If a field in that set is populated, all other fields in the set will automatically be cleared.
#main_class=
def main_class=(value) -> ::String
Parameter
- value (::String) — The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris. Note: The following fields are mutually exclusive: main_class, main_jar_file_uri. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::String) — The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris. Note: The following fields are mutually exclusive: main_class, main_jar_file_uri. If a field in that set is populated, all other fields in the set will automatically be cleared.
#main_jar_file_uri
def main_jar_file_uri() -> ::String
Returns
- (::String) — The HCFS URI of the jar file that contains the main class. Note: The following fields are mutually exclusive: main_jar_file_uri, main_class. If a field in that set is populated, all other fields in the set will automatically be cleared.
#main_jar_file_uri=
def main_jar_file_uri=(value) -> ::String
Parameter
- value (::String) — The HCFS URI of the jar file that contains the main class. Note: The following fields are mutually exclusive: main_jar_file_uri, main_class. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::String) — The HCFS URI of the jar file that contains the main class. Note: The following fields are mutually exclusive: main_jar_file_uri, main_class. If a field in that set is populated, all other fields in the set will automatically be cleared.
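Since main_class and main_jar_file_uri are mutually exclusive, populating one clears the other. A quick sketch of that behavior, with hypothetical names:

```ruby
require "google/cloud/dataproc/v1"

job = Google::Cloud::Dataproc::V1::SparkJob.new
job.main_class = "org.example.Main"                   # hypothetical class
job.main_jar_file_uri = "gs://my-bucket/jars/app.jar" # hypothetical URI

# The two fields are mutually exclusive, so the earlier assignment is gone:
job.main_class # => "" (cleared when main_jar_file_uri was populated)
```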
#properties
def properties() -> ::Google::Protobuf::Map{::String => ::String}
Returns
- (::Google::Protobuf::Map{::String => ::String}) — Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
#properties=
def properties=(value) -> ::Google::Protobuf::Map{::String => ::String}
Parameter
- value (::Google::Protobuf::Map{::String => ::String}) — Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
Returns
- (::Google::Protobuf::Map{::String => ::String}) — Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
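The returned Google::Protobuf::Map supports Hash-style access, so properties can be written key by key. A sketch; the Spark settings shown are illustrative:

```ruby
require "google/cloud/dataproc/v1"

job = Google::Cloud::Dataproc::V1::SparkJob.new

# Google::Protobuf::Map supports Hash-style writes; these Spark settings are
# illustrative and may be overwritten if they conflict with values set by the
# Dataproc API.
job.properties["spark.executor.memory"] = "4g"
job.properties["spark.logConf"]         = "true"
```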