tensorflow::ClientSession

#include <client_session.h>

A ClientSession object lets the caller drive the evaluation of the TensorFlow graph constructed with the C++ API.

Summary

Example:

Scope root = Scope::NewRootScope();
auto a = Placeholder(root, DT_INT32);
auto c = Add(root, a, {41});
ClientSession session(root);
std::vector<Tensor> outputs;
Status s = session.Run({{a, {1}}}, {c}, &outputs);
if (!s.ok()) { ... }

Constructors and Destructors

ClientSession(const Scope & scope, const string & target)
Create a new session to evaluate the graph contained in scope by connecting to the TensorFlow runtime specified by target.
ClientSession(const Scope & scope)
Same as above, but use the empty string ("") as the target specification.
ClientSession(const Scope & scope, const SessionOptions & session_options)
Create a new session, configuring it with session_options.
~ClientSession()

Public types

CallableHandle typedef
int64_t
A handle to a subgraph, created with ClientSession::MakeCallable() .
FeedType typedef
std::unordered_map< Output, Input::Initializer, OutputHash >
A data type to represent feeds to a Run call.

Public functions

MakeCallable(const CallableOptions & callable_options, CallableHandle *out_handle)
Status
Creates a handle for invoking the subgraph defined by callable_options.
ReleaseCallable(CallableHandle handle)
Status
Releases resources associated with the given handle in this session.
Run(const std::vector< Output > & fetch_outputs, std::vector< Tensor > *outputs) const
Status
Evaluate the tensors in fetch_outputs.
Run(const FeedType & inputs, const std::vector< Output > & fetch_outputs, std::vector< Tensor > *outputs) const
Status
Same as above, but use the mapping in inputs as feeds.
Run(const FeedType & inputs, const std::vector< Output > & fetch_outputs, const std::vector< Operation > & run_outputs, std::vector< Tensor > *outputs) const
Status
Same as above. Additionally runs the operations in run_outputs.
Run(const RunOptions & run_options, const FeedType & inputs, const std::vector< Output > & fetch_outputs, const std::vector< Operation > & run_outputs, std::vector< Tensor > *outputs, RunMetadata *run_metadata) const
Status
Use run_options to turn on performance profiling.
Run(const RunOptions & run_options, const FeedType & inputs, const std::vector< Output > & fetch_outputs, const std::vector< Operation > & run_outputs, std::vector< Tensor > *outputs, RunMetadata *run_metadata, const thread::ThreadPoolOptions & threadpool_options) const
Status
Same as above.
RunCallable(CallableHandle handle, const std::vector< Tensor > & feed_tensors, std::vector< Tensor > *fetch_tensors, RunMetadata *run_metadata)
Status
Invokes the subgraph named by handle with the given options and input tensors.
RunCallable(CallableHandle handle, const std::vector< Tensor > & feed_tensors, std::vector< Tensor > *fetch_tensors, RunMetadata *run_metadata, const thread::ThreadPoolOptions & options)
Status
Invokes the subgraph named by handle with the given options and input tensors.

Public types

CallableHandle

int64_t CallableHandle

A handle to a subgraph, created with ClientSession::MakeCallable() .

FeedType

std::unordered_map< Output, Input::Initializer, OutputHash > FeedType

A data type to represent feeds to a Run call.

This is a map of Output objects returned by op-constructors to the value to feed them with. See Input::Initializer for details on what can be used as feed values.
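The following sketch (not from the original page; the placeholders x, y, and the op sum are introduced here for illustration) shows two equivalent ways to supply feeds: an inline brace-initializer, and an explicitly built FeedType map.

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"

using namespace tensorflow;
using namespace tensorflow::ops;

Scope root = Scope::NewRootScope();
auto x = Placeholder(root, DT_FLOAT);
auto y = Placeholder(root, DT_FLOAT);
auto sum = Add(root, x, y);

ClientSession session(root);
std::vector<Tensor> outputs;

// Inline feeds, as in the example at the top of this page.
Status s1 = session.Run({{x, 1.0f}, {y, 2.0f}}, {sum}, &outputs);

// The same feeds built as an explicit FeedType map.
ClientSession::FeedType feeds;
feeds.emplace(x, Input::Initializer(1.0f));
feeds.emplace(y, Input::Initializer(2.0f));
Status s2 = session.Run(feeds, {sum}, &outputs);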

Public functions

ClientSession

ClientSession(
  const Scope & scope,
  const string & target
)

Create a new session to evaluate the graph contained in scope by connecting to the TensorFlow runtime specified by target.
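For example, a session can be pointed at a remote TensorFlow server by passing its address as the target; the address below is illustrative only, and an empty target runs in-process.

Scope root = Scope::NewRootScope();
auto c = Const(root, 42);
// Connects to a TensorFlow server listening at this (hypothetical) address.
ClientSession session(root, "grpc://localhost:2222");
std::vector<Tensor> outputs;
Status s = session.Run({c}, &outputs);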

ClientSession

ClientSession(
  const Scope & scope
)

Same as above, but use the empty string ("") as the target specification.

ClientSession

ClientSession(
  const Scope & scope,
  const SessionOptions & session_options
)

Create a new session, configuring it with session_options.
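A minimal sketch of configuring the session through SessionOptions, here limiting parallelism via the embedded ConfigProto (the thread counts are illustrative):

SessionOptions session_options;
session_options.config.set_intra_op_parallelism_threads(1);
session_options.config.set_inter_op_parallelism_threads(1);

Scope root = Scope::NewRootScope();
auto c = Const(root, {1, 2, 3});
ClientSession session(root, session_options);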

MakeCallable

Status MakeCallable(
  const CallableOptions & callable_options,
  CallableHandle *out_handle
)

Creates a handle for invoking the subgraph defined by callable_options.

NOTE: This API is still experimental and may change.
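A minimal sketch, reusing a, c, and session from the example at the top of this page: the feeds and fetches of the subgraph are declared by tensor name in CallableOptions before the handle is created.

CallableOptions callable_options;
callable_options.add_feed(a.name());   // feed the placeholder by tensor name
callable_options.add_fetch(c.name());  // fetch the Add output by tensor name
ClientSession::CallableHandle handle;
Status s = session.MakeCallable(callable_options, &handle);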

ReleaseCallable

Status ReleaseCallable(
  CallableHandle handle
)

Releases resources associated with the given handle in this session.

NOTE: This API is still experimental and may change.

Run

Status Run(
  const std::vector< Output > & fetch_outputs,
  std::vector< Tensor > *outputs
) const

Evaluate the tensors in fetch_outputs.

The values are returned as Tensor objects in outputs. The number and order of outputs will match fetch_outputs.
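If the fetched tensors do not depend on any feeds, this two-argument overload is all that is needed, as in this sketch:

Scope root = Scope::NewRootScope();
auto c = Add(root, {1, 2}, {3, 4});
ClientSession session(root);
std::vector<Tensor> outputs;
Status s = session.Run({c}, &outputs);
// On success, outputs[0] holds the tensor {4, 6}.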

Run

Status Run(
  const FeedType & inputs,
  const std::vector< Output > & fetch_outputs,
  std::vector< Tensor > *outputs
) const

Same as above, but use the mapping in inputs as feeds.

Run

Status Run(
  const FeedType & inputs,
  const std::vector< Output > & fetch_outputs,
  const std::vector< Operation > & run_outputs,
  std::vector< Tensor > *outputs
) const

Same as above. Additionally runs the operations in run_outputs.
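A sketch of the typical use, running an initialization op purely for its side effect (the Variable/Assign setup is illustrative and assumes the generated op classes expose an operation member):

Scope root = Scope::NewRootScope();
auto var = Variable(root, {}, DT_INT32);
auto init = Assign(root, var, 7);
ClientSession session(root);
std::vector<Tensor> outputs;
// Nothing is fed or fetched; the Assign runs only for its effect.
Status s = session.Run({}, {}, {init.operation}, &outputs);
// A later call can then fetch the initialized variable.
Status s2 = session.Run({var}, &outputs);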

Run

Status Run(
  const RunOptions & run_options,
  const FeedType & inputs,
  const std::vector< Output > & fetch_outputs,
  const std::vector< Operation > & run_outputs,
  std::vector< Tensor > *outputs,
  RunMetadata *run_metadata
) const

Use run_options to turn on performance profiling.

run_metadata, if not null, is filled in with the profiling results.
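For example, full tracing can be requested and the resulting step statistics read back from run_metadata (reusing a, c, and session from the example at the top of this page):

RunOptions run_options;
run_options.set_trace_level(RunOptions::FULL_TRACE);
RunMetadata run_metadata;
std::vector<Tensor> outputs;
Status s = session.Run(run_options, {{a, {1}}}, {c}, {}, &outputs, &run_metadata);
if (s.ok() && run_metadata.has_step_stats()) {
  // run_metadata.step_stats() holds per-device, per-node execution timings.
}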

Run

Status Run(
  const RunOptions & run_options,
  const FeedType & inputs,
  const std::vector< Output > & fetch_outputs,
  const std::vector< Operation > & run_outputs,
  std::vector< Tensor > *outputs,
  RunMetadata *run_metadata,
  const thread::ThreadPoolOptions & threadpool_options
) const

Same as above.

Additionally allows user to provide custom threadpool implementation via ThreadPoolOptions.

RunCallable

Status RunCallable(
  CallableHandle handle,
  const std::vector< Tensor > & feed_tensors,
  std::vector< Tensor > *fetch_tensors,
  RunMetadata *run_metadata
)

Invokes the subgraph named by handle with the given options and input tensors.

The order of tensors in feed_tensors must match the order of names in CallableOptions::feed() and the order of tensors in fetch_tensors will match the order of names in CallableOptions::fetch() when this subgraph was created. NOTE: This API is still experimental and may change.
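Continuing the MakeCallable sketch above (with handle and session from that example): feed tensors are passed positionally, in the order the feeds were declared, and fetched tensors come back in the order the fetches were declared.

Tensor feed_value(DT_INT32, TensorShape({1}));
feed_value.flat<int32>()(0) = 1;
std::vector<Tensor> fetched;
// feed_tensors[0] corresponds to CallableOptions::feed(0),
// fetch_tensors[0] to CallableOptions::fetch(0).
Status s = session.RunCallable(handle, {feed_value}, &fetched, nullptr);
// Release the handle once it is no longer needed.
Status s2 = session.ReleaseCallable(handle);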

RunCallable

Status RunCallable(
  CallableHandle handle,
  const std::vector< Tensor > & feed_tensors,
  std::vector< Tensor > *fetch_tensors,
  RunMetadata *run_metadata,
  const thread::ThreadPoolOptions & options
)

Invokes the subgraph named by handle with the given options and input tensors.

The order of tensors in feed_tensors must match the order of names in CallableOptions::feed() and the order of tensors in fetch_tensors will match the order of names in CallableOptions::fetch() when this subgraph was created. NOTE: This API is still experimental and may change.

~ClientSession

 ~ClientSession()
