OpenAI

This service connection requires LLM access »
Use the OpenAI API with the Wolfram Language.

Connecting & Authenticating

ServiceConnect["OpenAI"] creates a connection to the OpenAI API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched. »
Use of this connection requires internet access and an OpenAI account.
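
The following sketch connects and checks that the connection works, using the "TestConnection" request described under Requests below; it assumes an OpenAI account whose API key is entered in the authentication dialog or already saved:

    openai = ServiceConnect["OpenAI"];
    (* returns Success if the key and the internet connection are working *)
    ServiceExecute[openai, "TestConnection"]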

Requests

ServiceExecute["OpenAI","Requests"] gives a list of all available requests.
ServiceExecute["OpenAI","request",params] sends a request to the OpenAI API, using parameters params. The following requests are available.
The list of available requests and their output formats may change with API updates in different versions of the Wolfram Language.

"TestConnection" returns Success for working connection, Failure otherwise

Text

"Completion" create text completion for a given prompt

Parameters:
  • "Prompt" (required) the prompt for which to generate completions
    "BestOf" Automatic number of completions to generate before selecting the "best"
    "Echo" Automatic include the prompt in the completion
    "FrequencyPenalty" Automatic penalize tokens based on their existing frequency in the text so far (between -2 and 2)
    "LogProbs" Automatic include the log probabilities on the most likely tokens, as well as the chosen tokens (between 0 and 5)
    "MaxTokens" Automatic maximum number of tokens to generate
    "Model" Automatic name of the model to use
    "N" Automatic number of completions to return
    "PresencePenalty" Automatic penalize new tokens based on whether they appear in the text so far (between -2 and 2)
    "StopTokens" None up to four strings where the API will stop generating further tokens
    "Stream" Automatic return the result as server-sent events
    "Suffix" Automatic suffix that comes after a completion
    "Temperature" Automatic sampling temperature (between 0 and 2)
    "ToolChoice" Automatic which (if any) tool is called by the model
    "Tools" Automatic one or more LLMTool objects available to the model
    "TotalProbabilityCutoff" None an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with the requested probability mass
    "User" Automatic unique identifier representing the end user
  • "Chat" create a response for the given chat conversation

    Parameters:
  • "Messages" (required) a list of messages in the conversation, each given as an association with "Role" and "Content" keys
    "FrequencyPenalty" Automatic penalize tokens based on their existing frequency in the text so far (between -2 and 2)
    "LogProbs" Automatic include the log probabilities on the most likely tokens, as well as the chosen tokens (between 0 and 5)
    "MaxTokens" Automatic maximum number of tokens to generate
    "Model" Automatic name of the model to use
    "N" Automatic number of chat completions to return
    "PresencePenalty" Automatic penalize new tokens based on whether they appear in the text so far (between -2 and 2)
    "StopTokens" None up to four strings where the API will stop generating further tokens
    "Stream" Automatic return the result as server-sent events
    "Suffix" Automatic suffix that comes after a completion
    "Temperature" Automatic sampling temperature (between 0 and 2)
    "ToolChoice" Automatic which (if any) tool is called by the model
    "Tools" Automatic one or more LLMTool objects available to the model
    "TotalProbabilityCutoff" None an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with the requested probability mass
    "User" Automatic unique identifier representing the end user
  • "Embedding" create an embedding vector representing the input text

    Parameters:
  • "Input" (required) one or a list of texts to get embeddings for
    "EncodingFormat" Automatic format to return the embeddings
    "EncodingLength" Automatic number of dimensions of the result
    "Model" Automatic name of the model to use
    "User" Automatic unique identifier representing the end user

Image

"ImageCreate" create a square image given a prompt

Parameters:
    "Prompt" (required) text description of the desired image
    "Model" Automatic name of the model to use
    "N" Automatic number of images to generate
    "Quality" Automatic control the quality of the result; possible values include "hd"
    "Size" Automatic size of the generated image
    "Style" Automatic style of generated images; possible values include "vivid" or "natural"
    "User" Automatic unique identifier representing the end user
  • "ImageVariation" create a variation of a given image

    Parameters:
  • "Image" (required) image to use as the basis for the variation
    "N" Automatic number of images to generate
    "Size" Automatic size of the generated image
    "User" Automatic unique identifier representing the end user
  • "ImageEdit" create an edited image given an original image and a prompt

    Parameters:
  • "Image" (required) image to edit; requires an alpha channel if a mask is not provided
    "Mask" None additional image whose fully transparent areas indicate where the input should be edited
    "N" Automatic number of images to generate
    "Prompt" None text description of the desired image edit
    "Size" Automatic size of the generated image
    "User" Automatic unique identifier representing the end user

Audio

"AudioTranscription" transcribe an audio recording into the input language

Parameters:
    "Audio" (required) the Audio object to transcribe
    "Language" Automatic language of the input audio
    "Model" Automatic name of the model to use
    "Prompt" None optional text to guide the model's style or continue a previous audio segment
    "Temperature" Automatic sampling temperature (between 0 and 1)
    "TimestampGranularities" Automatic the timestamp granularity of transcription (either "word" or "segment")
  • "AudioTranslation" translate an audio recording into English

    Parameters:
  • "Audio" (required) the Audio object to translate
    "Model" Automatic name of the model to use
    "Prompt" None optional text to guide the model's style or continue a previous audio segment
    "Temperature" Automatic sampling temperature (between 0 and 1)
  • "SpeechSynthesize" synthesize speech from text

    Parameters:
  • "Input" (required) the text to synthesize
    "Model" Automatic name of the model to use
    "Speed" Automatic the speed of the produced speech
    "Voice" Automatic the voice to use for the synthesis

Model Lists

"ChatModelList" list models available for the "Chat" request

    "CompletionModelList" list models available for the "Completion" request

    "EmbeddingModelList" list models available for the "Embedding" request

    "ModerationModelList" list models available for the "Moderation" request

    "ImageModelList" list models available for the image-related requests

    "SpeechSynthesizeModelList" list models available for the "SpeechSynthesize" request

    "AudioModelList" list models available for the "AudioTranscribe" request

Moderation

"Moderation" classify whether a text violates OpenAI's Content Policy

Parameters:
    "Input" (required) the text to classify
    "Model" Automatic name of the model to use

Examples


    Basic Examples  (1)

    Create a new connection:

    Complete a piece of text:

    Generate a response from a chat:

    Compute the embedding for a sentence:

    Generate an Image from a prompt:

    Transcribe an Audio object:

    Synthesize a piece of text:

    Scope  (10)

    Text  (4)

    Completion  (1)

    Change the sampling temperature:

    Increase the number of characters returned:

    Return multiple completions:

    Include the prompt in the returned completion:

    Chat  (2)

    Respond to a chat containing multiple messages:

    Change the sampling temperature:

    Increase the number of characters returned:

    Return multiple completions:

    Allow the model to use an LLMTool:

    Send multimodal input:

    Send a chat request asynchronously using ServiceSubmit and collect the response using the HandlerFunctions and HandlerFunctionsKeys options:

    Embedding  (1)

    Compute the embedding for multiple sentences:

    Plot the results:

    Compute the embeddings for a list of words:

    Plot the results:

    Image  (3)

    ImageCreate  (1)

    Create an Image:

    Return multiple results:

    Change the size of the returned Image:

    Use a different model:

    ImageVariation  (1)

    Create a variation of an Image:

    Return multiple results:

    Change the size of the returned Image:

    ImageEdit  (1)

    Use an Image with an alpha channel to indicate where the editing will take place:

    Use a non-transparent Image and a mask to indicate where the editing will take place:

    Return multiple results:

    Change the size of the returned Image:

    Audio  (3)

    AudioTranscription  (1)

    Transcribe an Audio object:

    Use a prompt to provide context for the transcription:

    Transcribe a recording made in a different language:

    Increase the temperature used for the sampling:

    Include timestamps in the transcription:

    AudioTranslation  (1)

    Translate an Audio object into English:

    Use a prompt to provide context for the translation:

    Increase the temperature used for the sampling:

    SpeechSynthesize  (1)

    Synthesize a piece of text:

    Use a different voice for the synthesis:

    Authentication  (4)

    If no connections exist, ServiceConnect will open a dialog where an API key can be entered:

    The API key can also be specified using the Authentication option:

    Use credentials stored in SystemCredential:

    The credentials are stored directly by the framework, since SystemCredential["key"] evaluates to a string:

    Only store the SystemCredential key rather than its value by using RuleDelayed:

    Retrieve the value of the authentication credentials used in a specific service object:

    Overwrite the authentication credentials of an existing service object:
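
As a rough sketch of the authentication variants described above (the exact value accepted by the Authentication option is inferred from the text, and the credential name and the "sk-..." placeholder are illustrative):

    (* pass the API key explicitly *)
    ServiceConnect["OpenAI", Authentication -> "sk-..."]

    (* store the key in the system keychain once, then reference it with RuleDelayed
       so that only the credential name, not its value, is attached to the connection *)
    SystemCredential["OpenAI API Key"] = "sk-...";
    ServiceConnect["OpenAI", Authentication :> SystemCredential["OpenAI API Key"]]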

    See Also

    ServiceExecute   ServiceConnect   LLMFunction   LLMSynthesize   ChatEvaluate   LLMConfiguration   ImageSynthesize   SpeechRecognize

    Service Connections: AlephAlpha   Anthropic   Cohere   DeepSeek   GoogleGemini   Groq   MistralAI   TogetherAI   GoogleSpeech
