Using Perplexity and OpenAI in the same application #3554
In the documentation for Perplexity Chat, the integration uses spring.ai.openai.api-key=<your-perplexity-api-key> and similar OpenAI-based configuration properties.
What is the recommended approach if an application needs to access both OpenAI and Perplexity chat models in the same application?
Replies: 2 comments
@tzolov any thoughts on this?
Hi @dolukhanov, I think you have to create a custom ChatClient when you need to rely on both Perplexity and OpenAI. Generally, since both providers speak the OpenAI API interface, you can create a separate ChatClient for each model you want to use.
I hope the following example helps and answers your question.
Example Project
Here is an example where I use the OpenAI API interface to talk to gemma:2b running locally and to the gpt-4.1-nano model at the same time. I create an AiService:
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.prompt.ChatOptions;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.ai.openai.api.OpenAiApi;
import org.springframework.stereotype.Service;

@Service
public class AiService {

    private final ChatClient gemmaChatClient;
    private final ChatClient openAiChatClient;

    public AiService(ChatClient.Builder chatClientBuilder) {
        // Ollama exposes an OpenAI-compatible endpoint, so OpenAiApi can point at it.
        var gemmaApi = OpenAiApi.builder()
                .baseUrl("http://localhost:11434")
                .apiKey("test") // Otherwise an "api key required" exception is thrown
                .build();
        var gemmaChatModel = OpenAiChatModel.builder()
                .openAiApi(gemmaApi)
                .build();
        var gemmaChatOptions = ChatOptions.builder()
                .model("gemma:2b")
                .build();
        this.gemmaChatClient = ChatClient
                .builder(gemmaChatModel)
                .defaultOptions(gemmaChatOptions)
                .build();

        // The auto-configured builder picks up the OpenAI key from the properties file.
        var openAiChatOptions = ChatOptions.builder()
                .model("gpt-4.1-nano")
                .build();
        this.openAiChatClient = chatClientBuilder
                .defaultOptions(openAiChatOptions)
                .build();
    }

    public String askGemma(String message) {
        return gemmaChatClient.prompt()
                .user(message)
                .call()
                .content();
    }

    public String askOpenai(String message) {
        return openAiChatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
To test this, I created a simple controller that injects the AiService:
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AiController {

    private final AiService aiService;

    public AiController(AiService aiService) {
        this.aiService = aiService;
    }

    @PostMapping(value = "/askai")
    public String askai(@RequestParam("message") String message,
                        @RequestParam(value = "model", defaultValue = "openai") String model) {
        return switch (model) {
            case "openai" -> aiService.askOpenai(message);
            case "gemma" -> aiService.askGemma(message);
            default -> throw new IllegalStateException("Unexpected value: " + model);
        };
    }
}
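In my properties file, I provide just the OpenAI API key, which the auto-configured ChatClient.Builder picks up. A minimal application.properties sketch (the environment-variable indirection is just one way to supply the key):

spring.ai.openai.api-key=${OPENAI_API_KEY}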
In my local environment, I have the gemma:2b model pulled in Ollama:
$ ollama ls
NAME        ID              SIZE      MODIFIED
gemma:2b    b50d6c999e59    1.7 GB    8 days ago
Now, after running the application, I tested both models with httpie:
OpenAI
$ http POST :8080/askai message=="Are you trained by openai?" model=="openai" -b
Yes, I am developed and trained by OpenAI.
Gemma
$ http POST :8080/askai message=="Are you trained by google?" model=="gemma" -b
I am a large language model, trained by Google. I am a conversational AI that can engage in human-like conversations on a wide range of topics.
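Coming back to the original question: the same pattern covers Perplexity, because its chat API is OpenAI-compatible. Below is a hedged sketch rather than a tested setup; the base URL and /chat/completions path follow the Spring AI Perplexity documentation, while the sonar model name and the PERPLEXITY_API_KEY environment variable are placeholders to adapt:

var perplexityApi = OpenAiApi.builder()
        .baseUrl("https://api.perplexity.ai")
        .completionsPath("/chat/completions") // Perplexity serves chat here, not at the default /v1/chat/completions
        .apiKey(System.getenv("PERPLEXITY_API_KEY"))
        .build();
var perplexityChatModel = OpenAiChatModel.builder()
        .openAiApi(perplexityApi)
        .build();
var perplexityChatOptions = ChatOptions.builder()
        .model("sonar") // illustrative model name; pick one from Perplexity's current model list
        .build();
ChatClient perplexityChatClient = ChatClient
        .builder(perplexityChatModel)
        .defaultOptions(perplexityChatOptions)
        .build();

This client can then live alongside the auto-configured OpenAI one in the same AiService, exactly like the Gemma client above.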