Stack Overflow
4 votes
2 answers
752 views

I'm trying to install the LLaMA 3.1 8B model by following the instructions in the llamamodel GitHub README. When I run the command: llama-model download --source meta --model-id CHOSEN_MODEL_ID (...
0 votes
0 answers
55 views

I am doing some tests using Ollama on a local computer, with Llama 3.2, which consist of prompting a task against a document. I read that after reaching the maximum context, I should restart the ...
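A common alternative to restarting the model is to trim the oldest turns from the conversation before each request. A minimal sketch, using a rough character budget as a stand-in for the model's token-based context window (the budget value and message shapes are assumptions, not part of the question):

```python
def trim_history(messages, max_chars=8000):
    """Drop the oldest non-system messages until the history fits a rough
    character budget (a crude proxy for the model's context window)."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def size(msgs):
        return sum(len(m["content"]) for m in msgs)

    while rest and size(system + rest) > max_chars:
        rest.pop(0)  # discard the oldest turn first
    return system + rest
```

A real implementation would count tokens with the model's tokenizer rather than characters, but the trimming logic is the same.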
0 votes
0 answers
50 views

I'm trying to extract API integration parameters like Authorization headers, query params, and request body fields from API documentation. This is essentially a custom NER task. I’ve experimented with ...
0 votes
1 answer
149 views

I am using Llama Stack (https://llama-stack.readthedocs.io/en/latest/) with Ollama as the model provider. At first I used tool calling from models downloaded directly from Ollama. ...
0 votes
0 answers
99 views

I'm using a locally hosted model (llama3.2) with Ollama and trying to replicate functionality similar to bind_tools (to create and run tools with the LLM) for tool calling. This is my model service ...
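The core of what bind_tools does can be replicated with a small registry that maps tool names to callables and dispatches the model's tool calls. A sketch under assumed response shapes (the `add` tool and the call structure are illustrative, not from the question):

```python
import json

# Hypothetical registry mimicking bind_tools: tool name -> callable.
TOOLS = {}

def tool(fn):
    """Decorator that registers a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

def run_tool_call(call):
    """Execute one tool call as returned by a chat API
    (shape assumed: {'function': {'name': ..., 'arguments': ...}})."""
    fn = TOOLS[call["function"]["name"]]
    args = call["function"]["arguments"]
    if isinstance(args, str):  # some backends return arguments as a JSON string
        args = json.loads(args)
    return fn(**args)
```

The result of `run_tool_call` would then be appended to the message history and sent back to the model.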
1 vote
0 answers
239 views

I'm following codes from links: https://github.com/jalr4ever/Tiny-OAI-MCP-Agent/blob/main/mcp_client.py https://github.com/philschmid/mcp-openai-gemini-llama-example/blob/master/...
0 votes
1 answer
135 views

So I'm trying to toss together a little demo that is essentially: 1) generate some text live and save to a file (I've got this working), 2) have a local instance of an LLM running (Llama3 in this case)...
0 votes
0 answers
596 views

I am teaching myself LLM programming by developing a RAG application. I am running Llama 3.2 on my laptop using Ollama, and using a mix of SQLite and LangChain. I can pass a context to the LLM along ...
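The SQLite side of such a RAG setup can be sketched without any LLM involved: store the chunks, then retrieve the most relevant ones to build the context. The keyword-overlap scoring below is a deliberate placeholder for real embeddings (everything here is an assumed illustration, not the asker's code):

```python
import sqlite3

def build_store(chunks):
    """Store document chunks in an in-memory SQLite table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, text TEXT)")
    con.executemany("INSERT INTO docs (text) VALUES (?)", [(c,) for c in chunks])
    con.commit()
    return con

def retrieve(con, query, k=2):
    """Rank chunks by shared lowercase words with the query
    (a naive stand-in for embedding similarity)."""
    qwords = set(query.lower().split())
    rows = con.execute("SELECT text FROM docs").fetchall()
    scored = sorted(rows, key=lambda r: -len(qwords & set(r[0].lower().split())))
    return [r[0] for r in scored[:k]]
```

The retrieved chunks would be concatenated into the prompt that is passed to the model as context.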
0 votes
0 answers
30 views

I am learning to fine-tune Llama3.1 on a custom dataset. I have converted my dataset to a Hugging Face dataset. Evaluating directly with the base model gives an accuracy of 80%. Now when I am trying to fine ...
0 votes
0 answers
350 views

I'm extracting Inputs, Outputs, and Summaries from large legacy codebases (COBOL, RPG), but facing repetition issues, especially when generating bullet points. Summaries work fine, but sections like ...
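For repetition during generation, Ollama exposes sampling options such as `repeat_penalty` and `repeat_last_n` on its `/api/generate` endpoint. A sketch of the request body (the specific values are assumptions to tune, not recommendations):

```python
def build_generate_payload(prompt, model="llama3"):
    """Request body for Ollama's /api/generate endpoint with sampling
    options that discourage repeated bullet points."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "repeat_penalty": 1.3,  # penalise recently generated tokens
            "repeat_last_n": 256,   # window the penalty looks back over
            "temperature": 0.7,
        },
    }
```

This dict would be POSTed as JSON to `http://localhost:11434/api/generate` on a default local install.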
0 votes
1 answer
136 views

I am communicating with Ollama (llama3.1b) and have it respond with a tool call that I can resolve. However, I am struggling with the final call to Ollama that would resolve the original question. I ...
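The usual pattern for that final call is to append the assistant's tool-call message and then the tool's result as a `role: "tool"` message, and send the whole history back to the chat endpoint. A sketch of the history assembly (message shapes assumed to follow Ollama's chat API; the field names are not from the question):

```python
def append_tool_result(messages, assistant_msg, tool_name, result):
    """Build the history for the follow-up /api/chat call: the assistant's
    tool-call turn, then the resolved result as a role='tool' message, so
    the model can answer the original question."""
    messages = messages + [assistant_msg]          # keep the tool-call turn
    messages.append({"role": "tool", "name": tool_name, "content": str(result)})
    return messages
```

The returned list replaces `messages` in the second chat request; the model's reply to it should be the final natural-language answer.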
1 vote
1 answer
454 views

On a Windows 11 machine, I am trying to get a JSON response from the llama3 model on my local Ollama installation in a Jupyter notebook, but it does not work. Steps I tried: the snippet below works ...
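With Ollama, asking for JSON in the prompt alone is usually not enough; the request should also set `"format": "json"`, and the `response` field of the HTTP reply is itself a JSON string that must be decoded. A sketch of both halves (function names are illustrative):

```python
import json

def json_payload(prompt, model="llama3"):
    """Body for Ollama's /api/generate asking for strict JSON output
    via the 'format' field."""
    return {
        "model": model,
        "prompt": prompt + "\nRespond in JSON.",
        "format": "json",
        "stream": False,
    }

def parse_response(body):
    """Ollama returns the generation in body['response'] as a JSON
    *string*; decode it into a Python object."""
    return json.loads(body["response"])
```

`json_payload(...)` would be POSTed to `http://localhost:11434/api/generate`, and `parse_response` applied to the decoded HTTP body.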
0 votes
1 answer
226 views

I am trying to make Llama3 Instruct use function calls from tools. It does work, but now it answers only with function calls! If I ask something like "who are you?" or "what is an Apple device?" it ...
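The fix for "everything becomes a function call" is typically to branch on whether the model actually requested a tool, and fall back to its plain text otherwise. A sketch assuming an Ollama-style chat message shape (the shapes and `tools` dict are illustrative):

```python
def route(response_message, tools):
    """Dispatch to a tool only when the model requested one; otherwise
    return the model's plain-text content unchanged."""
    calls = response_message.get("tool_calls")
    if not calls:
        return response_message.get("content", "")
    call = calls[0]["function"]
    return tools[call["name"]](**call["arguments"])
```

Making the tools optional in the request (or only attaching them for tool-relevant turns) addresses the same problem from the prompt side.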
1 vote
0 answers
3k views

I'm integrating the Groq API in my Flask application to classify social media posts using a model based on DeepSeek r1 (e.g., deepseek-r1-distill-llama-70b). I build a prompt by combining multiple ...
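One classification-specific wrinkle with the deepseek-r1 distills is that they emit their chain of thought inside `<think>...</think>` tags before the answer, which breaks naive label parsing. A sketch of stripping that block before reading the label (a common post-processing step, assumed rather than taken from the question):

```python
import re

def strip_reasoning(text):
    """Remove the <think>...</think> block that deepseek-r1 models emit
    before their final answer, leaving only the answer text."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```

The cleaned string can then be matched against the expected class labels.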
0 votes
0 answers
143 views

I have a collection of news articles and I want to produce some new (unbiased) news articles using meta-llama/Meta-Llama-3-8B-Instruct. The articles are in a Hugging Face Dataset and to feed the ...
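Each article has to be wrapped in the Llama 3 instruct chat format before generation; in practice `tokenizer.apply_chat_template` from transformers does this, but a hand-rolled sketch of the template makes the expected shape explicit (the instruction text is illustrative):

```python
def to_llama3_prompt(article, instruction):
    """Wrap one article in the Llama 3 instruct chat template.
    Prefer tokenizer.apply_chat_template in real code."""
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{instruction}\n\n{article}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

A function like this can be applied over the Dataset with `.map()` to produce a prompt column for batched generation.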
