Commit f7fc0fe

docs:专栏更新 (docs: column update)
1 parent 97e29f4 commit f7fc0fe

13 files changed (+2392, -323 lines)

.vscode/.server-controller-port.log

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,5 +1,5 @@
 {
     "port": 9145,
-    "time": 1728450843494,
+    "time": 1729093610425,
     "version": "0.0.3"
 }
```
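The only change in this log file is the epoch-millisecond `time` field. As a quick, illustrative check (not part of the commit), the two values decode to UTC timestamps roughly a week apart:

```python
from datetime import datetime, timezone

# Epoch-millisecond values taken from the diff above.
old_ms, new_ms = 1728450843494, 1729093610425

for label, ms in (("old", old_ms), ("new", new_ms)):
    ts = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
    print(label, ts.isoformat())  # early vs. mid October 2024
```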

docs/.vuepress/config.js

Lines changed: 39 additions & 9 deletions

```diff
@@ -516,9 +516,17 @@ module.exports = {
       {
         text: 'Redis',
         items: [{
-          text: 'Redis数据结构的最佳实践',
+          text: '基础',
           link: '/md/redis/00-数据结构的最佳实践.md'
-        }]
+        },
+        {
+          text: '源码',
+          link: '/md/redis/00-数据结构的最佳实践.md'
+        },
+        {
+          text: '业务',
+          link: '/md/redis/00-数据结构的最佳实践.md'
+        },]
       },
 
       {
@@ -1110,6 +1118,7 @@ module.exports = {
       collapsable: false,
       sidebarDepth: 0,
       children: [
+        "04-RPC框架在网络通信的网络IO模型选型",
         "熔断限流",
         "11-RPC的负载均衡",
       ]
@@ -1176,7 +1185,7 @@ module.exports = {
       children: [
         "为啥要学习数据分析?",
         "correct-data-analysis-learning-methods",
-        "02-数据挖掘的学习路径",
+        "learning-path-data-mining",
         "企业如何利用数据打造精准用户画像?",
         "如何自动化采集数据",
         "how-to-use-octoparse-for-data-scraping",
@@ -1194,6 +1203,7 @@ module.exports = {
         "03-ReentrantLock与AQS.md",
         "04-线程池以及生产环境使用.md",
         "05-京东并行框架asyncTool如何针对高并发场景进行优化?.md",
+        "java21-virtual-threads-where-did-my-lock-go",
       ]
     },
     {
@@ -1562,6 +1572,7 @@ module.exports = {
       collapsable: false,
       sidebarDepth: 0,
       children: [
+        "01-Netty源码面试实战+原理(一)-鸿蒙篇",
        "netty-basic-components",
        "ChannelPipeline接口",
        "(06-1)-ChannelHandler 家族",
@@ -1664,20 +1675,34 @@ module.exports = {
     }, ],
 
     "/md/redis/": [{
-      title: "Redis",
+      title: "基础",
       collapsable: false,
       sidebarDepth: 0,
       children: [
-        "00-数据结构的最佳实践",
         "01-Redis和ZK分布式锁优缺点对比以及生产环境使用建议",
-        "02-Redisson可重入锁加锁源码分析",
-        "03-Redisson公平锁加锁源码分析",
-        "04-Redisson读写锁加锁机制分析",
         "05-缓存读写策略模式详解",
         "06-如何快速定位 Redis 热 key",
         "12-Redis 闭源?",
       ]
-    }],
+    },
+    {
+      title: "源码",
+      collapsable: false,
+      sidebarDepth: 0,
+      children: [
+        "02-Redisson可重入锁加锁源码分析",
+        "03-Redisson公平锁加锁源码分析",
+        "04-Redisson读写锁加锁机制分析",
+      ]
+    },
+    {
+      title: "业务",
+      collapsable: false,
+      sidebarDepth: 0,
+      children: [
+        "00-数据结构的最佳实践",
+      ]
+    },],
     "/md/es/": [{
       title: "ElasticSearch",
       collapsable: false,
@@ -2076,6 +2101,7 @@ module.exports = {
         "11-lcel-memory-addition-method",
         "12-lcel-agent-core-components",
         "13-best-development-practices",
+        "local-large-model-deployment",
       ]
     },
 
@@ -2087,6 +2113,9 @@ module.exports = {
         "01-three-minute-fastapi-ai-agent-setup",
         "02-Agent应用对话情感优化",
         "03-use-tts-to-make-your-ai-agent-speak",
+        "langserve-revolutionizes-llm-app-deployment",
+        "customizing-a-tool-for-your-ai-agent",
+        "Complex-SQL-Joins-with-LangGraph-and-Waii",
         "AI Agent应用出路到底在哪?",
       ]
     },
@@ -2098,6 +2127,7 @@ module.exports = {
       children: [
         "00-introduce-to-LangGraph",
         "langgraph-studio",
+        "multi_agent",
         "methods-adapting-large-language-models",
         "to-fine-tune-or-not-to-fine-tune-llm",
         "effective-datasets-fine-tuning",
```

docs/md/AI/13-best-development-practices.md

Lines changed: 1 addition & 202 deletions

````diff
@@ -27,205 +27,4 @@
 
 优点:数据私有、更灵活、成本低
 
-缺点:算力设施、技术支撑
-
-## 3 使用 Ollama 在本地部署大模型
-
-### 3.1 下载并运行应用程序
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/a872adde1e96e5dbd3ddb0e910f48088.png)
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/de0bfb92df17722ebdbb5c0696fd7666.png)
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/daa95f47315ba60e6790d27661f85021.png)
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/85b62d04db4c06665b1fff64de5bec87.png)
-
-### 3.2 从命令行中选取模型(ollama pull llam2)
-
-[官网支持的模型](https://ollama.com/library?sort=newest):
-
-![](/Users/javaedge/Downloads/IDEAProjects/java-edge-master/assets/image-20240621135627185.png)
-
-挑选一个比较小的试玩下:
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/46b83f44f00fb3965c35e700cb45eb85.png)
-
-### 3.3 运行
-
-[浏览器](localhost:11434):
-
-![](/Users/javaedge/Downloads/IDEAProjects/java-edge-master/assets/image-20240621141710055.png)
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/29fa4e05840db498501e59e03db1e63f.png)
-
-## 4 本地大模型调用
-
-既然部署本地完成了,来看看如何调用呢?
-
-```python
-from langchain_community.llms import Ollama
-
-llm = Ollama(model="qwen2:0.5b")
-llm.invoke(input="你是谁?")
-```
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/af07e34926600fdd9946e2905c05bb7a.png)
-
-### 使用流式
-
-```python
-#使用流式
-from langchain.callbacks.manager import CallbackManager
-from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
-
-llm = Ollama(
-    model="qwen2:0.5b", callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])
-)
-llm.invoke(input="第一个登上月球的人是谁?")
-```
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/57bcab3fd266daac316d119b20199b37.png)
-
-## 5 模型评估
-
-### 5.1 远程大模型
-
-```python
-from langchain_openai import ChatOpenAI
-from langchain.evaluation import load_evaluator
-llm = ChatOpenAI(
-    api_key=os.getenv("DASHSCOPE_API_KEY"),
-    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
-    model="qwen-plus"
-)
-
-evaluator = load_evaluator("criteria", llm=llm, criteria="conciseness")
-eval_result = evaluator.evaluate_strings(
-    prediction="four.",
-    input="What's 2+2?",
-)
-print(eval_result)
-```
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/bb4d72b250043b2ee5bd0ae82541e655.png)
-
-如果不简洁的回答:
-
-```python
-#inpt 输入的评测问题
-#prediction 预测的答案
-# 返回值 Y/N 是否符合
-# 返回值score 1-0分数,1为完全,0为不完全
-eval_result = evaluator.evaluate_strings(
-    prediction="What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.",
-    input="What's 2+2?",
-)
-print(eval_result)
-```
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/61c1b940051d6c7b5849cf6211fceefb.png)
-
-### 5.2 本地大模型
-
-```python
-from langchain_community.chat_models import ChatOllama
-llm = ChatOllama(model="qwen2:0.5b")
-evaluator = load_evaluator("criteria", llm=llm, criteria="conciseness")
-```
-
-```python
-#inpt 输入的评测问题
-#prediction 预测的答案
-# 返回值 Y或者N是否符合
-# 返回值score 1-0分数,1为完全,0为不完全
-eval_result = evaluator.evaluate_strings(
-    prediction="What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.",
-    input="What's 2+2?",
-)
-print(eval_result)
-```
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/ea116b14383b6db7194d7658810767fd.png)
-
-### 5.3 内置评估标准
-
-```python
-# 内置的一些评估标准
-from langchain.evaluation import Criteria
-
-list(Criteria)
-```
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/f71d5563c6a00a18f2951bb3a4e2f9cd.png)
-
-
-
-
-```python
-llm = ChatOllama(model="qwen2:0.5b")
-#使用enum格式加载标准
-from langchain.evaluation import EvaluatorType
-#自定义评估标准
-custom_criterion = {
-    "幽默性": "输出的内容是否足够幽默或者包含幽默元素",
-}
-eval_chain = load_evaluator(
-    EvaluatorType.CRITERIA,
-    llm=llm,
-    criteria=custom_criterion,
-)
-query = "给我讲一个笑话"
-prediction = "有一天,小明去买菜,结果买了一堆菜回家,结果发现自己忘了带钱。"
-eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)
-print(eval_result)
-```
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/b626bd419b59ded036872353dbd91d41.png)
-
-### 模型比较
-
-```python
-from langchain.model_laboratory import ModelLaboratory
-from langchain.prompts import PromptTemplate
-from langchain_openai import OpenAI
-from langchain_community.llms.chatglm import ChatGLM
-from langchain_community.chat_models import ChatOllama
-
-#比较openai、ChatGLM、ChatOllama三个模型的效果
-llms = [
-    # OpenAI(temperature=0),
-    ChatOllama(model="qwen2:0.5b"),
-]
-```
-
-```python
-model_lab = ModelLaboratory.from_llms(llms)
-model_lab.compare("齐天大圣的师傅是谁?")
-```
-
-
-
-![](https://my-img.javaedge.com.cn/javaedge-blog/2024/06/8c693bac93ab5309068b4a724dd9eac1.png)
+缺点:算力设施、技术支撑
````

0 commit comments