
Commit db91572

Added basic model loading and dataset loading in train file.
1 parent d615e53 commit db91572

File tree

model/train.py
preprocess/tokenizer.py

2 files changed: +53 −1 lines changed

model/train.py

Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
from tokenizers import ByteLevelBPETokenizer
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer
from datasets import load_dataset


def encode(lines):
    # Uses the module-level `tokenizer` defined below; this only runs
    # once the dataset transform is applied, after the tokenizer exists.
    return tokenizer(lines['text'], add_special_tokens=True, truncation=True, max_length=512)


TRAIN_BASE = False
TOKENIZER_DIR = "tokenizer"

paths = ["../data.txt"]

if TRAIN_BASE:
    # Train a byte-level BPE tokenizer from scratch and save its vocab/merges.
    tokenizer = ByteLevelBPETokenizer()

    tokenizer.train(files=paths, vocab_size=52000, min_frequency=2, special_tokens=[
        "<s>",
        "<pad>",
        "</s>",
        "<unk>",
        "<mask>",
    ])

    tokenizer.save_model(TOKENIZER_DIR)

inp = "print('hello world!')"

tokenizer = GPT2Tokenizer.from_pretrained(TOKENIZER_DIR)
tokenizer.add_special_tokens({
    "eos_token": "</s>",
    "bos_token": "<s>",
    "unk_token": "<unk>",
    "pad_token": "<pad>",
    "mask_token": "<mask>"
})

# Round-trip sanity check on the loaded tokenizer.
t = tokenizer.encode(inp)
print(t)
print(tokenizer.decode(t))

config = GPT2Config(
    vocab_size=tokenizer.vocab_size,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id
)

# GPT2LMHeadModel requires the config as an argument; calling it with
# no arguments raises a TypeError.
model = GPT2LMHeadModel(config)

dataset = load_dataset("text", data_files=paths)

# set_transform tokenizes examples lazily, on access, rather than up front.
dataset.set_transform(encode)
dataset = dataset['train']
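
The script stops after selecting the train split; nothing in this commit consumes the encoded dataset yet. For orientation, one plausible next step is wiring it into the standard transformers training loop. The sketch below is not part of the commit, and the Trainer/DataCollatorForLanguageModeling choice plus all hyperparameter values are assumptions:

# Sketch only (not in this commit): feed the encoded dataset to a
# causal-LM Trainer. The collator choice and the hyperparameter
# values below are assumptions, not committed code.
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

# mlm=False makes the collator build causal-LM labels from input_ids.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="model_out",          # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
)
trainer.train()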

preprocess/tokenizer.py

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 TRAIN_BASE = False
 TOKENIZER_DIR = "tokenizer"

-paths = ["data.txt"]
+paths = ["../data.txt"]

 if TRAIN_BASE:
     tokenizer = ByteLevelBPETokenizer()

0 commit comments
