cyjack/TextCluster
Short text clustering preprocessing module (Short text cluster)
Short Text Clustering

Introduction

Short text clustering is a common text preprocessing step. It can be used to discover frequent patterns in a corpus and to analyze and design semantic parsing specifications. This project implements a memory-friendly short text clustering method.

Requirements

pip install tqdm jieba

Usage

python cluster.py --infile ./data/infile \
--output ./data/output

For detailed argument descriptions, see the _get_parser() function in cluster.py. Options include the tokenizer dictionary, stop words, the matching sample size, and the matching threshold.

Algorithm

[figure: algorithm overview diagram]
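Setting the project's diagram aside, a memory-friendly approach can be sketched as follows: tokenize each line, compare it against a small sample of members from each existing cluster, and either join a cluster whose sampled similarity clears the threshold or start a new one. This is a rough, hypothetical sketch, not the project's actual implementation; the names `jaccard` and `cluster_texts`, the whitespace tokenizer, and the default `threshold` and `sample_size` values are all illustrative assumptions.

```python
import random

def jaccard(a, b):
    """Word-set overlap similarity between two token sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def cluster_texts(texts, threshold=0.5, sample_size=5, seed=0):
    """Single-pass clustering: each text joins the first cluster whose
    sampled members are, on average, similar enough; otherwise it
    starts a new cluster. Only token sets are kept in memory."""
    rng = random.Random(seed)
    clusters = []  # each cluster is a list of token sets
    labels = []
    for text in texts:
        tokens = set(text.split())  # stand-in for a real tokenizer
        assigned = None
        for idx, members in enumerate(clusters):
            sample = rng.sample(members, min(sample_size, len(members)))
            score = sum(jaccard(tokens, m) for m in sample) / len(sample)
            if score >= threshold:
                assigned = idx
                break
        if assigned is None:
            clusters.append([tokens])
            labels.append(len(clusters) - 1)
        else:
            clusters[assigned].append(tokens)
            labels.append(assigned)
    return labels

labels = cluster_texts([
    "order a pizza now",
    "order pizza now please",
    "weather forecast today",
])
# → [0, 0, 1]: the two pizza lines share one cluster, the weather line gets its own
```

Because each new line is compared against only a bounded sample per cluster rather than every stored text, memory and time stay manageable as the corpus grows.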

File Structure

TextCluster
|   README.md
|   LICENSE
|   cluster.py        main executable script
|
|------utils          common utility modules
|   |   __init__.py
|   |   segmentor.py  tokenizer wrapper
|   |   similar.py    similarity calculation functions
|   |   utils.py      file-handling module
|
|------data
|   |   infile        default input file, for testing Chinese mode
|   |   infile_en     default input file, for testing English mode
|   |   seg_dict      default tokenizer dictionary
|   |   stop_words    default stop words file

Note: this method targets short texts only. For long text clustering, consider SimHash, LDA, or other algorithms as needed.

Text Cluster

Introduction

Text clustering is a common preprocessing step for analyzing text features. This project implements a memory-friendly method intended only for short text clustering. For long texts, SimHash, LDA, or other algorithms may be preferable, depending on your needs.

Requirements

pip install tqdm spacy

Usage

python cluster.py --infile ./data/infile_en \
--output ./data/output \
--lang en

For descriptions of all configuration arguments, see _get_parser() in cluster.py; options include the stop words file and the sample number.

Basic Idea

[figure: algorithm overview diagram (English version)]
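The English-mode pipeline differs mainly in tokenization and stop words. As a small illustrative sketch (the stop word list and function names below are assumptions, not the project's API), removing stop words before a set-based comparison keeps shared filler words like "is" and "the" from inflating similarity:

```python
STOP_WORDS = {"the", "a", "to", "is", "of"}  # illustrative list only

def tokenize(text, stop_words=STOP_WORDS):
    """Lowercase whitespace tokenization with stop-word filtering;
    a stand-in for the project's real tokenizer wrapper."""
    return {w for w in text.lower().split() if w not in stop_words}

def jaccard(a, b):
    """Word-set overlap similarity between two token sets."""
    return len(a & b) / len(a | b) if a and b else 0.0

raw_a = set("how is the weather".split())
raw_b = set("what is the time".split())
filtered_a = tokenize("how is the weather")   # {"how", "weather"}
filtered_b = tokenize("what is the time")     # {"what", "time"}

# The raw sets share {"is", "the"}, so their score is inflated;
# after filtering, the two unrelated questions score 0.
print(jaccard(raw_a, raw_b), jaccard(filtered_a, filtered_b))
```

The same effect is why the stop words file matters for clustering quality: without it, very short texts tend to match on function words alone.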

File Structure

TextCluster
|   README.md
|   LICENSE
|   cluster.py        main executable script
|
|------utils          utilities
|   |   __init__.py
|   |   segmentor.py  tokenizer wrapper
|   |   similar.py    similarity calculator
|   |   utils.py      file-processing module
|
|------data
|   |   infile        default input file, for testing Chinese mode
|   |   infile_en     default input file, for testing English mode
|   |   seg_dict      default tokenizer dictionary
|   |   stop_words    default stop words file

Other Languages

To support another language, modify the tokenizer wrapper in ./utils/segmentor.py.
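As a hypothetical illustration of what such a wrapper could look like (the actual interface lives in ./utils/segmentor.py and may differ), a tokenizer wrapper only needs to map raw text to a list of tokens, so adding a language means plugging in that language's tokenizer behind a common method:

```python
class Segmentor:
    """Minimal tokenizer wrapper sketch that dispatches on a language
    code. The real project uses jieba for Chinese and spacy for
    English; the fallback here is plain whitespace splitting."""

    def __init__(self, lang="zh"):
        self.lang = lang

    def cut(self, text):
        if self.lang == "zh":
            import jieba                  # Chinese word segmentation
            return list(jieba.cut(text))
        if self.lang == "en":
            return text.lower().split()   # simplified; project uses spacy
        # Other languages: plug a language-specific tokenizer in here.
        return text.split()

seg = Segmentor(lang="en")
tokens = seg.cut("Short text clustering")
# → ["short", "text", "clustering"]
```

Keeping the interface to a single `cut(text) -> list[str]` method means the clustering code never needs to know which language it is processing.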
