Ciseau

Word and sentence tokenization in Python.

Usage

Use this package to split up strings according to sentence and word boundaries. For instance, to simply break up strings into tokens:

from ciseau import tokenize

tokenize("Joey was a great sailor.")
#=> ["Joey ", "was ", "a ", "great ", "sailor ", "."]

To also detect sentence boundaries:

from ciseau import sent_tokenize

sent_tokenize("Cat sat mat. Cat's named Cool.", keep_whitespace=True)
#=> [["Cat ", "sat ", "mat", ". "], ["Cat ", "'s ", "named ", "Cool", "."]]

sent_tokenize can return the input text unmodified inside the tokens: pass keep_whitespace=True to preserve the whitespace attached to each token, and normalize_ascii=False to skip converting unicode punctuation to its ASCII equivalent.
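
For comparison, here is a sketch of the default behavior with keep_whitespace left off (the output shown below is illustrative, not verified library output):

from ciseau import sent_tokenize

# Assumed default: trailing whitespace is dropped from each token.
sent_tokenize("Cat sat mat. Cat's named Cool.")
#=> [["Cat", "sat", "mat", "."], ["Cat", "'s", "named", "Cool", "."]]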

Installation

pip3 install ciseau
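
To verify the install, a quick one-liner from the command line (the sample sentence is arbitrary):

python3 -c "from ciseau import tokenize; print(tokenize('It works.'))"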

Testing

Run nose2.
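
For example, from a checkout of the repository (assuming nose2 is not already installed):

pip3 install nose2
nose2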

If you find this project useful for your work or research, here's how you can cite it:

@misc{RaimanCiseau2017,
 author = {Raiman, Jonathan},
 title = {Ciseau},
 year = {2017},
 publisher = {GitHub},
 journal = {GitHub repository},
 howpublished = {\url{https://github.com/jonathanraiman/ciseau}},
 commit = {fe88b9d7f131b88bcdd2ff361df60b6d1cc64c04}
}
