instant-segment: fast English word segmentation in Rust

instant-segment is a fast Apache-2.0 library for English word segmentation. It is based on the Python wordsegment project written by Grant Jenks, which is in turn based on code from the chapter "Natural Language Corpus Data" by Peter Norvig in the book Beautiful Data (Segaran and Hammerbacher, 2009).

The data files in this repository are derived from the Google Web Trillion Word Corpus, as described by Thorsten Brants and Alex Franz, and distributed by the Linguistic Data Consortium. Note that this data "may only be used for linguistic education and research", so for any other usage you should acquire a different data set.

For the microbenchmark included in this repository, instant-segment is ~17x faster than the Python implementation. Further optimizations are planned; see the open issues. The API has been carefully constructed so that multiple segmentations can share the underlying language-model state, allowing parallel usage.
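
Here is a minimal sketch of what that shared-state design looks like in practice. The `Segmenter` and `Search` names follow the crate's documentation, but the toy frequency values, the `from_maps` constructor, and the exact return types shown here are assumptions that may vary between versions; consult the crate docs for the current API.

```rust
use std::collections::HashMap;

use instant_segment::{Search, Segmenter};

fn main() {
    // Toy frequency maps; the real data files are derived from the
    // Google Web Trillion Word Corpus (values here are illustrative).
    let mut unigrams = HashMap::default();
    unigrams.insert("instant".into(), 2.0);
    unigrams.insert("segment".into(), 1.0);
    let mut bigrams = HashMap::default();
    bigrams.insert(("instant".into(), "segment".into()), 1.0);

    // The Segmenter owns the immutable language-model state, so it can
    // be built once and shared (e.g. behind an Arc across threads).
    let segmenter = Segmenter::from_maps(unigrams, bigrams);

    // Each Search holds the mutable scratch space for one segmentation;
    // parallel callers keep one Search apiece while sharing the Segmenter.
    let mut search = Search::default();
    let words = segmenter
        .segment("instantsegment", &mut search)
        .expect("input should be lowercase ASCII letters");
    println!("{:?}", words.collect::<Vec<&str>>());
}
```

With this split, the expensive part (loading the corpus data into the `Segmenter`) happens once, while each caller reuses its own cheap `Search` buffer across calls.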