Instant Segment: fast English word segmentation in Rust


segmenter = instant_segment.Segmenter(unigrams(), bigrams())
search = instant_segment.Search()
segmenter.segment("instantdomainsearch", search)
print([word for word in search])

--> ['instant', 'domain', 'search']

let segmenter = Segmenter::from_maps(unigrams, bigrams);
let mut search = Search::default();
let words = segmenter
    .segment("instantdomainsearch", &mut search)
    .unwrap();
println!("{:?}", words.collect::<Vec<&str>>());

--> ["instant", "domain", "search"]

Instant Segment is a fast Apache-2.0 library for English word segmentation. It is based on the Python wordsegment project written by Grant Jenks, which is in turn based on code from Peter Norvig's chapter Natural Language Corpus Data from the book Beautiful Data (Segaran and Hammerbacher, 2009).

The data files in this repository are derived from the Google Web Trillion Word Corpus, as described by Thorsten Brants and Alex Franz, and distributed by the Linguistic Data Consortium. Note that this data "may only be used for linguistic education and research", so for any other usage you should acquire a different data set.

For the microbenchmark included in this repository, Instant Segment is ~17x faster than the Python implementation. Further optimizations are planned (see the issues). The API has been carefully constructed so that multiple segmentations can share the underlying state, allowing parallel usage.
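For example, a single segmenter can be built once and then used from several threads at the same time, as long as each thread keeps its own Search scratch space. Here is a minimal sketch of one way to do that; the word counts, inputs, and threading scaffolding are illustrative, not part of the library:

use std::collections::HashMap;
use std::sync::Arc;
use std::thread;

use instant_segment::{Search, Segmenter};

fn main() {
    // Illustrative counts; real usage would load these from a corpus
    let mut unigrams = HashMap::default();
    unigrams.insert("instant".into(), 50.0);
    unigrams.insert("domain".into(), 50.0);
    unigrams.insert("search".into(), 50.0);

    let mut bigrams = HashMap::default();
    bigrams.insert(("instant".into(), "domain".into()), 10.0);
    bigrams.insert(("domain".into(), "search".into()), 10.0);

    // Build the segmenter once and share it read-only across threads
    let segmenter = Arc::new(Segmenter::from_maps(unigrams, bigrams));

    let handles: Vec<_> = ["instantdomain", "domainsearch"]
        .into_iter()
        .map(|input| {
            let segmenter = Arc::clone(&segmenter);
            thread::spawn(move || {
                // Each thread keeps its own Search scratch space
                let mut search = Search::default();
                let words = segmenter.segment(input, &mut search).unwrap();
                words.map(|word| word.to_string()).collect::<Vec<_>>()
            })
        })
        .collect();

    for handle in handles {
        println!("{:?}", handle.join().unwrap());
    }
}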

Installing

Python (>= 3.9)

pip install instant-segment

Rust

[dependencies]
instant-segment = "*"

Using

Instant Segment segments a string into words by selecting the splits with the highest probability, given a corpus of words and their occurrences.

For instance, provided that choose and spain occur more frequently than chooses and pain, Instant Segment can help you split the string choosespain.com into ChooseSpain.com, which more likely matches user intent.

import instant_segment


def main():
    unigrams = []
    unigrams.append(("choose", 50))
    unigrams.append(("chooses", 10))
    unigrams.append(("spain", 50))
    unigrams.append(("pain", 10))

    bigrams = []
    bigrams.append((("choose", "spain"), 10))
    bigrams.append((("chooses", "pain"), 10))

    segmenter = instant_segment.Segmenter(iter(unigrams), iter(bigrams))
    search = instant_segment.Search()
    segmenter.segment("choosespain", search)
    print([word for word in search])


if __name__ == "__main__":
    main()

use instant_segment::{Search, Segmenter};
use std::collections::HashMap;

fn main() {
    let mut unigrams = HashMap::default();

    unigrams.insert("choose".into(), 50 as f64);
    unigrams.insert("chooses".into(), 10 as f64);

    unigrams.insert("spain".into(), 50 as f64);
    unigrams.insert("pain".into(), 10 as f64);

    let mut bigrams = HashMap::default();

    bigrams.insert(("choose".into(), "spain".into()), 10 as f64);
    bigrams.insert(("chooses".into(), "pain".into()), 10 as f64);

    let segmenter = Segmenter::from_maps(unigrams, bigrams);
    let mut search = Search::default();

    let words = segmenter.segment("choosespain", &mut search).unwrap();

    println!("{:?}", words.collect::<Vec<&str>>());
}

--> ['choose', 'spain']

Play with the examples above to see how different occurrence counts influence the results.

The example above is succinct but, in practice, you will want to load these words and occurrences from a corpus of data like the ones we provide here. Check out the tests to see examples of how you might do that.
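For instance, a loader along these lines could build the maps from tab-separated count files. The file names and the word<TAB>count / word word<TAB>count line layout below are placeholder assumptions; check the data directory and the tests for the actual format:

use std::collections::HashMap;
use std::fs::File;
use std::io::{BufRead, BufReader};

use instant_segment::{Search, Segmenter};

fn main() -> std::io::Result<()> {
    // Assumed layout: one `word<TAB>count` entry per line
    let mut unigrams = HashMap::default();
    for line in BufReader::new(File::open("data/unigrams.txt")?).lines() {
        let line = line?;
        if let Some((word, count)) = line.split_once('\t') {
            if let Ok(count) = count.trim().parse::<f64>() {
                unigrams.insert(word.into(), count);
            }
        }
    }

    // Assumed layout: `word1 word2<TAB>count` per line
    let mut bigrams = HashMap::default();
    for line in BufReader::new(File::open("data/bigrams.txt")?).lines() {
        let line = line?;
        if let Some((pair, count)) = line.split_once('\t') {
            if let (Some((w1, w2)), Ok(count)) =
                (pair.split_once(' '), count.trim().parse::<f64>())
            {
                bigrams.insert((w1.into(), w2.into()), count);
            }
        }
    }

    let segmenter = Segmenter::from_maps(unigrams, bigrams);
    let mut search = Search::default();
    let words = segmenter.segment("instantdomainsearch", &mut search).unwrap();
    println!("{:?}", words.collect::<Vec<&str>>());
    Ok(())
}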

Testing

To run the tests, run the following:

cargo t -p instant-segment --all-features

You can also test the Python bindings with:

make test-python