Cover logo

Instant Segment: fast English word segmentation in Rust


Instant Segment is a fast Apache-2.0 library for English word segmentation. It is based on the Python wordsegment project written by Grant Jenks, which is in turn based on code from Peter Norvig's chapter Natural Language Corpus Data from the book Beautiful Data (Segaran and Hammerbacher, 2009).

The data files in this repository are derived from the Google Web Trillion Word Corpus, as described by Thorsten Brants and Alex Franz, and distributed by the Linguistic Data Consortium. Note that this data "may only be used for linguistic education and research", so for any other usage you should acquire a different data set.

For the microbenchmark included in this repository, Instant Segment is ~100x faster than the Python implementation. Further optimizations are planned -- see the issues. The API has been carefully constructed so that multiple segmentations can share the underlying state to allow parallel usage.
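
For example, because a segmentation only borrows the segmenter and keeps its mutable scratch state in a separate Search value, one segmenter can serve many segmentations at once. The helper below is a hypothetical sketch (the segment_all name and the thread setup are not part of the library, and it assumes the Segmenter type is safe to share across threads; see the examples further down for how a Segmenter is built):

use instant_segment::{Search, Segmenter};

// Hypothetical helper (not part of the crate): segment several inputs in
// parallel while sharing a single Segmenter. Each thread gets its own
// Search, which holds the per-segmentation scratch state.
fn segment_all(segmenter: &Segmenter, inputs: &[&str]) -> Vec<Vec<String>> {
    std::thread::scope(|scope| {
        let handles: Vec<_> = inputs
            .iter()
            .map(|input| {
                scope.spawn(move || {
                    let mut search = Search::default();
                    // Assumes the inputs only contain characters the
                    // segmenter accepts, hence the unwrap.
                    let words = segmenter.segment(input, &mut search).unwrap();
                    words.map(str::to_owned).collect::<Vec<String>>()
                })
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}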

How it works

Instant Segment segments a string into words by selecting the splits with the highest probability, given a corpus of words and their occurrence counts.

For instance, provided that choose and spain occur more frequently than chooses and pain, and that the pair choose spain occurs more frequently than chooses pain, Instant Segment can help identify the domain choosespain.com as ChooseSpain.com, which more likely matches user intent.
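
As a toy illustration of that comparison (the counts below are made up, and this is not the library's actual scoring code, which also weighs word pairs via bigrams), a candidate split can be scored by the probability of its words, and the split with the higher score wins:

use std::collections::HashMap;

fn main() {
    // Made-up occurrence counts standing in for a real corpus.
    let counts: HashMap<&str, f64> = vec![
        ("choose", 8_000.0),
        ("spain", 5_000.0),
        ("chooses", 300.0),
        ("pain", 4_000.0),
    ]
    .into_iter()
    .collect();
    let total: f64 = counts.values().sum();

    // Score a candidate split as the product of its word probabilities.
    let score = |words: &[&str]| -> f64 { words.iter().map(|w| counts[*w] / total).product() };

    let a = score(&["choose", "spain"]);
    let b = score(&["chooses", "pain"]);
    assert!(a > b); // "choose spain" wins, so choosespain.com -> choose + spain
    println!("choose spain: {:e}, chooses pain: {:e}", a, b);
}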

We use this technique at Instant Domain Search to help our users find relevant domains.

Using the library

Python (>= 3.9)

pip install instant-segment

Rust

[dependencies]
instant-segment = "0.8.1"

Examples

The following examples expect unigrams and bigrams to already exist. See the examples (Rust, Python) for how to construct these objects.

import instant_segment

segmenter = instant_segment.Segmenter(unigrams, bigrams)
search = instant_segment.Search()
segmenter.segment("instantdomainsearch", search)
print([word for word in search])

--> ['instant', 'domain', 'search']

use instant_segment::{Search, Segmenter};
use std::collections::HashMap;

let segmenter = Segmenter::from_maps(unigrams, bigrams);
let mut search = Search::default();
let words = segmenter
    .segment("instantdomainsearch", &mut search)
    .unwrap();
println!("{:?}", words.collect::<Vec<&str>>());

--> ["instant", "domain", "search"]

Check out the tests for more thorough examples: Rust, Python

Testing

To run the Rust tests, run the following:

cargo t -p instant-segment --all-features

You can also test the Python bindings with:

make test-python