```javascript
a = FuzzySet(['Michael Axiak']);
a.get("micael asiak");
// [[0.8461538461538461, 'Michael Axiak']]
```
```
npm install fuzzyset.js
```
Open up your console and try it out!

```javascript
f = FuzzySet(['what'])
```
Also check out the python version.
Arguments to constructor function
| Argument | Description |
| --- | --- |
| `array` | An array of strings to initialize the data structure with. |
| `useLevenshtein` | Whether or not to use the Levenshtein distance to determine match scoring. Default: `true` |
| `gramSizeLower` | The lower bound of gram sizes to use, inclusive (see Theory of operation). Default: `2` |
| `gramSizeUpper` | The upper bound of gram sizes to use, inclusive (see Theory of operation). Default: `3` |
Methods on an initialized `FuzzySet` object
| Method | Description |
| --- | --- |
| `get(value, [default])` | Try to match a string to entries; returns `null` (or `default`, if given) when there is no match. |
| `add(value)` | Add a value to the set, returning `false` if it is already in the set. |
| `length()` | Return the number of items in the set. |
| `isEmpty()` | Returns `true` if the set is empty. |
| `values()` | Returns an array of the values in the set. |
First, let's look at adding the string 'michaelich' to an empty set. We begin by breaking the string apart into n-grams (substrings of length n). The trigrams of 'michaelich' look like:
'-mi' 'mic' 'ich' 'cha' 'hae' 'ael' 'eli' 'lic' 'ich' 'ch-'
Note that fuzzyset will first normalize the string by removing non-word characters (except for spaces and commas) and lowercasing everything.
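The normalization and gram-extraction steps can be sketched as follows (the helper names and the exact regex are illustrative, not necessarily the library's internals):

```javascript
function normalize(str) {
  // Lowercase and strip non-word characters, keeping spaces and commas.
  return str.toLowerCase().replace(/[^\w, ]+/g, '');
}

function gramsOf(value, gramSize) {
  // Pad with '-' so the leading and trailing characters form full grams.
  const padded = '-' + normalize(value) + '-';
  const grams = [];
  for (let i = 0; i + gramSize <= padded.length; i++) {
    grams.push(padded.slice(i, i + gramSize));
  }
  return grams;
}

gramsOf('michaelich', 3);
// → ['-mi', 'mic', 'ich', 'cha', 'hae', 'ael', 'eli', 'lic', 'ich', 'ch-']
```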
Next, fuzzyset essentially creates a reverse index on those grams, maintaining a dictionary that says:
```
'mic' -> (1, 0)
'ich' -> (2, 0)
...
```
where the first number is the number of times the gram occurs in the string and the second number is the index of the item in a list of stored items (each item keeps the added string alongside the magnitude of its gram-count vector, which is used later for cosine similarity).
Note that we maintain this reverse index for all gram sizes from `gramSizeLower` to `gramSizeUpper` specified in the constructor.
This becomes important in a second.
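The indexing step above can be sketched like this (the function and field names are illustrative; the library's internals may differ):

```javascript
// Build trigrams of a padded, lowercased string.
function gramsOf(value, gramSize) {
  const padded = '-' + value.toLowerCase() + '-';
  const grams = [];
  for (let i = 0; i + gramSize <= padded.length; i++) {
    grams.push(padded.slice(i, i + gramSize));
  }
  return grams;
}

// matchDict maps gram -> [[count, itemIndex], ...]; items stores one
// entry per added string: [vector magnitude, original string].
function indexItem(value, gramSize, matchDict, items) {
  const counts = {};
  for (const gram of gramsOf(value, gramSize)) {
    counts[gram] = (counts[gram] || 0) + 1;
  }
  // Magnitude of the gram-count vector, used later for cosine similarity.
  let sumOfSquares = 0;
  for (const gram in counts) sumOfSquares += counts[gram] * counts[gram];
  const itemIndex = items.length;
  items.push([Math.sqrt(sumOfSquares), value]);
  for (const gram in counts) {
    (matchDict[gram] = matchDict[gram] || []).push([counts[gram], itemIndex]);
  }
}
```

With 'michaelich' added at index 0, `matchDict['mic']` would hold `[[1, 0]]` and `matchDict['ich']` would hold `[[2, 0]]`, matching the dictionary shown above.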
To search the data structure, we take the n-grams of the query string and perform a reverse index lookup. To illustrate, let's consider looking up 'michael' in our fictitious set containing 'michaelich', where the `gramSizeUpper` and `gramSizeLower` parameters are the defaults (3 and 2, respectively). We begin by considering all trigrams (the value of `gramSizeUpper`). Those grams are:
'-mi' 'mic' 'ich' 'cha' 'hae' 'ael' 'el-'
Then we create a list of every element in the set that has at least one occurrence of a trigram listed above. Note that this is just a dictionary lookup, once per trigram (7 lookups here). For each of these matched elements, we compute the cosine similarity between the element and the query string. We then sort to get the most similar matched elements.
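The cosine-similarity step can be sketched as follows. For clarity this computes the score directly from the two strings; the library instead accumulates the dot product from its reverse index:

```javascript
// Count the trigram (or n-gram) occurrences of a padded string.
function gramCounts(value, gramSize) {
  const padded = '-' + value.toLowerCase() + '-';
  const counts = {};
  for (let i = 0; i + gramSize <= padded.length; i++) {
    const gram = padded.slice(i, i + gramSize);
    counts[gram] = (counts[gram] || 0) + 1;
  }
  return counts;
}

// Cosine similarity between the gram-count vectors of two strings.
function cosineSimilarity(a, b, gramSize) {
  const ca = gramCounts(a, gramSize);
  const cb = gramCounts(b, gramSize);
  let dot = 0, na = 0, nb = 0;
  for (const g in ca) {
    na += ca[g] * ca[g];
    if (cb[g]) dot += ca[g] * cb[g]; // only shared grams contribute
  }
  for (const g in cb) nb += cb[g] * cb[g];
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

cosineSimilarity('michael', 'michaelich', 3); // ≈ 0.764
```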
If `useLevenshtein` is false, we return all top matched elements with the same cosine similarity. If `useLevenshtein` is true, we truncate the candidate list to the top 50, compute a score based on the Levenshtein distance (so that we handle transpositions), and return based on that.
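The re-scoring step can be sketched like this. The `1 - distance / longer length` formula is an assumption about how the edit distance is turned into a score, so treat it as illustrative:

```javascript
// Classic single-row dynamic-programming Levenshtein distance.
function levenshtein(a, b) {
  const prev = new Array(b.length + 1);
  for (let j = 0; j <= b.length; j++) prev[j] = j;
  for (let i = 1; i <= a.length; i++) {
    let diag = prev[0]; // value of prev[j-1] from the previous row
    prev[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      const next = Math.min(prev[j] + 1, prev[j - 1] + 1, diag + cost);
      diag = prev[j];
      prev[j] = next;
    }
  }
  return prev[b.length];
}

// Assumed scoring: identical strings score 1, each edit lowers the
// score in proportion to the longer string's length.
function levenshteinScore(query, candidate) {
  const maxLen = Math.max(query.length, candidate.length);
  if (maxLen === 0) return 1;
  return 1 - levenshtein(query, candidate) / maxLen;
}

levenshteinScore('what', 'wuat'); // → 0.75
```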
In the event that none of the trigrams matched, we try the whole thing again with bigrams (note, though, that if there are no matches at all, the failure to match will be quick). Bigram searching will always be slower because there will be a much larger candidate set to order.
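The gram-size fallback described above can be sketched as a simple loop. Here `searchAtSize` is a stand-in parameter for the per-gram-size lookup, not a real library function:

```javascript
// Try the largest gram size first; fall back to smaller sizes only
// when the current size produced no matches at all.
function get(query, gramSizeLower, gramSizeUpper, searchAtSize) {
  for (let size = gramSizeUpper; size >= gramSizeLower; size--) {
    const results = searchAtSize(query, size); // e.g. the trigram lookup above
    if (results.length > 0) return results;
  }
  return null; // nothing matched at any gram size
}
```

With the default bounds (2 and 3), this means a query does at most two passes: trigrams, then bigrams.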