Dmitriy Selivanov — written Jan 2, 2015
In this series of posts I will try to explain the basic concepts of the Locality Sensitive Hashing technique.
Note that I will try to follow a generally functional programming style, so I will use R's higher-order functions instead of the traditional *apply family of functions (I suppose this will make the post more readable for non-R users). I will also use the brilliant pipe operator %>% from the magrittr package. We will start with basic concepts, but we will end with a very efficient implementation in R (about 100 times faster than the Python implementations I found).
Imagine the following interesting problem. We have two very large social networks (for example Facebook and Google+), each with hundreds of millions of profiles, and we want to determine which profiles are owned by the same person. One reasonable approach is to assume that such a person has nearly the same, or at least highly overlapping, sets of friends in both networks. One well-known measure of the similarity of sets is the Jaccard index:

J(A, B) = |A ∩ B| / |A ∪ B|
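The original definition of the `jaccard()` helper used in the benchmark below did not survive in this copy of the post; a minimal R sketch, with made-up friend names, might look like this:

```r
# Jaccard index of two sets represented as character vectors
jaccard <- function(x, y) {
  length(intersect(x, y)) / length(union(x, y))
}

# two friends in common out of four distinct friends overall
jaccard(c("anna", "bob", "carl"), c("bob", "carl", "dave"))  # 0.5
```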
Set operations are computationally cheap, and this straightforward solution seems quite good. But let's estimate the computation time of duplicate detection for just the people named "John Smith". Imagine that each person has, on average, 100 friends:
```
[[1]]
[1] "eyl"

[[2]]
[1] "ukm"

[[3]]
[1] "fes"

[[4]]
[1] "fka"

[[5]]
[1] "vuw"

[[6]]
[1] "ypg"
```
```
Unit: microseconds
                                  expr   min      lq     mean  median      uq     max neval
 jaccard(friends_set_1, friends_set_2) 32.62 34.2485 37.92605 35.2385 36.6465 150.625   100
```
One operation takes about 50 microseconds on average (on my machine). If we have 100,000 people named John Smith and we have to compare all pairs, the total computation will take more than 100 hours!
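The figure printed below (hours of computation) follows from simple arithmetic; a sketch assuming 50 µs per comparison and all 1e5 × 1e5 ordered pairs:

```r
n_people <- 1e5     # people named "John Smith"
t_pair   <- 50e-6   # seconds per Jaccard computation
n_people ^ 2 * t_pair / 3600  # total hours: 138.8889
```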
```
[1] 138.8889
```
Of course this is unacceptable, and the reason is the quadratic complexity of our brute-force algorithm.
To solve this kind of problem we will use Locality-sensitive hashing – a method of performing probabilistic dimensionality reduction on high-dimensional data. It provides a good tradeoff between accuracy and computational time and, roughly speaking, has sub-quadratic complexity.
I will explain one scheme of LSH, called MinHash.
The intuition of the method is the following: we will try to hash the input items so that similar items are mapped to the same buckets with high probability (the number of buckets being much smaller than the universe of possible input items).
Let’s construct a simple example:
Now we have 3 sets to compare and we want to identify the profiles related to the same “John Smith”. From these sets we will construct a matrix that encodes the relations between the sets:
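The construction code is missing from this copy of the post; a sketch that produces the matrix shown below (the set contents are read off from that matrix):

```r
# the three friend sets from the example
sets <- list(set_1 = c("SMITH", "JOHNSON", "WILLIAMS", "BROWN"),
             set_2 = c("SMITH", "JOHNSON", "BROWN"),
             set_3 = c("THOMAS", "MARTINEZ", "DAVIS"))
# the universe of all elements becomes the rows of the matrix
universe <- unique(unlist(sets))
input_matrix <- sapply(sets, function(s) as.integer(universe %in% s))
rownames(input_matrix) <- universe
input_matrix
```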
```
         set_1 set_2 set_3
SMITH        1     1     0
JOHNSON      1     1     0
WILLIAMS     1     0     0
BROWN        1     1     0
THOMAS       0     0     1
MARTINEZ     0     0     1
DAVIS        0     0     1
```
Let’s call this matrix the input-matrix. In our representation, the similarity of two sets from the source array is equal to the similarity of the two corresponding columns, computed over their non-zero rows:
name | set_1 | set_2 | intersection | union |
---|---|---|---|---|
SMITH | 1 | 1 | + | + |
JOHNSON | 1 | 1 | + | + |
WILLIAMS | 1 | 0 | – | + |
BROWN | 1 | 1 | + | + |
THOMAS | 0 | 0 | – | – |
MARTINEZ | 0 | 0 | – | – |
DAVIS | 0 | 0 | – | – |
From the table above we can conclude that the Jaccard index between set_1 and set_2 is 0.75: three rows are in the intersection and four in the union.
Let’s check:
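A sketch of the check, working directly on the two relevant columns of the input-matrix:

```r
# columns of the input-matrix for set_1 and set_2
col_1 <- c(1, 1, 1, 1, 0, 0, 0)
col_2 <- c(1, 1, 0, 1, 0, 0, 0)
column_jaccard <- sum(col_1 & col_2) / sum(col_1 | col_2)
column_jaccard            # 0.75
column_jaccard == 3 / 4   # TRUE
```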
```
[1] 0.75
[1] TRUE
```
All the magic starts here. Consider a random permutation of the rows of the input-matrix m, and define the minhash function h(c) = the number of the first row (in the permuted order) in which column c has a 1. If we use n independent permutations, we end up with n minhash functions, so we can construct a signature-matrix from the input-matrix using these minhash functions. Below we will do it not very efficiently, with 2 nested for loops, but the logic should be very clear.
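The nested-loop code itself is missing from this copy of the post; a sketch that reproduces the signature-matrix below, using the four fixed permutations from the table that follows:

```r
# input-matrix from the example (rows: surnames, columns: the three sets)
m <- matrix(c(1, 1, 1, 1, 0, 0, 0,
              1, 1, 0, 1, 0, 0, 0,
              0, 0, 0, 0, 1, 1, 1), ncol = 3)
# each column is one permutation of the 7 row numbers
perms <- cbind(perm_1 = c(4, 3, 7, 6, 5, 2, 1),
               perm_2 = c(1, 4, 6, 2, 3, 5, 7),
               perm_3 = c(4, 1, 6, 7, 2, 3, 5),
               perm_4 = c(6, 1, 2, 3, 5, 7, 4))
sig <- matrix(NA, nrow = ncol(perms), ncol = ncol(m))
for (i in seq_len(ncol(perms)))  # for each permutation ...
  for (j in seq_len(ncol(m)))    # ... and each set:
    # minhash = smallest permuted position among rows where the set has a 1
    sig[i, j] <- min(perms[m[, j] == 1, i])
sig
```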
```
     [,1] [,2] [,3]
[1,]    3    3    1
[2,]    1    1    3
[3,]    1    1    2
[4,]    1    1    4
```
You can see how we obtain the signature-matrix from the input-matrix after the “minhash transformation”. The permutations and the corresponding signatures are shown below:
perm_1 | perm_2 | perm_3 | perm_4 | set_1 | set_2 | set_3 |
---|---|---|---|---|---|---|
4 | 1 | 4 | 6 | 1 | 1 | 0 |
3 | 4 | 1 | 1 | 1 | 1 | 0 |
7 | 6 | 6 | 2 | 1 | 0 | 0 |
6 | 2 | 7 | 3 | 1 | 1 | 0 |
5 | 3 | 2 | 5 | 0 | 0 | 1 |
2 | 5 | 3 | 7 | 0 | 0 | 1 |
1 | 7 | 5 | 4 | 0 | 0 | 1 |
set_1 | set_2 | set_3 |
---|---|---|
3 | 3 | 1 |
1 | 1 | 3 |
1 | 1 | 2 |
1 | 1 | 4 |
You can notice that the signatures of set_1 and set_2 are very similar, while the signature of set_3 is dissimilar to both:
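The two numbers printed below are similarities of signature columns, i.e. the fraction of minhash functions on which two sets agree; a sketch using the signature-matrix from above:

```r
# signature-matrix (4 minhash functions x 3 sets)
sig <- matrix(c(3, 1, 1, 1,    # set_1
                3, 1, 1, 1,    # set_2
                1, 3, 2, 4),   # set_3
              ncol = 3)
sig_similarity <- function(i, j) mean(sig[, i] == sig[, j])
sig_similarity(1, 2)  # 1: set_1 and set_2 look identical
sig_similarity(1, 3)  # 0: set_1 and set_3 share nothing
```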
```
[1] 1
[1] 0
```
The intuition is very straightforward. Look down the permuted columns until you reach the first row in which at least one of the two columns contains a 1. If both columns contain a 1 in that row, the two minhash values agree, and this happens with probability equal to the Jaccard similarity of the two sets. So the fraction of matching minhash values is an estimate of the Jaccard index.
Moreover, there exist theoretical guarantees for this estimate of the Jaccard similarity: for any constant ε > 0 there is a constant k = O(1/ε²) such that, with k minhash functions, the expected error of the estimate is at most ε.
Suppose the input-matrix is very big, say 1e9 rows. It is computationally hard to permute 1 billion rows, and on top of that you need to store the permutations themselves and access their values. It is common to use the following scheme instead: rather than physically permuting rows, pick one random hash function per “permutation” and treat the hash value of a row number as that row’s position in the permuted order.
So we end up with ALGORITHM(1) from the excellent Mining of Massive Datasets book: initialize every entry of the signature-matrix to ∞; then, for each row r, compute each hash function h_i(r), and for each column c that has a 1 in row r set SIG(i, c) = min(SIG(i, c), h_i(r)).
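A direct, unvectorized transcription of that algorithm (the hash coefficients and the prime p below are illustrative assumptions, not the author's code):

```r
# one pass over the rows; permutations simulated by hashes h(r) = (a*r + b) mod p
minhash_signature <- function(m, a, b, p) {
  sig <- matrix(Inf, nrow = length(a), ncol = ncol(m))
  for (r in seq_len(nrow(m))) {
    h <- (a * r + b) %% p               # every hash function applied to row r
    for (col in which(m[r, ] == 1))     # only columns with a 1 in this row
      sig[, col] <- pmin(sig[, col], h)
  }
  sig
}

m <- matrix(c(1, 1, 1, 1, 0, 0, 0,
              1, 1, 0, 1, 0, 0, 0,
              0, 0, 0, 0, 1, 1, 1), ncol = 3)
p <- 11  # a prime not smaller than the number of rows
set.seed(42)
minhash_signature(m, a = sample(p - 1, 4), b = sample(p, 4) - 1, p = p)
```

With enough hash functions, the fraction of matching rows between two columns of the result estimates the Jaccard similarity of the corresponding sets.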
I highly recommend watching the video about minhashing from the Stanford Mining Massive Datasets course.
Let’s summarize what we have learned from the first part of the tutorial:

  * The Jaccard index is a natural measure of set similarity, but brute-force comparison of all pairs of sets is quadratic and far too slow at scale.
  * Sets can be encoded as columns of a binary input-matrix, and the Jaccard similarity of two sets equals the similarity of the corresponding columns over their non-zero rows.
  * Minhash signatures, built from random row permutations (or hash functions that simulate them), compress each column into a short signature, and the fraction of matching signature entries estimates the Jaccard index.
In the next posts I will describe how to efficiently construct and store the input-matrix in a sparse format. Then we will discuss how to construct a family of hash functions. After that we will implement a fast vectorized version of ALGORITHM(1). And finally we will see how to use Locality Sensitive Hashing to determine candidate pairs for similar sets in sub-quadratic time. Stay tuned!
tags: LSH