cshalizi + hashing + nearest_neighbors   2

[1612.07545] A Revisit of Hashing Algorithms for Approximate Nearest Neighbor Search
"Approximate Nearest Neighbor Search (ANNS) is a fundamental problem in many areas of machine learning and data mining. During the past decade, numerous hashing algorithms are proposed to solve this problem. Every proposed algorithm claims outperform other state-of-the-art hashing methods. However, the evaluation of these hashing papers was not thorough enough, and those claims should be re-examined. The ultimate goal of an ANNS method is returning the most accurate answers (nearest neighbors) in the shortest time. If implemented correctly, almost all the hashing methods will have their performance improved as the code length increases. However, many existing hashing papers only report the performance with the code length shorter than 128. In this paper, we carefully revisit the problem of search with a hash index, and analyze the pros and cons of two popular hash index search procedures. Then we proposed a very simple but effective two level index structures and make a thorough comparison of eleven popular hashing algorithms. Surprisingly, the random-projection-based Locality Sensitive Hashing (LSH) is the best performed algorithm, which is in contradiction to the claims in all the other ten hashing papers. Despite the extreme simplicity of random-projection-based LSH, our results show that the capability of this algorithm has been far underestimated. For the sake of reproducibility, all the codes used in the paper are released on GitHub, which can be used as a testing platform for a fair comparison between various hashing algorithms."
to:NB  data_mining  approximation  nearest_neighbors  locality-sensitive_hashing  hashing  have_read  via:vaguery  random_projections  k-means  databases 
january 2019 by cshalizi
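
---- A minimal sketch (Python; not the code the authors released on GitHub) of the random-projection LSH the paper crowns as the surprise winner: each of n_bits random hyperplanes contributes one sign bit to a hash code, a query's candidates are the points sharing its bucket, and candidates are re-ranked by exact distance. The function names and the 16-bit default are my illustrative choices, not the paper's.

import numpy as np
from collections import defaultdict

def signatures(X, planes):
    # b-bit code per row: the sign pattern of b random projections
    bits = (X @ planes.T) > 0
    return bits @ (1 << np.arange(planes.shape[0]))  # pack the bits into an integer

def build_index(X, n_bits=16, seed=0):
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, X.shape[1]))  # random hyperplanes
    buckets = defaultdict(list)
    for i, code in enumerate(signatures(X, planes)):
        buckets[int(code)].append(i)
    return planes, buckets

def query(q, X, planes, buckets, k=5):
    code = int(signatures(q[None, :], planes)[0])
    cand = buckets.get(code, range(len(X)))  # fall back to brute force on an empty bucket
    return sorted(cand, key=lambda i: np.linalg.norm(X[i] - q))[:k]

Longer codes give finer buckets (better precision per bucket, fewer candidates), which is why the paper objects to stopping the evaluation below 128 bits; a production index would also use multiple hash tables rather than the single one here.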
[1507.05910] Clustering is Efficient for Approximate Maximum Inner Product Search
"Locality Sensitive Hashing (LSH) techniques have recently become a popular solution for solving the approximate Maximum Inner Product Search (MIPS) problem, which arises in many situations and have in particular been used as a speed-up for the training of large neural probabilistic language models.
"In this paper we propose a new approach for solving approximate MIPS based on a variant of the k-means algorithm. We suggest using spherical k-means which is an algorithm that can efficiently be used to solve the approximate Maximum Cosine Similarity Search (MCSS), and basing ourselves on previous work by Shrivastava and Li we show how it can be adapted for approximate MIPS.
"Our new method compares favorably with LSH-based methods on a simple recall rate test, by providing a more accurate set of candidates for the maximum inner product. The proposed method is thus likely to benefit the wide range of problems with very large search spaces where a robust approximate MIPS heuristic could be of interest, such as for providing a high quality short list of candidate words to speed up the training of neural probabilistic language models."

---- I thought people viewed k-means as a _kind_ of locality-sensitive hashing?
to:NB  databases  hashing  clustering  k-means  nearest_neighbors  locality-sensitive_hashing  data_mining 
august 2015 by cshalizi
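
---- Another illustrative sketch (Python, not the authors' code) of the two ingredients the abstract describes: (i) a reduction from MIPS to cosine-similarity search — here a single-extra-coordinate variant in the spirit of the Shrivastava-Li line of work, appending sqrt(1 - ||x||^2/M^2) so every database vector lands on the unit sphere while inner products with a zero-padded query are preserved up to the constant M — and (ii) spherical k-means, whose unit-norm centroids let a query be routed to clusters by inner product alone.

import numpy as np

def augment(X):
    # map data onto the unit sphere so inner products become cosines
    M = np.linalg.norm(X, axis=1).max()
    Xs = X / M
    extra = np.sqrt(np.maximum(0.0, 1.0 - (Xs ** 2).sum(axis=1)))
    return np.hstack([Xs, extra[:, None]])

def spherical_kmeans(Z, k=10, n_iter=20, seed=0):
    # k-means with unit-norm centroids and cosine (= dot-product) assignment
    rng = np.random.default_rng(seed)
    C = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(n_iter):
        assign = (Z @ C.T).argmax(axis=1)
        for j in range(k):
            members = Z[assign == j]
            if len(members):
                c = members.sum(axis=0)
                C[j] = c / np.linalg.norm(c)   # re-project centroid onto the sphere
    return C, (Z @ C.T).argmax(axis=1)

def mips_candidates(q, X, Z, C, assign, n_probe=2):
    # search only the n_probe clusters whose centroids best match the padded query
    qz = np.append(q, 0.0)                     # query gets a zero extra coordinate
    top = (C @ qz).argsort()[-n_probe:]
    idx = np.flatnonzero(np.isin(assign, top))
    return idx[np.argsort(X[idx] @ q)[::-1]]   # rank survivors by true inner product

The k and n_probe defaults are arbitrary; the abstract's recall-rate test is a measure of how accurate a candidate set like this one is.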
