cshalizi + databases   10

Fast Generalized Linear Models by Database Sampling and One-Step Polishing: Journal of Computational and Graphical Statistics
"In this article, I show how to fit a generalized linear model to N observations on p variables stored in a relational database, using one sampling query and one aggregation query, as long as N^{1/2+δ} observations can be stored in memory, for some δ>0. The resulting estimator is fully efficient and asymptotically equivalent to the maximum likelihood estimator, and so its variance can be estimated from the Fisher information in the usual way. A proof-of-concept implementation uses R with MonetDB and with SQLite, and could easily be adapted to other popular databases. I illustrate the approach with examples of taxi-trip data in New York City and factors related to car color in New Zealand. "
to:NB  computational_statistics  linear_regression  regression  databases  lumley.thomas  to_teach:statcomp 
8 weeks ago by cshalizi
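A minimal sketch of the sample-then-one-step-polish idea from the abstract above, for a logistic regression. The SQLite file `data.db`, the table `obs(y, x1, x2)`, and the chunked Python pass over the full table are my own illustrative assumptions; the paper itself uses R with MonetDB/SQLite and does the full-data pass with a single SQL aggregation query.

```python
# Sketch only: assumes a SQLite table obs(y, x1, x2) in data.db (hypothetical names).
import sqlite3
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def irls_logistic(X, y, iters=25):
    """Preliminary MLE on the in-memory subsample via Fisher scoring."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = expit(X @ beta)
        W = mu * (1 - mu)
        U = X.T @ (y - mu)                      # score
        I = (X * W[:, None]).T @ X              # Fisher information
        beta = beta + np.linalg.solve(I, U)     # Fisher-scoring step
    return beta

con = sqlite3.connect("data.db")
N = con.execute("SELECT COUNT(*) FROM obs").fetchone()[0]
n = int(N ** 0.6)            # subsample size ~ N^{1/2+delta}, here delta = 0.1

# 1. One sampling query: a uniform subsample small enough to fit in memory.
rows = con.execute(
    "SELECT y, x1, x2 FROM obs ORDER BY RANDOM() LIMIT ?", (n,)).fetchall()
sample = np.asarray(rows, dtype=float)
y_s, X_s = sample[:, 0], np.column_stack([np.ones(n), sample[:, 1:]])
beta0 = irls_logistic(X_s, y_s)

# 2. One pass over the full table to accumulate the score U(beta0) and the
#    Fisher information I(beta0); done here in Python chunks rather than in
#    the single SQL aggregation query used in the paper.
p = X_s.shape[1]
U, I = np.zeros(p), np.zeros((p, p))
cur = con.execute("SELECT y, x1, x2 FROM obs")
while True:
    chunk = cur.fetchmany(100_000)
    if not chunk:
        break
    a = np.asarray(chunk, dtype=float)
    y, X = a[:, 0], np.column_stack([np.ones(len(a)), a[:, 1:]])
    mu = expit(X @ beta0)
    U += X.T @ (y - mu)
    I += (X * (mu * (1 - mu))[:, None]).T @ X

# 3. One-step polish: asymptotically equivalent to the full MLE, with
#    standard errors from the inverse Fisher information in the usual way.
beta1 = beta0 + np.linalg.solve(I, U)
se = np.sqrt(np.diag(np.linalg.inv(I)))
```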
[1612.07545] A Revisit of Hashing Algorithms for Approximate Nearest Neighbor Search
"Approximate Nearest Neighbor Search (ANNS) is a fundamental problem in many areas of machine learning and data mining. During the past decade, numerous hashing algorithms are proposed to solve this problem. Every proposed algorithm claims outperform other state-of-the-art hashing methods. However, the evaluation of these hashing papers was not thorough enough, and those claims should be re-examined. The ultimate goal of an ANNS method is returning the most accurate answers (nearest neighbors) in the shortest time. If implemented correctly, almost all the hashing methods will have their performance improved as the code length increases. However, many existing hashing papers only report the performance with the code length shorter than 128. In this paper, we carefully revisit the problem of search with a hash index, and analyze the pros and cons of two popular hash index search procedures. Then we proposed a very simple but effective two level index structures and make a thorough comparison of eleven popular hashing algorithms. Surprisingly, the random-projection-based Locality Sensitive Hashing (LSH) is the best performed algorithm, which is in contradiction to the claims in all the other ten hashing papers. Despite the extreme simplicity of random-projection-based LSH, our results show that the capability of this algorithm has been far underestimated. For the sake of reproducibility, all the codes used in the paper are released on GitHub, which can be used as a testing platform for a fair comparison between various hashing algorithms."
to:NB  data_mining  approximation  nearest_neighbors  locality-sensitive_hashing  hashing  have_read  via:vaguery  random_projections  k-means  databases 
january 2019 by cshalizi
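As a reminder of how simple the winning baseline is, here is a minimal sketch (not the paper's released code) of random-projection LSH with Hamming-ranking search, one of the two hash-index search procedures the abstract mentions; the data, query, and code length below are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, code_len = 10_000, 64, 128      # longer codes tend to help, per the paper

X = rng.standard_normal((n, d))       # database vectors (placeholder data)
q = rng.standard_normal(d)            # query vector

# Random hyperplane projections: code bit j of x is sign(<x, r_j>).
R = rng.standard_normal((d, code_len))
codes = (X @ R > 0)                   # n x code_len boolean codes
q_code = (q @ R > 0)

# Hamming ranking: sort the database by Hamming distance to the query code,
# then re-rank a short candidate list by true Euclidean distance.
hamming = np.count_nonzero(codes != q_code, axis=1)
candidates = np.argsort(hamming)[:100]            # top-100 by Hamming distance
best = candidates[np.argmin(np.linalg.norm(X[candidates] - q, axis=1))]
print("approximate nearest neighbor index:", best)
```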
[1507.05910] Clustering is Efficient for Approximate Maximum Inner Product Search
"Locality Sensitive Hashing (LSH) techniques have recently become a popular solution for solving the approximate Maximum Inner Product Search (MIPS) problem, which arises in many situations and have in particular been used as a speed-up for the training of large neural probabilistic language models.
"In this paper we propose a new approach for solving approximate MIPS based on a variant of the k-means algorithm. We suggest using spherical k-means which is an algorithm that can efficiently be used to solve the approximate Maximum Cosine Similarity Search (MCSS), and basing ourselves on previous work by Shrivastava and Li we show how it can be adapted for approximate MIPS.
"Our new method compares favorably with LSH-based methods on a simple recall rate test, by providing a more accurate set of candidates for the maximum inner product. The proposed method is thus likely to benefit the wide range of problems with very large search spaces where a robust approximate MIPS heuristic could be of interest, such as for providing a high quality short list of candidate words to speed up the training of neural probabilistic language models."

---- I thought people viewed k-means as a _kind_ of locality-sensitive hashing?
to:NB  databases  hashing  clustering  k-means  nearest_neighbors  locality-sensitive_hashing  data_mining 
august 2015 by cshalizi
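A rough sketch of the idea, under my own simplifications: augment the data vectors so that MIPS reduces to cosine-similarity search, cluster the augmented vectors with spherical k-means, and score only the candidates in the clusters most aligned with the query. The data, sizes, and the particular augmentation below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 20_000, 50, 64
X = rng.standard_normal((n, d))       # database vectors (placeholder data)
q = rng.standard_normal(d)            # query vector

# Augment so every data point has the same norm; inner products with the
# zero-padded query are unchanged, so MIPS becomes cosine-similarity search.
norms = np.linalg.norm(X, axis=1, keepdims=True)
M = norms.max()
X_aug = np.hstack([X, np.sqrt(np.maximum(M**2 - norms**2, 0.0))])
X_unit = X_aug / np.linalg.norm(X_aug, axis=1, keepdims=True)
q_aug = np.append(q, 0.0)

# Spherical k-means: assign by cosine similarity, recentre, renormalize.
centroids = X_unit[rng.choice(n, k, replace=False)]
for _ in range(20):
    assign = np.argmax(X_unit @ centroids.T, axis=1)
    for j in range(k):
        members = X_unit[assign == j]
        if len(members):
            c = members.mean(axis=0)
            centroids[j] = c / np.linalg.norm(c)

# Query: look only at points in the few clusters most aligned with the query,
# then take the exact maximum inner product within that candidate set.
top_clusters = np.argsort(centroids @ q_aug)[-3:]
candidates = np.where(np.isin(assign, top_clusters))[0]
best = candidates[np.argmax(X[candidates] @ q)]
print("approximate MIPS answer:", best)
```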
10 Easy Steps to a Complete Understanding of SQL - Tech.Pro
And by "to_teach", I mean "to mention".

ETA: arthegall calls item #2 somewhere between incoherent and wrong, and he'd know better than I...
databases  programming  to_teach:statcomp  via:kjhealy 
september 2013 by cshalizi
Challenges and Opportunities with Big Data
"A community white paper developed by leading researchers across the United States"
to:NB  to_read  data_mining  data_analysis  computational_statistics  databases  re:data_science_whitepaper 
july 2013 by cshalizi
