hellsten + algorithms (49 bookmarks)
security - How can bcrypt have built-in salts? - Stack Overflow
bcrypt
authentication
gotcha
algorithms
4 weeks ago by hellsten
Stored in the database, a bcrypt "hash" might look something like this:
$2a$10$vI8aWBnW3fID.ZQ4/zo1G.q1lRps.9cGLcZEiGDMVr5yUP1KUOYTa
This is actually three fields, delimited by "$":
2a identifies the bcrypt algorithm version that was used.
10 is the cost factor; 2^10 = 1,024 iterations of the key derivation function are used (which is not enough, by the way. I'd recommend a cost of 12 or more.)
vI8aWBnW3fID.ZQ4/zo1G.q1lRps.9cGLcZEiGDMVr5yUP1KUOYTa is the salt and the cipher...
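The salt being embedded in the hash string is what makes the scheme work without a separate salt column. A minimal sketch, assuming the Python bcrypt package (the answer itself is language-agnostic):

```python
# Sketch using the Python "bcrypt" package (pip install bcrypt).
import bcrypt

password = b"correct horse battery staple"

# gensalt(rounds=12) embeds the version ("$2b$") and cost ("12") in the salt.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print(hashed)  # e.g. b"$2b$12$<22-char salt><31-char hash>"

# checkpw re-reads version, cost, and salt from the stored hash itself,
# which is why no separate salt field is needed in the database.
assert bcrypt.checkpw(password, hashed)
```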
4 weeks ago by hellsten
Consistent Hash Rings Explained Simply
5 weeks ago by hellsten
- you may want to take a URL and get back the server the website is hosted on.
- The problem of mimicking a hash table when the number of locations are constantly changing was exactly why consistent hashing was invented.
- For 2,000 keys spread across 100 locations, you now need to move only 20 keys to a new location if 1 location with only 20 keys goes down.
- This is the main benefit of consistent hashing: you now no longer need to move so many things just because one location has disappea...
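A minimal hash-ring sketch (my own illustration, not from the article): keys go to the first node clockwise from their hash, so removing a node only remaps the keys that sat on its arc.

```python
# Minimal consistent-hash ring (illustration only, not from the article).
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, replicas=100):
        self._ring = []                   # sorted list of (point, node)
        for node in nodes:
            for i in range(replicas):     # virtual nodes smooth the distribution
                self._ring.append((_hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._points = [p for p, _ in self._ring]

    def get(self, key: str) -> str:
        # First point clockwise from the key's position (wrap around at the end).
        i = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["server-a", "server-b", "server-c"])
print(ring.get("example.com/page"))  # only keys near a removed node ever move
```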
algorithms
algorithm
distributed
consistent-hash
hash
cs
5 weeks ago by hellsten
Vector Clocks Explained
algorithms
distributed
distributed-systems
clock
vector
time
5 weeks ago by hellsten
Vector Clocks by Example
We’ve all had this problem:
Alice, Ben, Cathy, and Dave are planning to meet next week for dinner. The planning starts with Alice suggesting they meet on Wednesday. Later, Dave discusses alternatives with Cathy, and they decide on Thursday instead. Dave also exchanges email with Ben, and they decide on Tuesday. When Alice pings everyone again to find out whether they still agree with her Wednesday suggestion, she gets mixed message...
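A toy sketch (mine, not the article's) of how vector clocks make those mixed messages detectable: both later plans descend from Alice's suggestion, but neither descends from the other, so they are concurrent and must be reconciled.

```python
# Toy vector clock: one counter per actor (illustration, not from the article).
def increment(clock, actor):
    clock = dict(clock)
    clock[actor] = clock.get(actor, 0) + 1
    return clock

def descends(a, b):
    """True iff clock a has seen every event that clock b has."""
    return all(a.get(actor, 0) >= n for actor, n in b.items())

alice = increment({}, "alice")            # Alice suggests Wednesday
cathy_plan = increment(alice, "cathy")    # Thursday branch
ben_plan = increment(alice, "ben")        # Tuesday branch

print(descends(cathy_plan, alice))        # True: builds on Alice's suggestion
print(descends(cathy_plan, ben_plan))     # False, and...
print(descends(ben_plan, cathy_plan))     # ...False: concurrent -> conflict
```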
5 weeks ago by hellsten
AdventOfCode Day 4 - High-Entropy Passphrases | Accelerated Science
10 weeks ago by hellsten
Part 1: mathematical approach
1. Looking at the bottom right of the grid above, we see that the square root of the maximum number in each ring gives us the number of elements in the row and column of its ring. This is easy to convert into the ring number because, as we have already seen, we add two at each step, starting from one; so to get from the number of elements to the number of the ring we just subtract one and divide by two:
2. First though, to be able to do it for any numb...
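A sketch of that formula (assuming the 1-indexed spiral of AoC 2017 day 3, where rings end on the odd squares 1, 9, 25, ...): round the square root up to the nearest odd number to get the side length, then subtract one and halve.

```python
import math

def ring(n: int) -> int:
    """Ring index of square n in the spiral (ring 0 holds just 1)."""
    side = math.isqrt(n - 1) + 1      # ceil(sqrt(n))
    if side % 2 == 0:                 # rings end on odd squares (1, 9, 25, ...)
        side += 1
    return (side - 1) // 2

print(ring(1), ring(9), ring(10), ring(25), ring(26))  # 0 1 2 2 3
```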
advent-of-code
2017
algorithms
10 weeks ago by hellsten
The On-Line Encyclopedia of Integer Sequences® (OEIS®)
11 weeks ago by hellsten
The On-Line Encyclopedia of Integer Sequences® (OEIS®)
Most people use the OEIS to get information about a particular number sequence. If you are a new visitor, then you might ask the database if it can recognize your favorite sequence, if you have one. To do this, go to the main look-up page, enter the sequence, and click Search. You could also look for your sequence in the Index.
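The look-up can also be scripted: OEIS exposes a JSON search interface (a sketch assuming the requests package and the public fmt=json endpoint; the exact response shape has varied over time).

```python
# Sketch: query the OEIS JSON search endpoint (assumes `requests` is installed
# and the public fmt=json interface; response shape has varied over time).
import requests

resp = requests.get("https://oeis.org/search",
                    params={"q": "1,1,2,3,5,8,13", "fmt": "json"})
data = resp.json()
hits = data["results"] if isinstance(data, dict) else data
for hit in hits[:3]:
    print(f"A{hit['number']:06d}  {hit['name']}")
```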
integers
math
mathematics
sequences
reference
cs
list
advent-of-code
algorithms
11 weeks ago by hellsten
Let's Learn Algorithms: Implementing Binary Search - Calhoun.io
september 2018 by hellsten
It is important to remember that a binary search can only work on data that is sorted relative to what you are searching for. For instance, if you want to find the smallest number greater than 6 in a list, you need a sorted list; but if you want to find a commit that broke your code, you only need the commits sorted in the order ..., working, working, broken, broken, .... That is, you need all working commits to come before the breaking commit, and all commits after the bad commit to be broken.
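A sketch of that commit-bisection variant (my own illustration): binary search for the first "broken" commit in a working...working, broken...broken sequence.

```python
def first_broken(commits, is_broken):
    """Index of the first broken commit, assuming all working commits
    precede all broken ones (the only 'sortedness' bisection needs)."""
    lo, hi = 0, len(commits)          # invariant: answer lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(commits[mid]):
            hi = mid                  # mid might be the first broken commit
        else:
            lo = mid + 1              # everything up to and including mid works
    return lo

commits = ["c1", "c2", "c3", "c4", "c5"]
print(first_broken(commits, lambda c: c >= "c4"))  # -> 3, i.e. "c4"
```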
binary-search
binary
algorithm
algorithms
september 2018 by hellsten
Learning Algorithms Through Programming and Puzzle Solving by Alexander S. Kulikov et al. [PDF/iPad/Kindle]
algorithms
august 2018 by hellsten
Learning Algorithms Through Programming and Puzzle Solving
august 2018 by hellsten
GitHub - donnemartin/interactive-coding-challenges: Interactive Python coding interview challenges (algorithms and data structures). Includes Anki flashcards.
coding-challenges
interview
programming
cs
learning
algorithms
algorithm
august 2018 by hellsten
Interactive Python coding interview challenges (algorithms and data structures). Includes Anki flashcards.
august 2018 by hellsten
Constraint satisfaction - Wikipedia
algorithms
algorithm
ai
sudoku
may 2018 by hellsten
Constraint satisfaction problems on finite domains are typically solved using a form of search. The most used techniques are variants of backtracking, constraint propagation, and local search. These techniques are used on problems with nonlinear constraints.
Variable elimination and the simplex algorithm are used for solving linear and polynomial equations and inequalities, and problems containing variables with infinite domain. These are typically solved as optimization prob...
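A minimal backtracking sketch for a finite-domain CSP (my own illustration; the variable and constraint names are made up), pruning a branch as soon as the partial assignment violates the constraint:

```python
# Minimal backtracking search for a finite-domain CSP (illustration only).
def solve(domains, constraint, assignment=None):
    """domains: {var: list of values}; constraint: test on partial assignments."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if constraint(candidate):          # prune inconsistent branches early
            result = solve(domains, constraint, candidate)
            if result:
                return result
    return None

# Example: three variables, all different.
domains = {"x": [1, 2], "y": [1, 2], "z": [1, 2, 3]}
all_diff = lambda a: len(set(a.values())) == len(a)
print(solve(domains, all_diff))  # {'x': 1, 'y': 2, 'z': 3}
```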
may 2018 by hellsten
montanaflynn/stats
october 2015 by hellsten
A statistics package with common functions that are missing from the Golang standard library. Completely Documented / MIT Licensed / 100% Code Coverage
golang
algorithms
statistics
stat
october 2015 by hellsten
www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2012/popular-economicsciences2012.pdf
september 2015 by hellsten
The Gale-Shapley algorithm proved to be useful in other applications, such as high-school choice. Up until 2003, applicants to New York City public high schools were asked to rank their five most preferred choices, after which these preference lists were sent to the schools. The schools then decided which students to admit, reject, or place on waiting lists. The process was repeated in two more rounds, and students who had not been assigned to any school after the third round were allocated through an administrative process. However, this did not provide the applicants with enough opportunities to list their preferences, and the schools did not have enough opportunities to make offers. As a result, about 30,000 students per year ended up at schools they had not listed. Moreover, the process gave rise to misrepresentation of preferences. Since schools were more likely to admit students who ranked them as their first choice, students unlikely to be admitted to their favorite school found it in their best interest to list a more realistic option as their first choice, while applicants who simply reported their true preferences suffered unnecessarily poor outcomes. In 2003, Roth and his colleagues helped redesign this admissions process, based on an applicant-proposing version of the Gale-Shapley algorithm. The new algorithm proved to be successful, with a 90 percent reduction in the number of students assigned to schools for which they had expressed no preference. Today, a growing number of U.S. metropolitan areas use some variant of the Gale-Shapley algorithm.
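A compact sketch of the applicant-proposing deferred-acceptance algorithm described above (my own illustration with one seat per school; all names are made up):

```python
from collections import deque

def gale_shapley(applicant_prefs, school_rank):
    """Applicant-proposing deferred acceptance, one seat per school.
    applicant_prefs: {applicant: [schools, best first]}
    school_rank: {school: {applicant: rank, lower is better}}"""
    free = deque(applicant_prefs)
    next_choice = {a: 0 for a in applicant_prefs}
    matched = {}                       # school -> applicant
    while free:
        a = free.popleft()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                   # a has exhausted their list
        s = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        current = matched.get(s)
        if current is None:
            matched[s] = a
        elif school_rank[s][a] < school_rank[s][current]:
            matched[s] = a             # school trades up; current re-enters
            free.append(current)
        else:
            free.append(a)             # rejected; a proposes again later
    return matched

prefs = {"ann": ["s1", "s2"], "bob": ["s1", "s2"]}
ranks = {"s1": {"ann": 0, "bob": 1}, "s2": {"ann": 0, "bob": 1}}
print(gale_shapley(prefs, ranks))  # {'s1': 'ann', 's2': 'bob'}
```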
algorithm
algorithms
nobel-prize
september 2015 by hellsten
The most important skill in software development | John D. Cook
june 2015 by hellsten
Here’s an insightful paragraph from James Hague’s blog post Organization skills beat algorithmic wizardry:
When it comes to writing code, the number one most important skill is how to keep a tangle of features from collapsing under the weight of its own complexity. I’ve worked on large telecommunications systems, console games, blogging software, a bunch of personal tools, and very rarely is there some tricky data structure or algorithm that casts a looming shadow over everything else. But there’s always lots of state to keep track of, rearranging of values, handling special cases, and carefully working out how all the pieces of a system interact. To a great extent the act of coding is one of organization. Refactoring. Simplifying. Figuring out how to remove extraneous manipulations here and there.
Algorithmic wizardry is easier to teach and easier to blog about than organizational skill, so we teach and blog about it instead. A one-hour class, or a blog post, can showcase a clever algorithm. But how do you present a clever bit of organization? If you jump to the solution, it’s unimpressive. “Here’s something simple I came up with. It may not look like much, but trust me, it was really hard to realize this was all I needed to do.” Or worse, “Here’s a moderately complicated pile of code, but you should have seen how much more complicated it was before. At least now someone stands a shot of understanding it.” Ho hum. I guess you had to be there.
software-development
engineering
algorithms
advice
best
programming
june 2015 by hellsten
Deep Learning From The Bottom Up | Hacker News
may 2014 by hellsten
To address some of the comments being presented here: neural nets, despite being harder to train, can be debugged visually.
A few tips for those of you who use neural nets:
Debug the weights with histograms. Track the gradient and make sure the magnitude is not too large and that it's normally distributed.
Keep track of your gradient changes when using either gradient descent or conjugate gradient.
Plot your filters; visualize what each neuron is learning.
Watch the rate of change of your cost function. If it seems to be changing too fast and stops early, lower your learning rate.
Plot your activations: if they start out grey, you're fine. If they start all black, you need to retune some of your parameters.
Lastly, understand the algorithm you're using. Convolutional nets, recursive neural tensor networks, denoising autoencoders, and RBMs/DBNs are all different from one another.
Pay attention to your cost function: reconstruction entropy and negative log likelihood are used differently, for different objectives.
If you are doing feature learning with RBMs or denoising autoencoders, you will use reconstruction entropy. This is what you use for feature detectors. You may end up using negative log likelihood if you are dealing with continuous data.
For RBMs, pay attention to the different kinds of units[1]. Hinton recommends Gaussian visible units with rectified linear hidden units for continuous data, and binary-binary units otherwise.
For denoising autoencoders, watch your corruption level. A higher one helps generalize better, especially with less data.
For time series or sequential data, you can use a recurrent net, a moving window with DBNs, or a recursive neural tensor network.
Other knobs:
If your deep learning framework doesn't have adagrad, find one that does.
Dropout: crucial. Dropout is used in combination with mini batch learning to handle learning different "poses" of images as well as generalizing feature learning. This can be used in combination with sampling with replacement to minimize sampling error.
Regularization: L2 is typically used. Hinton once said: you want a neural net that always overfits but is regularized (youtube video...don't remember link right now).
Would love to answer questions! Source: I work on/teach this stuff.[2][3]
Lastly, tweak one knob at a time. Neural nets have a lot going on. You don't want a situation where you A/B tested 10 different parameters at once and you don't know which one worked or why.
[1]: http://www.cs.toronto.edu/~hinton/absps/guideTR.pdf
[2]: http://deeplearning4j.org/
[3]: http://zipfianacademy.com/
[4]: http://arxiv.org/abs/1206.5533 http://deeplearning4j.org/ http://deeplearning4j.org/debug.html http://yosinski.com/media/papers/Yosinski2012VisuallyDebuggi...
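A minimal sketch of the first two checks (weight histograms, gradient-norm tracking), assuming numpy and matplotlib rather than any particular framework:

```python
# Sketch of the first two tips: weight histograms and gradient-norm tracking.
# Assumes numpy + matplotlib; framework-agnostic illustration.
import numpy as np
import matplotlib.pyplot as plt

def inspect_weights(weights, name="layer"):
    plt.hist(weights.ravel(), bins=50)
    plt.title(f"{name}: mean={weights.mean():.3f} std={weights.std():.3f}")
    plt.show()   # roughly bell-shaped and centered is what you hope to see

def grad_norm(grads):
    # One scalar per training step; a sudden spike suggests the learning
    # rate is too high.
    return float(np.sqrt(sum((g ** 2).sum() for g in grads)))

inspect_weights(np.random.randn(256, 128) * 0.01, "fc1")
print(grad_norm([np.random.randn(256, 128), np.random.randn(128)]))
```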
deep-learning
ai
algorithms
may 2014 by hellsten
A Tour of Machine Learning Algorithms | Hacker News
may 2014 by hellsten
Here's an interesting series of tutorials as an intro to data mining / machine learning. I've been able to read through several of them; they are very beginner friendly but still useful.
Main website: http://guidetodatamining.com/chapter-1/
The actual PDFs: http://guidetodatamining.com/guide/ch1/DataMining-ch1.pdf
http://guidetodatamining.com/guide/ch2/DataMining-ch2.pdf
http://guidetodatamining.com/guide/ch3/DataMining-ch3.pdf
http://guidetodatamining.com/guide/ch4/DataMining-ch4.pdf
http://guidetodatamining.com/guide/ch5/DataMining-ch5.pdf
http://guidetodatamining.com/guide/ch6/DataMining-ch6.pdf
http://guidetodatamining.com/guide/ch7/DataMining-ch7.pdf
data-mining
machine-learning
algorithms
may 2014 by hellsten
Google Code Jam
april 2014 by hellsten
Google Code Jam is back in action challenging professional and student programmers around the globe to solve difficult algorithmic puzzles. This year's Code Jam consists of four online rounds and concludes with the Onsite World Finals under the California sun in Google's Los Angeles office. The competition will really heat up this August during the finals, where the top 25, along with last year's champion, Ivan Miatselski (mystic) of Belarus, will jam it out for the $15,000 grand prize, the coveted title of Code Jam 2014 Champion and automatic qualification in the Code Jam 2015 finals to defend the title.
algorithms
contest
challenge
programming
todo
april 2014 by hellsten
Josh Haberman: LL and LR Parsing Demystified
august 2013 by hellsten
Think back to our first example of 1 + 2 * 3. Here is that expression written as a tree:
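The tree itself didn't survive the excerpt; reconstructed for 1 + 2 * 3 under the usual precedence, with a tiny evaluator:

```python
# Reconstructed parse tree for 1 + 2 * 3 (the figure didn't survive the excerpt):
#
#       +
#      / \
#     1   *
#        / \
#       2   3
#
# "*" binds tighter than "+", so it sits lower in the tree.
tree = ("+", 1, ("*", 2, 3))

def evaluate(node):
    if not isinstance(node, tuple):
        return node                     # leaf: a literal number
    op, left, right = node
    l, r = evaluate(left), evaluate(right)
    return l + r if op == "+" else l * r

print(evaluate(tree))  # 7
```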
parsing
parser
algorithms
python
august 2013 by hellsten
An Exhaustive Explanation of Minimax, a Staple AI Algorithm
january 2012 by hellsten
Remember that in Ruby, objects are not immutable. You can relax memory requirements by traversing the game tree on demand instead of creating the whole thing at the beginning.
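A bare-bones minimax sketch (in Python here, though the article uses Ruby; the tree is a hand-built toy, and children could just as well be generated on demand as the comment above suggests):

```python
def minimax(node, maximizing):
    if isinstance(node, int):          # leaf: a score from the maximizer's view
        return node
    # Generator expression evaluates children lazily, one branch at a time.
    values = (minimax(child, not maximizing) for child in node)
    return max(values) if maximizing else min(values)

# Tiny hand-built game tree: max chooses a branch, min replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3: max picks the left branch, min then picks 3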
ai
algorithms
ruby
totry
toread
january 2012 by hellsten
Clever Algorithms
february 2011 by hellsten
The book "Clever Algorithms: Nature-Inspired Programming Recipes" by Jason Brownlee PhD describes 45 algorithms from the field of Artificial Intelligence. All algorithm descriptions are complete and consistent to ensure that they are accessible, usable and understandable by a wide audience.
5 Reasons To Read:
1. 45 algorithms described.
2. Designed specifically for Programmers, Research Scientists and Interested Amateurs.
3. Complete code examples in the Ruby programming language.
4. Standardized algorithm descriptions.
5. Algorithms drawn from the popular fields of Computational Intelligence, Metaheuristics, and Biologically Inspired Computation.
ai
algorithms
programming
book
ruby
free
tosite1
february 2011 by hellsten
Using the Python NLTK Bayesian Classifier for word sense disambiguation - 92% accuracy - Jim Plush's Programming Paradise
december 2010 by hellsten
Today's article will be going over some basic word sense disambiguation using the NLTK toolkit in Python and Wikipedia. Word sense disambiguation is the process of trying to determine, when you mention the word "apple", whether you are talking about Apple the company or apple the fruit. I've read a few white papers on the subject and decided to try out some of my own tests to compare results.
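The NLTK side looks roughly like this (a sketch with made-up toy features; the article's real features come from Wikipedia text):

```python
# Sketch of an NLTK Naive Bayes classifier with made-up toy features;
# the article builds its real features from Wikipedia article text.
import nltk

def features(sentence):
    return {f"contains({w})": True for w in sentence.lower().split()}

train = [
    (features("apple releases new iphone"), "company"),
    (features("apple stock rises"), "company"),
    (features("apple pie recipe"), "fruit"),
    (features("crisp apple orchard harvest"), "fruit"),
]
classifier = nltk.NaiveBayesClassifier.train(train)
print(classifier.classify(features("apple releases tablet")))  # "company"
```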
nlp
nltk
bayesian
algorithms
python
analysis
text-classification
december 2010 by hellsten
Self-Improving Bayesian Sentiment Analysis for Twitter
august 2010 by hellsten
The simplest way to do this was to remove all ‘noise’ words from the tweets and classification process – those words that do not imply positivity or negativity, but that may falsely skew the results.
There are plenty of noise word lists around, so I took one of those and removed any words that are relevant to sentiment analysis (e.g. ‘unfortunately’, which appears in the MySQL list but may be useful for identifying negative tweets).
Each word (‘feature’) has a strength of positive/negative sentiment, based on the number of positive/negative tweets it is previously featured in. For example, in my dataset, the word ‘new’ is fairly positive, but the word ‘kudos’ is extremely positive. By calculating the number of strong words and variation in positive/negative words, a confidence can be calculated (e.g. a tweet that includes 5 negative words, 1 positive word, 2 extremely negative words and no extremely positive words can be confidently assumed to be negative).
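A compact sketch of the per-word strength idea (my own toy counts, not the post's tweet corpus): score each word by how often it appears in positive versus negative examples.

```python
# Per-word sentiment strength from labeled examples (toy illustration).
from collections import Counter

pos, neg = Counter(), Counter()
for text, label in [("love the new design", "pos"),
                    ("kudos great release", "pos"),
                    ("terrible update broke everything", "neg")]:
    (pos if label == "pos" else neg).update(text.split())

def strength(word):
    """Positive score in (0, 1); 0.5 means neutral. Laplace-smoothed."""
    p, n = pos[word] + 1, neg[word] + 1
    return p / (p + n)

print(strength("kudos"), strength("terrible"))  # ~0.67, ~0.33
```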
ai
text-classification
bayesian
twitter
algorithms
bayes
classification
august 2010 by hellsten
Introduction to Information Retrieval
july 2010 by hellsten
The book aims to provide a modern approach to information retrieval from a computer science perspective. It is based on a course we have been teaching in various forms at Stanford University and at the University of Stuttgart.
information-retrieval
rocchio
bayesian
svm
lsi
text-classification
classification
td-idf
best
development
computer-science
book
free
algorithms
datamining
ir
july 2010 by hellsten
You Don’t Need Math Skills To Be A Good Developer But You Do Need Them To Be A Great One
june 2010 by hellsten
Learning a little bit about search exposed me to all sorts of interesting software-y and computer science-y related things/problems (machine learning, natural language processing, algorithm analysis etc.) and now everywhere I turn I see math and so feel my lack of skills all the more keenly. I've come to the realization that you need a decent level of math skill if you want to do cool and interesting things with computers.
Normally people choose a framework or two and a programming language and go with that, which is fine and worthwhile. But consider the fact that frameworks and to a lesser extent languages have a limited shelf life. If you're building a career on being a Hibernate, Rails or Struts expert (the struts guys should really be getting worried now :)), you will have to rinse and repeat all over again in a few years when new frameworks come along to supersede the current flavour of the month.
skills
career
programming
math
development
best
tosite1
algorithms
june 2010 by hellsten
Stevey's Blog Rants: Math For Programmers
march 2010 by hellsten
The right way to learn math is to ignore the actual algorithms and proofs, for the most part, and to start by learning a little bit about all the techniques: their names, what they're useful for, approximately how they're computed, how long they've been around, (sometimes) who invented them, what their limitations are, and what they're related to. Think of it as a Liberal Arts degree in mathematics.
math
algorithms
programming
career
education
mathematics
learning
best
development
tosite1
march 2010 by hellsten
A Thousand Foot View of Machine Learning « Awwthor Blog
january 2010 by hellsten
For further reading, I suggest taking a look at Andrew Moore’s tutorials, which I have found to be very helpful. Andrew Moore is a well-known AI researcher from CMU. Mainly, I suggest taking a look at his tutorials on Decision Trees, Gaussian Mixture Models, K-Means Clustering and Support Vector Machines. For a broad look at the field, his Intro to AI tutorial might be helpful.
ai
machine-learning
algorithms
programming
svm
january 2010 by hellsten
Data Structures and Algorithms with Object-Oriented Design Patterns in Ruby
april 2007 by hellsten
Data Structures and Algorithms with Object-Oriented Design Patterns in Ruby
programming
algorithms
book
ruby
april 2007 by hellsten
related tags
advent-of-code, advice, ai, algorithm, algorithms, analysis, authentication, bayes, bayesian, bcrypt, best, big-o, binary, binary-search, binary-tree, book, career, challenge, cheatsheet, classification, clock, coding-challenges, computer-science, consistent-hash, contest, course, cs, css, data-mining, datamining, deep-learning, development, distributed, distributed-systems, education, engineering, free, github, golang, gotcha, grid, hash, images, information-retrieval, integers, interview, ir, khan-academy, layout, learning, list, lsi, machine-learning, machinelearning, math, mathematics, news, nlp, nltk, nobel-prize, parser, parsing, programming, python, reference, rocchio, rochio, ruby, sequences, skills, software-development, stat, statistics, sudoku, svm, td-idf, text-analysis, text-classification, theory, thumbnails, time, todo, toread, tosite1, totry, tree, trie, twitter, vector, visualization, yahoo