randomforests   56


Leo Breiman, 1928–2005
Professor Breiman was a member of the National Academy of Sciences. His research in later years focused on computationally intensive multivariate analysis, especially the use of nonlinear methods for pattern recognition and prediction in high-dimensional spaces. He was a co-author of Classification and Regression Trees, and he developed decision trees as computationally efficient alternatives to neural nets; this work has applications in speech and optical character recognition. He was also the author of the textbooks Probability and Stochastic Processes with a View Toward Applications, Statistics with a View Toward Applications, and Probability.
decisiontrees  machinelearning  research  statistics  mathematics  randomforests 
september 2018 by ianchanning
Quick-R: Tree-Based Models
Recursive partitioning is a fundamental tool in data mining. It helps us explore the structure of a set of data while developing easy-to-visualize decision rules for predicting a categorical (classification tree) or continuous (regression tree) outcome. This section briefly describes CART modeling, conditional inference trees, and random forests.
r  decisiontrees  ml  tutorial  randomforests 
september 2018 by paulbradshaw
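The CART and random-forest models described above can be sketched in a few lines. This is an illustrative example using scikit-learn rather than the R code the Quick-R tutorial itself presents; the toy dataset and parameter choices are assumptions for demonstration only.

```python
# Hedged sketch: a single classification tree vs. a random forest,
# using scikit-learn on the built-in iris dataset (not from the tutorial).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# A CART-style classification tree: recursive binary partitioning of the
# feature space into regions with (mostly) homogeneous class labels.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# A random forest: many trees grown on bootstrap samples with random
# feature subsets at each split, then averaged for the final prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(tree.score(X, y), forest.score(X, y))
```

The same two-step comparison (single tree, then ensemble) mirrors the structure of the tutorial, which moves from CART models to forests.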
Deep Neural Decision Forests
We present Deep Neural Decision Forests – a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions, and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain Top5-Errors of only 7.84%/6.38% on ImageNet validation data when integrating our forests in a single-crop, single/seven model GoogLeNet architecture, respectively. Thus, even without any form of training data set augmentation we are improving on the 6.67% error obtained by the best GoogLeNet architecture (7 models, 144 crops).
machinelearning  papers  decisiontrees  deeplearning  randomforests  algorithms  research 
november 2015 by devin
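The abstract's central idea – a stochastic, differentiable decision tree whose split decisions are routing probabilities rather than hard branches – can be sketched in NumPy. This is an illustrative toy (a depth-2 "soft" tree with sigmoid splits), not the paper's implementation; all function names, shapes, and parameters here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(x, w_root, w_left, w_right, leaf_dists):
    """Depth-2 soft decision tree (hypothetical sketch).

    Each internal node routes the input left with probability
    sigmoid(w @ x) instead of making a hard split, so the whole
    prediction is differentiable in the split parameters w.
    leaf_dists is a (4, n_classes) array of leaf class distributions.
    """
    p_root = sigmoid(w_root @ x)   # prob. of going left at the root
    p_l = sigmoid(w_left @ x)      # prob. of going left at the left child
    p_r = sigmoid(w_right @ x)     # prob. of going left at the right child
    path_probs = np.array([
        p_root * p_l,              # leaf 0: left, left
        p_root * (1 - p_l),        # leaf 1: left, right
        (1 - p_root) * p_r,        # leaf 2: right, left
        (1 - p_root) * (1 - p_r),  # leaf 3: right, right
    ])
    # Final prediction: mixture of leaf distributions, weighted by path probs.
    return path_probs @ leaf_dists

rng = np.random.default_rng(0)
x = rng.normal(size=5)                     # toy input features
w = rng.normal(size=(3, 5))                # split weights for the 3 nodes
leaves = rng.dirichlet(np.ones(3), size=4) # 4 leaves over 3 classes
pred = soft_tree_predict(x, w[0], w[1], w[2], leaves)
print(pred)
```

Because the path probabilities sum to 1 and each leaf holds a class distribution, the output is itself a valid distribution, which is what lets such trees sit on top of a convolutional network and be trained end-to-end by gradient descent.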


related tags

ai  algorithm  algorithms  analytics  anomolydetection  apache  artificialintelligence  bagging  bigdata  blogs  book  boosting  breiman  classification  compsci  data  datamining  datasci  datascience  decisiontree  decisiontrees  deeplearning  ensemble  fast.ai  forest  forests  free  from  google  gradientboosting  hn  implementation  java  kaggle  libs  machine-learning  machinelearning  mapreduce  math  mathematics  missingdata  ml  multilabel  networks  neuralnetworks  online  paper  papers  physics  prediction  programming  python  r  random  randomforest  regression  research  rf  scikit-learn  slides  statistics  stats  svm  thesis  titanic  trading  trees  tutorial  twitter  video  work 
