ML   35360

Caret with Yeo Johnson method normal normalization
4/29/2017
Q:
Is there a way I can find out which variables it preprocessed, rather than going through the individual variables to see what occurred?

How can I find out which power (lambda) it used in the Yeo-Johnson transformation for each variable?
------------------------------
A: Max's answer:
If you are using the preprocessing inside of train(), you can access the preProcess object:
library(caret)

mod <- train(Sepal.Width ~ ., data = iris,
             method = "lm",
             preProc = c("center", "scale", "YeoJohnson"))
mod$preProcess
Created from 150 samples and 5 variables

Pre-processing:
  - centered (5)
  - ignored (0)
  - scaled (5)
  - Yeo-Johnson transformation (3)

Lambda estimates for Yeo-Johnson transformation:
-0.32, 1.09, 0.84

You can get the specific variables using:

mod$preProcess$method
$center
[1] "Sepal.Length" "Petal.Length" "Petal.Width" "Speciesversicolor" "Speciesvirginica"

$scale
[1] "Sepal.Length" "Petal.Length" "Petal.Width" "Speciesversicolor" "Speciesvirginica"

$YeoJohnson
[1] "Sepal.Length" "Petal.Length" "Petal.Width"

$ignore
character(0)

There are also objects for each preprocessing operation. For example, the Yeo-Johnson lambda estimates:

mod$preProcess$yj
$Sepal.Length
Estimated transformation parameters
Y1
-0.321

$Petal.Length
Estimated transformation parameters
Y1
1.09

$Petal.Width
Estimated transformation parameters
Y1
0.84
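As a minimal sketch of the same idea outside of train() (assuming the same iris data; variable selection here is illustrative): preProcess() can be called directly, and the fitted object applied to data with predict().

```r
library(caret)

# Fit the preprocessing recipe on the numeric predictors
iris_x <- iris[, c("Sepal.Length", "Petal.Length", "Petal.Width")]
pp <- preProcess(iris_x, method = c("center", "scale", "YeoJohnson"))

# Per-variable Yeo-Johnson lambda estimates, as in mod$preProcess$yj
pp$yj

# Apply the fitted transformations to (new) data
head(predict(pp, iris_x))
```

The same predict() call is what train() runs internally before fitting the model, so inspecting pp this way shows exactly what each variable received.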
Linear  Regression  ML  machine  learning  Caret  ex  exs  examples  example  iris  specify  specification  Yeo  Johnson  method  normal  normalization  var  variables  vars  variable  YeoJohnson  Yeo-Johnson 
12 hours ago by SFdude
Monte Carlo tree search - Wikipedia
A heuristic search algorithm for some kinds of decision processes, most notably those employed in game play. Works well for non-deterministic games. Reminds me a bit of one-armed bandit optimization, in that it balances explore/exploit strategies down the decision tree. Uses UCB1 applied to trees.
algorithms  ml  ai 
yesterday by tobym
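The UCB1 rule mentioned in the MCTS note above can be sketched in a few lines (names are illustrative, not from the article): for a child node with w wins over n visits, whose parent has been visited N times, the selection score balances the observed win rate (exploit) against an uncertainty bonus (explore).

```r
# UCB1 score for a child node in Monte Carlo tree search.
# w: child's total reward; n: child's visit count;
# N: parent's visit count; c: exploration constant (sqrt(2) is common).
ucb1 <- function(w, n, N, c = sqrt(2)) {
  w / n + c * sqrt(log(N) / n)   # exploitation term + exploration term
}

# A rarely visited child earns a larger exploration bonus
# than a well-explored one with a higher win rate:
ucb1(w = 1, n = 2, N = 100)
ucb1(w = 10, n = 50, N = 100)
```

At each step the search descends to the child with the highest score, which is the explore/exploit balance the note compares to multi-armed bandits.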
ConceptNet Numberbatch 17.04: better, less-stereotyped word vectors – ConceptNet blog
Systematically removing inappropriate bias from machine-learned semantics/word vectors.
ml  ai  word2vec  google  microsoft  bias  fairness  techdel  via:slack 
yesterday by npdoty
