machine-learning   24838


BenevolentAI: Worth Two Billion? | In the Pipeline
"AI is going to be very good at digging through what we’ve already found, and the hope is that it’ll tell us, from time to time, “Hey guys, you’re sitting on something big here but you just haven’t realized it yet”. But producing new knowledge is something else again."

I suppose you are speaking mainly from your experience as a scientist in the field for a couple of decades. Legit! But hey -- define your terms. What do you mean by knowledge? Do you mean causality? Or do you mean mechanism? Causal inference tells us that these two things are different and that, under certain conditions, observational data can actually measure causal relations (see the sketch at the end of this comment). Of course, to get mechanism you need to do an experiment. If this is what you mean, then your argument breaks down to the common idea "correlation doesn't imply causation," but I address this in the paragraph above.

Also implicit in your argument is the assumption that all the "knowledge" (again, whatever that means) in existing data sets has already been extracted. True or false? And, finally, that this existing knowledge has been synthesized into some kind of unified entity that can be queried.

So, I dunno about this PowerPoint presentation. It sounds like a pile of annoying, smelly garbage, I agree, but that doesn't mean their work is actually worthless. On the contrary, you yourself seem to admit as much at the end of the article. And if they need to sell their work to investors with some hyped-up nonsense, should that really surprise you, having been a working scientist for decades?
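To make the causal-inference point above concrete, here is a minimal sketch (plain numpy, with invented variable names and effect sizes) of how adjusting for an observed confounder recovers a causal effect from purely observational data, while the naive regression does not. It illustrates backdoor adjustment in general, nothing specific to BenevolentAI's pipeline.

# Toy illustration: with the confounder observed, adjusting for it recovers
# the true causal effect (1.5) from observational data alone; the naive
# regression is biased upward. All numbers are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

confounder = rng.normal(size=n)                        # e.g. disease severity
treatment = 0.8 * confounder + rng.normal(size=n)      # confounder drives treatment
outcome = 1.5 * treatment + 2.0 * confounder + rng.normal(size=n)

# Naive estimate: regress outcome on treatment alone.
X_naive = np.column_stack([treatment, np.ones(n)])
naive_coef, *_ = np.linalg.lstsq(X_naive, outcome, rcond=None)

# Adjusted estimate: include the confounder as a covariate.
X_adj = np.column_stack([treatment, confounder, np.ones(n)])
adj_coef, *_ = np.linalg.lstsq(X_adj, outcome, rcond=None)

print(f"naive estimate:    {naive_coef[0]:.2f}")   # ~2.5, biased
print(f"adjusted estimate: {adj_coef[0]:.2f}")     # ~1.5, the true effect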
benevolentai  drug-discovery  machine-learning  mlhc  ml-hype 
19 hours ago by gideonite
[D] Why is Deep Learning so bad for tabular data? : MachineLearning
From personal experience and general ML culture, I know that standard ML methods like SVM, RF, and tree boosting outperform DL models for supervised...
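One quick way to sanity-check the claim on a tabular problem of your own is to pit gradient-boosted trees against a small feed-forward net under the same split. The sketch below uses scikit-learn's bundled breast-cancer dataset purely as a stand-in, and the hyperparameters are arbitrary, so treat it as a template rather than evidence either way.

# Rough sanity check of "trees vs. deep nets on tabular data" -- not a benchmark.
# Dataset and hyperparameters are arbitrary stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "gradient boosting": HistGradientBoostingClassifier(random_state=0),
    "small MLP": make_pipeline(StandardScaler(),
                               MLPClassifier(hidden_layer_sizes=(64, 64),
                                             max_iter=2000, random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name:>18}: test accuracy = {model.score(X_test, y_test):.3f}")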
machine-learning  deep-learning  tabular 
yesterday by pmigdal
[1808.05563] Learning Invariances using the Marginal Likelihood
Generalising well in supervised learning tasks relies on correctly extrapolating the training data to a large region of the input space. One way to achieve this is to constrain the predictions to be invariant to transformations on the input that are known to be irrelevant (e.g. translation). Commonly, this is done through data augmentation, where the training set is enlarged by applying hand-crafted transformations to the inputs. We argue that invariances should instead be incorporated in the model structure, and learned using the marginal likelihood, which correctly rewards the reduced complexity of invariant models. We demonstrate this for Gaussian process models, due to the ease with which their marginal likelihood can be estimated. Our main contribution is a variational inference scheme for Gaussian processes containing invariances described by a sampling procedure. We learn the sampling procedure by back-propagating through it to maximise the marginal likelihood.
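The paper's real contribution is the variational scheme that makes the marginal likelihood tractable, but the core object is simple to sketch: a kernel made (approximately) invariant by averaging a base kernel over transformations drawn from a sampling procedure. The version below hard-codes random 1-D translations as the invariance and omits the GP inference and the learning of the sampling procedure, so it is only an illustration of the construction.

# Core construction from the abstract, heavily simplified: make a kernel
# invariant to a transformation by averaging a base kernel over inputs drawn
# from the transformation's orbit. Marginal-likelihood learning and the
# variational scheme from the paper are omitted.
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Base (non-invariant) RBF kernel between two single inputs."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * lengthscale ** 2))

def sample_orbit(x, n_samples, rng, max_shift=0.5):
    """Sampling procedure for the invariance -- here, random 1-D translations.
    In the paper this procedure itself is parameterised and learned."""
    shifts = rng.uniform(-max_shift, max_shift, size=n_samples)
    return x + shifts[:, None]

def invariant_kernel(x1, x2, rng, n_samples=50):
    """k_inv(x1, x2) = E[ k(g(x1), g'(x2)) ], estimated by Monte Carlo."""
    orbit1 = sample_orbit(x1, n_samples, rng)
    orbit2 = sample_orbit(x2, n_samples, rng)
    vals = [rbf(a, b) for a in orbit1 for b in orbit2]
    return float(np.mean(vals))

rng = np.random.default_rng(0)
print("invariant kernel value:", invariant_kernel(np.array([0.0]), np.array([0.3]), rng))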
machine-learning  generalization  representation  rather-interesting  HOWEVER  consider:genetic-programming  consider:evolution-of-code  to-write-about 
yesterday by Vaguery
[1808.04730] Analyzing Inverse Problems with Invertible Neural Networks
In many tasks, in particular in natural science, the goal is to determine hidden system parameters from a set of measurements. Often, the forward process from parameter- to measurement-space is a well-defined function, whereas the inverse problem is ambiguous: one measurement may map to multiple different sets of parameters. In this setting, the posterior parameter distribution, conditioned on an input measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task -- so-called Invertible Neural Networks (INNs). Although INNs are not new, they have, so far, received little attention in literature. While classical neural networks attempt to solve the ambiguous inverse problem directly, INNs are able to learn it jointly with the well-defined forward process, using additional latent output variables to capture the information otherwise lost. Given a specific measurement and sampled latent variables, the inverse pass of the INN provides a full distribution over parameter space. We verify experimentally, on artificial data and real-world problems from astrophysics and medicine, that INNs are a powerful analysis tool to find multi-modalities in parameter space, to uncover parameter correlations, and to identify unrecoverable parameters.
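The abstract doesn't spell out the architecture, but INNs of this kind are usually built from coupling blocks whose inverse is available in closed form. Below is a generic RealNVP-style affine coupling block (not the paper's exact model, and with a single linear layer standing in for the scale/translation sub-networks) to show why the inverse pass is exact and cheap.

# One affine coupling block, the usual building block of invertible neural
# networks: the forward map is cheap to invert exactly. This is a generic
# RealNVP-style block, not the architecture from the paper.
import numpy as np

rng = np.random.default_rng(0)
D = 4                                                 # toy dimensionality; x is split in half
W_s = rng.normal(scale=0.1, size=(D // 2, D // 2))    # "scale" sub-network (here: one linear layer)
W_t = rng.normal(scale=0.1, size=(D // 2, D // 2))    # "translation" sub-network

def forward(x):
    x1, x2 = x[: D // 2], x[D // 2 :]
    s, t = np.tanh(x1 @ W_s), x1 @ W_t    # parameters computed from the untouched half
    y2 = x2 * np.exp(s) + t               # affine transform of the other half
    return np.concatenate([x1, y2])

def inverse(y):
    y1, y2 = y[: D // 2], y[D // 2 :]
    s, t = np.tanh(y1 @ W_s), y1 @ W_t    # same sub-networks, so the inverse is exact
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

x = rng.normal(size=D)
print(np.allclose(inverse(forward(x)), x))   # True: the block inverts exactly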
machine-learning  neural-networks  inverse-problems  rather-interesting  representation  to-write-about 
yesterday by Vaguery


related tags

academia  advice  aggregator  ai  airbnb  algo  algorithm  algorithms  amazon-aws  api  art  artists  arxiv  asr  automated-machine-learning  automl  bayesian  benevolentai  blog  book  books  business  cloud  cnns  comparison  computer-science  computer-vision  computer_vision  conference  consider:evolution-of-code  consider:genetic-programming  content-creation  cool  data-science  data  databricks  datascience  deep-learning  deepmind  deployment  design  dexter  dictation  differential-geometry  differential-privacy  dropout  drug-discovery  ethics  facebook  fairness  fake-news  fast-ai  feature-engineering  framework  games  gan  generalization  generative-adversarial-network  generative  google  gpu  graphics  hardware  haskell  however  howto  imagenet  interactive  inverse-problems  javascript  js  julia  karpathy  keras  learning  library  list  lstm  machinelearning  marketplace  match  math  mathematics  ml-hype  ml  mlflow  mlhc  neural-networks  neural  neural_network  neuralnetworks  nlp  open-ai  open-source  optimization  pandas  portrait  prediction  pricing  programming  publishing  python  pytorch  rather-interesting  reading  reference  reinforcement-learning  representation  research  rnns  scan  selfie  software-engineering  software  spark  speech  statistics  structural-modeling  study-group  summary  systems  tabular  tensor-flow  tensorflow  text  time-series  time  tips  to-write-about  tool  tools  topology  training  transfer-learning  tutorial  ui  ux  visualization  web  xgboost 
