machinelearning   50117


Train your own ML model using Scikit and use in iOS app with CoreML (and probably with Augmented…
In this article, we will train a machine learning model from scratch, convert it to CoreML model, and finally use the generated CoreML model to write a simple iOS application. The focus of the…
yesterday by swanand
Machine Learning on 2K of RAM
This paper develops a novel tree-based algorithm, called Bonsai, for efficient prediction on IoT devices—such as those based on the Arduino Uno board having an 8-bit ATmega328P microcontroller operating at 16 MHz with no native floating point support, 2KB RAM, and 32KB read-only flash.
machinelearning  arduino  iot  internetofthings  constrained 
yesterday by dlkinney
Neural-inspired sensors enable sparse, efficient classification of spatiotemporal data | PNAS
Winged insects perform remarkable aerial feats in uncertain, complex fluid environments. This ability is enabled by sensation of mechanical forces to inform rapid corrections in body orientation. Curiously, mechanoreceptor neurons do not faithfully report forces; instead, they are activated by specific time histories of forcing. We find that, far from being a bug, neural encoding by biological sensors is a feature that acts as built-in temporal filtering superbly matched to detect body rotation. Indeed, this encoding further enables surprisingly efficient detection using only a small handful of neurons at key locations. Nature suggests smart data as an alternative strategy to big data, and neural-inspired sensors establish a paradigm in hyperefficient sensing of complex systems.
biology  biomechanics  biomemesis  machinelearning 
2 days ago by madamim
Will compression be machine learning’s killer app? • Pete Warden's blog
Warden used to be chief technology officer for a company called Jetpac, which used neural networks to do interesting stuff with Instagram photos; then Google bought it, and he's working on machine learning there:
<p>One of the other reasons I think ML is such a good fit for compression is how many interesting results we’ve had recently with natural language. If you squint, you can see captioning as a way of radically compressing an image. One of the projects I’ve long wanted to create is a camera that runs captioning at one frame per second, and then writes each one out as a series of lines in a log file. That would create a very simplistic story of what the camera sees over time; I think of it as a narrative sensor.

The reason I think of this as compression is that you can then apply a generative neural network to each caption to recreate images. The images won’t be literal matches to the inputs, but they should carry the same meaning. If you want results that are closer to the originals, you can also look at stylization, for example to create a line drawing of each scene. What these techniques have in common is that they identify parts of the input that are most important to us as people, and ignore the rest.

It’s not just images.

There’s a similar trend in the speech world. Voice recognition is improving rapidly, and so is the ability to synthesize speech. Recognition can be seen as the process of compressing audio into natural language text, and synthesis as the reverse. You could imagine being able to highly compress conversations down to transmitting written representations rather than audio. I can’t imagine a need to go that far, but it does seem likely that we’ll be able to achieve much better quality and lower bandwidth by exploiting our new understanding of the patterns in speech.</p>
machinelearning  compression 
2 days ago by charlesarthur


