scale   30749


Why the Future of Machine Learning is Tiny « Pete Warden's blog
Instead I chose to speak about another trend that I am just as certain about, and that will have just as much impact, but which isn't nearly as well known. I'm convinced that machine learning can run on tiny, low-power chips, and that this combination will solve a massive number of problems we have no solutions for right now. That's what I'll be talking about at CogX, and in this post I'll explain more about why I'm so sure.
ml  scale  compute  blog  toread 
yesterday by cjitlal
State of React Native 2018 | Hacker News
I have been working on a commercial RN project for two years now. As of today, the project is ~50 kloc and provides builds for Android, iOS, Windows, and macOS. I started it with RN after prototyping with an early Ionic 2 version and evaluating Xamarin.
The share of code between iOS and Android is very high, almost 100% (except for a few simple home-made native components), and we are above 85% shared code between the mobile and desktop versions (mobile is RN; desktop is react-native-web plus a rewrite of several components on a Qt WebEngine, with native versions of a few RN components rewritten in C++). The C++ part is actually very simple, under 2 kloc. Note that we use TypeScript everywhere on the React side, and we consider it very valuable.

Overall, everything works great; we have a few performance issues here and there, but nothing we can't deal with. However, the biggest flaw is how deficient the build system is (it combines all the flaws of npm with the pain of Android upgrades). We still do not manage to get reproducible builds, and it has sometimes been very painful to fix an issue a few weeks later because the project would simply not build and required a few hours of work to build again.
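A code share that high usually comes from branching only at thin platform-specific leaves while everything else stays common. A minimal sketch of the `Platform.select`-style branching React Native offers for this; the `Platform` object below is a local stand-in for the real `react-native` module, so the shape is illustrative rather than the actual SDK:

```typescript
// Minimal stand-in for react-native's Platform module (illustrative only).
type OS = "ios" | "android" | "windows" | "macos" | "web";

const Platform = {
  OS: "ios" as OS,
  // Mirrors the shape of Platform.select: pick the entry for the current
  // OS, falling back to the "default" entry when none matches.
  select<T>(spec: Partial<Record<OS | "default", T>>): T {
    return (spec[Platform.OS] ?? spec.default) as T;
  },
};

// Shared logic stays identical across platforms; only small leaves diverge.
const storageDir = Platform.select({
  ios: "Library/Application Support",
  android: "files",
  default: "app-data",
});
```

With the real module, the same call sites compile unchanged for mobile and react-native-web, which is what keeps the shared percentage high; only the handful of native components mentioned above fall outside this pattern.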
Do you know why LibreOffice moved away from Java?
Because the performance was just not good enough.

They had custom, native UI while the backend was running in Java.

And it just wasn't fast enough. Even when hot, throughput and latency weren't good enough.

So, they started moving to C++ as much as possible, as fast as possible. By now you can run Writer without Java at all.

Current JS runtimes are all slower than HotSpot.

Yet somehow we're to believe that making the exact same mistake again will work better this time?

It'll still end up being slow and a memory hog.

Moore's law is dead, and RAM is more expensive than it has been at any point in almost a decade.

You can't just throw abstractions at everything at the cost of performance anymore, expecting the hardware to catch up.

yesterday by toddhoff
What It Was Like to Write a Full-Blown Flutter App | Hacker News
sunw 23 hours ago

CodePush was the main reason why our team went with React Native instead of Flutter. Being able to push bug fixes to all of our users in real time is incredible.
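What makes this possible is that CodePush versions the JS bundle independently of the store binary: at startup or on demand, the app compares its installed release label against the service and swaps in a newer bundle if one exists. A simplified model of that check; the `sync` method below imitates the spirit of react-native-code-push's `codePush.sync()` but is a stub, not the real SDK, and the release labels are invented:

```typescript
// Simplified model of an over-the-air (CodePush-style) update check.
// Labels like "v12" mimic CodePush release labels; the "server" is a stub list.
interface Release { label: string; bundle: string; }

class OtaClient {
  constructor(private installed: Release) {}

  // Ask the update service for its latest release; stubbed as an array here.
  fetchLatest(server: Release[]): Release {
    return server[server.length - 1];
  }

  // Install only if the server has something newer than what we run.
  sync(server: Release[]): "up-to-date" | "updated" {
    const latest = this.fetchLatest(server);
    if (latest.label === this.installed.label) return "up-to-date";
    this.installed = latest; // real SDK: download, verify, swap the bundle
    return "updated";
  }

  current(): string { return this.installed.label; }
}

const app = new OtaClient({ label: "v11", bundle: "main.v11.jsbundle" });
const result = app.sync([
  { label: "v11", bundle: "main.v11.jsbundle" },
  { label: "v12", bundle: "main.v12.jsbundle" }, // the pushed bug fix
]);
```

Because only the JS bundle changes, the fix reaches users without a store review cycle, which is the "real time" part of the comment above.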
yesterday by toddhoff

