asynchronous_event_processing   83

Rust in Detail: Writing Scalable Chat Service from Scratch
So, instead we’ll use efficient I/O multiplexing system APIs that employ an event loop — that’s epoll on Linux[7] and kqueue on FreeBSD and OS X[8].
Rust  networking_hardware  Sockets  OS_X  FreeBSD  Linux  epoll  kqueue  asynchronous_event_processing  System_Programming 
october 2015 by snearch
How we've made Raptor up to 4x faster than Unicorn, up to 2x faster than Puma, Torquebox
The libev event library

As mentioned before, our builtin HTTP server is evented. Writing a network event loop with support for I/O, timers, etcetera is quite a lot of work. This is further complicated by the fact that every operating system has its own mechanism for scalable I/O polling. Linux has epoll, the BSDs and OS X have kqueue, Solaris has event ports; the list goes on.

Fortunately, there exist libraries which abstract away these differences. We use the excellent libev library by Marc Lehmann. Libev is very fast and provides I/O watchers, timer watchers, async-signal safe communication channels, support for multiple event loops, etc.

Libev should not be confused with the similarly-named libevent. Libevent is also an excellent library and is much more full-featured than libev. For example, it also provides asynchronous DNS lookups, an RPC framework, a builtin evented HTTP server, etc. However, we don’t need any of those extra features, and we were confident that we could make an HTTP server that’s faster than the libevent builtin HTTP server. We’ve also found libev to be faster than libevent thanks to libev’s smaller feature set. This is why we’ve chosen to go with libev instead of libevent.
programming  asynchronous_event_processing  OS_X  Linux  *BSD  libev  libevent  epoll  networking_hardware  kqueue 
november 2014 by snearch
My first unikernel - Thomas Leonard's blog
Mirage uses the usual Lwt library for cooperative threading, which I wrote about last year in Asynchronous Python vs OCaml. `>>=` means to wait for the result, allowing other code to run. Everything in Mirage is non-blocking, even writing to the console, while the Mirage runtime runs the main event loop.

Since we’re using libraries, let’s switch to ocamlbuild and give the dependencies in the _tags file, as usual for OCaml projects:
OCaml  Unikernel  Leonard_Thomas  Virtual_Machines  XEN  Tools_Software  mirage  asynchronous_event_processing 
july 2014 by snearch
How I want to write Node: stream all the things | Hacker News
aegiso 14 hours ago

> JS's lack of strong typing limits your ability to reason about streams (a lot more than just streams, too) and further limits your ability to write performant stream computing software

I think you missed my point so I'll restate: in practice you don't reason about streams in Node, because the community (a product of the simplicity of the streams API) has a packaged solution to your problem. It plugs right in. And this ecosystem exists because of the simplicity and dynamicity of the constructs used.

I actually agree with you that Haskell does it "better". It's purer and cleaner. You'll probably have fewer bugs if you write everything in Haskell.

Except it doesn't matter to me, because Haskell doesn't have anything close to the plug-and-playability of npm modules -- and this is a pure social product of the stupid interface that Node exposes compared to Haskell. Node is shittier, and that's why it's more capable at solving the problem I have -- constructing powerful apps in close to no time, and zero lines of my own code.

I guess what I'm saying is that sometimes worse is better.


codygman 12 hours ago

Can you qualify "Haskell doesn't have anything close to the plug-and-playability of npm modules" because I don't quite get what you mean? Maybe it is because I can't think of anything plug-and-play in node that isn't plug-and-play in Haskell.


aegiso 1 hour ago

Sure. I want a git server with push notifications. Two lines with the pushover module in node.

I would be pleasantly surprised if this existed at all in the Haskell community. Even more so with two lines of my own code.

Node.js  Streaming  asynchronous_event_processing 
february 2014 by snearch
JT's Personal Blog, Real Time Web Apps
Real Time Web Apps

I’m starting to get serious about real-time web apps. At first I was considering using some of the Perl stuff (Meteor, Twiggy, AnyEvent, etc) to roll my own, but then I started looking around, and found a bunch of other interesting stuff to investigate:

- Firehose
- Firebase
- PubNub
- Pusher

The thing that’s cool about these services is that I can still write my apps using non-async stuff that I love (like DBIx::Class) and still get most/all of the benefits of an async web service.
Webdevelopment  Firehose  Firebase  PubNub  Pusher  asynchronous_event_processing  DBIx::Class 
september 2013 by snearch
nonblocking - What is the point/purpose of Ruby EventMachine, Python Twisted, or JavaScript Node.js? - Stack Overflow
You might use one of these frameworks if you want to write code that does networking.

For example, if you were going to write a massively multiplayer video game, "setting up a Java program ... to dispatch a thread for each request" probably isn't an option; juggling that many threads is phenomenally complex, and it performs poorly as well. Not to mention the fact that "just spawn a bunch of threads" is missing a bunch of the management tools that Twisted et al. have, like twistd, which handles logging, daemonization, startup and shutdown, and so on.

Or if you wanted to write a build automation system, the ability to asynchronously invoke and control subprocesses would be useful. If you spawn a process asynchronously, you can easily kill that process and gracefully deal with its exit. If you spawn it by starting a thread and blocking in that thread you can't stop it easily, since stopping a thread is inherently unsafe.

EventMachine and Twisted can both be used to write client-side programs as well; maybe you're writing a GUI application that isn't web-based, and you want to use the same protocol implementation on the client and the server.

Since you can use asynchronous frameworks in so many different contexts, it's possible that you might want to use it in a web application simply because you have existing library code, written for some other application using your async framework, which you want to use. Or you might want to be able to re-use your web application code in some hypothetical future non-web application. In this case, it's not that much different than using Apache or Tomcat or whatever in terms of functionality, it just gives you a more general, re-usable way to organize your program.
asynchronous_event_processing  EventMachine  Ruby  Twisted  Python 
july 2013 by snearch
What is the best practice to write a concurrent TCP server in Go? - Google Groups
It is said that the event-driven nonblocking model is not the preferred programming model in Go, so I use the "one goroutine for one client" model, but is it OK to handle millions of concurrent goroutines in a server process?

A goroutine itself is 4kb. So 1e6 goroutines would require 4gb of base memory. And then whatever your server needs per goroutine that you add.

Any machine that might be handling 1e6 concurrent connections should have well over 4gb of memory.

And, how can I "select" millions of channels to see which goroutine has received data?

That's not how it works. You just try to .Read() in each goroutine, and the select is done under the hood. The select{} statement is for channel communication, specifically.
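To illustrate the distinction the answer is drawing, here is a minimal sketch of what `select{}` is actually for (the channel contents are made up): multiplexing over whichever channel is ready, not over sockets.

```go
package main

import "fmt"

// merge receives n values total from two channels, taking whichever
// is ready first — channel communication is the job of select{}.
func merge(a, b <-chan string, n int) []string {
	out := make([]string, 0, n)
	for len(out) < n {
		select {
		case v := <-a:
			out = append(out, v)
		case v := <-b:
			out = append(out, v)
		}
	}
	return out
}

func main() {
	a, b := make(chan string), make(chan string)
	go func() { a <- "from a" }()
	go func() { b <- "from b" }()
	// Order is nondeterministic: whichever producer is ready first wins.
	fmt.Println(merge(a, b, 2))
}
```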

All IO in go is event driven under the hood, but as far as the code you write, looks linear. The Go runtime maintains a single thread that runs epoll or kqueue or whatever under the hood, and wakes up a goroutine when new data has arrived for that goroutine.

The "select" statement can only select on a predictable number of channels, not on a lot of unpredictable channels. And how can I "select" a TCP connection (which is not a channel) to see if any data has arrived? Are there any "design patterns" for concurrent programming in Go?

These problems you anticipate simply do not exist with go. Give it a shot!
Goroutines  asynchronous_event_processing  select  tcp  concurrent_server  server  Client/Server  Golang 
july 2013 by snearch
Clojure core.async and Go: A Code Comparison : programming
RappingProgrammer 2 points 2 hours ago

Ok, so we have this code here in examples:

(doseq [_ (range 10)]
  (println (<! c)))))

Why could this not be written as:

(doseq [_ (range 10)]
  (println (<! c)))

Hueho 4 points 1 hour ago

It's explained in the article, right after the example.

doubleagent03 1 point 1 hour ago

I don't buy it. -main is running as a native thread already. In the Clojure code he goes to the trouble of spawning another thread but he does not do so in the Go code.

RappingProgrammer 0 points 1 hour ago

That's not what I mean. I mean why is that even necessary in Clojure?

erlanggod 1 point 36 minutes ago

Yep! It could (almost[1]) be written as you wrote, had he used the thread macro instead of the go macro. The go macro was designed for ClojureScript support.
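For contrast with the Clojure snippets above, a Go version of the same ten-reads loop (the producer filling `c` is invented for this sketch) needs no go/thread wrapper at all: a channel receive parks only the goroutine, and main is already one.

```go
package main

import "fmt"

// readN receives n values from c. The receives block the goroutine,
// not an OS thread, so no go/thread macro equivalent is needed.
func readN(c <-chan int, n int) []int {
	vals := make([]int, 0, n)
	for i := 0; i < n; i++ {
		vals = append(vals, <-c)
	}
	return vals
}

func main() {
	c := make(chan int)
	// Producer goroutine, standing in for whatever fills c in the article.
	go func() {
		for i := 0; i < 10; i++ {
			c <- i
		}
	}()
	// The analogue of (doseq [_ (range 10)] (println (<! c))):
	fmt.Println(readN(c, 10)) // → [0 1 2 3 4 5 6 7 8 9]
}
```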


Clojure  ClojureScript  core.async  asynchronous_event_processing  Golang 
july 2013 by snearch
