jm + map-reduce   12

Hadoop Internals
This is the best documentation on the topic I've seen in a while.
hadoop  map-reduce  architecture  coding  java  distcomp 
9 weeks ago by jm
_DataEngConf: Parquet at Datadog_
"How we use Parquet for tons of metrics data". good preso from Datadog on their S3/Parquet setup
datadog  parquet  storage  s3  databases  hadoop  map-reduce  big-data 
may 2016 by jm
Submitting User Applications with spark-submit - AWS Big Data Blog
looks reasonably usable, although EMR's crappy UI is still an issue
emr  big-data  spark  hadoop  yarn  map-reduce  batch 
february 2016 by jm
Spark 1.2 released
This is the version with the superfast petabyte-sort record:
Spark 1.2 includes several cross-cutting optimizations focused on performance for large scale workloads. Two new features Databricks developed for our world record petabyte sort with Spark are turned on by default in Spark 1.2. The first is a re-architected network transfer subsystem that exploits Netty 4’s zero-copy IO and off heap buffer management. The second is Spark’s sort based shuffle implementation, which we’ve now made the default after significant testing in Spark 1.1. Together, we’ve seen these features give as much as 5X performance improvement for workloads with very large shuffles.
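
For reference, the sort-based shuffle is controlled by the spark.shuffle.manager property: "sort" is the default from 1.2 onwards, but on 1.1 you had to opt in. A minimal Java sketch (the app name is arbitrary, and the master is assumed to be supplied by spark-submit):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ShuffleDemo {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                .setAppName("shuffle-demo")             // arbitrary name
                .set("spark.shuffle.manager", "sort");  // vs. the old "hash"
            JavaSparkContext sc = new JavaSparkContext(conf);
            // ... the job itself ...
            sc.stop();
        }
    }
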
spark  sorting  hadoop  map-reduce  batch  databricks  apache  netty 
december 2014 by jm
Spark Breaks Previous Large-Scale Sort Record – Databricks
Massive improvement over plain old Hadoop. This blog post goes into really solid techie reasons why, including:
First and foremost, in Spark 1.1 we introduced a new shuffle implementation called sort-based shuffle (SPARK-2045). The previous Spark shuffle implementation was hash-based that required maintaining P (the number of reduce partitions) concurrent buffers in memory. In sort-based shuffle, at any given point only a single buffer is required. This has led to substantial memory overhead reduction during shuffle and can support workloads with hundreds of thousands of tasks in a single stage (our PB sort used 250,000 tasks).


Also, use of Timsort, an external shuffle service to offload from the JVM, Netty, and EC2 SR-IOV.
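
To make the memory argument concrete, here's a toy model in Java -- nothing like Spark's actual code -- of why hash-based shuffle holds P live buffers per map task while sort-based shuffle gets away with one:

    import java.util.*;

    public class ShuffleSketch {
        // Hash-based: one buffer per reduce partition is live at once,
        // so a map task's memory grows with P (250,000 in the PB sort).
        static Map<Integer, List<String>> hashStyle(List<String> values, int p) {
            Map<Integer, List<String>> buffers = new HashMap<>();
            for (String v : values) {
                int part = Math.floorMod(v.hashCode(), p);
                buffers.computeIfAbsent(part, k -> new ArrayList<>()).add(v);
            }
            return buffers; // P live buffers
        }

        // Sort-based: a single buffer of (partition, value) pairs, sorted by
        // partition id and emitted as one run -- memory is independent of P.
        static List<Map.Entry<Integer, String>> sortStyle(List<String> values, int p) {
            List<Map.Entry<Integer, String>> buffer = new ArrayList<>();
            for (String v : values) {
                buffer.add(Map.entry(Math.floorMod(v.hashCode(), p), v));
            }
            buffer.sort(Map.Entry.comparingByKey());
            return buffer; // one live buffer, whatever p is
        }

        public static void main(String[] args) {
            List<String> vals = List.of("apple", "pear", "plum", "fig", "kiwi");
            System.out.println(hashStyle(vals, 4));
            System.out.println(sortStyle(vals, 4));
        }
    }
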
spark  hadoop  map-reduce  batch  parallel  sr-iov  benchmarks  performance  netty  shuffle  algorithms  sort-based-shuffle  timsort 
october 2014 by jm
Google Replaces MapReduce With New Hyper-Scale Cloud Analytics System
MR no more:
“We don’t really use MapReduce anymore,” [Urs] Hölzle said in his keynote presentation at the Google I/O conference in San Francisco Wednesday. The company stopped using the system “years ago.”

Cloud Dataflow, which Google will also offer as a service for developers using its cloud platform, does not have the scaling restrictions of MapReduce. “Cloud Dataflow is the result of over a decade of experience in analytics,” Hölzle said. “It will run faster and scale better than pretty much any other system out there.”

Gossip on the mech-sympathy list says that 'seems that the new platform taking over is a combination of FlumeJava and MillWheel: http://pages.cs.wisc.edu/~akella/CS838/F12/838-CloudPapers/FlumeJava.pdf , http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41378.pdf'
map-reduce  google  hadoop  cloud-dataflow  scalability  big-data  urs-holzle  google-io 
june 2014 by jm
Presto: Interacting with petabytes of data at Facebook
Presto has become a major interactive system for the company’s data warehouse. It is deployed in multiple geographical regions and we have successfully scaled a single cluster to 1,000 nodes. The system is actively used by over a thousand employees, who run more than 30,000 queries processing one petabyte daily.

Presto is 10x better than Hive/MapReduce in terms of CPU efficiency and latency for most queries at Facebook. It currently supports a large subset of ANSI SQL, including joins, left/right outer joins, subqueries, and most of the common aggregate and scalar functions, including approximate distinct counts (using HyperLogLog) and approximate percentiles (based on quantile digest). The main restrictions at this stage are a size limitation on the join tables and cardinality of unique keys/groups. The system also lacks the ability to write output data back to tables (currently query results are streamed to the client).
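
Those approximate aggregates are exposed as plain SQL functions (approx_distinct, approx_percentile). A minimal sketch through Presto's JDBC driver -- the coordinator host, catalog and the requests table are invented, and presto-jdbc is assumed to be on the classpath:

    import java.sql.*;

    public class PrestoApprox {
        public static void main(String[] args) throws SQLException {
            // Hypothetical coordinator, catalog and schema.
            String url = "jdbc:presto://coordinator.example.com:8080/hive/default";
            try (Connection conn = DriverManager.getConnection(url, "jm", "");
                 Statement stmt = conn.createStatement();
                 // approx_distinct is the HyperLogLog count; approx_percentile
                 // the quantile-digest percentile mentioned above.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT approx_distinct(user_id) AS uniques, " +
                     "       approx_percentile(latency_ms, 0.99) AS p99 " +
                     "FROM requests")) {
                while (rs.next()) {
                    System.out.printf("uniques=%d p99=%.1f%n",
                        rs.getLong("uniques"), rs.getDouble("p99"));
                }
            }
        }
    }
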
facebook  hadoop  hdfs  open-source  java  sql  hive  map-reduce  querying  olap 
november 2013 by jm
Don't use Hadoop - your data isn't that big
see also HN comments: https://news.ycombinator.com/item?id=6398650 , particularly davidmr's great one:

I suppose all of this is to say that the amount of required parallelization of a problem isn't necessarily related to the size of the problem set as is mentioned most in the article, but also the inherent CPU and IO characteristics of the problem. Some small problems are great for large-scale map-reduce clusters, some huge problems are horrible for even bigger-scale map-reduce clusters (think fluid dynamics or something that requires each subdivision of the problem space to communicate with its neighbors).
I've had a quote printed on my door for years: Supercomputers are an expensive tool for turning CPU-bound problems into IO-bound problems.


I love that quote!
hadoop  big-data  scaling  map-reduce 
september 2013 by jm
Practical machine learning tricks from the KDD 2011 best industry paper
Wow, this is a fantastic paper. It's a Google paper on detecting scam/spam ads using machine learning -- but not just that, it's how to build out such a classifier to production scale, and make it operationally resilient, and, indeed, operable.

I've come across a few of these ideas before, and I'm happy to say I might have reinvented a few (particularly around the feature space), but all of them together make extremely good sense. If I wind up working on large-scale classification again, this is the first paper I'll go back to. Great info! (via Toby diPasquale.)
classification  via:codeslinger  training  machine-learning  google  ops  kdd  best-practices  anti-spam  classifiers  ensemble  map-reduce 
july 2012 by jm
Introduction to parallel & distributed algorithms
really interesting parallel algorithm concepts. I'd seen parallel merge sort before from the map-reduce world, but some others are new to me and worth thinking about (via Hacker News)
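
The map-reduce-style parallel merge sort boils down to: split, sort the halves concurrently, merge. A minimal fork/join sketch in Java (the sequential cutoff is arbitrary):

    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;

    public class ParallelMergeSort extends RecursiveAction {
        private static final int CUTOFF = 1 << 13; // below this, sort sequentially
        private final int[] a, tmp;
        private final int lo, hi; // sorts a[lo, hi)

        ParallelMergeSort(int[] a, int[] tmp, int lo, int hi) {
            this.a = a; this.tmp = tmp; this.lo = lo; this.hi = hi;
        }

        @Override protected void compute() {
            if (hi - lo <= CUTOFF) { Arrays.sort(a, lo, hi); return; }
            int mid = (lo + hi) >>> 1;
            invokeAll(new ParallelMergeSort(a, tmp, lo, mid),   // sort halves
                      new ParallelMergeSort(a, tmp, mid, hi));  // in parallel
            merge(mid);
        }

        private void merge(int mid) { // standard two-way merge via tmp
            int i = lo, j = mid, k = lo;
            while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
            while (i < mid) tmp[k++] = a[i++];
            while (j < hi)  tmp[k++] = a[j++];
            System.arraycopy(tmp, lo, a, lo, hi - lo);
        }

        public static void sort(int[] a) {
            ForkJoinPool.commonPool()
                        .invoke(new ParallelMergeSort(a, new int[a.length], 0, a.length));
        }

        public static void main(String[] args) {
            int[] xs = new java.util.Random(42).ints(1_000_000).toArray();
            sort(xs);
            for (int i = 1; i < xs.length; i++) assert xs[i - 1] <= xs[i];
            System.out.println("sorted " + xs.length + " ints");
        }
    }
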
via:hackernews  algorithms  distributed  parallel  map-reduce  merge-sort  sorting  from delicious
august 2010 by jm
filemap
'File-based, rather than tuple-based processing'; based around UNIX command-line toolset; good UNIXish UI; lots of caching of intermediate results; low setup overhead -- although it does require a shared POSIX filesystem, e.g. NFS, for synchronization
networking  python  opensource  grid  map-reduce  filemap  files  unix  command-line  parallel  distcomp 
july 2009 by jm
