jm + marshalling (4)

Avro, mail # dev - bytes and fixed handling in Python implementation - 2014-09-04, 22:54
More Avro trouble with "bytes" fields! Avoid using "bytes" fields in Avro if you plan to interoperate with either of the Python implementations; they both fail to marshal them into JSON format correctly. This thread concerns the official "avro" library, which produces UTF-8 errors when a non-UTF-8 byte is encountered.
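Not from the thread itself, but a minimal sketch (plain Python, no avro library, made-up byte values) of why this bites: Avro's JSON encoding maps bytes/fixed to a JSON string in which code points 0-255 correspond 1:1 to byte values (i.e. an ISO-8859-1 decode), whereas treating the raw bytes as UTF-8 text blows up as soon as the bytes aren't valid UTF-8.

```python
import json

raw = b"\xc3\x28\xff"   # made-up payload; \xff can never appear in valid UTF-8

# Roughly what the failing path amounts to: treating the bytes as UTF-8 text.
try:
    json.dumps({"payload": raw.decode("utf-8")})
except UnicodeDecodeError as err:
    print("UTF-8 decode fails:", err)

# What the Avro spec's JSON encoding asks for: bytes/fixed become a JSON string
# whose code points 0-255 map 1:1 onto byte values, i.e. an ISO-8859-1 decode.
encoded = json.dumps({"payload": raw.decode("iso-8859-1")})
print(encoded)

# And that mapping round-trips back to the original bytes.
assert json.loads(encoded)["payload"].encode("iso-8859-1") == raw
```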
bytes  avro  marshalling  fail  bugs  python  json  utf-8 
march 2015 by jm
Release Protocol Buffers v3.0.0-alpha-2 · google/protobuf
New major-version track for protobuf, with some interesting new features (a small Python sketch of a few of them follows the list):

Removal of field presence logic for primitive value fields, removal of required fields, and removal of default values. This makes proto3 significantly easier to implement with open struct representations, as in languages like Android Java, Objective C, or Go.
Removal of unknown fields.
Removal of extensions, which are instead replaced by a new standard type called Any.
Fix semantics for unknown enum values.
Addition of maps.
Addition of a small set of standard types for representation of time, dynamic data, etc.
A well-defined encoding in JSON as an alternative to binary proto encoding.
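A small sketch, not from the release notes, of a few of the items above as seen from the Python protobuf runtime (assuming a proto3-era `protobuf` package is installed): the standard types for time and dynamic data, Any in place of extensions, and the JSON encoding.

```python
from datetime import datetime, timezone

from google.protobuf.any_pb2 import Any
from google.protobuf.struct_pb2 import Struct
from google.protobuf.timestamp_pb2 import Timestamp
from google.protobuf.json_format import MessageToJson

# Standard type for time: Timestamp has a well-defined RFC 3339 JSON form.
ts = Timestamp()
ts.FromDatetime(datetime(2015, 2, 26, tzinfo=timezone.utc))
print(MessageToJson(ts))        # "2015-02-26T00:00:00Z"

# Standard type for dynamic data: Struct behaves like a JSON object.
s = Struct()
s.update({"service": "example", "retries": 3})
print(MessageToJson(s))

# Any (the replacement for extensions) packs an arbitrary message plus a type URL.
wrapped = Any()
wrapped.Pack(ts)
print(MessageToJson(wrapped))   # includes an "@type" field naming Timestamp
```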
protobuf  binary  marshalling  serialization  google  grpc  proto3  coding  open-source 
february 2015 by jm
Parquet
'a columnar storage format that supports nested data', from Twitter and Cloudera; file metadata is encoded with Apache Thrift, and nested records are stored via the record shredding and assembly algorithm from the Dremel paper. Pretty crazy stuff (see the sketch after the quoted blurb):
We created Parquet to make the advantages of compressed, efficient columnar data representation available to any project in the Hadoop ecosystem.

Parquet is built from the ground up with complex nested data structures in mind, and uses the record shredding and assembly algorithm described in the Dremel paper. We believe this approach is superior to simple flattening of nested name spaces.

Parquet is built to support very efficient compression and encoding schemes. Multiple projects have demonstrated the performance impact of applying the right compression and encoding scheme to the data. Parquet allows compression schemes to be specified on a per-column level, and is future-proofed to allow adding more encodings as they are invented and implemented.

Parquet is built to be used by anyone. The Hadoop ecosystem is rich with data processing frameworks, and we are not interested in playing favorites. We believe that an efficient, well-implemented columnar storage substrate should be useful to all frameworks without the cost of extensive and difficult to set up dependencies.
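As a quick after-the-fact illustration of the nested-data and per-column-compression points, here is a sketch using pyarrow (not mentioned in the post; it came along later), with made-up column names:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A nested column ("tags" is a list of structs per row) next to two flat columns;
# all names here are invented for illustration.
table = pa.table({
    "url": ["http://example.com/a", "http://example.com/b"],
    "title": ["first page", "second page"],
    "tags": [
        [{"tag": "avro", "count": 3}, {"tag": "json", "count": 1}],
        [{"tag": "parquet", "count": 7}],
    ],
})

# Compression can be chosen per column; columns not named in the dict fall back
# to pyarrow's default handling.
pq.write_table(
    table,
    "bookmarks.parquet",
    compression={"url": "snappy", "title": "gzip"},
)

# Columnar layout means a projection like this only has to read the "url" pages.
print(pq.read_table("bookmarks.parquet", columns=["url"]))
```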
twitter  cloudera  storage  parquet  dremel  columns  record-shredding  hadoop  marshalling  columnar-storage  compression  data 
march 2013 by jm
