Data encoding and storage

Data encoding and storage formats are an evolving field. They have gone through many changes, from naive text-based encodings to advanced, compact, nested binary formats.

Encoding/Decoding

Selecting the right encoding/storage format has a big impact on application performance and on how easily the application can evolve. In particular, the encoding largely determines whether an application is backward/forward compatible.

Selecting the right encoding format can be one of the most important factors for the agility of a data-driven application.

Application developers tend to make the default choice of a text-based encoding (XML, CSV, or JSON) because it is human readable and language agnostic.

Text formats are not very efficient: they cost time and space, and they also struggle to evolve. If you care about efficiency, then a binary format is the way to go.

In this post I will compare text vs. binary encoding and build a simple persistent storage that supports flexible encoding.

We will compare popular text and binary encodings: CSV, JSON, Avro, Chronicle, and SBE.

public class Trade {
    long tradeId, customerId;
    int qty;
    TradeType tradeType;
    String symbol, exchange;
}

I will use the above Trade object as an example for this comparison.

CSV

It is one of the most popular textual formats. It has no support for types and makes no distinction between different kinds of numbers. One major restriction is that it only supports scalar types; if we have to store nested or complex objects, custom encoding is required. Column and row values are separated by a delimiter, and special handling is required when the delimiter is part of a column value.
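A minimal sketch of that special handling, using RFC 4180-style quoting (the helper name is illustrative, not from the post's code):

static String escapeCsv(String field) {
    // Quote the field if it contains the delimiter, a quote, or a newline,
    // and double any embedded quotes.
    if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
        return "\"" + field.replace("\"", "\"\"") + "\"";
    }
    return field;
}

// escapeCsv("GOOGL") -> GOOGL
// escapeCsv("BRK,A") -> "BRK,A"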

The reading application has to parse the text and convert it into the proper types at read time; this produces garbage and is also CPU intensive.

The best thing about it is that it can be edited in any text editor, and every programming language can read and write CSV.

JSON

This is what drives the web today. The majority of user-facing microservices use JSON for their REST APIs.

JSON addresses some of the issues with CSV by making a distinction between strings and numbers, and it also supports nested types like maps, arrays, and lists. It is possible to have a schema for JSON messages, but it is rarely used in practice because it takes away the flexible schema. JSON is the new XML these days.

One major drawback is size: a JSON message is bigger because it has to carry the key/attribute names as part of the message. I have heard that in some document-based databases the attribute names take up more than 50% of the space, so be careful when you select attribute names in a JSON document.
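A quick way to see this is to serialize the same values with long and short attribute names. A minimal sketch, assuming Jackson is on the classpath (class and variable names are illustrative):

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

public class JsonKeySizeDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Same values, different attribute names.
        Map<String, Object> verbose = Map.of("tradeId", 454442738626075203L, "qty", 100, "symbol", "GOOGL");
        Map<String, Object> terse = Map.of("t", 454442738626075203L, "q", 100, "s", "GOOGL");
        System.out.println(mapper.writeValueAsString(verbose).length()); // key names dominate the payload
        System.out.println(mapper.writeValueAsString(terse).length());
    }
}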

Both of these text formats are very popular in spite of all the inefficiency. If you need a frictionless data-format interface across teams, then go for a text-based one.

Chronicle/Avro/SBE

These are very popular binary formats, and they are very efficient for distributed or trading systems.

SBE is very popular in the financial domain and is used as a replacement for the FIX protocol. I wrote about it in the post inside-simple-binary-encoding-sbe.

Avro is also very popular; it was built by taking many of the lessons from Protocol Buffers and Thrift. For row-based and nested storage it is a very good choice, and it supports multiple languages. Avro applies some cool encoding tricks to reduce message size; you can read about them in the post integer-encoding-magic.
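One of those tricks is ZigZag encoding, which Avro (like Protocol Buffers) combines with variable-length integers so that small magnitudes, positive or negative, encode into few bytes. A minimal sketch of the bit manipulation:

// Map signed longs onto unsigned ones: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3 ...
// so small absolute values keep small encodings.
static long zigZagEncode(long n) {
    return (n << 1) ^ (n >> 63); // arithmetic shift spreads the sign bit
}

static long zigZagDecode(long n) {
    return (n >>> 1) ^ -(n & 1);
}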

Chronicle-Wire is picking up, and I came across it only recently. It has a nice abstraction over text and binary messages with a single unified interface, which allows you to choose a different encoding based on the use case.
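A minimal sketch of that unified interface, assuming a recent Chronicle-Wire version (the exact API may differ): the same read/write code works whichever wire type is plugged in.

import net.openhft.chronicle.bytes.Bytes;
import net.openhft.chronicle.wire.Wire;
import net.openhft.chronicle.wire.WireType;

public class WireDemo {
    public static void main(String[] args) {
        // Swap WireType.RAW for WireType.TEXT or WireType.BINARY to change
        // the encoding without touching the code below.
        Bytes<?> bytes = Bytes.elasticByteBuffer();
        Wire wire = WireType.RAW.apply(bytes);

        wire.write("tradeId").int64(454442738626075203L);
        wire.write("symbol").text("GOOGL");

        long tradeId = wire.read("tradeId").int64();
        String symbol = wire.read("symbol").text();
        System.out.println(tradeId + " " + symbol);
    }
}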

Let's look at some numbers now. This is a very basic comparison covering just the size aspect of a message; run your own benchmark before making any selection.

Object[][] data = new Object[][]{
        {new Random().nextLong(), new Random().nextLong(), "NYSE", "Buy", "GOOGL", 100},
        {new Random().nextLong(), new Random().nextLong(), "NYSE", "Sell", "AAPL", 100},
};

process("Avro", Arrays.stream(data), AvroTradeRecordBuilder::newTrade, AvroTradeRecordBuilder::toBytes, AvroTradeRecordBuilder::fromBytes);
process("chronicle", Arrays.stream(data), ChronicleTradeRecordBuilder::newTrade, ChronicleTradeRecordBuilder::toBytes, ChronicleTradeRecordBuilder::fromBytes);
process("sbe", Arrays.stream(data), SBETradeRecordBuilder::newTrade, SBETradeRecordBuilder::toBytes, SBETradeRecordBuilder::fromBytes);

process("csv", Arrays.stream(data), CSVTradeRecordBuilder::newTrade, CSVTradeRecordBuilder::toBytes, CSVTradeRecordBuilder::fromBytes);
process("json", Arrays.stream(data), JsonTradeRecordBuilder::newTrade, JsonTradeRecordBuilder::toBytes, JsonTradeRecordBuilder::fromBytes);

We will save the above 2 records in each format and compare sizes.

 (Avro) -> Size 43 Bytes
{"tradeId": 454442738626075203, "customerId": 1924973958993118808, "qty": 100, "tradeType": "Buy", "symbol": "GOOGL", "exchange": "NYSE"}
(Avro) -> Size 43 Bytes
{"tradeId": 1984810212692753661, "customerId": -7397692262047989958, "qty": 100, "tradeType": "Sell", "symbol": "AAPL", "exchange": "NYSE"}

(chronicle) -> Size 35 Bytes
CVCNX†©ãá¶NYSEBuyGOOGLd   
(chronicle) -> Size 35 Bytes
ý|Z_v‹:?ùV™NYSESellAAPLd   

(sbe) -> Size 50 Bytes
[SBETrade](sbeTemplateId=2|sbeSchemaId=1|sbeSchemaVersion=0|sbeBlockLength=25):tradeId=454442738626075203|customerId=1924973958993118808|qty=100|tradeType=Buy|symbol=GOOGL|exchange=NYSE
(sbe) -> Size 49 Bytes
[SBETrade](sbeTemplateId=2|sbeSchemaId=1|sbeSchemaVersion=0|sbeBlockLength=25):tradeId=1984810212692753661|customerId=-7397692262047989958|qty=100|tradeType=Sell|symbol=AAPL|exchange=NYSE

(csv) -> Size 57 Bytes
{tradeId:454442738626075203,customerId:1924973958993118808,qty:100,tradeType:Buy,symbol:GOOGL,exchange:NYSE}
(csv) -> Size 59 Bytes
{tradeId:1984810212692753661,customerId:-7397692262047989958,qty:100,tradeType:Sell,symbol:AAPL,exchange:NYSE}

(json) -> Size 126 Bytes
{tradeId:454442738626075203,customerId:1924973958993118808,qty:100,tradeType:Buy,symbol:GOOGL,exchange:NYSE}
(json) -> Size 128 Bytes
{tradeId:1984810212692753661,customerId:-7397692262047989958,qty:100,tradeType:Sell,symbol:AAPL,exchange:NYSE}

Chronicle is the most efficient in this example. I used the RawWire format, which is the most compact option available in the library because it stores only the data; no schema metadata is stored.

Next are Avro and SBE, very close in terms of size, but SBE is more efficient in terms of encode/decode operations.

CSV is not that bad; it took 57 bytes for a single row, but don't select CSV based on size. As expected, JSON takes the most bytes to represent the same message: more than 3x the size of the Chronicle message.

Let's look at some real applications of these encodings. They can be used for building logs, queues, block storage, RPC messages, etc.

To explore more, I created a simple storage library that is backed by a file and allows specifying different encoding formats.

public interface RecordContainer<T> extends Closeable {

    boolean append(T message);

    void read(RecordConsumer<T> reader);

    void read(long offSet, RecordConsumer<T> reader);

    default void close() {
    }

    int size();

    String formatName();
}

This implementation allows appending records at the end of a buffer and accessing the buffer from the start or randomly from a given message offset. It can be seen as an append-only unbounded message queue; it has some similarity with Kafka topic storage.

RandomAccessFile from Java allows mapping file content as an array buffer, after which the file content can be managed like any array.
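A minimal sketch of that mapping idea (the file name, region size, and length-prefix framing are illustrative, not from the original library):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedAppendDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("trades.dat", "rw")) {
            MappedByteBuffer buffer = file.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20); // 1 MB region

            byte[] record = "454442738626075203,100,Buy,GOOGL,NYSE".getBytes();
            buffer.putInt(record.length); // length prefix makes records scannable
            buffer.put(record);

            buffer.flip(); // rewind to read back from offset 0
            byte[] read = new byte[buffer.getInt()];
            buffer.get(read);
            System.out.println(new String(read));
        }
    }
}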

All the code used in this post is available @ encoding github

Published on Java Code Geeks with permission by Ashkrit Sharma, partner at our JCG program. See the original article here: Data encoding and storage
Opinions expressed by Java Code Geeks contributors are their own.

Ashkrit Sharma

Pragmatic software developer who loves practices that make software development fun and likes to develop high-performance, low-latency systems.