JagsLog

Technical rants on distributed computing, high performance data management, etc. You are warned! A lot of it will be shameless promotion for VMware products

Tuesday, December 13, 2011

SQLFire 1.0 - the Data Fabric Sequel


This week we finally reached GA status for VMware vFabric SQLFire, a memory-optimized distributed SQL database delivering dynamic scalability and high performance for data-intensive modern applications.


In this post, I will highlight some important elements in our design and draw out some of our core values.


The current breed of popular NoSQL stores promotes different approaches to data modelling, storage architectures and consistency models to solve the scalability and performance problems of relational databases. The overarching message in all of them seems to be that the core of the problem with traditional relational databases is SQL.
But, ironically, the core of the scalability problem has little to do with SQL itself: the real challenge is the manner in which the traditional DB manages disk buffers, locks and latches through a centralized architecture in order to preserve strict ACID properties. Here is a slide from research at MIT and Brown University on where the time is spent in OLTP databases.




Design center
With SQLFire we change the design center in a few interesting ways:
1) Optimize for main memory: we assume memory is abundant across a cluster of servers and optimize the design through highly concurrent data structures that are entirely resident in memory. The design is not concerned with buffering contiguous disk blocks in memory; instead it manages application rows in in-memory hashmaps, in a form that can be directly consumed by clients. Changes are synchronously propagated to redundant copies in the cluster for HA.
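
To make that concrete, here is a conceptual sketch in Java (an illustration of the idea only, not SQLFire internals): rows keyed by primary key live in a concurrent in-memory map, already in the form clients consume, and every write is pushed synchronously to redundant copies.

    // Conceptual sketch only: rows in concurrent in-memory maps, replicated
    // synchronously. This illustrates the design center, not the actual engine.
    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;

    class InMemoryTable {
        // Primary key -> row bytes, already in the wire format clients consume.
        private final ConcurrentHashMap<Object, byte[]> rows = new ConcurrentHashMap<>();
        private final List<Replica> replicas; // hypothetical handles to redundant copies

        InMemoryTable(List<Replica> replicas) { this.replicas = replicas; }

        void put(Object primaryKey, byte[] rowBytes) {
            rows.put(primaryKey, rowBytes);
            // Propagate synchronously to redundant copies before acknowledging.
            for (Replica r : replicas) {
                r.apply(primaryKey, rowBytes);
            }
        }

        byte[] get(Object primaryKey) {
            return rows.get(primaryKey); // no disk-block buffering in the read path
        }

        interface Replica {
            void apply(Object key, byte[] rowBytes);
        }
    }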


2) Rethink ACID transactions: There is no support for strictly serializable transactions; we assume that most applications can get by with the simpler "read committed" and "repeatable read" semantics. Instead of worrying about write-ahead transaction logs on disk, all transactional state resides in distributed memory, and commits use a non-2PC algorithm optimized for short, non-overlapping transactions. The central theme is to avoid any single point of contention, such as a distributed lock service. See some details here.
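
From the application's point of view this is just JDBC. A minimal sketch, assuming read-committed is the isolation level you want (the URL, table and values are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ReadCommittedTx {
        public static void main(String[] args) throws Exception {
            // Placeholder host/port; the thin-client URL form is shown later in this post.
            Connection conn = DriverManager.getConnection("jdbc:sqlfire://localhost:1527");
            conn.setAutoCommit(false);
            // Ask for read-committed semantics rather than strict serializability.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

            PreparedStatement ps =
                conn.prepareStatement("update accounts set balance = balance - ? where id = ?");
            ps.setInt(1, 100);
            ps.setInt(2, 42);
            ps.executeUpdate();
            conn.commit(); // short, non-overlapping transactions commit without 2PC
            conn.close();
        }
    }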


3) "Partition aware DB design": Almost every high scale DB solution offers a way to scale linearly by hashing keys to a set of partitions. But how do you make SQL queries and DML scale when they involve joins or complex conditions? Given that distributed joins inherently don't scale, we promote the idea that the designer should think about common data access patterns and choose the partitioning strategy accordingly. To make things relatively simple for the designer, we extended the DDL (data definition language in SQL) so the designer can specify how related data should be colocated (for instance, 'create table Orders (...) colocate with Customer' tells us that the order records for a customer should always be colocated on the same partition). Colocation turns join processing and query optimization into a local partition problem and avoids large transfers of intermediate data sets. The design assumes classic OLTP workload patterns where the vast majority of individual requests can be pruned to a few nodes and the concurrent workload from all users is spread across the entire data set (and hence across all the partitions). Look here for some details.
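
Here is a rough sketch of that DDL from JDBC (table and column names are made up; check the SQLFire reference for the exact PARTITION BY / COLOCATE WITH syntax):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ColocationDdl {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:sqlfire://localhost:1527");
            Statement s = conn.createStatement();

            // Partition customers by their primary key (illustrative schema).
            s.execute("create table customers (cust_id int not null primary key, "
                    + "name varchar(100)) partition by primary key");

            // Keep each customer's orders on the same partition as the customer row,
            // so joins and queries on cust_id stay local to one partition.
            s.execute("create table orders (order_id int not null primary key, "
                    + "cust_id int not null, amount decimal(10,2)) "
                    + "partition by column (cust_id) colocate with (customers)");

            conn.close();
        }
    }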


4) Shared nothing logs on disk: Disk stores are merely "append only" logs, designed so that application writes are never exposed to disk seek latencies. Writes are synchronously streamed to disk on all replicas. Much of the disk store design looks similar to other NoSQL systems: rolling logs, background/offline compaction, memory tables pointing to disk offsets, etc. But the one aspect that represents core IP is all around managing consistent copies on disk in the face of failures. Given that distributed members can come and go, how does a member make sure the disk state it is working with is the correct one? I cover our "shared nothing disk architecture" in a lot more detail here.


5) Parallelize data access and application behavior: We extend the classic stored procedure model by allowing applications to parallelize a procedure across the cluster, or across just a subset of nodes, by hinting at the data the procedure depends on. This application hinting is done by supplying a "where clause" that is used to determine where to route and parallelize the execution. Unlike traditional databases, procedures can be arbitrary application Java code (you can in fact embed the cluster members in your Spring container) and run colocated with the data. Yes, literally in the same process space where the data is stored. Controversial, yes, but now your application code can do a scan as efficiently as the database engine.
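
A sketch of what such a data-aware call could look like over JDBC, assuming a procedure named creditCheck has already been created and registered (the name, table and predicate are hypothetical; the exact CALL ... ON TABLE ... WHERE syntax is in the docs):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DataAwareCall {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:sqlfire://localhost:1527");

            // The trailing WHERE clause is a routing hint: the (hypothetical)
            // creditCheck procedure is parallelized only on the members hosting
            // the matching partitions of ORDERS, and runs colocated with that data.
            CallableStatement cs = conn.prepareCall(
                "call creditCheck(?) on table orders where cust_id = 42");
            cs.setInt(1, 42);
            cs.execute();
            conn.close();
        }
    }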


6) Dynamic rebalancing of data and behavior: This is the act of figuring out which data buckets should be migrated when capacity is added (the cluster grows) or removed, and how to do this without causing consistency issues or introducing contention points for concurrent readers and writers. Here is the patent that describes some aspects of the design.




Embedded or a client-server topology
SQLFire supports switching from the classic client-server topology (your DB runs in its own processes) to an embedded mode where the DB cluster and the application cluster are one and the same (for Java apps).
We believe the embedded model will be very useful in scenarios where the data sets are relatively small. It simplifies deployment concerns and at the same time provides a significant boost in performance when replicated tables are in use.

All you do is change the DB URL from
'jdbc:sqlfire://server_Host:port' to 'jdbc:sqlfire:;mcast-port=portNum', and all your application processes that use the same DB URL become part of a single distributed system. Essentially, the mcast-port identifies a broadcast channel for membership gossiping. New servers automatically join the cluster once authenticated. Any replicated tables are automatically hosted in the new process, and partitioned tables may be rebalanced so the new process shares some of the data. All of this is abstracted away from the developer.
As far as the application is concerned, you just create connections and execute SQL like with any other DB.
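
For example, the only difference between the two topologies from the application's side is the URL (hosts and ports below are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class TopologyExample {
        public static void main(String[] args) throws Exception {
            // Classic client-server: the database runs in its own server processes.
            Connection clientServer =
                DriverManager.getConnection("jdbc:sqlfire://dbhost:1527");

            // Embedded: this JVM joins the distributed system itself; every process
            // using the same mcast-port becomes part of the same cluster.
            Connection embedded =
                DriverManager.getConnection("jdbc:sqlfire:;mcast-port=33666");

            // Either way, the application just creates connections and runs SQL.
            clientServer.close();
            embedded.close();
        }
    }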





How well does it perform and scale? 
Here are the results of a simple benchmark done internally using commodity (2 CPU) machines showcasing linear scaling with concurrent user load. I will soon augment this with more interesting workload characterization. The details are here.


Comparing SQLFire and GemFire


Here is a high-level view of how the two products compare. I hope to add a blog post that provides specific details on the differences and the use cases where one might apply better than the other.




SQLFire benefits from the years of commercially deployed production code found in GemFire. SQLFire adds a rich SQL engine with the idea that folks can now manage operational data primarily in memory, partitioned across any number of nodes, and with a disk architecture that avoids disk seeks. Note that the two offerings, SQLFire and GemFire, are distinct products and are deployed separately.




As always, I would love to get your candid feedback (link to our forum). I assure you that trying it out is very simple - just like using Apache Derby or H2. 


Get the download, docs and quickstart from here. The developer license is perpetual and works on up to 3 server nodes.





Sunday, November 20, 2011

HPTS 2011 talk on 'Flexible OLTP in the future'

I recently spoke at HPTS 2011 (High Performance Transaction Systems). If you haven't already, you should check out some of the very interesting content on the NoSQL ecosystem, the future of core density, big data experiences and scars, etc.

Here is the abstract:

Flexible OLTP data models in the future
=================================

There has been a flurry of highly scalable data stores and a dramatic spike in the interest level. The solutions with the most mindshare seem to be inspired either by Dynamo's (Amazon) eventual consistency model or by a data model that promotes nested, self-describing data structures, like BigTable from Google. At the same time, you see projects within these corporations evolving toward architectures like MegaStore and Dremel (Google) where features from the column-oriented data model are blended with the relational model.

The shift from just highly structured data to unstructured and semi-structured content is evident. New applications are being developed, or existing applications modified, at breakneck speed. Developers want data model evolution to be extremely simple and want support for nested structures so they can map to representations like JSON with ease, leaving little impedance between the application programming model and the database. Next-generation enterprise applications will increasingly work with structured and semi-structured data from a multitude of data sources. A pure relational model is too rigid, and a pure BigTable-like model has too many shortcomings and cannot be integrated with existing relational database systems.

In this talk, I present an alternative. We prefer the familiar "row oriented" over the "column oriented" approach but still tilt the relational model, mostly the schema definition, to support partitioning and colocation, redundancy levels, and dynamic and nested columns.
Each of these extensions supports a different desired attribute: partitioning and colocation primitives cover horizontal scaling; availability primitives allow explicit control over the replication model and placement policies (local vs. across data centers); dynamic columns address flexibility for schema evolution (different rows can have different columns, added with no DDL required); and nested columns support organizing data in a hierarchy.
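
Purely hypothetical, illustrative-only syntax to make these primitives concrete; no product implements this DDL as written:

    // Hypothetical schema sketch only; the syntax below is invented for illustration.
    public class FlexibleSchemaSketch {
        static final String CUSTOMER_ENTITY_GROUP_ROOT =
            "create table customer (cust_id int primary key, profile dynamic) "
            + "partition by primary key "            // horizontal scaling primitive
            + "redundancy 1 placement across racks"; // availability/placement primitive

        static final String ORDERS_IN_SAME_GROUP =
            "create table orders (order_id int primary key, cust_id int, "
            + "line_items nested) "                  // nested columns form a hierarchy
            + "colocate with (customer)";            // keep the entity group in one partition
    }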

We draw inspiration for the data model from Pat Helland's 'Life beyond Distributed Transactions' by adopting entity groups as a first-class artifact that designers start with, defining relationships between entities within the group (associations based on reference as well as containment). Rationalizing the design around entity groups forces the designer to think about data access patterns and how the data will be colocated in partitions. We then cover why ACID properties and sophisticated querying become significantly less challenging to accomplish. There are many ideas around partitioning policies and the tradeoffs in supporting transactions and joins across entity groups that are worth discussing.

The idea is to present a model and generate discussion on how to achieve the best of both worlds: flexible schemas without losing referential integrity, support for associations, and the power of SQL. It is ironic that NoSQL databases like MongoDB are becoming more popular as they begin to add SQL-like querying capabilities.


Finally, this summarizes all the different views shared at HPTS.

Monday, September 19, 2011

What is new in vFabric GemFire 6.6?


GemFire 6.6 was released (Sept 2011) as part of the new vFabric 5.0 product suite, and it represents a big step along the following important dimensions:
  1. developer productivity
  2. more DBMS like features 
  3. better scaling features

Here are some highlights on each dimension:

Developer productivity: We introduced a new serialization framework called PDX (it stands for Portable Data eXchange, not my favorite airport).
PDX provides a portable, compact, language-neutral and versionable format for representing object data in GemFire. It is proprietary but designed for high efficiency, and it is comparable to other serialization frameworks like Apache Avro, Google protobuf, etc.
Alright, I realize the above definition is a mouthful :-)

Simply put, the framework supports versioning, allowing apps using older class versions to work with apps using newer versions of the domain classes and vice versa; it provides a format and type system for interop between the various languages; and it exposes an API so server-side application code can operate on objects without requiring the domain classes (i.e. no deserialization).
Type evolution has to be incremental; this is the only way to avoid data loss or exceptions.
The raw serialization performance is comparable to Avro and protobuf, but PDX is much more optimized for distribution and for operating in a GemFire cluster. The chart below shows the results of an open source benchmark of popular serialization frameworks; the details are available here. 'Total' represents the total time required to create, serialize and then deserialize. See the benchmark description for details.


You can either implement serialization callbacks (for optimal performance) or simply use the built-in PDX serializer (reflection based today). Arguably, the best part of the framework is its support for object access in server-side functions or callbacks such as listeners without requiring the application classes. You can dynamically discover the fields and nested objects and operate on them using the PDX API. On an application client that does have the domain classes, the same PdxInstance is automatically turned back into the domain object.
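
As a rough sketch, server-side code might inspect a PDX-serialized value like this (field names are made up; see the PDX API docs for specifics):

    import com.gemstone.gemfire.pdx.PdxInstance;

    // Illustrative only: server-side code inspecting a PDX-serialized value
    // without having the application's domain class on its classpath.
    public class PdxFieldAccess {
        // 'value' would typically come from a region entry inside a server-side
        // function or a cache listener callback.
        static void printOrderSummary(Object value) {
            if (value instanceof PdxInstance) {
                PdxInstance pdx = (PdxInstance) value;
                // Fields are discovered and read dynamically; nothing is deserialized
                // into a domain Order class here. Field names are hypothetical.
                Object amount = pdx.getField("amount");
                Object status = pdx.getField("status");
                System.out.println("fields=" + pdx.getFieldNames()
                    + " amount=" + amount + " status=" + status);
            }
        }
    }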

We introduced a new command shell called gfsh (pronounced "gee-fish"), a command line tool for browsing and editing data stored in GemFire. Its rich set of Unix-flavored commands allows you to easily access data, monitor peers, redirect output to files, and run batch scripts. This is an initial step towards a more complete tool that can provision, monitor, debug, tune and administer a cluster as a whole. Ultimately, we hope to advance the gfsh scripting language, making the integration of GemFire deployments into cloud-like virtualized environments a "breeze".

More DBMS like: 
Querying and Indexing
We added several features to our query engine: query/index on hashmaps, bind parameters from edge clients, ORDER BY support for partitioned data regions, full support for LIKE predicates, and the ability to index regions that overflow to disk.

Increasingly we see developers wanting to decouple the data model in GemFire from the class schema used within their applications. Even though PDX offers an excellent option, we also see developers mapping their data into "self describing" hashmaps in GemFire. The data store is basically "schema free" and allows many application teams to change the object model without impacting each other. Given the simple KV storage model in GemFire this has never been an issue, except for querying. Now, not only can you store maps, you can also index keys within these hashmaps and execute highly performant queries.
Do take note that the query engine now natively understands the PDX data structures with no need for application classes on servers.
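
A hedged sketch of what storing, indexing and querying such hashmaps can look like with the OQL API (region, key and index names are illustrative; verify the exact map-index expression syntax in the docs):

    import java.util.HashMap;
    import com.gemstone.gemfire.cache.Cache;
    import com.gemstone.gemfire.cache.CacheFactory;
    import com.gemstone.gemfire.cache.Region;
    import com.gemstone.gemfire.cache.RegionShortcut;
    import com.gemstone.gemfire.cache.query.IndexType;
    import com.gemstone.gemfire.cache.query.Query;
    import com.gemstone.gemfire.cache.query.QueryService;
    import com.gemstone.gemfire.cache.query.SelectResults;

    public class MapQueryExample {
        public static void main(String[] args) throws Exception {
            Cache cache = new CacheFactory().create();
            Region<String, HashMap<String, Object>> orders =
                cache.<String, HashMap<String, Object>>createRegionFactory(RegionShortcut.PARTITION)
                     .create("orders");

            // "Schema free" value: just a self-describing hashmap.
            HashMap<String, Object> order = new HashMap<String, Object>();
            order.put("status", "shipped");
            order.put("amount", 250);
            orders.put("o-1", order);

            QueryService qs = cache.getQueryService();
            // Index a key inside the hashmap values stored in the region.
            qs.createIndex("statusIdx", IndexType.FUNCTIONAL, "o['status']", "/orders o");

            Query query = qs.newQuery("select * from /orders o where o['status'] = $1");
            SelectResults results = (SelectResults) query.execute(new Object[] { "shipped" });
            System.out.println("matching orders: " + results.size());
            cache.close();
        }
    }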

We expanded distributed transactions by allowing edge clients to initiate or terminate transactions; there is no need to invoke a server-side function just to run a transaction. We also added a new JCA resource adapter that supports participation in externally coordinated transactions as a "last resource".
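
A minimal sketch of a client-initiated transaction, assuming a locator and a partitioned region named "orders" already exist on the servers (host, port and keys are placeholders):

    import com.gemstone.gemfire.cache.CacheTransactionManager;
    import com.gemstone.gemfire.cache.Region;
    import com.gemstone.gemfire.cache.client.ClientCache;
    import com.gemstone.gemfire.cache.client.ClientCacheFactory;
    import com.gemstone.gemfire.cache.client.ClientRegionShortcut;

    public class ClientTxExample {
        public static void main(String[] args) {
            ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locatorhost", 10334)
                .create();
            Region<String, String> orders =
                cache.<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                     .create("orders");

            // The edge client begins and commits the transaction itself; no
            // server-side function is needed just to get transactional semantics.
            CacheTransactionManager txMgr = cache.getCacheTransactionManager();
            txMgr.begin();
            orders.put("o-1", "pending");
            orders.put("o-2", "pending");
            txMgr.commit();
            cache.close();
        }
    }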

Finally, on the scaling dimension:
You are probably aware that GemFire's shared nothing persistence relies on append-only operation logs to provide very high write throughput. There are no additional B-tree data files to maintain as in a traditional database system. The tradeoff with this design is cluster recovery speed: one has to walk through the logs to recover the data back into memory, and the time for the entire cluster to bootstrap from disk is proportional to the volume of data (and inversely proportional to the cluster size). This can be long (to put it mildly) with large data volumes, even though you can parallelize the recovery across the cluster. To minimize this recovery delay, the 6.6 persistence layer now also manages "key files" on disk. We simply recover the keys back into memory and lazily recover the values, giving recovery in general a significant performance boost.

Prior to 6.6, GemFire randomly picked a different host to manage the redundant copies for partitioned data regions. Often, customers provision multiple racks and want their redundant copies to always be stored on a different physical rack. Occasionally, we also see customers wanting to store their redundant data on a different site. We added support for "redundancy zones" in the partitioned region configuration, allowing users to identify one or more redundancy zones (racks, sites, etc.). GemFire will automatically place redundant copies in different zones.
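
As a sketch, the zone is declared as a member-level property alongside the usual partition redundancy settings (the property name below is my recollection; verify it against the docs):

    import java.util.Properties;
    import com.gemstone.gemfire.cache.Cache;
    import com.gemstone.gemfire.cache.CacheFactory;

    public class RedundancyZoneExample {
        public static void main(String[] args) {
            // Each member declares which zone (rack, site, etc.) it lives in;
            // GemFire then keeps primary and redundant copies in different zones.
            Properties props = new Properties();
            props.setProperty("redundancy-zone", "rack-1"); // property name as I recall it

            Cache cache = new CacheFactory(props).create();
            // ... create partitioned regions with redundant copies as usual ...
            cache.close();
        }
    }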

Everything mentioned here is really just a prelude. The list of enhancements is much longer and is documented here.

The product documentation is available here.
You can start discussions here.

Would love to hear your thoughts.