Technical rants on distributed computing, high performance data management, etc. You are warned! A lot will be shameless promotion for VMware products

Monday, September 19, 2011

What is new in vFabric GemFire 6.6?

GemFire 6.6 was released (Sept 2011) as part of the new vFabric 5.0 product suite and represents a big step along three important dimensions:
  1. developer productivity
  2. more DBMS like features 
  3. better scaling features

Here are some highlights on each dimension:

Developer productivity: We introduced a new serialization framework called PDX (short for Portable Data eXchange, not my favorite airport).
PDX provides a portable, compact, language-neutral, and versionable format for representing object data in GemFire. It is proprietary but designed for high efficiency, and is comparable to other serialization frameworks like Apache Avro and Google protobuf.
Alright, I realize that definition is a mouthful :-)

Simply put, the framework supports versioning, so apps using older versions of a domain class can work with apps using newer versions (and vice versa); it provides a format and type system for interop between the various languages; and it offers an API so server-side application code can operate on objects without requiring the domain classes (i.e., no deserialization).
The type evolution has to be incremental - this is the only way to avoid data loss or exceptions.
The raw serialization performance is comparable to Avro and protobuf, but PDX is much more optimized for distribution and for operating in a GemFire cluster. The chart below is the result of an open source benchmark of popular serialization frameworks; the details are available here. 'Total' represents the total time required to create, serialize, and then deserialize. See the benchmark description for details.

You can either implement serialization callbacks (for optimal performance) or simply use the built-in PDXSerializer (reflection-based today). Arguably, the best part of the framework is its support for object access in server-side functions or callbacks like listeners without requiring the application classes: you can dynamically discover the fields and nested objects and operate on them through the PDX API. On an application client that does have the domain classes, the same PDXInstance is automatically turned back into the domain object.
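To make the "no domain classes needed" idea concrete, here is a tiny plain-Java sketch of a field-addressable record. This is not the GemFire API (the real type is PDXInstance); names like `PortableRecord` are invented for illustration, and only the versioning behavior is modeled.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Toy stand-in for a PDX-style record: fields are kept by name, so code can
// read them without the domain class on its classpath. The real GemFire type
// is PdxInstance; this class only models the version-tolerance behavior.
class PortableRecord {
    private final Map<String, Object> fields = new LinkedHashMap<>();

    PortableRecord with(String name, Object value) {
        fields.put(name, value);
        return this;
    }

    // Dynamic field discovery (analogous to listing a PdxInstance's fields).
    Set<String> fieldNames() {
        return fields.keySet();
    }

    // Analogous to reading a field by name: a reader built against an older
    // class version simply never asks for newer fields, and a newer reader
    // sees null (a default) for fields an older writer did not know about.
    Object field(String name) {
        return fields.get(name);
    }
}
```

Because access is by field name rather than by class layout, adding a field in a newer version of the domain class does not break older readers, which is the incremental evolution the text describes.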

We also introduced a new command shell called gfsh (pronounced "gee-fish"), a command-line tool for browsing and editing data stored in GemFire. Its rich set of Unix-flavored commands lets you easily access data, monitor peers, redirect output to files, and run batch scripts. This is an initial step toward a more complete tool that can provision, monitor, debug, tune, and administer a cluster as a whole. Ultimately, we hope to advance the gfsh scripting language to make integrating GemFire deployments into cloud-like virtualized environments a "breeze".

More DBMS like:
Querying and Indexing
We added several features to the query engine: querying and indexing on HashMaps, bind parameters from edge clients, OrderBy support for partitioned data regions, full support for LIKE predicates, and the ability to index regions that overflow to disk.

Increasingly we see developers wanting to decouple the data model in GemFire from the class schema used within their applications. Even though PDX offers an excellent option, we also see developers mapping their data into "self-describing" HashMaps in GemFire. The data store is basically "schema free" and allows many application teams to change the object model without impacting each other. Given GemFire's simple KV storage model this was never an issue except for querying. Now, not only can you store maps, you can index keys within these HashMaps and execute highly performant queries.
Do take note that the query engine now natively understands the PDX data structures with no need for application classes on servers.
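To illustrate what indexing a key inside a schema-free HashMap buys you, here is a plain-Java sketch (not the GemFire query engine; the class and field names are invented): the "index" maps each value of the indexed map key to the ids of matching records, so a lookup avoids scanning every record.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Plain-Java model of indexing a key inside self-describing HashMap records.
class MapIndexSketch {
    // the "region": id -> schema-free record
    static final Map<String, Map<String, Object>> region = new HashMap<>();
    // the "index" on the 'ticker' key of each record: value -> matching ids
    static final Map<Object, Set<String>> tickerIndex = new HashMap<>();

    static void put(String id, Map<String, Object> record) {
        region.put(id, record);
        Object ticker = record.get("ticker");
        if (ticker != null) {
            // maintain the index on write, as a real index would
            tickerIndex.computeIfAbsent(ticker, t -> new HashSet<>()).add(id);
        }
    }

    // In spirit: a query selecting records whose 'ticker' map key equals $1,
    // answered from the index instead of a full scan.
    static Set<String> queryByTicker(Object ticker) {
        return tickerIndex.getOrDefault(ticker, Collections.emptySet());
    }
}
```

In GemFire itself this would be an OQL query with an index created on the map key; consult the 6.6 querying docs for the exact index and query syntax.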

We expanded distributed transactions by allowing edge clients to initiate or terminate transactions; there is no longer any need to invoke a server-side function to run a transaction. We also added a new JCA resource adapter that supports participation in externally coordinated transactions as a "last resource".
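The shape of a client-driven transaction is the familiar begin/put/commit pattern. The sketch below is a toy stdlib model of that pattern, not GemFire code (in GemFire the calls would go through the cache's transaction manager): writes are buffered during the transaction and applied only on commit.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the begin/put/commit pattern an edge client now drives
// directly. All names here are invented for illustration; the point is
// only that in-transaction writes are buffered until commit.
class ClientTxSketch {
    static final Map<String, String> region = new HashMap<>();
    static Map<String, String> pending; // non-null while a tx is open

    static void begin() {
        pending = new HashMap<>();
    }

    static void put(String key, String value) {
        // inside a transaction, writes are buffered, not applied
        if (pending != null) pending.put(key, value);
        else region.put(key, value);
    }

    static void commit() {
        region.putAll(pending); // apply the buffered writes
        pending = null;
    }

    static void rollback() {
        pending = null; // discard the buffered writes
    }
}
```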

Finally, on the scaling dimension:
You are probably aware that GemFire's shared-nothing persistence relies on append-only operation logs to provide very high write throughput; there are no additional B-tree data files to maintain as in a traditional database system. The tradeoff with this design is cluster recovery speed: one has to walk through the logs to recover the data back into memory, and the time for the entire cluster to bootstrap from disk is proportional to the volume of data (and inversely proportional to the number of nodes). This can be long (to put it mildly) with large data volumes, even though the recovery is parallelized across the cluster. To minimize this delay, the 6.6 persistence layer now also manages "key files" on disk: we recover just the keys back into memory and lazily recover the values, giving recovery a significant performance boost in general.

Prior to 6.6, GemFire randomly picked a different host to manage redundant copies for partitioned data regions. Often, customers provision multiple racks and want the redundant copy to always be stored on a different physical rack; occasionally we also see customers wanting to store redundant data at a different site. We added support for "redundancy zones" in the partitioned region configuration, allowing users to identify one or more zones (racks, sites, etc.). GemFire then automatically ensures that redundant copies are placed in different zones.
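As a rough sketch of the configuration (treat the exact property name as an assumption and check the product docs), each member declares which zone it belongs to in its gemfire.properties, and GemFire keeps a partitioned region's redundant copies out of that member's zone:

```
# gemfire.properties on every member in the first rack
redundancy-zone=rack1

# gemfire.properties on every member in the second rack
redundancy-zone=rack2
```

With this in place, a primary bucket hosted in rack1 gets its redundant copy on a member in rack2, so losing a whole rack cannot take out both copies.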

Everything mentioned here is really just a prelude. The list of enhancements is much longer and is documented here.

The product documentation is available here.
You can start discussions here.

Would love to hear your thoughts. 


Revealed said...

Hi Jags, I have a query about data location in partitioned regions when we use data co-location via a partition resolver.

I have got a partitioned region storing objects, and I use a field to co-locate all the objects having the same value for this field.

When we query the grid for a given key, it is able to get the object back from the grid with a single hop (as proposed), but how does that work?

When querying by key, I am not supplying the co-location key, which is needed to find the partition where the data resides.

Is GemFire storing the mapping between key and routing key somewhere, so that whenever I ask for the object for a given key, the grid can hash the corresponding routing key, locate the partition, and return the object?

And is that mapping stored in a replicated manner or partitioned across the grid?

Jags Ramnarayan said...

When you use colocation (i.e., a routingObject/partitioning key determines location), we generally expect your key object to be able to supply this routingObject on a 'Region.get(..)' operation. Essentially, to get single-hop access your region entry key should implement 'PartitionResolver'. On a 'get', we simply detect that the key implements 'PartitionResolver' and use the routingObject it returns to determine the member node that is likely managing the data.
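A small stdlib sketch may make the answer concrete: because the key itself carries its routing object, any client can compute the target bucket directly, so no separate key-to-routingKey mapping needs to be stored anywhere. The class names below are invented for illustration (in GemFire the key would implement the PartitionResolver interface), and the bucket count of 113 is assumed from GemFire's documented default.

```java
// Toy model of single-hop routing: a key that hands back its own routing
// object lets the client hash straight to the owning bucket. (In GemFire
// the key would implement PartitionResolver's getRoutingObject method.)
class SingleHopSketch {
    static final int BUCKETS = 113; // assumed: GemFire's default bucket count

    interface Resolvable {
        Object routingObject();
    }

    // An order key colocated by customer id: every order for the same
    // customer routes to the same bucket.
    static class OrderKey implements Resolvable {
        final String customerId;
        final long orderId;

        OrderKey(String customerId, long orderId) {
            this.customerId = customerId;
            this.orderId = orderId;
        }

        public Object routingObject() {
            return customerId;
        }
    }

    static int bucketOf(Object key) {
        // if the key can resolve itself, hash its routing object,
        // otherwise fall back to hashing the key directly
        Object route = (key instanceof Resolvable)
                ? ((Resolvable) key).routingObject()
                : key;
        return Math.floorMod(route.hashCode(), BUCKETS);
    }
}
```

Two orders for the same customer always land in the same bucket, which is exactly the colocation behavior the question describes.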