GemFire 6.6 was released (Sept 2011) as part of the new vFabric 5.0 product suite, and it represents a big step along three important dimensions:
- developer productivity
- more DBMS-like features
- better scaling features
Here are some highlights on each dimension:
Developer productivity: We introduced a new serialization framework called PDX (it stands for Portable Data eXchange, not my favorite airport).
PDX is a framework that provides a portable, compact, language-neutral, and versionable format for representing object data in GemFire. It is proprietary but designed for high efficiency, and it is comparable to other serialization frameworks like Apache Avro, Google protobuf, etc.
Alright. I realize the above definition is a mouthful :-)
Simply put, the framework supports versioning, so apps using older versions of the domain classes can work with apps using newer versions (and vice versa); it provides a format and type system for interop between the various languages; and it offers an API so server-side application code can operate on objects without requiring the domain classes (i.e., no deserialization).
Type evolution has to be incremental - this is the only way to avoid data loss or exceptions.
The raw serialization performance is comparable to that of Avro and protobuf, but PDX is much more optimized for distribution and for operating in a GemFire cluster. The chart below shows the results of an open source benchmark of popular serialization frameworks; the details are available here. 'Total' represents the total time required to create, serialize, and then deserialize an object. See the benchmark description for details.
You can either implement serialization callbacks (for optimal performance) or simply use the built-in PdxSerializer (reflection-based today). Arguably, the best part of the framework is its support for object access in server-side functions or callbacks like listeners without requiring the application classes. You can dynamically discover fields and nested objects and operate on them using the PDX API. On an application client that has the domain classes, the same PdxInstance is automatically turned into the domain object.
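To make the server-side access pattern concrete, here is a minimal sketch of reading a field off a PdxInstance without the domain class being on the server's classpath. It assumes GemFire 6.6's com.gemstone.gemfire.pdx API and a cache configured with read-serialized=true; the region name and the "owner" field are hypothetical.

```java
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.pdx.PdxInstance;

public class PortfolioInspector {
  // Runs on a server that does NOT have the Portfolio domain class deployed.
  public static String ownerOf(Region<String, Object> portfolios, String key) {
    Object value = portfolios.get(key);      // with read-serialized=true, values
    if (value instanceof PdxInstance) {      // come back as PdxInstance, not as
      PdxInstance pdx = (PdxInstance) value; // deserialized domain objects
      return (String) pdx.getField("owner"); // dynamic, class-free field access
    }
    return null;
  }
}
```

On a client that does have the Portfolio class, the same get() would simply return a Portfolio object.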
We also introduced a new command shell called gfsh (pronounced "gee-fish"), a command-line tool for browsing and editing data stored in GemFire. Its rich set of Unix-flavored commands lets you easily access data, monitor peers, redirect output to files, and run batch scripts. This is an initial step towards a more complete tool that can provision, monitor, debug, tune, and administer a cluster as a whole. Ultimately, we hope to advance the gfsh scripting language, making integration of GemFire deployments into cloud-like virtualized environments a "breeze".
More DBMS like:
Querying and Indexing
We added several features to the query engine: querying/indexing on hashmaps, bind parameters from edge clients, OrderBy support for partitioned data regions, full support for LIKE predicates, and the ability to index regions that overflow to disk.
Increasingly we see developers wanting to decouple the data model in GemFire from the class schema used within their applications. Even though PDX offers an excellent option, we also see developers mapping their data into "self-describing" hashmaps in GemFire. The data store is basically "schema free" and allows many application teams to change the object model without impacting each other. Given GemFire's simple key-value storage model, this has never been an issue except for querying. Now, not only can you store maps, you can also index keys within these hashmaps and execute highly performant queries.
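As a sketch of what this looks like, the snippet below indexes a key inside HashMap values and then queries on it with a bind parameter, using GemFire's QueryService. The region name "/customers" and the "city" map key are hypothetical, and error handling is elided.

```java
import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.query.IndexType;
import com.gemstone.gemfire.cache.query.Query;
import com.gemstone.gemfire.cache.query.QueryService;
import com.gemstone.gemfire.cache.query.SelectResults;

public class MapQueryExample {
  public static SelectResults<?> customersIn(Cache cache, String city)
      throws Exception {
    QueryService qs = cache.getQueryService();
    // Index the 'city' key of each stored HashMap value.
    qs.createIndex("cityIdx", IndexType.FUNCTIONAL, "c['city']", "/customers c");
    // $1 is a bind parameter, which 6.6 also supports from edge clients.
    Query q = qs.newQuery("SELECT * FROM /customers c WHERE c['city'] = $1");
    return (SelectResults<?>) q.execute(new Object[] { city });
  }
}
```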
Do take note that the query engine now natively understands the PDX data structures with no need for application classes on servers.
We expanded distributed transactions by allowing edge clients to initiate or terminate transactions - there is no longer any need to invoke a server-side function to run a transaction. We also added a new JCA resource adapter that supports participation in externally coordinated transactions as a "last resource".
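A minimal sketch of a client-initiated transaction follows, using the CacheTransactionManager obtained directly from the ClientCache. The "accounts" region, keys, and transfer logic are hypothetical.

```java
import com.gemstone.gemfire.cache.CacheTransactionManager;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;

public class ClientTxExample {
  public static void transfer(ClientCache cache, String from, String to, int amount) {
    Region<String, Integer> accounts = cache.getRegion("accounts");
    CacheTransactionManager txMgr = cache.getCacheTransactionManager();
    txMgr.begin();                      // begin directly on the edge client
    try {
      accounts.put(from, accounts.get(from) - amount);
      accounts.put(to, accounts.get(to) + amount);
      txMgr.commit();                   // terminate on the client as well
    } catch (RuntimeException e) {
      txMgr.rollback();
      throw e;
    }
  }
}
```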
Finally, on the scaling dimension:
You are probably aware that GemFire's shared-nothing persistence relies on append-only operation logs to provide very high write throughput. There are no additional B-tree data files to maintain as in a traditional database system. The tradeoff with this design is cluster recovery speed. One has to walk through the logs to recover the data back into memory, and the time for the entire cluster to bootstrap from disk is proportional to the volume of data (and inversely proportional to the number of members). This can be long (to put it mildly) with large data volumes, even though you can parallelize the recovery across the cluster. To minimize this recovery delay, the 6.6 persistence layer now also manages "key files" on disk. We recover just the keys back into memory and lazily recover the values, giving recovery in general a significant performance boost.
Prior to 6.6, GemFire randomly picked a different host to manage redundant copies for partitioned data regions. Often, customers provision multiple racks and want their redundant copies to always be stored on a different physical rack. Occasionally, we also see customers wanting to store their redundant data on a different site. We added support for "redundancy zones" in partitioned region configuration, allowing users to identify one or more redundancy zones (which could be racks, sites, etc.). GemFire will automatically ensure that redundant copies are managed in different zones.
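Configuration-wise, this is a per-member setting: each member declares which zone it belongs to, and GemFire keeps primary and redundant copies in different zones. A sketch, with hypothetical zone names:

```properties
# gemfire.properties on every member in the first rack:
redundancy-zone=rack-1

# gemfire.properties on every member in the second rack:
redundancy-zone=rack-2
```

With redundant-copies set on the partitioned region, a bucket whose primary lives on a rack-1 member will have its redundant copy placed on a rack-2 member, never on another rack-1 member.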
Everything mentioned here is more of a prelude - the full list of enhancements is much longer and is documented here.
The product documentation is available here.
You can start discussions here.
Would love to hear your thoughts.