The CAP Theorem is still valid, but Pivotal Cloud Cache is stretching its limits.
Pivotal Cloud Cache (PCC) is a specialized implementation of Pivotal GemFire, Pivotal’s commercial caching product based on Apache Geode. GemFire is heavily biased toward “strong consistency” in CAP parlance. PCC has the additional advantage that it runs on Pivotal Cloud Foundry, which is heavily biased toward “high availability”. This means that PCC on PCF is uniquely positioned to be able to offer both strong consistency AND very good availability, in addition to partition tolerance. This opens up possibilities for supporting new types of horizontally scalable data-intensive applications that were very difficult to achieve until now.
Let's drill down on this a bit further.
Consistency in ACID refers to the requirement that any database transaction must change affected data only in allowed ways. Any data written to the database must be valid according to all defined rules, including constraints, cascades, and triggers. Consistency is defined as the guarantee that:
Any transaction started in the future necessarily sees the effects of other transactions committed in the past (AKA READ_COMMITTED)
Database constraints are not violated, particularly once a transaction commits
Operations in transactions are performed accurately, correctly, and with validity, with respect to application semantics
Consistency in the CAP theorem has a different meaning. The CAP theorem postulates that it is impossible for a distributed data store to simultaneously provide more than two of the following three guarantees:
Consistency: Every read receives the most recent write or an error. This is very different from the consistency in ACID.
Availability: Every request receives a non-error response, without guarantee that it contains the most recent write.
Partition tolerance: The system continues to operate despite an arbitrary number of messages being dropped or delayed by the network between nodes.
A practical consequence of the CAP theorem is that a distributed system must have partition tolerance; that is the 'P' in CAP. Therefore, you must choose between availability (A) and consistency (C); you can't have both. You can either favor availability, known as "AP", or favor consistency, known as "CP". There are many caching products in each of the AP and CP camps.
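The AP-versus-CP choice can be made concrete with a toy model. The sketch below is purely illustrative (the class names, the two-replica layout, and the partition flag are all assumptions for the example, not any real product's replication protocol): when the link between replicas is down, an "AP" store answers reads with possibly stale data, while a "CP" store returns an error rather than risk serving an old value.

```python
# Toy model of the AP-vs-CP trade-off during a network partition.
# All names here are illustrative; this is not any real product's protocol.

class Replica:
    def __init__(self):
        self.value = None
        self.version = 0


class PartitionedStore:
    """A primary and a secondary replica; a flag simulates a network partition."""

    def __init__(self, mode):
        self.mode = mode              # "AP" or "CP"
        self.primary = Replica()
        self.secondary = Replica()
        self.partitioned = False

    def write(self, value):
        # Writes always land on the primary.
        self.primary.value = value
        self.primary.version += 1
        # Replication to the secondary only succeeds while the link is up.
        if not self.partitioned:
            self.secondary.value = value
            self.secondary.version = self.primary.version

    def read_from_secondary(self):
        if not self.partitioned:
            return self.secondary.value
        if self.mode == "AP":
            # AP: answer anyway, even though the value may be stale.
            return self.secondary.value
        # CP: refuse to serve a value that may not be the latest write.
        raise RuntimeError("cannot confirm latest write during partition")


# AP behavior: the read succeeds but returns the stale "v1".
ap = PartitionedStore("AP")
ap.write("v1")
ap.partitioned = True
ap.write("v2")                        # secondary misses this write
stale_read = ap.read_from_secondary() # -> "v1"

# CP behavior: the same read raises an error instead.
cp = PartitionedStore("CP")
cp.write("v1")
cp.partitioned = True
cp.write("v2")
cp_error = None
try:
    cp.read_from_secondary()
except RuntimeError as exc:
    cp_error = str(exc)
```

Neither behavior is "wrong"; they are different answers to the same partition, which is exactly the choice the next two sections explore.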
It is very important to understand the difference between the ‘C’ in ACID and the ‘C’ in CAP. You can have the consistency represented by the ‘C’ in ACID along with the “AP” from CAP, but you cannot have the consistency represented by the ‘C’ in CAP along with the “AP”.
What kinds of applications work best with AP data stores?
There is a very large market for data stores that favor the availability side of the CAP triangle. Applications like Facebook, Google+, Myspace, Instagram, LinkedIn, Pinterest, Snapchat, and Twitter can all benefit from such a data store. That's because there is no real harm, nor any operational consequence, in getting an old value for an entry in your Facebook stream. Plus, there are almost no updates to such data anyway. Nearly everything is an insert, so if you have a value it must be the latest.
What kinds of applications require CP databases?
There is also a huge market for data stores that favor the CP side of the CAP triangle. Enterprise workloads like banking, billing, insurance, inventory, logistics, e-commerce, manufacturing, risk management, and trading all benefit from a CP-biased data store. The update-heavy nature of these applications requires that they operate on correct and up-to-date data. This means that in a network partition scenario, operating on a stale copy of the data is worse than getting an error when the current copy is unavailable.
PCC is strongly biased towards the CP side of the CAP triangle.
In the event of network segmentation, PCC will always return the most recent successful write, or an error. That is the definition of consistency in CAP.
Pivotal Cloud Foundry, meanwhile, provides high availability for the services that run on the platform, so it enhances the availability of services like PCC.
Here’s how it works. PCC reduces the chance of an error return (an entry being unavailable during a network segmentation) by being configured to hold multiple copies of the data. It spreads those copies across multiple PCF availability zones by mapping the servers in those zones to GemFire redundancy zones. This reduces the possibility that all copies of a given entry end up on the losing side of a network split. The result is very high (though not 100%) availability thanks to PCF, along with strong consistency via PCC.
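As a rough sketch of what that configuration looks like in GemFire terms (server names, zone names, and the region name below are illustrative assumptions, not PCC's actual deployment), each server is started with a redundancy zone matching its availability zone, and the region is created with redundant copies:

```shell
# Hypothetical gfsh session; names are illustrative.
# Start one server per availability zone, tagging each with a redundancy zone:
gfsh> start server --name=server1 --redundancy-zone=az1
gfsh> start server --name=server2 --redundancy-zone=az2
gfsh> start server --name=server3 --redundancy-zone=az3

# Create a partitioned region with redundant copies. GemFire places each
# redundant copy in a different redundancy zone whenever possible, so a
# split that takes out one zone still leaves live copies in the others:
gfsh> create region --name=orders --type=PARTITION_REDUNDANT --redundant-copies=2
```

On PCC the platform performs this wiring for you; the fragment above only illustrates the underlying GemFire mechanism.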
The best of both worlds: Strong consistency and high availability
This combination of strong consistency from PCC and high availability from PCF really stretches the boundaries of the CAP theorem. You still can’t have 100% of all three of C, A, and P, but the availability characteristics of the PCF platform combined with the strong consistency that PCC inherits from GemFire creates a real stand-out among the data services on PCF.
About the Author
Mike Stolz is a contributor and member of the PMC for the Apache Geode project. He also leads the Product Management team for Pivotal GemFire which is powered by Apache Geode. This involves constant communication with users to learn what features are needed, prioritization of features, managing the product roadmap and tracking progress on development of the features.