Pivotal GemFire Function Development – Best Practices

June 5, 2018 Wes Williams

Pivotal GemFire is a popular choice for customers who want to gain a strong competitive advantage by enabling fast access to their data. That expectation is natural given GemFire’s in-memory technology, which is up to five orders of magnitude faster than disk I/O [1]. While this performance boost is impressive, GemFire’s capabilities extend far beyond simple fast reads and writes. One powerful capability is GemFire Functions, which deliver extremely fast performance for use cases that require analytics or calculations on the data.

This article outlines a best-practices approach to writing, testing, and debugging functions in a way that maximizes developer productivity. It does not cover writing functions in general. If you are looking for a description of functions, or how and when to use them, I recommend this link: https://gemfire.docs.pivotal.io/geode/developing/function_exec/chapter_overview.html.


When I write a function, I apply Domain-Driven Design principles and separate my business logic from the concerns of the function. The function deals with concerns such as whether the logic runs on a single GemFire server or across multiple servers in parallel, and whether it operates on a partitioned region or on a server without any region at all. It is concerned with partitions, filters, formatting arguments, and communicating answers back to the caller. My business logic is focused on the problem I’m solving and should be separate from the function.

There are many benefits to separating these concerns, but for the purposes of this article I’m going to focus on testability and development: I can develop and test my business logic independently and quickly.


To illustrate: if I need to write a function to calculate order totals, I first develop an OrderTotalCalculator class and write a unit test for it. It contains no function syntax. I then create the CalculateOrderTotalFunction, which calls my OrderTotalCalculator business logic. The following illustrates this separation.

Figure 1: Separate the business concerns from the function concerns.

A benefit of encapsulating the business logic is that I can easily test it independently without calling a function. In fact, I can go a step further and develop and test it using a client rather than a server. How do I accomplish this, since function execution happens only on the server? Remember, my goal at this point is to develop my business logic. I pass any regions that my business logic needs as arguments using the Region interface. Since both client and server regions implement the same Region interface, I can inject client regions at development time and take advantage of the quick, iterative productivity of client-side test-driven development. I will say more about testing later in this article.

A typical function has only three primary inputs:

1. The region on which the function operates
2. Optional arguments passed from the client
3. Optional keys, or a “filter”, selecting the region entries on which the function will operate

We will pass these to the business logic as parameters, as follows:


Figure 2: Business Logic without function concerns
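As a rough sketch of such a class (the Order fields and method names here are my assumptions; the article’s own figure may differ), the region parameter is typed as a plain java.util.Map because GemFire’s Region interface extends ConcurrentMap — which is exactly what lets the same class accept a client region, a server region, or a HashMap in a unit test:

```java
import java.math.BigDecimal;
import java.util.*;

// Hypothetical domain object; the article does not show Order's fields.
class Order {
    private final BigDecimal amount;
    Order(BigDecimal amount) { this.amount = amount; }
    BigDecimal getAmount() { return amount; }
}

// Business logic with no function API in sight. The region is injected
// through the constructor. GemFire's Region interface extends
// ConcurrentMap, so a Map parameter accepts a real client or server
// region as well as a plain HashMap in a test.
class OrderTotalCalculator {
    private final Map<String, Order> orderRegion;

    OrderTotalCalculator(Map<String, Order> orderRegion) {
        this.orderRegion = orderRegion;
    }

    // Sums order amounts for the given keys (the function's "filter").
    BigDecimal calculateTotal(Collection<String> orderKeys) {
        BigDecimal total = BigDecimal.ZERO;
        for (String key : orderKeys) {
            Order order = orderRegion.get(key);
            if (order != null) {
                total = total.add(order.getAmount());
            }
        }
        return total;
    }
}
```

Because nothing here touches the function API, this class can be exercised from any plain unit test.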


I can now test the business logic with client-side unit tests in my IDE, simply passing the client region and the optional arguments and keys into the business logic.
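A client-side unit test along these lines might look like the following sketch. It is deliberately self-contained, so condensed stand-ins for Order and OrderTotalCalculator are declared inline; a real test would import the production classes and could substitute an actual client region for the HashMap:

```java
import java.math.BigDecimal;
import java.util.*;

// Condensed stand-ins so this sketch compiles on its own; a real test
// would import the production Order and OrderTotalCalculator classes.
class Order {
    final BigDecimal amount;
    Order(BigDecimal amount) { this.amount = amount; }
}

class OrderTotalCalculator {
    private final Map<String, Order> orders;
    OrderTotalCalculator(Map<String, Order> orders) { this.orders = orders; }
    BigDecimal calculateTotal(Collection<String> keys) {
        BigDecimal total = BigDecimal.ZERO;
        for (String key : keys) {
            Order o = orders.get(key);
            if (o != null) total = total.add(o.amount);
        }
        return total;
    }
}

// A plain unit test: no server, no function API. A HashMap plays the
// role of the client region during development.
public class OrderTotalCalculatorTest {
    public static void main(String[] args) {
        Map<String, Order> orders = new HashMap<>();
        orders.put("1001", new Order(new BigDecimal("19.99")));
        orders.put("1002", new Order(new BigDecimal("5.01")));

        BigDecimal total = new OrderTotalCalculator(orders)
                .calculateTotal(Arrays.asList("1001", "1002"));

        if (!total.equals(new BigDecimal("25.00"))) {
            throw new AssertionError("unexpected total: " + total);
        }
        System.out.println("total = " + total);
    }
}
```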

This approach is powerful for rapid development and testing in most applications. In a later blog post I will cover a different testing approach, appropriate for testing function parallelism or very large data sets, where client-side testing is impractical.


I now develop my CalculateOrderTotalFunction, which will call my OrderTotalCalculator. I try to keep the function as minimal as possible and limit it to extracting passed parameters, error trapping and communicating back to the caller.

The function instantiates the OrderTotalCalculator. I generally inject my regions into the constructor of the OrderTotalCalculator and then pass the arguments and filters at execution time.

The function:

  • Determines whether the function was called on a specific region
  • Extracts optional arguments passed from the client
  • Extracts any optional filters (i.e., region “keys”) passed from the client
  • Calls the business logic, passing the regions, arguments, and filters
  • Wraps the call to the business logic with error handling
  • Returns the answer to the caller, or logs and returns a formatted error

The main point is that by following this pattern you can rapidly develop and test your business logic on the client and transfer it to the server side when you’re ready. You will be ready to transfer to the server when you want to test things like parallel execution on multiple nodes, or with query execution on targeted nodes.

Figure 3: Sample Function Template without business logic concerns
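A sketch of what such a template might look like follows. To keep it self-contained, it declares simplified stand-ins for the GemFire function API (FunctionContext, RegionFunctionContext, ResultSender) and condensed copies of the business classes; a real function would instead implement the Function interface from GemFire’s function-execution package, whose actual signatures differ somewhat from these stand-ins:

```java
import java.math.BigDecimal;
import java.util.*;

// Simplified stand-ins for the GemFire function API, declared here only
// so the sketch compiles on its own. A real function would use the
// function-execution interfaces from the GemFire jar instead.
interface ResultSender {
    void lastResult(Object result);
    void sendException(Throwable t);
}

interface FunctionContext {
    Object getArguments();
    ResultSender getResultSender();
}

interface RegionFunctionContext extends FunctionContext {
    Map<String, Order> getDataSet(); // the region the caller targeted
    Set<String> getFilter();         // the optional keys ("filter")
}

// Condensed business classes; the real ones would be imported.
class Order {
    final BigDecimal amount;
    Order(BigDecimal amount) { this.amount = amount; }
}

class OrderTotalCalculator {
    private final Map<String, Order> orders;
    OrderTotalCalculator(Map<String, Order> orders) { this.orders = orders; }
    BigDecimal calculateTotal(Collection<String> keys) {
        BigDecimal total = BigDecimal.ZERO;
        for (String key : keys) {
            Order o = orders.get(key);
            if (o != null) total = total.add(o.amount);
        }
        return total;
    }
}

// The function stays thin: extract inputs, delegate to the business
// logic, trap errors, and send the answer back to the caller.
class CalculateOrderTotalFunction {
    public void execute(FunctionContext context) {
        ResultSender sender = context.getResultSender();
        try {
            // Determine whether we were called on a region.
            if (!(context instanceof RegionFunctionContext)) {
                throw new IllegalStateException(
                        "CalculateOrderTotalFunction must be executed on a region");
            }
            RegionFunctionContext rfc = (RegionFunctionContext) context;

            // Extract the optional filter keys.
            Set<String> keys = rfc.getFilter();

            // Inject the region and call the business logic.
            OrderTotalCalculator calculator =
                    new OrderTotalCalculator(rfc.getDataSet());

            // Return the answer to the caller.
            sender.lastResult(calculator.calculateTotal(keys));
        } catch (Exception e) {
            // Error handling: log (omitted here) and report the failure.
            sender.sendException(e);
        }
    }

    public String getId() {
        return "CalculateOrderTotalFunction";
    }
}
```

Note how every line in execute() is about plumbing; the calculation itself lives entirely in OrderTotalCalculator, which was already tested on the client.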

Separating your business and function concerns will make it easier for you to write, test and debug your business logic and functions.

Client-side testing of your business logic enables rapid development and testing using test-driven and test-oriented development methods.

In the next blog post, I will cover another function best practice: testing existing server-side functions using Mockito. This variation is helpful when you have large amounts of existing test data in a cluster that is impractical to bring back to the client, or when you need to test function execution running in parallel across the cluster.

I’d love to hear your success stories with functions!

[1] Jonas Bonér. Latency Numbers Every Programmer Should Know. 2012. [Cited June 6, 2018]. Available from https://gist.github.com/jboner/2841832.

About the Author

Wes Williams

Wes Williams has extensive experience as a hands-on architect and designer creating and delivering enterprise-class information technology systems in a wide variety of industries, from small businesses to Fortune 10 companies. His favorite specialty is using GemFire in-memory data technology to create real-time, high-throughput, low-latency systems with very high scalability, which creates business value by requiring less hardware to get the job done. Wes is a team leader in both process and software best practices.
