RDBMS and Apache Geode Data Movement - Heather Riddle & Paul Warren, HCSC

December 14, 2017

Extract, transform, load (ETL) has always been complex and expensive for moving massive data sets from one data source to another. This is especially true if the source system is a traditional RDBMS with complicated relationships between tables. Most of the time, traditional ETL processes are implemented with batch, monolithic, and tightly coupled approaches. As a result, traditional ETL processes are often considered fragile, hard to maintain, and difficult to tune, and they often introduce high data latency between source and destination systems.

In this session, Paul and Heather will cover how to create cloud-native, event-driven microservices (an ETL pipeline) for RDBMS and Apache Geode using Cloud Foundry, Spring Cloud Stream, and RabbitMQ/Kafka. The pipelines can handle high-volume data sets and complex database queries while keeping data latency between the source RDBMS and Apache Geode low. In addition, the design is highly tunable and scalable. The session will also cover analysis of performance metrics based on implementations of real-world use cases.

Slides: TBA

Paul Warren, Senior Engineer, HCSC
Heather Riddle, Senior Engineer, HCSC

Filmed at SpringOne Platform 2017
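To make the pipeline shape concrete, below is a minimal sketch of the consuming end of such a pipeline: a Spring Cloud Stream sink that receives change events from RabbitMQ or Kafka and upserts them into an Apache Geode region. This is an illustration of the general approach described in the abstract, not the speakers' actual implementation; the CustomerChangeEvent class, the "customers" region name, and the field names are hypothetical.

// Hypothetical Spring Cloud Stream sink: consumes RDBMS change events from the
// message broker binding and writes them into an Apache Geode region.
import org.apache.geode.cache.Region;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.stereotype.Component;

@Component
@EnableBinding(Sink.class)
public class GeodeSink {

    // Region is assumed to be configured elsewhere (for example via Spring Data Geode).
    @Autowired
    private Region<String, CustomerChangeEvent> customers;

    // Each message on the input binding is one row-level change event produced
    // by an upstream source microservice reading from the RDBMS.
    @StreamListener(Sink.INPUT)
    public void handle(CustomerChangeEvent event) {
        // Idempotent upsert keyed by the source table's primary key.
        customers.put(event.getCustomerId(), event);
    }
}

// Hypothetical change-event payload mirroring a row in the source table.
class CustomerChangeEvent {
    private String customerId;
    private String name;

    public String getCustomerId() { return customerId; }
    public void setCustomerId(String customerId) { this.customerId = customerId; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

Because each stage communicates only through the broker, the source and sink microservices can be scaled and tuned independently, which is what makes this style of pipeline less fragile than a monolithic batch job.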
