We are witnessing the start of what is likely to be the biggest upheaval in the database world since E.F. Codd introduced the relational model in the 1970s. We're talking, of course, about the move to the cloud. Enterprises are beginning to migrate on-premises databases, including data warehouses that support mission-critical data and analytics workloads, as part of larger cloud-first and hybrid cloud strategies. Due to its cost structure and hardware footprint, Teradata is a prime candidate for replacement as part of moving to the cloud.
Enterprises are considering Teradata “replatforming” efforts because they want to leave behind expensive, rigid on-premises databases for powerful, efficient, and cost-effective cloud databases ... and they want to migrate as quickly as possible.
The secret weapon: Pivotal Greenplum
Few data warehouses can stand up to Teradata. Specialized data warehouse technologies have come and gone in the last decade without leaving much of a mark. Pivotal Greenplum is one of the few that offers the full range of data warehouse functionality enterprises demand. Its open-source query optimizer has become an industry standard for MPP systems, and related projects like Apache MADlib (incubating) have a strong following among data scientists. Pivotal Greenplum is now available in the cloud on AWS and Microsoft Azure, which makes it a compelling alternative when moving from Teradata on premises to the cloud.
It’s the application migration that makes all the difference
Ironically, the hardest part of migrating a database is not moving the database at all: migrating schema and data are well-understood processes that can be executed effectively. The real challenge is migrating the applications that use the database. The diagram below depicts the cost and effort required for application migration versus database migration.
Until now, the only way to migrate applications, especially custom-written applications, has been to rewrite them for the new database. Teradata is no exception: Teradata applications use a Teradata-specific SQL dialect and Teradata-specific drivers and connectors that communicate over a Teradata-specific protocol with, well, Teradata. Replacing Teradata by conventional means therefore requires rewriting, reconfiguring, and rewiring every application. Application migrations for analytical workloads are known to take years and cost up to one and a half times the price of the Teradata appliance itself.
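To make the dialect problem concrete, here is a toy sketch (not any real migration tool, and far simpler than what a product like Hyper-Q does at runtime) showing a few well-known Teradata-specific SQL spellings and their ANSI equivalents that a Greenplum-bound rewrite would have to handle:

```python
import re

# A few Teradata dialect spellings and their ANSI/Greenplum-compatible
# equivalents. Real workloads involve far more than keyword substitution
# (functions, data types, stored procedures, protocol behavior).
REWRITES = [
    (r"\bSEL\b", "SELECT"),    # Teradata allows SEL as shorthand for SELECT
    (r"\bDEL\b", "DELETE"),    # ...and DEL as shorthand for DELETE
    (r"\bMINUS\b", "EXCEPT"),  # Teradata MINUS -> ANSI EXCEPT
]

def to_ansi(sql: str) -> str:
    """Rewrite a handful of Teradata dialect keywords into ANSI SQL."""
    for pattern, replacement in REWRITES:
        sql = re.sub(pattern, replacement, sql, flags=re.IGNORECASE)
    return sql

print(to_ansi("SEL * FROM orders MINUS SEL * FROM archived_orders"))
# SELECT * FROM orders EXCEPT SELECT * FROM archived_orders
```

Even this trivial example hints at why hand-rewriting thousands of queries, reports, and ETL jobs takes so long: every statement must be found, translated, and re-tested.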
While there are many tools that automate data and schema migration, there hasn’t been a solution to automate the application migration process. Not anymore.
Datometry lets enterprises run Teradata workloads natively and instantly on Pivotal Greenplum
Just imagine: what if you could run your analytical applications, written originally for Teradata, natively and instantly on Pivotal Greenplum? In an industry first, Datometry, together with Pivotal, makes this possible. Datometry Hyper-Q for Pivotal Data Suite is a next-generation virtualization platform that lets Teradata applications run completely transparently on the Pivotal stack, enabling a transparent and instant application migration.
Fundamentally changing the pace and economics of cloud database migrations
Datometry’s technology eliminates much of the cost, time, and risk commonly associated with Teradata migrations. Enterprises can now free up budget to innovate and do more in the cloud. Together, Datometry and Pivotal deliver a powerful and compelling solution for enterprises looking to migrate off Teradata.
Datometry and Pivotal have just opened a completely new chapter in database technology. Change the economics of big data analytics instantly and unlock your Teradata workloads!
Watch the Datometry Hyper-Q for Pivotal Data Suite demo.
Pivotal customers can download Hyper-Q here.
About the Author: Jacque Istok