Pivotal’s Application Transformation team helps customers migrate applications to cloud-native every day. Recently I wondered: what are the most frequent modifications we make in order to get an app to cloud-native? Is there a way to automate the migration of monolithic and legacy applications to Spring Boot? Which manual transformations, when automated, will lead to the holy grail of automatic app migration to the cloud? If you are interested, read on.
1. Spring Bootification: The process of injecting the Spring Boot framework and packaging the application as a fat jar. If you want a comprehensive recipe for bootification, look no further than here. And don't forget the old faithful guide, Building an Application with Spring Boot.
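The heart of the packaging change is small: inherit from the Spring Boot parent (or import its BOM) and add the Spring Boot Maven plugin, whose repackage goal turns the standard jar into a self-contained executable fat jar. A minimal sketch of the pom.xml additions, with an illustrative version number:

```xml
<!-- Sketch only: the parent pins plugin and dependency versions; the
     version shown is illustrative, use the current release. -->
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.1.3.RELEASE</version>
</parent>

<build>
  <plugins>
    <!-- repackage goal runs automatically and produces the fat jar -->
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>
```

After this, `mvn package` yields a single runnable artifact (`java -jar target/app.jar`), which is exactly the deployment unit the cloud platform expects.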
2. Escaping Dependency Hell: As part of bootification, you will drag in new versions of dependent Java libraries, so you will need to go through an exercise of deduping and reconciling dependencies. Typically this results in upgrading all dependencies to the latest versions as part of refactoring the pom.xml or build.gradle, and managing the coexistence and upgrade of older libraries and frameworks alongside the Spring Boot 2.x and Spring Framework 5.x starters. The Maven dependency plugin is a friend in these cases, especially the dependency:analyze goal, which determines which dependencies are used and declared; used and undeclared; or unused and declared (dependency:tree is handy for seeing where each version was pulled in). If you are still in classpath-resolution hell and cannot figure out the origin of a class, leverage ClassGraph, a super-fast and flexible classpath scanner.
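When you cannot (or do not want to) inherit from the Spring Boot parent, importing the Spring Boot BOM gives you the same managed versions and cuts down on hand-reconciled version conflicts. An illustrative fragment (version number is an assumption):

```xml
<!-- Importing the BOM lets starters omit explicit versions and keeps
     transitive dependency versions consistent across the build. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-dependencies</artifactId>
      <version>2.1.3.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With the BOM in place, `mvn dependency:analyze` output becomes much easier to act on, because most version clashes are resolved by the BOM rather than by ad-hoc exclusions.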
3. Externalized Configuration: If you manage to escape dependency hell, you now get the privilege of isolating, separating, and externalizing your application's configuration. Typically configuration is strewn across Java code, various property files, app-server deployment descriptors, and sometimes files laid down on VMs or bare metal by DevOps or shared-services groups. When taking an app to the cloud, configuration needs to come from an external source so that it can be manipulated by the cloud platform in a consistent fashion. The best approach is to fix the most egregious config options first, like hardcoded ports and hardcoded file paths. After that, look at the app-server deployment descriptors and port all the JNDI and resource names to Spring beans configured via Spring property sources. Eventually, when most of the configuration has been externalized, you can put it in a config server and organize it by deployment environment, i.e., segment config into dev, test, perf, QA, and prod spaces. Some static code analysis tools can help you understand the broad impact of these changes: leverage app-server-specific tools like Windup and the IBM migration toolkit, as well as classic static-analysis tools like FindBugs, to identify anti-cloud patterns. Pivotal has a tool in this space, Pivotal App Analyzer, that identifies all the cloud remediations. Remove any app-server-specific deployment descriptors and clustering behavior, since clustering of app instances is now a feature provided by the cloud platform.
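The end state for a formerly hardcoded port or file path looks something like the following application.yml sketch: values come from the environment with sane defaults, and per-environment overrides live in profile-specific documents (property names like `report.output-dir` are hypothetical):

```yaml
# Sketch of externalized configuration in application.yml.
server:
  port: ${PORT:8080}          # was hardcoded in a deployment descriptor
report:
  output-dir: ${REPORT_DIR:/tmp/reports}   # was a hardcoded file path
---
# Profile-specific override, selected via SPRING_PROFILES_ACTIVE=dev
spring:
  profiles: dev
report:
  output-dir: /tmp/dev-reports
```

The same keys can later move unchanged into a config server repository segmented by environment, because the app only ever reads property sources, never files it owns.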
4. Connections to external data sources: Remove JNDI dependencies. Leverage user-provided services to connect to backend and backing services in the cloud. Here is the recipe for replacing the Java EE way of connecting to backend resources with the Spring way of datasource beans wired to service endpoints in Pivotal Cloud Foundry. Spring, the Java Buildpack, and Pivotal Cloud Foundry do a lot of magic in this area. Remember this golden rule: NO application should *ever* rely on auto-reconfiguration in production. Spring Cloud Connectors were the traditional answer, but the right way today is to leverage java-cfenv, which gives the developer direct control over parsing VCAP_SERVICES and over configuring and connecting to downstream bound services.
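Conceptually, java-cfenv reads the VCAP_SERVICES JSON that Cloud Foundry injects and hands you typed credential objects. The dependency-free sketch below illustrates only the idea (real code should use java-cfenv's JSON-backed API, not a regex; the service payload and field names here are hypothetical):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VcapSketch {
    // Pull a single credential field (e.g. "uri") out of a VCAP_SERVICES
    // payload. java-cfenv does this properly with a JSON parser; a regex
    // is used here only to keep the sketch dependency-free.
    static String credential(String vcapServices, String field) {
        Pattern p = Pattern.compile("\"" + field + "\"\\s*:\\s*\"([^\"]*)\"");
        Matcher m = p.matcher(vcapServices);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Hypothetical bound MySQL service entry.
        String vcap = "{\"p-mysql\":[{\"credentials\":"
                + "{\"uri\":\"mysql://db.example.com:3306/app\"}}]}";
        System.out.println(credential(vcap, "uri"));
    }
}
```

The extracted URI would then feed an explicitly declared DataSource bean, which is precisely the "direct control" that auto-reconfiguration takes away from you.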
5. Logging: Log to stdout and stderr. Before you get too far along, you need some window into the state of the app in the cloud. For this purpose you should change the app's loggers to write to stdout and stderr so these streams can be aggregated and captured in one tool, provided by the platform or a trusted third party like Splunk. Typically this involves: 1. removing logging-related jars from the legacy pom.xml; 2. introducing a cloud-native logging facade with a convenience library like Lombok, or using the SLF4J Logger directly. Thereafter, configure standard logging by adding a standard logging pattern to application.yml. This pattern will log all output on one line and ensure that log readers can parse the logs in a way consistent with other projects. Follow this comprehensive recipe for application logging in Cloud Foundry.
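A minimal application.yml fragment for the single-line console pattern might look like this (the pattern itself is illustrative; the key point is that there are no file appenders, so everything flows to stdout for the platform to aggregate):

```yaml
# Illustrative single-line console pattern; adjust fields to taste.
logging:
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
```

Because Spring Boot's default Logback setup already writes to the console, removing legacy file and app-server appenders is usually the bigger part of this step than adding configuration.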
6. Observability and Health Checks: One of the cool features of Spring Boot is the ability to add production-ready actuators and health checks to the project that introspect the datasource beans and configure the right health checks, chained and invoked by the platform. This should be very easy to do following the step-by-step documentation on production-ready apps. It is also a good idea to define domain-specific custom health indicators that give the Cloud Foundry health monitor deep insight into the health of the app. A recipe for defining custom health checks can be found here.
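The logic inside a custom health indicator is usually a cheap probe of a downstream dependency. Stripped of the Spring Boot plumbing (in a real app you would implement org.springframework.boot.actuate.health.HealthIndicator and return Health.up()/Health.down()), the core check might look like this dependency-free sketch:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpProbe {
    // Return "UP" if a TCP connection to host:port succeeds within the
    // timeout, "DOWN" otherwise. A real HealthIndicator would wrap this
    // in a Health object with diagnostic details for the actuator endpoint.
    static String check(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return "UP";
        } catch (IOException e) {
            return "DOWN";
        }
    }

    public static void main(String[] args) {
        // Port 1 is almost certainly closed locally, so this reports DOWN.
        System.out.println(check("127.0.0.1", 1, 500));
    }
}
```

Keeping the probe fast and bounded (note the timeout) matters: the platform invokes health checks frequently, and a hanging probe can make a healthy instance look dead.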
7. Securing applications: Handle user credentials and backing-service credentials with Vault or CredHub. Configure user authentication and role-based access control with the Pivotal SSO tile. Recipes: using Vault as a backing store for Config Server; securing applications with CredHub; configuring the Pivotal SSO tile for apps; and configuring Pivotal SSO automatic resource configuration.
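As a taste of the Vault-backed Config Server option, the server side can be pointed at Vault with configuration along these lines (host, scheme, and KV version here are assumptions about your Vault deployment; token handling is omitted):

```yaml
# Sketch: Config Server reading secrets from a Vault KV backend.
spring:
  profiles:
    active: vault
  cloud:
    config:
      server:
        vault:
          host: vault.example.com
          port: 8200
          scheme: https
          kvVersion: 2
```

Client applications then fetch their secrets through the Config Server exactly as they fetch ordinary properties, so the Vault dependency never leaks into application code.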
8. Remove persistence of state: Offload in-memory state to an external cache; see the recipe for offloading HTTP sessions with Spring Session and Redis. Convert entity EJBs to Hibernate, OpenJPA, or another ORM; see the recipe for converting an existing EJB-based application to a Spring-based application. Convert 2PC transactions to 1PC transactions or eventually consistent state; see the recipe for how to deal with XA global transactions in the cloud. Cloud Foundry doesn’t provide a durable file system. If your application requires a file system for durable persistence, consider something like a MongoDB GridFS-based solution or an Amazon Web Services S3-based solution. If your application’s use of the file system is ephemeral (staging file uploads, for example), then you can use the Cloud Foundry application’s temporary directory for the life of the request; see the forklifted-application recipe. Another alternative is to use volume services in PCF for persistent data; see the recipe on external-file-system.
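The session-offload case is often the easiest win. With the spring-session-data-redis starter on the classpath, a configuration fragment like this (Redis host/port names are illustrative defaults) moves HTTP session state out of the JVM and into Redis, making app instances disposable:

```yaml
# Sketch: HTTP sessions stored in Redis via Spring Session.
spring:
  session:
    store-type: redis
  redis:
    host: ${REDIS_HOST:localhost}
    port: ${REDIS_PORT:6379}
```

After this change, the platform can kill, restart, or rebalance instances freely without logging users out, which is the behavior cloud-native scaling assumes.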
9. Messaging with distributed brokers: Leverage Spring Cloud Stream as an abstraction over Kafka or RabbitMQ. Follow the recipe for running Spring Cloud Stream with the right broker provider plugin on PCF.
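The abstraction shows up in configuration: the application talks to named channels, and the binding maps a channel to a broker destination. In a sketch like the following (channel and destination names are hypothetical), switching Kafka for RabbitMQ is a binder dependency change, not a code change:

```yaml
# Sketch: bind the logical "output" channel to a destination named "orders".
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders
          contentType: application/json
```

This indirection is what lets the same legacy messaging code run against whichever broker the platform operator has provisioned.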
10. CI/CD Automation: Build and deployment pipelines. Auto-generate CI/CD pipelines for your legacy app using Spring Cloud Pipelines, which provides scripts, configuration, and conventions for automated deployment-pipeline creation for Jenkins and Concourse targeting Cloud Foundry or Kubernetes. Project Crawler allows for auto-generating pipelines across the repositories in a source-control organization.
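For flavor, a deliberately minimal Concourse pipeline in the spirit of what Spring Cloud Pipelines generates, building the app and pushing it to Cloud Foundry (resource names, URLs, and task paths are all hypothetical):

```yaml
# Minimal sketch of a build-and-deploy Concourse pipeline.
resources:
- name: app-source
  type: git
  source:
    uri: https://github.com/example/legacy-app.git
    branch: master
- name: cf-deploy
  type: cf
  source:
    api: https://api.example.com
    organization: my-org
    space: dev

jobs:
- name: build-and-deploy
  plan:
  - get: app-source
    trigger: true            # run on every commit
  - task: build
    file: app-source/ci/tasks/build.yml
  - put: cf-deploy
    params:
      manifest: app-source/manifest.yml
```

The generated pipelines add test, rollback-test, and stage jobs between build and deploy, but the shape is the same: source in, tested artifact out, platform push at the end.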
About the Author: Rohit Kelapure