Next Pathway can accelerate your cloud migration initiatives, regardless of which stage you're in
Complex Workload Migration
"Our early estimates for code conversion, through a manual/offshore approach, were coming to 1.5 years. This was a non-starter for us. Leveraging Next Pathway’s SHIFT™ Translator, we not only accelerated the entire migration timeline, but it allowed our teams to focus on other value-adding tasks to get us cloud-ready."
A US retail giant decided that a move to the cloud – and off their legacy data warehouse – was the best way to manage their data, perform complex analytics, and lower TCO going forward. However, the migration posed a number of challenges, including:
1. Convert the Legacy Code:
- Over 2.5 MM lines of K-Shell scripts, SQL, and embedded SQL within ETL had to be converted to the target cloud environment.
- Complex Teradata functions – like BTEQ – that did not have native support within the cloud data warehouse had to be addressed.
2. Migrate Downstream Consuming Applications:
- Over 2000 reports on various BI tools had to be triaged to determine which needed to be repointed, refactored, or rebuilt for the cloud.
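To see why BTEQ is called out as a special challenge: BTEQ scripts mix plain SQL with Teradata-specific dot-commands (session control, error handling, export) that most cloud warehouses cannot execute natively. The toy sketch below – purely illustrative, not Next Pathway's SHIFT™ Translator – separates the two so each group can be handled by a different conversion path; the command list and sample script are hypothetical.

```python
import re

# Illustrative only: split a BTEQ script into control commands (no direct
# cloud-SQL equivalent; need rework into orchestration logic) versus plain
# SQL statements (candidates for automated dialect translation).
BTEQ_CONTROL = re.compile(
    r"^\s*\.(LOGON|LOGOFF|IF|QUIT|EXPORT|IMPORT|RUN)\b", re.IGNORECASE
)

def classify_lines(script: str):
    control, sql = [], []
    for line in script.splitlines():
        if not line.strip():
            continue
        (control if BTEQ_CONTROL.match(line) else sql).append(line.strip())
    return control, sql

# Hypothetical BTEQ fragment
sample = """
.LOGON tdprod/etl_user,secret
SELECT order_id, total FROM sales.orders;
.IF ERRORCODE <> 0 THEN .QUIT 8
.LOGOFF
"""

ctrl, sql = classify_lines(sample)
print(len(ctrl), len(sql))  # prints: 3 1
```

A real translator must also handle statement boundaries, macros, and embedded SQL inside shell scripts, but the split above captures the core reason BTEQ workloads cannot simply be "run as-is" on a cloud warehouse.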
We employed the SHIFT™ Migration Suite to automate each part of the migration effort:
1. Via SHIFT™ Translator and Tester, all 2.5 MM lines of code were translated to the cloud-native dialect with 100% accuracy within 3 weeks.
2. Via SHIFT™ Translator, all embedded SQL within the 2000+ reports was translated within 1 week.
Overall, our solution accelerated the company's end-to-end migration by 40%.
Downstream Consuming Application Migration
"Using SHIFT™ Crawler, we quickly identified duplicate reports that could be consolidated, reducing our overall number of downstream consuming applications by roughly 30%."
A large financial services company was completing a large data warehouse migration to the cloud, but wasn’t sure how best to migrate their downstream consuming applications to the cloud platform. Over 1500 applications were consuming data from the legacy warehouse, with thousands of reports consumed by hundreds of users. The company wanted to avoid a one-to-one migration to control growing compute costs and inefficient consumption patterns.
We employed our Crawler360™ to:
1. Develop a complete inventory of all code/objects within the legacy data warehouse
2. Generate a set of network maps defining the lineage and dependencies within all downstream consuming applications
3. Build an integrated network map tracing the consuming applications back to their tables in the legacy warehouse
This allowed us to consolidate downstream consuming applications that shared similar consumption patterns, identify which applications could simply be repointed versus those that required refactoring, and define the appropriate reporting data model in the cloud.
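The consolidation idea above can be sketched in a few lines: once lineage mapping tells you which warehouse tables each report reads, reports that consume the exact same tables become candidates for merging. The report names and table mappings below are hypothetical, and a real analysis (as in the engagement described) would also compare columns, filters, and refresh schedules – this is only a minimal sketch of the pattern-matching step.

```python
from collections import defaultdict

# Hypothetical output of a lineage crawl: report -> tables it reads
report_tables = {
    "daily_sales_v1":  frozenset({"sales.orders", "dim.store"}),
    "daily_sales_v2":  frozenset({"sales.orders", "dim.store"}),
    "inventory_aging": frozenset({"inv.stock"}),
    "store_sales_dup": frozenset({"sales.orders", "dim.store"}),
}

# Group reports by their exact set of source tables; any group with
# more than one member is a consolidation candidate.
groups = defaultdict(list)
for report, tables in report_tables.items():
    groups[tables].append(report)

candidates = [sorted(reports) for reports in groups.values() if len(reports) > 1]
print(candidates)  # prints: [['daily_sales_v1', 'daily_sales_v2', 'store_sales_dup']]
```

Even this naive grouping shows how a ~30% reduction in downstream applications can fall out of lineage data: duplicate consumption patterns become visible the moment reports are indexed by what they actually read.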
"Using SHIFT™ Crawler, we identified all in-scope DataStage ETL jobs within 1 week by automatically scanning over 62 million lines of code."
A large bank needed to triage over 62 million lines of DataStage ETL code to identify which pipelines needed to be migrated from the legacy data warehouse to the cloud. In addition to the large volumes, the company had no visibility into which pipelines fed which applications, and the types of transformations happening throughout the data flow.
Not knowing which pipelines were relevant for the legacy data warehouse was a huge operational risk to other applications reliant on those data feeds.
Using Crawler360™, we identified all in-scope DataStage ETL jobs within 1 week – and quickly built a lineage view to determine read/write dependencies throughout the application.
This not only allowed the migration program to continue without risk to other systems, but also accelerated testing cycles: knowing the read/write dependencies made it possible to efficiently orchestrate the execution of pipeline groups and compare data results between the cloud and legacy environments prior to cut-over.
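Orchestrating pipeline groups from a lineage view is, at its core, a topological ordering problem: if group B reads a table that group A writes, A must run first. A minimal sketch using Python's standard-library `graphlib` is below; the group names and dependency edges are hypothetical, standing in for the read/write dependencies a crawler would discover.

```python
from graphlib import TopologicalSorter

# Hypothetical dependencies: each key is a pipeline group, each value is
# the set of groups whose output tables it reads (its predecessors).
deps = {
    "staging_load":    set(),
    "customer_merge":  {"staging_load"},
    "orders_merge":    {"staging_load"},
    "reporting_marts": {"customer_merge", "orders_merge"},
}

# static_order() yields an execution order that respects every dependency,
# so legacy and cloud runs can be replayed consistently for comparison.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

In practice the sorter's `get_ready()`/`done()` interface can also batch independent groups (here, the two merge groups) for parallel execution, shortening each test cycle further.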
Data Platform Modernization
"Next Pathway helped us plan a migration strategy to the cloud that was most effective for our organization."
A global insurance company had ambitions to move to the cloud, but had a large investment in an on-prem Hadoop enterprise data lake and wasn’t sure how to reconcile the two. All data was already loaded into the lake and being consumed by many lines of business.
Also, some business units were beginning to launch departmental cloud solutions that circumvented existing enterprise data governance controls.
We implemented a unique reference architecture that allowed for a self-service consumption pattern for all business units to move data from the on-prem lake to the cloud.
Further, we implemented an enterprise data model that allowed on-prem data to be modeled in such a way as to satisfy most enterprise consumption requirements, and enabled consumers to conduct their specific business transformations within the cloud – lowering compute costs on the enterprise data lake.