Dell IT

An App and Workload Strategy for the Cloud (Dell IT Cloud Journey Series)

By Kevin Herrin, Vice President, IT Infrastructure Platform & Engineering, Dell Technologies | March 17, 2020

In Part 1 of this Dell IT Cloud Journey Series, Follow Us to the Cloud Using Dell Technology: Our Multi-Year Journey Is Mapping the Way, I shared the details relating to Dell Digital’s multi-year journey to create a brand new cloud-based infrastructure to support Dell’s business needs in the wake of one of the biggest mergers in tech history.

In this second blog, I’ll be discussing application and workload strategy for the cloud.

Like every effort that involves building and/or transforming things, using the right tool for the right job is paramount to successfully transitioning your traditional data center to a multi-cloud environment. For Dell Digital, determining which tools to use across our evolving multi-cloud landscape starts with our application and workload strategy.

Dell Digital, Dell’s IT operation, is currently working to modernize hundreds of applications in our legacy data center and to host the resulting workloads in our new cloud environment using an extensive toolbox of our own Dell Digital tools.

It is a time-consuming process focused on guiding application owners in determining the best path to modernization as they move to the cloud. Their demands ultimately determine Dell Digital’s infrastructure selections as we continue to build out multi-cloud infrastructure platform services to support our business.

Here’s a look at our application strategy and how we are shaping resulting workloads and leveraging multi-cloud flexibility going forward.

Forging an App Rationalization Plan

In the wake of its mega-merger with EMC, Dell Technologies started its app rationalization with some 3,000 applications in its data center environment. Of those, just over 1,100 are custom applications (not third-party tools) that could be considered for modernization.

We are using a lifecycle approach to rationalize our legacy applications, determining which applications will remain and which will retire as a first step. For the ones that will remain into the future, we are working with app owners to decide whether they will be refactored into a more modern cloud-based and/or microservices architecture, or whether they will simply be modernized in terms of the technology stack and rehosted in virtual machines or containers.
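The lifecycle triage described above can be sketched as a simple decision function. This is purely illustrative, with hypothetical field names like `still_needed` and `owner_funds_refactor`; it is not Dell Digital’s actual tooling.

```python
# Illustrative sketch of the rationalization triage: retire apps with no
# future, refactor custom apps whose owners can invest in a rebuild, and
# modernize-and-rehost the rest. Field names are assumptions for the example.

def rationalize(app):
    """Map a legacy app to a disposition: retire, refactor, or rehost."""
    if not app["still_needed"]:
        return "retire"
    if app["custom_built"] and app["owner_funds_refactor"]:
        # Rebuild as cloud-native microservices (e.g. on a PaaS).
        return "refactor"
    # Otherwise modernize the technology stack and rehost in VMs/containers.
    return "rehost"

apps = [
    {"name": "legacy-reports", "still_needed": False,
     "custom_built": True, "owner_funds_refactor": False},
    {"name": "order-portal", "still_needed": True,
     "custom_built": True, "owner_funds_refactor": True},
    {"name": "hr-suite", "still_needed": True,
     "custom_built": False, "owner_funds_refactor": False},
]
for app in apps:
    print(app["name"], "->", rationalize(app))
```

In practice each branch involves detailed conversations with the app owner, but the first-pass decision structure looks roughly like this.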

Based on these options, we know what new workloads we need to host in our modern cloud, across a spectrum of options: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) with Pivotal Platform (now part of VMware Tanzu) and Pivotal Container Service (now VMware Enterprise PKS), Infrastructure-as-a-Service (IaaS) on our virtualized VM layer, physical servers, and services in the public cloud.

One thing we are avoiding is lifting and shifting apps in place without getting them onto modern versions of the technology stack (operating system, databases, etc.). We’re paying down a lot of legacy technical debt as we move into these new clouds, and we use access to these modern, flexible and efficient infrastructure services as an incentive for application teams to modernize as they move workloads into the cloud.

There’s an App Factory for That

In an ideal world, we’d prefer all our apps to be refactored on Pivotal Platform, a turnkey cloud-native framework that provides the most automated, self-healing and efficient lifecycle management features in the cloud. Pivotal Platform allows seamless upgrades of the technology stack beneath the application, so our services always benefit from the latest versions of our technology without having to cope with long outage windows for upgrades.

Pivotal Platform also drives our strategy of breaking apps down into their smallest logical components or microservices, allowing them to be hosted in the most agile way.

The biggest barrier to app owners opting to refactor with Pivotal Platform is that a significant architecture refactor of an application can be a heavy lift. It takes time and money, and it has a steep learning curve for teams that don’t do refactoring for a living. Having engineers learn the lessons of refactoring for the first time makes the journey more difficult.

Enter a unique approach to overcoming that hurdle: Dell IT has created its own App Factory, a SWAT team of refactoring engineers who help refactor apps on Pivotal Platform and also provide a “cookbook” of refactoring recipes the broader organization can use to gain Pivotal Platform insights.

The idea is that this group of refactoring engineers already knows Pivotal Platform and the challenges that refactoring poses and can rewrite legacy apps much more quickly. In fact, the average app refactoring time is 30 days, compared with up to nine months on average by app owners themselves.

Some 430 apps have been singled out to be migrated to the Pivotal Platform. Approximately 100 have been refactored, of which some 25 used the App Factory.

Other Tools in the Toolbox

Those app owners not yet ready to refactor as cloud native with Pivotal Platform can choose to repackage their apps using Pivotal Container Service (PKS), which requires fewer changes than cloud native but still offers cloud enablement features like self-healing and enhanced flexibility. They can also choose Infrastructure-as-a-Service (IaaS), where we move modernized versions of their app components to our virtualized machine layer running VMware’s ESX server virtualization platform. Some apps will still require physical servers in our new environment, and some may use public cloud or Software-as-a-Service (SaaS), where appropriate.

An app can have workloads that span all of those tools if they are broken up into components.

Hosting some apps in the public cloud can also be a tool for Dell Digital to gain flexibility in managing capacity and costs, particularly since our VMware technology provides seamless access between public and private environments.

We use the public cloud when we don’t have a capability that business users need, to expand capacity when we either have timeline constraints or we only need capacity temporarily, and when we lack presence in a geography where we need hosting. However, we are always looking for ways to bring workloads back on-prem whenever possible because we can provide lower cost hosting and better performance.

Overall, our multi-cloud technology gives us ultimate mobility to move things wherever we want.

The Art of Shaping Workloads

As our workload in the cloud evolves, we are honing our hosting strategy. We leverage a mix of hyper-converged VxRail technology and configurations to meet workload requirements, grouping infrastructure components into clusters around common use cases. We are in the midst of an effort to enhance our strategy by tailoring those cluster configurations to the shape of workloads we are hosting to maximize utilization and efficiency.

This approach also minimizes costs. Different workloads have different CPU and memory (RAM) requirements depending on how the app functions. Since RAM is the most expensive component on a server, it is important to make sure it is consumed before the CPU is. If a workload uses up CPU first, the remaining RAM is “stranded,” greatly increasing the effective cost of each VM.

We are striving to better manage our systems to shape the ratio of CPU to RAM to workload needs and avoid wasting RAM. The art of shaping workloads is fine-tuning those ratios across hundreds of infrastructure clusters.
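The stranded-RAM effect is easy to quantify with a back-of-the-envelope calculation. The sketch below is illustrative only (the host and workload sizes are made-up numbers, not Dell Digital’s actual configurations): it packs identical workloads onto a host and reports how much RAM is left unusable once vCPU capacity runs out.

```python
# Illustrative sketch: estimating "stranded" RAM on a host when vCPU
# capacity is exhausted before memory. All sizes are hypothetical.

def stranded_ram_gb(host_vcpus, host_ram_gb, workload_vcpus, workload_ram_gb):
    """Return GB of RAM left unusable once vCPUs run out.

    Assumes identical workloads are packed onto one host until either
    vCPU or RAM capacity is exhausted.
    """
    fit_by_cpu = host_vcpus // workload_vcpus
    fit_by_ram = host_ram_gb // workload_ram_gb
    vms = min(fit_by_cpu, fit_by_ram)
    used_ram = vms * workload_ram_gb
    # RAM is stranded only when CPU is the binding constraint.
    return host_ram_gb - used_ram if fit_by_cpu < fit_by_ram else 0

# A CPU-heavy workload (4 vCPU : 8 GB) on a 64-vCPU, 512 GB host:
# only 16 VMs fit by CPU, consuming 128 GB and stranding 384 GB.
print(stranded_ram_gb(64, 512, 4, 8))    # 384
# Matching the cluster shape to a 4 vCPU : 32 GB profile strands nothing:
print(stranded_ram_gb(64, 512, 4, 32))   # 0
```

Shaping a cluster, in these terms, means choosing host CPU:RAM ratios so the second case is the norm rather than the first.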

Summary

As we continue to build our multi-cloud environment, Dell is providing the ultimate tool kit to meet whatever challenges we face. In my next blog, I will look at how cloud platform as a service can help you maximize the value of what your smart people are doing.

What strategic steps is your organization taking to modernize applications and host the resulting workloads in the cloud?

Blogs in this Series

Follow Us to the Cloud Using Dell Technology: Our Multi-Year Journey Is Mapping the Way (Dell IT Cloud Journey Series)

About Kevin Herrin


Vice President, IT Infrastructure Platform & Engineering, Dell Technologies

As Vice President of Infrastructure Platform & Engineering at Dell Digital (Dell’s IT organization), Kevin Herrin is responsible for global data center, platform, database, public and private cloud, network, voice, and call center telephony infrastructures. He is also responsible for the financial management of those infrastructure platforms.

Kevin is a strategic, multi-dimensional technology executive, with a proven track record of delivering profitable growth and organizational impact through large-scale, disruptive technology and its business applications. He is an energetic change agent and visionary respected for his skills in structuring and growing complex teams, operating in chaotic environments, and developing relationships with colleagues, clients and executives.

Prior to joining Dell, Kevin was founder and CEO of Technology Pathfinders Consulting, a consulting firm offering client organizations DevOps, Cloud Engineering and Transformation, and Operational Optimization. He has also held a variety of engineering and leadership roles at Virtustream, EMC, VMware and AT&T.
