
Cloud Technology Portfolio (Part 1)

Abstract


A small number of global companies have directed their resources and finances toward building large-scale data centers capable of housing IT hardware and software for consumption by businesses worldwide. These resources can entirely replace an organization's on-premises IT hardware requirements, allowing it to migrate existing applications and data to the cloud while using the solution provider's resources as the infrastructure foundation for the business.


However, extensive planning, strategy, and skillsets are required for a cloud migration to be successful. These concepts are explored further in this paper and compared to best practices for cloud project portfolio management, existing architectures, and framework processes. A combination of peer-reviewed articles, articles from reputable publications, and my personal knowledge in this field has been used to compile this paper.


Basics of Cloud Computing


The cloud industry has a current global market size of $210 billion (“Gartner Forecasts”, 2019). This figure is expected to grow to roughly $331 billion by 2022 – a 58% increase over a three-year period. This evolution of technology has pushed the leaders responsible for managing project portfolios in the Information Technology field to ensure that business operations transition to technologically driven processes (Bayrak, 2018). Over the past decade especially, these processes have been migrating from on-premises hardware and software onto compute resources that are maintained and housed by cloud service providers. The providers establish data centers that are globally distributed and can offer resiliency, speed, and back-up protection for organizations (Knorr, 2018). While the underlying technologies are similar to those of on-premises systems, organizations and IT professionals must learn new terminology and how to provision, maintain, scale, and secure these resources while adhering to company policies.
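The projected growth above can be checked with simple arithmetic; the two market figures are the ones cited in the text, and the percentage falls out directly:

```python
# Sanity-checking the cited market growth (figures from "Gartner Forecasts", 2019).
market_2019 = 210  # current global market size, in $ billions
market_2022 = 331  # projected 2022 market size, in $ billions

growth = (market_2022 - market_2019) / market_2019
print(f"{growth:.0%} increase over the three-year period")  # prints: 58% increase over the three-year period
```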

The computational resources provided by data centers allow businesses to pay for servers and compute capacity as they consume it. This shift in the IT model allows for considerable flexibility, speed, and potential cost reductions:


(1) Flexibility comes from the ability to shift from one software stack to another without significant impact on the business or IT. For example, if a chosen architecture does not work, the organization can pivot to a new cloud architecture without having to return hardware or renegotiate user licensing contracts.


(2) Speed comes from an organization's ability to spin up or tear down a new server, complete with a long list of pre-installed software, within a matter of minutes.


(3) Lastly, cost reductions come from a lower total cost of ownership for the IT infrastructure, particularly in capital expenditures (Capex). Companies pay as they consume resources rather than making significant upfront investments in equipment and labor.
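The cost trade-off described above can be made concrete with a minimal sketch. Every figure in it (hardware price, annual labor cost, hourly rate, usage hours) is an illustrative assumption, not vendor pricing:

```python
# Hypothetical comparison of upfront (Capex) vs. pay-as-you-go (Opex) costs.
# All figures are illustrative assumptions, not real vendor pricing.

def on_premises_cost(hardware: float, annual_labor: float, years: int) -> float:
    """Upfront hardware purchase plus ongoing labor for the period."""
    return hardware + annual_labor * years

def cloud_cost(hourly_rate: float, hours_per_year: float, years: int) -> float:
    """Pay only for the hours the servers actually run."""
    return hourly_rate * hours_per_year * years

capex = on_premises_cost(hardware=100_000, annual_labor=20_000, years=3)
# A workload running ~8 hours per business day (~2,000 hours/year)
opex = cloud_cost(hourly_rate=2.50, hours_per_year=2_000, years=3)

print(f"On-premises, 3 years: ${capex:,.0f}")  # $160,000
print(f"Cloud, 3 years:       ${opex:,.0f}")   # $15,000
```

The gap narrows for workloads that run around the clock, which is why migration planning should start from actual utilization rather than assumed savings.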


The five largest data center proprietors globally are currently Amazon, Microsoft, Google, IBM, and Oracle (Drake, 2019). For over a decade, these five companies and 15 others have made significant financial investments in orchestrating globally scalable, connected hardware. Dubbed the Capex kings of the IT industry, these 20 organizations collectively spent over $120 billion on building, expanding, and updating data centers throughout the world in 2018 (Haranas, 2019). Most of the 20 companies have increased their Capex investments year over year for multiple years in a row. As of the end of 2018, there were 439 such data centers worldwide, with 100 new data centers under construction in 2019. Across the 20 companies, this averages to roughly 22 data centers per organization, and the buildings themselves are relatively inexpensive because they are often constructed in old warehouses or situated in low-cost real estate locations.
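The per-organization average cited above follows directly from the figures in this paragraph:

```python
# Back-of-the-envelope check of the data center figures cited above.
data_centers_2018 = 439  # worldwide, end of 2018 (Haranas, 2019)
capex_kings = 20         # the 20 largest data center investors

avg_per_company = data_centers_2018 / capex_kings
print(round(avg_per_company))  # prints 22, i.e. roughly 22 data centers per organization
```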
