Transitioning Data Analytics to the Cloud
The preceding sections have laid out the reasoning for positioning a corporate strategy in favor of cloud-based analytics. To facilitate this transition, a corporate review process should take the following into consideration (Gatehouse & Millman, 2016):
Cloud Options: If migrating to the cloud provides value to the business, the first decision is which cloud service provider to use, followed by how the data will be moved and secured.
Data Analysis Needs: Based on how its data will be analyzed, an organization must choose between vendor analytics solutions available in the market and custom-developed solutions.
IT Responsibilities: IT support and involvement are critical in cloud deployments in order to maintain and govern the company’s core technology platform.
Risk Management: As data becomes one of the company’s largest assets, comprehensive policies, training, and access controls are required to keep the firm secure.
In the digital era, addressing these topics will ensure businesses remain competitive by adapting quickly to changing market conditions. Organizations make these decisions in either a centralized or a decentralized manner, and this should be taken into account when conducting the review process, as the two approaches differ in important ways (Gatehouse & Millman, 2016).
Using Analytics to Improve Business Offerings
Integrating analytics into business offerings to optimize the customer experience has become a competitive advantage and, within the next decade, will become a business necessity (Gatehouse & Millman, 2016). As this MBA program is offered online, Saint Mary’s University of Minnesota can benefit from analytics by providing students with a personalized online experience. A study of technology-enhanced learning (TEL) found that cloud-based big data analytics services allow organizations to extract actionable insights that improve students’ learning and comfort with technology (Shorfuzzaman, Hossain, Nazir, Muhammad, & Alamri, 2019).
The path to improving business offerings involves many considerations, but the top four must include:
Cost: The total cost of ownership (TCO) an organization anticipates for an on-premises, hybrid, or private cloud model should be assessed over three- and five-year time horizons. Research and real-world business outcomes have shown that cloud deployments are cheaper than equivalent deployments built on-premises (Buyya, et al., 2015). This is due to several factors: there is no need to pay for servers, storage units, hardware racks, wiring, the associated installation labor, and more. Hardware failures, software updates, and resource monitoring are further expenses absorbed by the cloud service provider. However, without comprehensive forecasting, an organization can find itself paying substantially more for cloud resources than it anticipated, or worse, more than an on-premises deployment would have cost (a simple worked comparison follows these considerations).
Scale: Global scale and control of scalability variables are important factors to consider. Cloud computing allows any organization to scale with a few clicks of a button. For example, when architecting for a global organization, keeping servers and resources close to end-users is important. Latency, the time it takes for data to travel from one location to another, is a critical measure for businesses to monitor (Buyya, et al., 2015). A few seconds of change in latency can shift business profitability by millions of dollars, or tens of millions when considering Black Friday sales for companies such as Amazon, Walmart, or Best Buy. Amazon’s DynamoDB and Azure’s Cosmos DB are globally accessible, scalable data storage solutions. Replicating the data across multiple data centers around the world keeps latency in the low milliseconds and sustains high customer satisfaction through fast read/write transactions (see the routing sketch following these considerations).
Reliability: Enhanced reliability and security are among the final considerations discussed in this paper. Improved reliability is discussed in terms of the Service Level Agreement (SLA), which in cloud terms is referred to as “up-time” (Rouse, 2018). For example, a 99.9 percent SLA requires that the resource(s) be available to customers 99.9 percent of the year. Converted to minutes, that allows roughly 8 hours and 45 minutes of downtime per year. Increasing the SLA by one additional “9”, to 99.99 percent, shortens the allowable downtime to about 52 minutes a year (the conversion is worked out in the sketch following these considerations). For a sizable organization whose customers spend hundreds of thousands of dollars a minute, every minute of downtime is critical, and even 99.9 percent, or “three nines”, can carry consequences worth millions of dollars. While SLAs can reimburse the organization for the cost of resources during downtime, they do not cover the revenue lost when customers cannot access a website. This is why solution architects focus closely on designing infrastructure for high availability and high reliability.
Trust: A detailed study on the cost-benefit analysis of a company’s intentions to migrate to the cloud revealed that trust and risk were the most important features that businesses considered in reaching these decisions (Chang & Hsu, 2019). In other words, the study concluded that increased trust in cloud providers decreases the perceived risk to the organization.
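To make the cost consideration concrete, the following is a minimal Python sketch of a three- and five-year TCO comparison. All dollar figures are hypothetical assumptions for illustration only, not figures drawn from Buyya et al. (2015) or from any vendor’s pricing.

```python
# Hypothetical TCO comparison: on-premises vs. pay-as-you-go cloud.
# All figures are illustrative assumptions, not vendor pricing.

def on_premises_tco(years: int) -> float:
    """Up-front hardware plus recurring operations costs."""
    upfront = 250_000      # servers, storage, racks, wiring, installation labor
    annual_ops = 60_000    # admin labor, power, cooling, hardware-refresh reserve
    return upfront + annual_ops * years

def cloud_tco(years: int, monthly_spend: float = 9_000) -> float:
    """Pay-as-you-go spend; the provider absorbs hardware failures and patching."""
    return monthly_spend * 12 * years

for horizon in (3, 5):
    print(f"{horizon}-year TCO  on-premises: ${on_premises_tco(horizon):,.0f}"
          f"  cloud: ${cloud_tco(horizon):,.0f}")
```

Under these assumptions the cloud is cheaper at both horizons, but the gap narrows over time, and a higher assumed monthly spend is enough to reverse the result, which is why the forecasting caveat above matters.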
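The scale consideration can likewise be illustrated with a minimal sketch of nearest-replica routing against a globally replicated store such as DynamoDB or Cosmos DB. The region names and round-trip times below are hypothetical assumptions, not measured values.

```python
# Sketch of routing reads to the nearest replica of a globally replicated
# data store (e.g., a DynamoDB global table or a Cosmos DB account).
# Region names and round-trip latencies are hypothetical assumptions.

MEASURED_LATENCY_MS = {
    "us-east-1": 28,        # assumed round trip from a Midwest-based user
    "eu-west-1": 95,
    "ap-southeast-1": 210,
}

def nearest_replica(latencies_ms: dict) -> str:
    """Return the replica region with the lowest observed round-trip latency."""
    return min(latencies_ms, key=latencies_ms.get)

region = nearest_replica(MEASURED_LATENCY_MS)
print(f"Route reads to {region} ({MEASURED_LATENCY_MS[region]} ms round trip)")
```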
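Finally, the figures quoted under the reliability consideration follow directly from converting an up-time percentage into minutes of allowable downtime per year, as the short sketch below shows.

```python
# Convert an SLA "up-time" percentage into allowable downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowable_downtime_minutes(sla_percent: float) -> float:
    """Minutes per year a resource may be down while still meeting the SLA."""
    return MINUTES_PER_YEAR * (1 - sla_percent / 100)

for sla in (99.9, 99.99):
    minutes = allowable_downtime_minutes(sla)
    hours, rem = divmod(minutes, 60)
    print(f"{sla}% SLA -> {minutes:.1f} minutes of allowable downtime per year "
          f"(about {int(hours)} h {int(rem)} min)")
```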