Tag: Digital Transformation

Connects historical computing insights to broader technological shifts.

  • Cloud Adoption – Driving Digital Transformation with Strategies and Innovation


    The adoption of high-value, high-impact technologies such as AI and the cloud should always follow a roadmap aligned with the organization’s maturity level and operational context.

    It is essential that decision-makers have a clear vision and motivation to ensure the successful introduction of new technologies.

    Forward-thinking organizations should also consider the post-adoption phase: what happens after the initial goal is achieved, which in a first adoption phase is usually intermediate or exploratory.

    A consolidation phase should be planned, where service adjustments identified during the adoption phase, but not initially foreseen, can be implemented.

    Simultaneously, expansion or evolution phases can be considered.

    In the case of cloud, it is now common to think in terms of consolidation or expansion phases. However, in some cases where expected results are not met, cloud services may even be decommissioned.

    In contrast, AI adoption is still in an early stage for many organizations, with some in a standby phase, waiting to observe results from other experiences.



    The Cloud Adoption Process

    Cloud adoption typically unfolds in three phases.

    The first phase is the actual adoption, where the organization defines objectives for which the cloud is deemed central. The motivations behind a cloud adoption project can vary widely: experimental, tactical, or part of a long-term strategy.

    Cycle of cloud adoption phases

    The key factor determining the success of the adoption phase is the realistic definition of expected outcomes.
    This success factor is strongly influenced by the organization’s posture toward the adoption project. Has the company already gained experience at the technical, administrative, and managerial levels?
    This, in turn, depends on the organization’s cloud maturity level and its awareness that cloud requires a different service model from traditional IT.
    A sensible approach to cloud adoption involves clear goal setting. If your primary objective is cost reduction, you have chosen the most challenging goal—sometimes even unfeasible within a cloud adoption process.
    If this is your goal, there are only two possibilities: either you have been using a well-configured cloud-native information ecosystem for some time, or you have little experience with cloud.

    To properly assess cloud adoption, organizations should answer five key questions:
    • Why?
    • What?
    • Who?
    • When?
    • Where?

    Each response must be precise and well-defined.
    Cloud adoption should not be seen as a strategy in itself but rather as a tactical step that becomes strategic over time.
    Consider the analogy of home renovation. If you decide to renovate your entire home inside and out, you may need to temporarily live elsewhere while the construction takes place, ensuring that work follows the agreed-upon project plan.
    For most organizations, such a scenario is impractical.
    Instead, cloud adoption is more akin to renovating one room at a time—accepting temporary inconveniences such as noise, dust, interruptions, and the presence of external workers in the house.
    The transition from an on-premises to a cloud ecosystem is similar: careful planning and incremental changes ensure a smoother process and a better final outcome.

    Diagram showing the impact of cloud adoption on an IT ecosystem, transitioning from a local setup to a hybrid ecosystem where component A1 moves to the cloud.

    Cloud adoption reshapes the ecosystem: applications migrate to the cloud, evolving traditional architectures into hybrid ecosystems.

    In the figure, a highly schematic representation illustrates the state transition of an ecosystem due to cloud adoption.

    At time t0, the ecosystem operates in a traditional configuration, executing two processes, A and B. Each process relies on specific services:

    • Process A utilizes services A1 and A2.
    • Process B utilizes services B1 and B2.

    Process B was introduced after process A, and service B1 has a functional and operational dependency on service A1 (e.g., data flows, APIs, functions, or other dependencies).

    For specific business or technical reasons, the decision is made to migrate service A1 to the cloud.

    By the end of the adoption process, at time t1, the final ecosystem has transitioned into a hybrid model, where A1 now resides in the cloud.

    But what happened to the ecosystem during the transition from t₀ to t₁?

    Step-by-step diagram of the cloud adoption transformation phase, showing how an ecosystem evolves over time (t₀ to t₁) into a hybrid ecosystem with components migrated to the cloud.

    The transformation phase of cloud adoption: applications gradually migrate, reshaping the ecosystem into a hybrid model over time.

    In the figure, the cloud adoption process is depicted through key milestones, capturing the ecosystem’s transformation.

    If, instead of A1, service A2 were migrated first, the complexity of the adoption process would increase significantly.

    Service A2 has multiple dependencies. Moving it first would extend the adoption timeline due to the additional integrations required between cloud-based and on-premises components.

    Interestingly, in real-world scenarios, organizations often face the challenge of migrating A2 rather than A1. Core services like A2 are frequently used by multiple other services and may need to be made accessible to cloud-based services already in place.
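
    A minimal code sketch can make this dependency reasoning concrete. The model below is purely illustrative: the service names mirror the figure, and the extra dependencies assigned to A2 (as an entangled core service) are assumptions for demonstration. It counts how many links would span the cloud/on-premises boundary after a given migration choice.

```python
# Illustrative model of the A/B ecosystem as a dependency graph.
# Directed edges point from consumer to provider (B1 depends on A1, etc.).
DEPENDENCIES = {
    "A1": [],
    "A2": [],
    "B1": ["A1"],   # B1 has a functional dependency on A1, as in the figure
    "B2": [],
}

# Assumed variant in which A2 is a core service with many consumers:
DEPENDENCIES_CORE = {"A1": ["A2"], "A2": [], "B1": ["A1", "A2"], "B2": ["A2"]}

def cross_boundary_links(deps, migrated):
    """Count dependencies that would cross the cloud/on-premises boundary
    if the services in `migrated` moved to the cloud."""
    return sum(
        1
        for consumer, providers in deps.items()
        for provider in providers
        if (consumer in migrated) != (provider in migrated)
    )

# Migrating the peripheral A1 first creates one cross-boundary link (B1 -> A1);
# migrating the core A2 first creates three, hence the longer adoption timeline.
print(cross_boundary_links(DEPENDENCIES, {"A1"}))       # 1
print(cross_boundary_links(DEPENDENCIES_CORE, {"A2"}))  # 3
```

    Each cross-boundary link is an integration that must be secured, monitored, and paid for in latency and egress, which is why this count is a rough proxy for migration complexity.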

    A common example is a data warehouse, a data mart, or a dataset generated from a complex SQL view or procedure executed in real-time.

    Typically, the more valuable the data, the more it becomes entangled within multiple application layers, incoming and outgoing data flows, and business logic layers.

    Older hosting technologies tend to accumulate these layers over time, making migration increasingly complex.

    Not all cases follow this pattern, but in many situations, organizations prioritize immediate cost and time savings, leading to layered and entangled legacy systems.

    If an organization lacks cloud maturity and experience, it is advisable to start with peripheral scenarios before tackling core components.

    However, budget constraints often mean that smaller, peripheral projects receive less funding, limiting their ability to serve as meaningful test cases.

    Eventually, the need arises for data and services to be shared across the organization. This is where different cloud adoption scenarios emerge, which we will explore in the next chapter.

    A recommended best practice is to include one or two low-risk migration projects (such as A1) in the early phases of a larger cloud adoption initiative. This allows organizations to gain experience and refine their migration process before addressing more complex cases.

    The Next Phase After Cloud Adoption: Consolidation

    After the adoption phase, if the process has been successful, the organization enters the consolidation phase. Around the newly implemented cloud ecosystem, service extensions begin to emerge, generally aimed at improving efficiency, optimizing costs, and enhancing overall effectiveness.

    A service built in the cloud has a smoother evolution: if designed with a cloud-native approach, it will be easier to extend and enhance over time.

    If this phase also yields satisfactory results, the next step is expansion.

    At this point, the organization may embark on a full-fledged race towards the cloud, often accompanied—or even preceded—by the adoption of artificial intelligence. This dynamic is reminiscent of the gold rush in the American West: exciting, full of opportunities, but also highly delicate from an IT and FinOps perspective.

    In this phase, the cloud-native ecosystem may evolve into a hybrid or multi-cloud model, a scenario that, while representing a natural evolution, introduces new risks and complexities.

    The initial core of the cloud ecosystem expands with new resource clusters, designed to meet emerging business needs. At the same time, other business areas begin exploring the cloud, creating additional resource clusters. In these early stages, each cluster typically consists of only a few dozen resources, allocated according to the operational needs of each process.

    This is the moment when an organization can take a strategic step and decide to migrate entire business processes to the cloud ecosystem, firmly establishing cloud adoption. However, managing expansion across multiple business lines requires a high level of cloud maturity and a strong grasp of the FinOps framework to maintain cost control and ensure operational sustainability.

    In general, the cloud adoption journey can be classified as either digital transformation or innovation, depending on the nature of the business being migrated.

    A Special Case: Cloud-Specialized Services

    For years, marketing and communication departments have been using tools such as Google Analytics. For these users, extending their infrastructure with a service like BigQuery is a natural step, often quickly leading to integration with Google’s Gemini AI. Once inside this ecosystem, alternatives become increasingly limited, and the trajectory of technological evolution is, in practice, guided by the service provider.

    Cloud Adoption Models

    Cloud adoption strategies can be classified into different categories, each describing how organizations migrate or adopt applications and infrastructure in the cloud.

    There are various ways to represent these models, and the following classification does not claim to be exhaustive or definitive for all possible cloud adoption strategies (or tactics).

    Rehost (Lift and Shift)

    The Rehost strategy involves migrating existing applications and infrastructure to the cloud without significantly modifying their architecture. This approach is quick and allows resources to be moved from on-premises data centers to the cloud with minimal changes.

    Operational Example:

    • Moving a legacy application to a virtual machine on AWS EC2 or Azure VM without altering its code.

    Advantages:

    • Fast migration times.
    • Low initial complexity.

    Disadvantages:

    • Limited efficiency gains.
    • Higher operational costs.
    • Does not fully leverage cloud-native capabilities like auto-scaling and managed services.

    Refactor (Replatform)

    The Refactor or Replatform strategy involves optimizing or modifying parts of an application to better leverage cloud services, without fully rewriting it. Minor changes to the code or infrastructure can improve efficiency and scalability.

    Operational Example:

    • Migrating an application to a managed database service like Amazon RDS or Azure SQL, eliminating the need for on-premises database management.

    Advantages:

    • Improved performance and scalability.
    • Lower operational costs compared to Rehost.
    • No need for a complete rewrite.

    Disadvantages:

    • More complex than Rehost.
    • Requires additional effort for adaptation.

    Repurchase (Drop and Shop)

    With the Repurchase strategy, an organization replaces its legacy applications with ready-to-use SaaS (Software as a Service) solutions. This means abandoning existing infrastructure and directly adopting cloud-native solutions.

    Operational Example:

    • Replacing an internal CRM system with a SaaS solution like Salesforce.

    Advantages:

    • Significant reduction in management and maintenance costs.
    • Immediate access to modern solutions with automatic updates.

    Disadvantages:

    • Loss of customization and control over the application.
    • Data migration challenges.

    Rebuild (Re-architect)

    The Rebuild strategy involves completely rewriting an application to fully exploit cloud-native capabilities. This approach allows rethinking architecture using microservices, containers, and serverless technologies.

    Operational Example:

    • Transforming a monolithic application into a microservices-based architecture deployed on Kubernetes (EKS/AKS) or using AWS Lambda serverless functions.

    Advantages:

    • Maximum benefit from cloud scalability, flexibility, and resilience.
    • Complete modernization of the application.

    Disadvantages:

    • Long and costly development process.
    • Requires specialized skills and significant resources.

    Retire

    With the Retire strategy, an organization decommissions or removes obsolete applications or infrastructure. Sometimes, during migration planning, certain applications are found to be redundant and can be eliminated.

    Operational Example:

    • Decommissioning an old application that is no longer in use or has been replaced by a more efficient solution.

    Advantages:

    • Cost reduction from eliminating maintenance of unused systems.
    • Simplification of the IT landscape.

    Disadvantages:

    • Possible resistance from teams still relying on the retired application.
    • Potential loss of historical data.

    Retain (Hybrid)

    The Retain strategy involves keeping certain applications or data on-premises due to security, compliance, or operational dependencies on legacy systems. Organizations adopting this approach often manage a hybrid infrastructure, using both cloud and on-premises resources.

    Operational Example:

    • Keeping an ERP system on-premises while migrating less sensitive applications to the cloud.

    Advantages:

    • Flexibility in maintaining critical applications on-premises.
    • Compliance with security and regulatory requirements.

    Disadvantages:

    • Increased management complexity.
    • Higher operational costs.
    • Challenges in integrating cloud and on-premises data.

    New Application (Cloud-native Development)

    With this strategy, new applications are developed directly in the cloud, following a cloud-native approach from the start. This model takes full advantage of PaaS (Platform as a Service) and SaaS capabilities.

    Operational Example:

    • Building a new application using AWS Lambda, DynamoDB, and S3, eliminating the need for physical servers.

    Advantages:

    • Maximum flexibility and scalability.
    • Optimal use of modern cloud technologies.

    Disadvantages:

    • Requires cloud-native development expertise.
    • High initial investment in development.

    Evaluating Cloud Adoption Strategies: Benefits and Risks

    These strategies allow organizations to gradually adopt the cloud according to their operational and technological needs. Each approach has its benefits and challenges, and the choice depends on factors such as cost, complexity, internal expertise, and business objectives.

    Each cloud adoption strategy presents distinct benefits and risks. The selection depends on an organization’s specific needs, technological maturity, regulatory constraints, and balance between initial costs and long-term benefits. Companies must carefully evaluate which strategy to adopt based on their priorities, capabilities, and business goals.

    Table – Analysis of Benefits and Risks for Cloud Adoption Strategies

    • Rehost (Lift and Shift). Benefits: fast migration, low initial costs, simple implementation. Risks: limited efficiency, higher operational costs, limited cloud benefits.
    • Refactor (Replatform). Benefits: optimized performance, lower operational costs, improved scalability. Risks: longer migration times, higher initial investment, need for new skills.
    • Repurchase (Drop and Shop). Benefits: simplified complexity, automatic updates, predictable costs. Risks: loss of customization, training costs, vendor lock-in risk.
    • Rebuild (Re-architect). Benefits: full cloud benefits, improved performance, high scalability. Risks: high initial costs, operational risks, long implementation times.
    • Retire. Benefits: cost savings, simplified infrastructure, increased focus on core services. Risks: loss of historical data, resistance to change, potential operational impact.
    • Retain (Hybrid). Benefits: flexibility, security and compliance, control over sensitive data. Risks: increased complexity, higher costs, data integration challenges.
    • New Application (Cloud-native Development). Benefits: maximizes cloud advantages, accelerated innovation, DevOps compatibility. Risks: requires cloud-native development expertise, high initial development investment.
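
    The trade-offs above lend themselves to a simple decision sketch. The helper below is not an official framework; the trait names and the order of the checks are assumptions chosen only to illustrate how an organization might triage its application portfolio across these strategies.

```python
# Hypothetical triage helper mapping application traits to an adoption strategy.
# Trait names and rule ordering are illustrative assumptions, not a standard.
def suggest_strategy(app):
    if not app.get("still_needed", True):
        return "Retire"
    if app.get("must_stay_on_premises"):      # security, compliance, legacy ties
        return "Retain (Hybrid)"
    if app.get("commodity_function"):         # e.g. CRM, email, ticketing
        return "Repurchase (Drop and Shop)"
    if app.get("new_build"):
        return "New Application (Cloud-native Development)"
    if app.get("strategic") and app.get("budget_high"):
        return "Rebuild (Re-architect)"
    if app.get("minor_changes_acceptable"):
        return "Refactor (Replatform)"
    return "Rehost (Lift and Shift)"          # default: move first, optimize later

print(suggest_strategy({"still_needed": False}))       # Retire
print(suggest_strategy({"commodity_function": True}))  # Repurchase (Drop and Shop)
print(suggest_strategy({}))                            # Rehost (Lift and Shift)
```

    In practice such rules would be weighted and debated per application, but even a crude triage like this forces the five key questions (why, what, who, when, where) to be answered explicitly.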

    Success Cases for Different Cloud Adoption Scenarios

    Below are some publicly known success stories, each representing a specific cloud adoption strategy on a particular cloud provider.

    Rehost (Lift and Shift) – Netflix (AWS)

    Netflix initially migrated its on-premises infrastructure to AWS using a lift-and-shift approach, moving applications without significant modifications. This transition allowed Netflix to enhance scalability and disaster recovery while reducing operational overhead. Over time, Netflix evolved its architecture to leverage more cloud-native services, but the initial move provided the foundation for its current highly resilient, global streaming platform.

    See more on https://aws.amazon.com/solutions/case-studies/netflix/

    Refactor (Replatform) – Coca-Cola (Google Cloud)

    Coca-Cola leveraged Google Cloud’s Kubernetes Engine (GKE) to refactor and optimize its vending machine order management system. By migrating its microservices architecture to a managed Kubernetes environment, Coca-Cola improved service reliability, enhanced real-time analytics, and achieved better cost efficiency through auto-scaling and optimized infrastructure usage.

    See more on https://cloud.google.com/customers/coca-cola


    Repurchase (Drop and Shop) – Royal Dutch Shell (Microsoft Azure)

    Shell opted for a SaaS-based approach by transitioning its legacy ERP systems to Microsoft Dynamics 365. This move eliminated the need for complex on-premises infrastructure management, providing Shell with a more agile and integrated business platform that supports predictive analytics, automation, and streamlined global operations.

    See more on https://customers.microsoft.com/en-us/story/royaldutchshell-energy-azure-dynamics365

    Rebuild (Re-architect) – Capital One (AWS)

    Capital One undertook a full application re-architecture by adopting microservices, serverless computing, and AI-driven automation on AWS. The company replaced monolithic banking applications with cloud-native services utilizing AWS Lambda, Amazon DynamoDB, and Amazon SageMaker for AI-driven fraud detection. This strategy resulted in improved security, better operational efficiency, and enhanced customer experience.

    See more on https://aws.amazon.com/solutions/case-studies/capital-one


    Retire – Dropbox (AWS to Private Infrastructure)

    Dropbox originally hosted its storage services on AWS but later decided to decommission parts of its cloud-based infrastructure in favor of an in-house solution called Magic Pocket. This transition allowed Dropbox to optimize its storage architecture, reduce dependency on third-party providers, and significantly cut operational costs while maintaining high-performance scalability.

    See more on https://www.wired.com/2016/03/epic-story-dropboxs-exodus-amazon-cloud-empire/

    Retain (Hybrid) – Volkswagen (Microsoft Azure + on-premises)

    Volkswagen adopted a hybrid cloud strategy by keeping critical manufacturing and vehicle telemetry data on-premises while shifting other workloads to Microsoft Azure. This approach enabled Volkswagen to comply with strict data sovereignty regulations while taking advantage of Azure’s AI and analytics services for predictive maintenance, supply chain optimization, and autonomous vehicle development.

    See more on https://customers.microsoft.com/en-us/story/volkswagen-groupmanufacturing-azure

    New Application (Cloud-native Development) – Airbnb (AWS)

    Airbnb was built from the ground up as a cloud-native platform using AWS services. By leveraging AWS EC2 for compute, Amazon RDS for database management, and Amazon S3 for storage, Airbnb ensured high scalability and global availability. Over time, it integrated AI and big data analytics to optimize search, pricing strategies, and fraud detection, making its infrastructure a benchmark for digital platform scalability and efficiency.

    See more on https://aws.amazon.com/solutions/case-studies/airbnb


    Conclusion

    Cloud adoption is not a one-size-fits-all journey but rather a progressive transformation shaped by each organization’s context, priorities, and maturity. The strategies explored — from rehosting to cloud-native development — highlight that every choice carries both opportunities and trade-offs. Success depends less on the technology itself and more on the clarity of vision, the ability to balance risks and benefits, and the willingness to foster cultural and organizational change.

    Adopting the cloud means embracing new operating models, strengthening governance and compliance, and developing the skills needed to manage complexity. Organizations that approach this transformation holistically — considering people, processes, and technology together — are better equipped to unlock the full potential of the cloud.

    Ultimately, cloud adoption is not an end point but a continuous journey. As ecosystems evolve, hybrid and multi-cloud models will become increasingly common, enabling flexibility, resilience, and innovation at scale. By aligning strategy with execution, and innovation with responsibility, organizations can transform cloud adoption from a technical migration into a true driver of digital transformation.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • How the Cloud is Built and How It Works | Essential Guide to Cloud Infrastructure & Digital Transformation


    In today’s digital world, the cloud is often perceived as an abstract concept, hidden behind the simplicity of a web interface. Yet, behind every click, there is a vast and complex infrastructure made of data centers, high-speed connections, and advanced virtualization technologies. In this article, adapted from my book Exploring Cloud-Native Ecosystems, we’ll explore the physical and logical foundations of the cloud to understand how it is truly built and how it works.



    The widespread adoption of cloud computing, as detailed in my post Cloud Adoption, would not have been possible without several enabling industrial factors:

    • The expansion of a stable, high-speed, and highly available global network infrastructure.
    • The exponential growth of computational capacity per unit of physical space, along with a reduction in equivalent energy consumption.
    • The evolution of computational models.

    We have seen that the cloud can be described through its service models and distribution models, presenting itself as a ready-to-use service for consumers.

    We have also seen how cloud resources, and therefore the entire cloud, can be summarized into a few key elements: computational power, data storage, and data transport.

    Moreover, we have seen that these characteristics are enabled by specific electronic devices.

    In reality, the cloud consists of all these components—just on a much larger scale.

    Whether public or private, cloud services are delivered through a vast network of data centers distributed worldwide, managed directly by public cloud providers.

    Each data center contains enormous stacks of computing units, such as the DGX SuperPOD (though not all of them 😊).

    What is Inside a Cloud Data Center?

    Illuminated server racks in data center

    A cloud data center is a facility that can span vast physical dimensions, as shown in the figure.

    Inside a cloud data center, we find rows of specialized servers neatly stored inside rack enclosures—tall, standardized metal cabinets designed to house multiple computing units in a compact and organized manner.

    Unlike traditional office computers, which typically have keyboards, monitors, and user interfaces for direct interaction, cloud servers are headless, meaning they lack direct input/output devices. Instead, they are designed for remote management and automated operation, ensuring maximum efficiency and scalability.

    Each server rack contains:

    • Motherboards with powerful multi-core processors (CPUs & GPUs) optimized for parallel workloads.
    • High-speed RAM (memory modules) to handle intensive data processing.
    • Storage devices (HDDs, SSDs, or NVMe drives) that provide ultra-fast access to data.
    • Network interface cards (NICs) that allow high-speed communication with other servers.
    • Redundant power supply units (PSUs) to ensure continuous operation.

    To enable seamless operation across thousands of machines, these rack-mounted servers are interconnected through high-speed data buses, forming a massively parallel computing environment.

    Key technologies enabling communication within a cloud data center include:

    1. Backplane Bus Systems – Each rack has an integrated backplane, a high-speed communication backbone that interconnects all servers within the same cabinet.
    2. High-Speed Network Switching – Servers are connected via fiber-optic networking switches, enabling low-latency data exchange between different racks and clusters.
    3. Software-Defined Networking (SDN) – Instead of relying on traditional manual network configurations, cloud providers use software-defined networking, which allows dynamic traffic routing and load balancing across the entire data center.
    4. Inter-Rack Optical Links – Since cloud computing requires extreme bandwidth, data is transmitted using fiber-optic cables inside the data center, connecting racks at speeds of 100 Gbps or higher.
    5. Distributed Storage Systems – Cloud servers don’t store data locally like personal computers. Instead, they access a distributed storage layer that spans multiple racks and even multiple data centers, ensuring redundancy and fault tolerance.
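
    The replication idea behind such a distributed storage layer can be sketched in a few lines. The rack names, hash choice, and replication factor below are illustrative assumptions; real systems use far more sophisticated placement algorithms.

```python
import hashlib

RACKS = ["rack-01", "rack-02", "rack-03", "rack-04", "rack-05"]
REPLICATION_FACTOR = 3  # assumed number of copies per object

def placement(object_key, racks=RACKS, rf=REPLICATION_FACTOR):
    """Deterministically choose `rf` distinct racks for an object,
    so the loss of any single rack never loses the only copy."""
    start = int(hashlib.sha256(object_key.encode()).hexdigest(), 16) % len(racks)
    return [racks[(start + i) % len(racks)] for i in range(rf)]

print(placement("customer-db/backup-2024-01"))  # three distinct racks
```

    Because placement is deterministic, any server can recompute where an object lives without consulting a central directory, one reason such schemes scale across thousands of machines.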

    How These Servers Work Together

    Each server in a rack is not an isolated unit but part of a cluster, working together to handle massive computational workloads. Cloud data centers are architected using the concept of hyperscale computing, meaning:

    • Workloads are dynamically distributed across multiple physical machines.
    • A single task (e.g., processing an AI model or serving a website) may run across dozens or even hundreds of servers simultaneously.
    • If one server fails, its workload is automatically shifted to another available machine, ensuring continuous service availability.
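
    The failover behavior described in the last point can be illustrated with a toy scheduler. Server and workload names are invented; the point is only that workloads survive the failure of their host.

```python
def rebalance(assignments, failed):
    """Move every workload off the failed server onto the least-loaded survivor."""
    survivors = {s: list(w) for s, w in assignments.items() if s != failed}
    for workload in assignments.get(failed, []):
        target = min(survivors, key=lambda s: len(survivors[s]))
        survivors[target].append(workload)
    return survivors

cluster = {"srv-a": ["web-1"], "srv-b": ["web-2", "ai-train"], "srv-c": []}
print(rebalance(cluster, "srv-b"))  # srv-b's workloads continue on healthy servers
```

    Real orchestrators add health checks, capacity limits, and placement constraints, but the core loop is the same: detect failure, pick a healthy target, reschedule.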

    The Role of Virtualization and Containers

    Physical servers are rarely dedicated to a single task. A virtualization layer (the hypervisor) partitions each machine into multiple virtual machines, while containers share the host operating system to package applications in an even lighter-weight form. This abstraction is what makes the dynamic workload distribution and automatic failover described above possible: workloads are not bound to specific hardware and can be created, moved, or restarted wherever capacity is available.

    The Importance of Rack Density & Cooling

    Because cloud data centers must pack thousands of high-performance servers into a limited space, rack density is a critical factor. Modern high-density racks can house:

    • 40 to 60 blade servers per rack
    • Up to 10,000 CPU cores per data hall

    This extreme density generates massive amounts of heat, requiring advanced cooling technologies, including:

    • Liquid cooling solutions that circulate coolant to dissipate heat.
    • Hot aisle / cold aisle configurations to optimize airflow and prevent overheating.
    • AI-powered energy management to dynamically adjust cooling based on real-time workloads.
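
    A back-of-the-envelope calculation shows why density makes cooling a first-order concern. The wattage figure below is an assumed average, not vendor data; the server count takes the mid-range of the 40-60 blades cited above.

```python
SERVERS_PER_RACK = 50   # mid-range of the figures above
WATTS_PER_SERVER = 500  # assumed average draw under load

def rack_heat_kw(servers=SERVERS_PER_RACK, watts=WATTS_PER_SERVER):
    # Essentially all electrical power drawn by a server is released as heat.
    return servers * watts / 1000

print(rack_heat_kw())        # 25.0 kW of heat per rack
print(rack_heat_kw() * 100)  # a 100-rack hall: ~2500 kW of cooling load
```

    Under these assumptions a single rack dissipates as much heat as roughly a dozen domestic ovens running at once, which is why liquid cooling and airflow design dominate data center engineering.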

    Geographical Distribution of the Cloud

    The geographical distribution of data centers is a key factor in service quality. Over time, alongside massive data centers, edge data centers and modular data centers have been introduced.

    A modular data center can be expanded over time by adding new units to increase computing power. This strategy is widely used by cloud providers offering public cloud services in newly developing areas, ensuring low-latency service for a limited set of cloud resources.

    However, as you might expect, the computing power of a modular container-based data center (as shown in Figure 26) cannot match that of a large-scale data center (as shown in Figure 24).

    The geographical distribution of cloud providers’ data centers follows a two-tiered structure:

    • Consumers see only the service delivery regions (referred to as regions).
    • Each region consists of multiple redundant data centers providing high availability at the regional level.

    Cloud providers do not disclose the exact physical location of data centers, mainly for security reasons.

    However, users can explore the regional maps published by each cloud provider.

    Regions, once created, gradually expand with additional cloud resources over time.

    The time required to establish a new region depends on the regulatory frameworks of the host country where the data centers for that region are located.

    Due to legislative constraints, data centers must first comply with national regulations before adhering to international standards.

    As a result, each cloud region is effectively tied to data centers within a single country.

    The creation of a new region does not immediately guarantee the availability of all cloud resources present in a long-established region.

    The cloud resource availability map for each region enables the analysis of two critical factors:

    1. Cost control – Identifying available resources within a specific region helps optimize expenses, reducing unnecessary data transfers and avoiding unexpected costs.
    2. Legal risk assessment – If a required cloud resource is unavailable in the designated national region or outside the compliance perimeter dictated by regulations, it may introduce regulatory and compliance risks.

    Moreover, data traffic between different regions, even when hosted within the same public or private cloud, can lead to higher operational costs, making strategic regional resource planning essential for both financial efficiency and regulatory compliance.
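
    The cost factor described here can be made concrete with a small estimator. The per-GB rates below are invented placeholders (real provider pricing varies by region pair and service tier); the structure of the calculation, not the numbers, is the point.

```python
# Hypothetical egress rates per GB; intra-region traffic is assumed free.
RATES_PER_GB = {
    ("region-a", "region-a"): 0.00,
    ("region-a", "region-b"): 0.02,   # assumed inter-region rate
    ("region-a", "internet"): 0.09,   # assumed internet egress rate
}

def monthly_cost(flows):
    """flows: iterable of (source, destination, gigabytes per month)."""
    return sum(RATES_PER_GB[(src, dst)] * gb for src, dst, gb in flows)

flows = [
    ("region-a", "region-a", 5000),  # free intra-region traffic
    ("region-a", "region-b", 2000),  # replication to a second region
    ("region-a", "internet", 300),   # user-facing egress
]
print(round(monthly_cost(flows), 2))  # 67.0
```

    Keeping chatty services in the same region, as the text suggests, drives the dominant terms of this sum toward zero.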

    What Is the Cloud Made Of?

    What materials are used in cloud computing?

    From a materials science perspective, a server farm consists of various materials used in electronic components, network infrastructure, and cooling systems.

    Key Materials Used in Cloud Infrastructure:

    1. Metals and Minerals:

    • Silicon – Used for semiconductors and processor chips.
    • Copper – Used in wiring and circuit boards due to its high electrical conductivity.
    • Aluminum – Used for server chassis and heat sinks.
    • Gold – Used in connector plating to prevent corrosion.
    • Nickel & Cobalt – Used in batteries and electronic components.
    • Rare Earth Elements – Used in hard disk magnets and high-performance electronics.

    2. Cooling Systems:

    • Water – Used in liquid cooling systems for data centers.
    • Plastic Pipes – Used for cooling distribution systems.
    • Refrigerants – Special chemical compounds used in high-efficiency air conditioning.

    3. Power and Storage Technologies:

    • Lead & Sulfuric Acid – Used in lead-acid backup batteries for UPS (Uninterruptible Power Supply) systems.
    • Lithium – Used in modern lithium-ion batteries for energy storage.
    • Ferromagnetic Materials – Used in transformers and voltage regulators.

    4. Structural and Environmental Materials:

    • Concrete & Steel – Used to construct data center buildings.
    • Thermal Insulation Materials – Used to maintain temperature stability.
    • Lightweight Alloys – Used for server racks.

    5. Sustainable Energy Materials:

    • Solar Panels – Made from silicon and other semiconductors to provide renewable energy.
    • Eco-friendly Materials – Used in new green data centers to minimize environmental impact.

    These materials are essential for constructing and operating cloud data centers, which house thousands of servers running in a stable and energy-efficient environment.

    The Critical Role of Communication Infrastructure in the Cloud

    One of the key challenges of cloud computing is its underlying communication infrastructure.

    The widespread availability of broadband connections enabled millions of people to continue working remotely during the COVID-19 pandemic. Without high-speed, large-scale connectivity, that transition would not have been possible.

    I live in Italy, in a town where broadband has been deployed, but it has not yet reached every street—a “no man’s land” where no one intervenes. As a result, my neighbor, just 50 meters away, has full broadband access, while my family does not. (That said, with 60 Mbps download speed, we don’t face too many issues! 😊)

    The Cloud’s Dependency on Communication Infrastructure

    Public cloud services rely heavily on data transport capabilities—both in terms of infrastructure capacity and global and local network integration.

    At the lower layers of the ISO/OSI stack, we find the telecommunications carriers that move data across the globe.

    Let’s take, for example, data transmission across the Atlantic Ocean, which connects Europe and the United States.

    This massive undersea communication backbone is built on fiber-optic submarine cables, utilizing Dense Wavelength Division Multiplexing (DWDM) technology. DWDM allows multiple data channels to travel through the same fiber on different wavelengths, significantly boosting bandwidth efficiency.
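The bandwidth gain from DWDM comes down to simple multiplication: wavelengths per fiber times the line rate per wavelength, times the number of fiber pairs in the cable. A back-of-the-envelope sketch, using typical illustrative values rather than the specification of any particular cable:

```python
# Illustrative DWDM capacity arithmetic (typical values, not a real cable's spec).
channels_per_fiber = 80       # wavelengths multiplexed onto one fiber
rate_per_channel_gbps = 100   # line rate carried by each wavelength
fiber_pairs = 8               # fiber pairs bundled in the submarine cable

per_fiber_tbps = channels_per_fiber * rate_per_channel_gbps / 1000
total_tbps = per_fiber_tbps * fiber_pairs

print(f"{per_fiber_tbps} Tb/s per fiber, {total_tbps} Tb/s total")
# → 8.0 Tb/s per fiber, 64.0 Tb/s total
```

Scaling any of the three factors — denser wavelength grids, faster modulation per channel, or more fiber pairs — is how cables reach the headline capacities quoted for modern transatlantic links.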

    Cloud Providers and Network Connectivity

    To ensure seamless and reliable connectivity, cloud service providers leverage a mix of:

    • Global network providers
    • Data transport service providers
    • Internet connectivity providers

    Many cloud vendors implement hybrid network solutions, combining their own private infrastructure with the existing telecommunications networks of local providers.

    A prime example is the MAREA cable, a joint project between Microsoft, Facebook, and Telxius. MAREA is one of the most powerful transatlantic cables, boasting a data transport capacity of 160 terabits per second.

    The Strategic Importance of Interconnection Infrastructures


    These interconnection infrastructures are not just essential for commercial cloud services—they are strategic assets for national security as well.

    Most of these critical network infrastructures are designed and managed by private companies. However, governments retain some level of control over their operation, particularly when it comes to critical security configurations.

    For a deeper dive into the role of submarine cables in global internet connectivity, you can check out GeoPop’s Italian-language YouTube video: CAVI SOTTOMARINI – la fibra ottica del mondo passa in fondo agli oceani, altro che satelliti – Ep-1 (youtube.com) (“Submarine cables – the world’s optical fiber runs along the ocean floor, forget satellites”).

    Global network of submarine internet cables (from https://www.submarinecablemap.com/)


    Conclusion

    The cloud is not magic, but the outcome of decades of technological progress and cultural transformation. By uncovering its inner workings—from industrial enablers to global networks—we gain the tools to navigate digital transformation with awareness. The better we understand its foundations, the better we can design, govern, and innovate our future cloud-native ecosystems.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation