Categoria: Cloud Computing

Posts exploring cloud adoption, service models, and cloud-native strategies, with insights from NIST and beyond

  • How the Cloud is Built and How It Works | Essential Guide to Cloud Infrastructure & Digital Transformation

    How the Cloud is Built and How It Works | Essential Guide to Cloud Infrastructure & Digital Transformation

    In today’s digital world, the cloud is often perceived as an abstract concept, hidden behind the simplicity of a web interface. Yet, behind every click, there is a vast and complex infrastructure made of data centers, high-speed connections, and advanced virtualization technologies. In this article, adapted from my book Exploring Cloud-Native Ecosystems, we’ll explore the physical and logical foundations of the cloud to understand how it is truly built and how it works.



    The widespread adoption of cloud computing, as detailed in my post Cloud Adoption, would not have been possible without several enabling industrial factors:

    • The expansion of a stable, high-speed, and highly available global network infrastructure.
    • The exponential growth of computational capacity per unit of physical space, along with a reduction in equivalent energy consumption.
    • The evolution of computational models.

    We have seen that the cloud can be described through its service models and distribution models, presenting itself as a ready-to-use service for consumers.

    We have also seen how cloud resources, and therefore the entire cloud, can be summarized into a few key elements: computational power, data storage, and data transport.

    Moreover, we have seen that these characteristics are enabled by specific electronic devices.

    In reality, the cloud consists of all these components—just on a much larger scale.

    Whether public or private, cloud services are delivered through a vast network of data centers distributed worldwide, managed directly by public cloud providers.

Each data center contains enormous stacks of computing units, some as powerful as an NVIDIA DGX SuperPOD (though not all of them reach that scale 😊).

    What is Inside a Cloud Data Center?

    Illuminated server racks in data center

A cloud data center is a facility that can span vast physical dimensions, as shown in the figure above.

    Inside a cloud data center, we find rows of specialized servers neatly stored inside rack enclosures—tall, standardized metal cabinets designed to house multiple computing units in a compact and organized manner.

    Unlike traditional office computers, which typically have keyboards, monitors, and user interfaces for direct interaction, cloud servers are headless, meaning they lack direct input/output devices. Instead, they are designed for remote management and automated operation, ensuring maximum efficiency and scalability.

    Each server rack contains:

    • Motherboards with powerful multi-core processors (CPUs & GPUs) optimized for parallel workloads.
    • High-speed RAM (memory modules) to handle intensive data processing.
    • Storage devices (HDDs, SSDs, or NVMe drives) that provide ultra-fast access to data.
    • Network interface cards (NICs) that allow high-speed communication with other servers.
    • Redundant power supply units (PSUs) to ensure continuous operation.

    To enable seamless operation across thousands of machines, these rack-mounted servers are interconnected through high-speed data buses, forming a massively parallel computing environment.

    Key technologies enabling communication within a cloud data center include:

    1. Backplane Bus Systems: Each rack has an integrated backplane, a high-speed communication backbone that interconnects all servers within the same cabinet.
    2. High-Speed Network Switching: Servers are connected via fiber-optic network switches, enabling low-latency data exchange between different racks and clusters.
    3. Software-Defined Networking (SDN): Instead of relying on traditional manual network configuration, cloud providers use software-defined networking, which allows dynamic traffic routing and load balancing across the entire data center.
    4. Inter-Rack Optical Links: Since cloud computing requires extreme bandwidth, data is transmitted over fiber-optic cables inside the data center, connecting racks at speeds of 100 Gbps or higher.
    5. Distributed Storage Systems: Cloud servers don’t store data locally like personal computers. Instead, they access a distributed storage layer that spans multiple racks, and even multiple data centers, ensuring redundancy and fault tolerance.
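To make the distributed storage idea concrete, here is a minimal sketch (not any provider's actual placement algorithm) of how a storage layer might spread replicas of a data block across distinct racks, so that losing one rack never destroys all copies. The rack names and the 3-copy default are illustrative assumptions.

```python
def place_replicas(block_id: str, racks: list[str], copies: int = 3) -> list[str]:
    """Pick `copies` distinct racks for a block, rotating by a stable hash."""
    if copies > len(racks):
        raise ValueError("not enough racks for the requested replica count")
    start = sum(block_id.encode()) % len(racks)  # stable, process-independent offset
    ordered = racks[start:] + racks[:start]      # rotate so placement varies per block
    return ordered[:copies]

placement = place_replicas("block-42", ["rack-A", "rack-B", "rack-C", "rack-D"])
print(placement)  # three distinct racks; losing any one rack leaves two live copies
```

Real systems (e.g., HDFS or Ceph) add awareness of rack topology, load, and failure domains, but the core idea is the same: never co-locate all replicas.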

    How These Servers Work Together

    Each server in a rack is not an isolated unit but part of a cluster, working together to handle massive computational workloads. Cloud data centers are architected using the concept of hyperscale computing, meaning:

    • Workloads are dynamically distributed across multiple physical machines.
    • A single task (e.g., processing an AI model or serving a website) may run across dozens or even hundreds of servers simultaneously.
    • If one server fails, its workload is automatically shifted to another available machine, ensuring continuous service availability.
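The failover behavior in the last bullet can be sketched in a few lines, assuming a toy cluster model (server and workload names are hypothetical, and real schedulers weigh far more signals than queue length):

```python
def rebalance(assignments: dict[str, list[str]], failed: str) -> dict[str, list[str]]:
    """Move the workloads of `failed` onto the least-loaded healthy servers."""
    orphans = assignments.pop(failed, [])          # take the failed server out of rotation
    for workload in orphans:
        target = min(assignments, key=lambda s: len(assignments[s]))
        assignments[target].append(workload)       # shift work to the emptiest machine
    return assignments

cluster = {"srv-1": ["web-a"], "srv-2": ["web-b", "ai-job"], "srv-3": []}
rebalance(cluster, "srv-2")
print(cluster)  # 'web-b' and 'ai-job' now run on the surviving servers
```

The essential property is that no workload is lost: every task previously on the failed machine reappears on a healthy one.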

    The Role of Virtualization and Containers

    On top of this physical cluster, hypervisors partition each server into multiple virtual machines, while container runtimes package applications together with their dependencies so they can run on any node in the data center. Together, virtualization and containers decouple workloads from specific hardware, which is precisely what makes the dynamic distribution and automatic failover described above possible.

    The Importance of Rack Density & Cooling

    Because cloud data centers must pack thousands of high-performance servers into a limited space, rack density is a critical factor. Modern high-density racks can house:

    • 40 to 60 blade servers per rack
    • Up to 10,000 CPU cores per data hall

    This extreme density generates massive amounts of heat, requiring advanced cooling technologies, including:

    • Liquid cooling solutions that circulate coolant to dissipate heat.
    • Hot aisle / cold aisle configurations to optimize airflow and prevent overheating.
    • AI-powered energy management to dynamically adjust cooling based on real-time workloads.
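A back-of-the-envelope calculation shows why density drives cooling design. The per-server core count and power draw below are illustrative assumptions, not vendor figures; only the 40–60 blades-per-rack range comes from the text above.

```python
servers_per_rack = 50   # within the 40-60 blade range cited above
cores_per_server = 64   # assumption: a modern dual-socket blade
racks_per_hall = 4      # hypothetical small data hall
watts_per_server = 500  # assumed average draw under load

total_cores = racks_per_hall * servers_per_rack * cores_per_server
rack_kw = servers_per_rack * watts_per_server / 1000

print(total_cores)  # 12800 cores from just four dense racks
print(rack_kw)      # 25.0 kW of heat per rack, which is why air alone is not enough
```

Even this small hypothetical hall exceeds 10,000 cores, and each rack dissipates tens of kilowatts, making liquid cooling and airflow engineering unavoidable.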

    Geographical Distribution of the Cloud

    The geographical distribution of data centers is a key factor in service quality. Over time, alongside massive data centers, edge data centers and modular data centers have been introduced.

    A modular data center can be expanded over time by adding new units to increase computing power. This strategy is widely used by cloud providers offering public cloud services in newly developing areas, ensuring low-latency service for a limited set of cloud resources.

    However, as you might expect, the computing power of a modular container-based data center (as shown in Figure 26) cannot match that of a large-scale data center (as shown in Figure 24).

    The geographical distribution of cloud providers’ data centers follows a two-tiered structure:

    • Consumers see only the service delivery areas, referred to as regions.
    • Each region consists of multiple redundant data centers providing high availability at the regional level.

    Cloud providers do not disclose the exact physical location of data centers, mainly for security reasons.

    However, users can explore the cloud providers’ public region maps to see where services are delivered.

    Regions, once created, gradually expand with additional cloud resources over time.

    The time required to establish a new region depends on the regulatory frameworks of the host country where the data centers for that region are located.

    Due to legislative constraints, data centers must first comply with national regulations before adhering to international standards.

    As a result, each cloud region is effectively tied to data centers within a single country.

    The creation of a new region does not immediately guarantee the availability of all cloud resources present in a long-established region.

    The cloud resource availability map for each region enables the analysis of two critical factors:

    1. Cost control – Identifying available resources within a specific region helps optimize expenses, reducing unnecessary data transfers and avoiding unexpected costs.
    2. Legal risk assessment – If a required cloud resource is unavailable in the designated national region or outside the compliance perimeter dictated by regulations, it may introduce regulatory and compliance risks.

    Moreover, data traffic between different regions, even when hosted within the same public or private cloud, can lead to higher operational costs, making strategic regional resource planning essential for both financial efficiency and regulatory compliance.
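The two checks described above can be sketched as a simple availability lookup. The region names and service catalog below are hypothetical; real planning would query the provider's actual product-by-region matrix.

```python
# Hypothetical availability map: which services each region currently offers.
availability = {
    "eu-south": {"vm", "sql", "storage"},
    "us-east":  {"vm", "sql", "storage", "ai-studio"},
}

def region_gaps(required: set[str], region: str) -> set[str]:
    """Services that would have to run outside the chosen compliance perimeter."""
    return required - availability[region]

gaps = region_gaps({"vm", "ai-studio"}, "eu-south")
print(gaps)  # {'ai-studio'}: using it means cross-region traffic and legal review
```

A non-empty result flags both a cost risk (inter-region traffic) and a compliance risk (data leaving the mandated national region).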

    What Is the Cloud Made Of?

    What materials are used in cloud computing?

    From a materials science perspective, a server farm consists of various materials used in electronic components, network infrastructure, and cooling systems.

    Key Materials Used in Cloud Infrastructure:

    1. Metals and Minerals:

    • Silicon – Used for semiconductors and processor chips.
    • Copper – Used in wiring and circuit boards due to its high electrical conductivity.
    • Aluminum – Used for server chassis and heat sinks.
    • Gold – Used in connector plating to prevent corrosion.
    • Nickel & Cobalt – Used in batteries and electronic components.
    • Rare Earth Elements – Used in hard disk magnets and high-performance electronics.

    2. Cooling Systems:

    • Water – Used in liquid cooling systems for data centers.
    • Plastic Pipes – Used for cooling distribution systems.
    • Refrigerants – Special chemical compounds used in high-efficiency air conditioning.

    3. Power and Storage Technologies:

    • Lead & Sulfuric Acid – Used in UPS backup batteries (Uninterruptible Power Supply).
    • Lithium – Used in modern lithium-ion batteries for energy storage.
    • Ferromagnetic Materials – Used in transformers and voltage regulators.

    4. Structural and Environmental Materials:

    • Concrete & Steel – Used to construct data center buildings.
    • Thermal Insulation Materials – Used to maintain temperature stability.
    • Lightweight Alloys – Used for server racks.

    5. Sustainable Energy Materials:

    • Solar Panels – Made from silicon and other semiconductors to provide renewable energy.
    • Eco-friendly Materials – Used in new green data centers to minimize environmental impact.

    These materials are essential for constructing and operating cloud data centers, which house thousands of servers running in a stable and energy-efficient environment.

    The Critical Role of Communication Infrastructure in the Cloud

    One of the key challenges of cloud computing is its underlying communication infrastructure.

    In today’s world, the widespread availability of broadband connections has enabled millions of people to continue working remotely during the COVID-19 pandemic. It is clear that without high-speed, large-scale connectivity, this transition would not have been possible.

    I live in Italy, in a town where broadband has been deployed, but it has not yet reached every street—a “no man’s land” where no one intervenes. As a result, my neighbor, just 50 meters away, has full broadband access, while my family does not. (That said, with 60 Mbps download speed, we don’t face too many issues! 😊)

    The Cloud’s Dependency on Communication Infrastructure

    Public cloud services rely heavily on data transport capabilities—both in terms of infrastructure capacity and global and local network integration.

    At the lower layers of the ISO/OSI stack, we find the telecommunications carriers that facilitate global data exchanges.

    Let’s take, for example, data transmission across the Atlantic Ocean, which connects Europe and the United States.

    This massive undersea communication backbone is built on fiber-optic submarine cables, utilizing Dense Wavelength Division Multiplexing (DWDM) technology. DWDM allows multiple data channels to travel through the same fiber, using different wavelengths, significantly boosting bandwidth efficiency.
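The multiplying effect of DWDM is easy to quantify. The figures below are typical orders of magnitude for modern coherent systems, assumed for illustration rather than quoted for any specific cable:

```python
channels_per_fiber = 80   # assumed wavelength count for a dense WDM system
gbps_per_channel = 200    # assumed per-wavelength line rate with coherent optics
fiber_pairs = 8           # assumed number of fiber pairs in one submarine cable

cable_tbps = channels_per_fiber * gbps_per_channel * fiber_pairs / 1000
print(cable_tbps)  # 128.0 Tbps from a single cable
```

Even with conservative assumptions, one cable reaches the same order of magnitude as the highest-capacity transatlantic links in service.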

    Cloud Providers and Network Connectivity

    To ensure seamless and reliable connectivity, cloud service providers leverage a mix of:

    • Global network providers
    • Data transport service providers
    • Internet connectivity providers

    Many cloud vendors implement hybrid network solutions, combining their own private infrastructure with the existing telecommunications networks of local providers.

    A prime example is the MAREA cable, a joint project between Microsoft, Facebook, and Telxius. MAREA is one of the most powerful transatlantic cables, boasting a data transport capacity of 160 terabits per second.

    The Strategic Importance of Interconnection Infrastructures


    These interconnection infrastructures are not just essential for commercial cloud services—they are strategic assets for national security as well.

    Most of these critical network infrastructures are designed and managed by private companies. However, governments retain some level of control over their operation, particularly when it comes to critical security configurations.

    For a deeper dive into the role of submarine cables in global internet connectivity, you can check out GeoPop’s Italian-language YouTube video: CAVI SOTTOMARINI – la fibra ottica del mondo passa in fondo agli oceani, altro che satelliti – Ep-1 (youtube.com)

    Global network of submarine internet cables (from https://www.submarinecablemap.com/)


    Conclusion

    The cloud is not magic, but the outcome of decades of technological progress and cultural transformation. By uncovering its inner workings—from industrial enablers to global networks—we gain the tools to navigate digital transformation with awareness. The better we understand its foundations, the better we can design, govern, and innovate our future cloud-native ecosystems.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • Essential Cloud Distribution Models

    Essential Cloud Distribution Models

    Cloud distribution models—private, public, community, and hybrid—define how cloud infrastructures are deployed and governed. Beyond service models, these classifications are crucial for compliance, security, and organizational strategy. Learn how NIST definitions shape adoption paths and why hybrid solutions dominate modern ecosystems.


    Cloud Distribution Models

    Beyond the service model of a cloud resource, understanding the cloud distribution model is crucial, as it plays a key role in the application of industry-specific regulations and national or continental security policies.

    The NIST SP 800-145 document provides definitions for the different cloud service distribution models.

    Private Cloud

    A cloud infrastructure that is exclusively used by a single organization composed of multiple consumers across various operational locations or branches. It may be owned, operated, and managed by the organization itself, a third party, or a combination of both. The infrastructure can exist either on-premises or off-premises.

    Community Cloud

    A cloud infrastructure that is exclusively used by a specific community of consumers from distinct organizations that share common interests and service objectives (e.g., operational missions, security requirements, policies, or compliance regulations). Ownership, operation, and management can be carried out by one or more organizations within the community, a third party, or a combination of both. The infrastructure may be located on or off the premises of the participating organizations.

    Public Cloud

    A cloud infrastructure that is made available for open use by any individual or business consumer. Ownership, operation, and management may be carried out by a commercial, academic, or governmental organization, or a combination thereof. This infrastructure is located at the cloud provider’s premises.

    Hybrid Cloud

    A cloud infrastructure that combines two or more distinct cloud infrastructures (private, community, or public), which remain unique entities but are connected through standardized or proprietary technology that enables data and application portability. Examples include load balancing across geographically distributed environments, high availability management, and disaster recovery planning for core business services.

    Considerations on Cloud Distribution Models

    Public cloud is often the first model that comes to mind when discussing cloud computing.

    However, it is important to recognize that there are no inherent technological differences that distinguish cloud distribution models at their core; the primary differences lie in contractual agreements.

    In public cloud models, there is a clear distinction between the provider (supplier) and the consumer (client), whereas this distinction becomes increasingly blurred in other distribution models.

    Fundamentally, a public cloud is characterized by the fact that a data center is not contractually dedicated to a single client. Even large enterprises that request dedicated cloud farms adjacent to their data centers still operate in a shared cloud environment.

    Conversely, a private cloud is designed to ensure the highest level of segregation. However, in practice, data must eventually traverse public infrastructure—such as global fiber-optic backbones—to enable communication, even in strictly controlled environments.

    Modern data centers introduce the concept of edge computing, providing localized computing and storage resources closer to the end user. These edge data centers offer limited local capacity while ensuring direct integration with major fiber and satellite communication carriers.

    Despite the high level of isolation an edge data center may provide, it cannot truly be classified as a private cloud if it economically relies on shared communication bandwidth provided by major carriers. Essentially, data transport follows the same principle as cargo transportation: whether by rail, ship, or aircraft, multiple clients share the infrastructure.

    Given these complexities, hybrid cloud solutions have become the most common approach in cloud adoption strategies, allowing organizations to combine multiple cloud models based on evolving needs.

    From the author’s perspective, any cloud distribution model should meet all the requirements defined by NIST to be properly classified as cloud computing.

    One key aspect to focus on is the responsibility matrix associated with each cloud distribution model, which will be further explored in the chapter on cloud regulations.



    Conclusion: Holistic Vision

    Understanding cloud distribution models is more than an academic exercise. It represents a key step in aligning technology with governance, compliance, and business resilience.

    • Public cloud pushes scalability and global reach, but also requires careful risk management.
    • Private cloud promises control and segregation, though it inevitably intersects with shared infrastructures.
    • Community cloud shows the strength of collective approaches, where compliance and missions converge.
    • Hybrid cloud emerges as the pragmatic solution, balancing innovation with regulation and providing flexibility in uncertain times.

    In practice, the choice of a distribution model is rarely absolute. Organizations evolve, regulations tighten, and infrastructures adapt. What matters is not only selecting a model but building an ecosystem capable of integrating them all.

    From a cloud-native perspective, distribution models are not silos: they are complementary dimensions of the same continuum. Recognizing this helps enterprises navigate complexity with confidence, ensuring that security, compliance, and innovation can coexist in a sustainable way.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • Cloud Service Models

    Cloud Service Models

    In this post, we introduce the definition of service models according to NIST. The concept of a service model is central to planning a cloud adoption process. In 2011, NIST classified three service models for cloud resource management:

    • Software as a Service (SaaS)
    • Platform as a Service (PaaS)
    • Infrastructure as a Service (IaaS)

    The market has since added extended service models, such as Business Process as a Service (BPaaS), covered later in this post.

    Cloud Service Models Explained

    In this chapter, we introduce the definition of service models according to NIST. The concept of a service model is central to planning a cloud adoption process. In 2011, NIST classified three service models for cloud resource management:

    • Software as a Service (SaaS)
    • Platform as a Service (PaaS)
    • Infrastructure as a Service (IaaS)

    Software as a Service (SaaS)

    The consumer can use the provider’s applications running on a cloud infrastructure.

    These applications are accessible from various client devices through a thin client interface, such as a web browser (e.g., web-based email) or an application program interface (API).

    The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application functionalities, except for limited user-specific application settings.

    Platform as a Service (PaaS)

    The capability provided to the consumer is to deploy onto the cloud infrastructure applications created or acquired using programming languages, libraries, services, and tools supported by the provider.

    The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and, possibly, configuration settings for the application hosting environment.

    Infrastructure as a Service (IaaS)

    The capability provided to the consumer is to provision processing, storage, networking, and other fundamental computing resources where the consumer can deploy and run arbitrary software, which can include operating systems and applications.

    The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications and, possibly, limited control of selected networking components (e.g., host firewalls).

    Service Model Insights

    Service models implicitly introduce a responsibility matrix dictated by contractual agreements specific to each model.

    A comparative analysis of service models and the ISO/OSI stack helps clarify the impact of the service model on the cloud adoption journey.

    • IaaS: The cloud provider typically ensures service availability up to layer 4 of the ISO/OSI stack (Transport layer).
    • PaaS: The guaranteed level varies between layer 4 and layer 5, sometimes extending to layer 6.
    • SaaS: Services are typically maintained at layer 6, and in specific cases where no user interaction occurs through a web or mobile application, it can reach layer 7.
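The mapping above can be condensed into a small lookup. This is a sketch of the post's own interpretation, not a formal standard, and it uses the typical guarantee level for each model (the text notes PaaS and SaaS can reach higher layers in specific cases):

```python
# Highest ISO/OSI layer whose availability the provider typically guarantees.
PROVIDER_GUARANTEE = {"IaaS": 4, "PaaS": 5, "SaaS": 6}

def consumer_layers(model: str) -> list[int]:
    """OSI layers that remain the consumer's responsibility under a given model."""
    top = PROVIDER_GUARANTEE[model]
    return list(range(top + 1, 8))

print(consumer_layers("IaaS"))  # [5, 6, 7]: OS, middleware, and applications are yours
print(consumer_layers("SaaS"))  # [7]: only application-level usage remains
```

Reading the output as a responsibility matrix makes the trade-off explicit: each step from IaaS toward SaaS removes layers from the consumer's plate.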

    The division of responsibility between the cloud provider and the consumer plays a crucial role in defining the service model.

    Another key aspect to consider is scalability expectations in the cloud. Opting for an IaaS model does not necessarily take full advantage of the cloud’s scalability capabilities. In fact, scalability responsibility falls on the hosted content within the IaaS service, which may result in classical scalability limitations, potentially leading to higher operational costs rather than reduced expenses.

    While it is relatively easy to classify cloud resources managed through IaaS models, this becomes more complex with PaaS and SaaS models.

    Many complex cloud resources are delivered via hybrid service models, such as PaaS/IaaS or PaaS/SaaS.

    For example:

    • Recognizable SaaS solutions include Salesforce CRM, Microsoft 365, and Google Workspace, all of which offer complex suites of SaaS-based solutions.

    Below, we provide a deeper insight into each service model.

    IaaS

    Classic IaaS services include basic network resources (firewalls, VPNs, etc.).

    Virtual machines (VMs) not managed by the cloud provider also fall under IaaS. This is one of the most comparable cloud services to traditional IT system management, making it one of the most considered options in lightweight cloud adoption strategies.

    The cloud provider guarantees hardware operation, including both physical and logical security.

    The security responsibility is typically covered up to layer 4 of the ISO/OSI stack.

    The consumer is responsible for configuring and managing the operating system and any additional application software installed on the VM.

    A hybrid case between IaaS and PaaS is a VM with a managed operating system. Here, the provider manages OS updates and security patches autonomously but does not enforce version changes until official support ends. However, the provider does not assume responsibility for third-party applications installed by the consumer.

    IaaS/PaaS and PaaS

    PaaS is the first service model that truly defines cloud computing, though finding purely PaaS-classifiable resources can be challenging.

    Many cloud resources use hybrid service models (IaaS/PaaS), including Kubernetes and Red Hat OpenShift.

    Some databases are offered as PaaS services, where the consumer only defines:

    • DDL (Data Definition Language): Schema structures, stored procedures, functions, and user permissions.
    • DML (Data Manipulation Language): CRUD (Create, Read, Update, Delete) operations.
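The DDL/DML split can be shown with Python's standard-library sqlite3 module standing in for a managed PaaS database (the table and data are invented for illustration): the consumer issues only these two kinds of statements, while the engine, storage, and patching remain the provider's concern.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a provider-managed database

# DDL: the consumer defines schema structures...
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

# DML: ...and performs CRUD operations against them.
conn.execute("INSERT INTO orders (item) VALUES (?)", ("gpu-hours",))
rows = conn.execute("SELECT item FROM orders").fetchall()
print(rows)  # [('gpu-hours',)]
```

In a real PaaS database the connection string points at the provider's endpoint, but the consumer's surface area is exactly this: schema plus data operations.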

    Notable PaaS database services include Azure Database and AWS Aurora.

    Other pure PaaS services include Azure Functions and AWS Lambda.

    In a pure PaaS model, the consumer relinquishes control over the cloud resource’s state, which is contractually managed by the provider. The responsibility matrix is clear: the consumer is only responsible for the functional and application layer.
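A function-as-a-service handler illustrates this clean responsibility matrix. The function body below is hypothetical, but the `(event, context)` signature is the one AWS Lambda's Python runtime expects; everything outside the function, including scaling, patching, and runtime lifecycle, belongs to the provider.

```python
import json

def handler(event, context=None):
    """The consumer writes only this function; the provider runs everything else."""
    name = event.get("name", "cloud")
    return {"statusCode": 200, "body": json.dumps({"greeting": f"hello, {name}"})}

# Local invocation for testing; in production the platform calls the handler.
print(handler({"name": "consumer"}))
```

Azure Functions follows the same pattern with a slightly different signature; in both cases the consumer's responsibility ends at the function boundary.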

    SaaS

    SaaS is one of the most widely adopted cloud service models in the public cloud.

    Consider the early free webmail providers, available for more than two decades.

    SaaS encompasses social media services and widely used B2C sales platforms, some of which are expanding into enterprise solutions (e.g., WhatsApp for Business).

    The boundary between SaaS and PaaS is sometimes ambiguous, and even cloud providers themselves may struggle to classify services technically, often relying on marketing terminology instead.

    Examples of Paid SaaS Services

    Azure:

    • Microsoft 365: A well-known SaaS application that allows users to access cloud-based productivity tools such as email, calendars, and document management.
    • Microsoft Dynamics.
    • Microsoft Business Central.
    • Microsoft Teams.
    • Azure OpenAI Service: Managed access to OpenAI’s GPT models, which power some of the most widespread AI-driven prompt services.
    • Microsoft Copilot: A productivity acceleration service leveraging OpenAI technology.

    AWS:

    • AWS Elastic Beanstalk: A plug-and-play platform supporting multiple programming languages and environments (commonly classified as PaaS, a good example of the SaaS/PaaS ambiguity noted above).

    Google Cloud Platform (GCP):

    • Google Mail: One of the most widely used free and paid email services.
    • Google Workspace: A suite of productivity tools used worldwide.
    • Google Gemini: Emerging as an AI-integrated service within Google’s productivity offerings, with promising integration into BigQuery.
    • BigQuery: Google’s serverless, multi-cloud data warehouse, offered as a SaaS solution. It integrates machine learning algorithms to provide real-time insights into business processes.

    This classification helps organizations understand which service models best fit their operational and strategic needs when adopting cloud computing.

    Ambiguity in Service Model Interpretation

    In personal-use solutions, many cloud services function as pure SaaS, offering a fully managed experience with minimal user-specific customization.

    However, in enterprise environments, these services evolve into hybrid models, providing organizations with extensive customization and control over service configurations.

    While activating a solution like Microsoft 365 or Google Workspace may be simple, its enterprise-level deployment introduces significant complexity. Advanced setup demands strong networking and security expertise to ensure full-service functionality while maintaining corporate compliance.

    Additionally, enterprise versions often offer API-based integrations, allowing deep customization and automation. As a result, responsibility for service management shifts from solely the provider to a shared responsibility model, where both the provider and the consumer play key roles in governance and maintenance.

    Extending Cloud Service Models

    Cloud service model vs. extended cloud service model

    In the initial chapters of this section, we introduced fundamental concepts regarding cloud service and distribution models.

    IaaS, PaaS, and SaaS—delivered via public cloud providers—are the primary service models offered by major cloud vendors.

    However, hybrid cloud and cloud-native solutions are evolving, creating new service opportunities.

    For example, as a software provider, I may have shifted from installing my product on customers’ local machines to offering it as a web service, with only lightweight local applications mimicking the traditional desktop experience.

    In parallel, we have defined the key roles in the cloud supply chain and the cloud consumption chain, introducing the responsibility matrix (RACI) as a simplified representation of service responsibility management.

    The cloud is continuously evolving, introducing increasingly complex cloud resources designed to address specific consumer needs—sometimes creating new demands through marketing strategies.

    For example, instead of offering my software as a web service, I might distribute it as a cloud marketplace product:

    • The customer downloads my product, which is automatically deployed in their cloud environment.
    • The software runs within their cloud but sends me usage analytics on how the customer interacts with my service.

    Alternatively, I might offer a dedicated cloud ecosystem for each client, running on my own cloud infrastructure.

    As these hybrid models emerge, cloud providers are launching more specialized solutions to prevent customer migration based on cost optimization alone.

    At the center of this strategy is data.

    If I offer a unique service that no competitor provides—even at an acceptable price—my customers will find it very difficult to switch to another provider.

    To reinforce customer retention, public cloud providers are introducing hybrid PaaS/SaaS services that are native to their specific cloud platform and not easily replicable on other clouds.

    Impact on IT Operations and Organizational Restructuring

    We have already seen how cloud consumption structures introduce specialized roles, leading to a reorganization of IT operations.

    Figure 30 attempts to illustrate how the cloud service market is adapting to this growing complexity.

    In Figure 30, the term “hyperscaler” is used to represent a primary public cloud provider.

    Hyperscalers collaborate with telecommunications providers, ensuring the first level of service—data transport, which corresponds to the lower layers of the ISO/OSI model.

    Above this, we find:

    • The three primary cloud service models (IaaS, PaaS, SaaS).
    • Infrastructure as Code (IaC), enabling automated cloud ecosystem lifecycle management.
    • DevSecOps operations, which integrate with the Cloud Architecture COE and FinOps COE to mediate service deployment.
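    The IaC idea mentioned above can be sketched as desired-state reconciliation: infrastructure is declared as data, and a plan is computed by diffing the declaration against what actually exists. The sketch below is illustrative only; function and resource names are invented, and real tools work quite differently.

    ```python
    # Minimal sketch of the IaC idea: infrastructure declared as data,
    # with a plan computed by diffing desired state against actual state.
    # All names here are illustrative, not any specific tool's API.

    def plan_changes(desired: dict, actual: dict) -> dict:
        """Return the create/update/delete actions needed to reach `desired`."""
        create = {k: v for k, v in desired.items() if k not in actual}
        update = {k: v for k, v in desired.items()
                  if k in actual and actual[k] != v}
        delete = [k for k in actual if k not in desired]
        return {"create": create, "update": update, "delete": delete}

    desired = {"vm-web": {"size": "small"}, "db-main": {"size": "large"}}
    actual  = {"vm-web": {"size": "tiny"},  "vm-old": {"size": "small"}}

    plan = plan_changes(desired, actual)
    # db-main must be created, vm-web resized, vm-old removed
    ```

    The point of the sketch is lifecycle management: because the whole environment is data, creating, updating, and tearing down a cloud ecosystem becomes a repeatable, reviewable operation rather than a manual one.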

    What If an Organization Has a Low Cloud Maturity Level?

    Some enterprises may be unable to fully implement cloud-native operations due to:

    • Low cloud maturity
    • Cultural gaps that cannot be easily addressed within the budget of a single business project

    In these cases, cloud services allow organizations to outsource the creation and management of a dedicated cloud ecosystem tailored to a specific business project.

    This is known as Business Process as a Service (BPaaS).

    BPaaS allows businesses to adopt cloud services incrementally, while preparing for future cloud-native transformations.

    Holistic Vision

    Choosing among IaaS, PaaS, and SaaS is not a purely technical decision; it’s an orchestration of technology, organization, finance, risk, and portability. The “right” model is the one that best aligns these lenses for a specific business outcome—now and as it evolves.

    The big picture

    • Value speed vs. control: PaaS/SaaS accelerate delivery by abstracting infrastructure; IaaS maximizes control at the cost of more operational burden.
    • Shared responsibility ≠ zero responsibility: As you move from IaaS → PaaS → SaaS, what you manage changes, but governance and data accountability remain yours.
    • Data gravity rules: Application code can be portable; data creates inertia. Plan for data lifecycle, residency, and egress from day one.

    Four lenses to decide

    1. Architecture & Ops
      • IaaS: bespoke environments, full control over OS/network; demands solid IaC, GitOps, and runbooks.
      • PaaS: managed runtimes/databases, faster releases; design to provider SLOs/limits and use 12‑factor practices.
      • SaaS: consume capabilities (e.g., email, CRM, analytics) with minimal ops; integration and identity become first‑class work.
    2. Organization & Roles
      • Define a RACI: who’s Accountable for uptime, security, backups, cost?
      • Create a Platform Team (even small) to offer “golden paths” and guardrails for IaaS/PaaS users.
    3. Risk, Compliance & Security
      • Treat identity (IAM), encryption, logging, and backup/restore as non‑negotiable baselines across all models.
      • Map data classification to service choice (e.g., regulated data may require private networking, KMS ownership, or specific regions).
    4. FinOps & Unit Economics
      • IaaS: variable costs + idle risk → rightsize, schedule, and autoscale.
      • PaaS: managed scaling; watch per‑request/GB‑second style pricing.
      • SaaS: per‑seat/per‑usage; model over 3–5 years and include egress/integration costs.
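    The unit-economics lens can be made concrete with a toy run-rate model comparing a 36-month TCO for an IaaS-style variable cost against a SaaS-style per-seat cost. Every figure and parameter name below is hypothetical; the only point is that egress and staff time belong in the comparison.

    ```python
    # Toy FinOps model: 36-month TCO, IaaS-style vs SaaS-style.
    # All prices, hours, and seat counts are hypothetical placeholders.

    def tco_iaas(monthly_compute, monthly_egress, staff_hours, hourly_rate, months=36):
        """Variable infra cost plus the staff time needed to run it."""
        return months * (monthly_compute + monthly_egress + staff_hours * hourly_rate)

    def tco_saas(seats, per_seat, integration_one_off, months=36):
        """Per-seat subscription plus a one-off integration cost."""
        return integration_one_off + months * seats * per_seat

    iaas = tco_iaas(monthly_compute=2000, monthly_egress=300,
                    staff_hours=40, hourly_rate=60)
    saas = tco_saas(seats=100, per_seat=35, integration_one_off=15000)
    # iaas = 169200, saas = 141000 with these invented inputs
    ```

    A model this small already shows why a 12-month run rate alone misleads: the staff-time and egress terms compound over the full horizon.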

    When to prefer each model (rule of thumb)

    • SaaS for commodity capabilities (email, docs, ITSM, CRM, analytics UI). Demand: export formats, APIs, clear exit plan.
    • PaaS for differentiated apps where speed matters (functions, containers, managed DBs). Demand: SLOs, scaling policy, quotas.
    • IaaS for specialized workloads (legacy, low‑level tuning, strict isolation). Demand: automation (IaC), hardening, cost guards.

    Guardrails that travel with you

    • Policy‑as‑code (tagging, budgets, SCPs/OPA), secrets management, immutable builds, observability SLOs, DR patterns (RTO/RPO defined).
    • Standardize on containers + IaC even when using PaaS, to keep future options open.
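    The policy-as-code guardrail above can be illustrated with a minimal mandatory-tag check. The tag names and resource shape are invented for the example; a real setup would express the same rule in OPA/Rego, provider SCPs, or a CI policy gate.

    ```python
    # Minimal policy-as-code sketch: flag resources missing mandatory tags.
    # Tag names and the resource schema are illustrative, not any
    # provider's actual format.

    REQUIRED_TAGS = {"owner", "cost-center", "environment"}

    def violations(resources: list) -> list:
        """Return the names of resources whose tag set is incomplete."""
        return [r["name"] for r in resources
                if not REQUIRED_TAGS <= set(r.get("tags", {}))]

    resources = [
        {"name": "vm-web", "tags": {"owner": "team-a", "cost-center": "42",
                                    "environment": "prod"}},
        {"name": "bucket-tmp", "tags": {"owner": "team-b"}},
    ]
    # bucket-tmp would be flagged by the guardrail
    ```

    Because the rule is code, it travels with you across IaaS, PaaS, and even SaaS integrations, which is exactly what makes it a portable guardrail.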

    Reducing vendor lock‑in (pragmatically)

    • Favor open interfaces (e.g., S3‑compatible storage, OpenAPI for services).
    • Decouple domain logic from provider SDKs behind adapters.
    • Keep data export pipelines and schema ownership internal.
    • Track a short reversibility doc: what would it take to move in 90 days?
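    Decoupling domain logic from provider SDKs behind adapters can be sketched with a port-and-adapter pair. `ObjectStore` and the in-memory implementation below are hypothetical; in practice a second adapter would wrap an S3-compatible client behind the same interface.

    ```python
    # Sketch of decoupling domain logic from provider SDKs via an adapter.
    # `ObjectStore` is a hypothetical port; a provider-backed adapter
    # (e.g., wrapping an S3-compatible client) would replace the test
    # double one-for-one without touching the domain code.
    from abc import ABC, abstractmethod

    class ObjectStore(ABC):
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class InMemoryStore(ObjectStore):
        """Test double standing in for a vendor-backed adapter."""
        def __init__(self):
            self._blobs = {}
        def put(self, key, data):
            self._blobs[key] = data
        def get(self, key):
            return self._blobs[key]

    def archive_report(store: ObjectStore, report: bytes) -> str:
        # Domain logic depends only on the port, never on a vendor SDK
        store.put("reports/latest", report)
        return "reports/latest"
    ```

    The reversibility question ("what would it take to move in 90 days?") becomes largely an adapter-swapping exercise when the domain code is written this way.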

    Decision checklist (copy/paste)

    • Exit strategy (export formats, data volumes, notice periods)?
    • Data class & residency? Encryption & key ownership?
    • Required SLOs (latency, availability) and RTO/RPO?
    • Who is Accountable for uptime, security, and cost?
    • 12‑month run rate vs. 36‑month TCO (incl. egress & staff time)?
    • What business KPI does this service impact?


    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • NIST Definition of Cloud Computing: Essential Characteristics


    More than two decades after NIST first defined the essential characteristics of cloud computing, these principles continue to shape how organizations adopt the cloud. Understanding them is the first step toward building scalable, resilient, and cost-efficient digital ecosystems.


    NIST Definition of Cloud Computing: Essential Characteristics

    The essential characteristics define the cloud as a service that is directly manageable by the customer, available across a wide geographical area, and structured with organized resources.

    The concept of cloud consumption is introduced from the perspective of the buyer, who is identified as a consumer of “resources” or “services” provided by the cloud provider. The commonly used terminology refers to “cloud provider” and “cloud consumer.”

    Cloud computing, as an IT service, has distinctive features that set it apart from other IT services.

    The NIST (National Institute of Standards and Technology) (20) is a U.S. government agency that develops standards, guidelines, and best practices to support technological innovation and enhance the security and reliability of information systems. Founded in 1901, its goal is to promote industrial competitiveness and scientific progress through the adoption of shared standards.

    In this article, we will rely on NIST publications to understand the meaning of cloud computing.

    NIST has provided formal definitions of cloud computing through descriptions of certain essential properties. A cloud service must possess these characteristics to be classified as such.

    On-Demand Self-Service

    A consumer can unilaterally configure and utilize computing capabilities, such as server time and network storage, based on their needs, autonomously and without requiring interaction with each cloud service provider.

    Broad Network Access

    The functionalities are available over the network and can be accessed through standard mechanisms that promote usability across various heterogeneous devices (e.g., mobile phones, tablets, laptops, and workstations). This ensures ease of access and a wide availability of resources.

    Resource Pooling and Utilization

    The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, where different physical and virtual resources are dynamically assigned and reassigned based on consumer demand.

    NIST also specifies that cloud consumers typically do not have control or detailed knowledge of the exact location of the provided resources. However, they may be able to specify higher-level attributes such as the country, state, or data center where resources are hosted. Examples of cloud resources include storage, processing, memory, and network bandwidth.

    Elasticity and Scalability of Cloud Resources

    Capabilities can be elastically provisioned and released—in some cases automatically—allowing rapid scaling up and down based on demand.

    From the consumer’s perspective, cloud resources appear to be highly scalable and can be allocated based on the required consumption at any given moment (just-in-time upscaling/downscaling).
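    The just-in-time scaling idea can be sketched as a toy autoscaling decision that moves the replica count toward a target utilization. The thresholds, the ceiling-based rule, and the parameter names are illustrative assumptions, not any provider's actual policy.

    ```python
    # Toy autoscaling decision: scale replicas toward a target average
    # utilization. Bounds and the ceil-based rule are illustrative only.
    import math

    def desired_replicas(current: int, utilization: float,
                         target: float = 0.6, max_replicas: int = 10) -> int:
        """`utilization` is the average load per replica, e.g. 0.9 = 90%."""
        wanted = math.ceil(current * utilization / target)
        return max(1, min(max_replicas, wanted))

    # A spike: 3 replicas averaging 90% load -> scale up
    # A lull:  3 replicas averaging 10% load -> scale down to the floor
    ```

    Note how scaling down is the harder half in practice, as discussed later in this article: the arithmetic is symmetric, but draining state and connections from the removed replicas is not.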

    Measured Service

    Cloud systems automatically control and optimize resource usage by leveraging a metering capability. At an appropriate level of abstraction relevant to the type of service (e.g., storage, processing, bandwidth, and active user accounts), resource usage can be monitored, controlled, and reported, ensuring transparency for both the provider and the consumer.
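    The metering capability described above can be sketched as a small aggregation: metered records are summed per consumer and priced per unit. Record fields and unit prices are hypothetical; real billing pipelines are far more granular.

    ```python
    # Sketch of "measured service": aggregate metering records into a
    # per-consumer usage report. Fields and prices are hypothetical.
    from collections import defaultdict

    PRICE = {"storage_gb_h": 0.0001, "vcpu_h": 0.05}  # invented unit prices

    def usage_report(records: list) -> dict:
        """Sum metered quantities per consumer and price them."""
        totals = defaultdict(float)
        for r in records:
            totals[r["consumer"]] += r["qty"] * PRICE[r["meter"]]
        return dict(totals)

    records = [
        {"consumer": "acme", "meter": "vcpu_h", "qty": 100},
        {"consumer": "acme", "meter": "storage_gb_h", "qty": 5000},
        {"consumer": "beta", "meter": "vcpu_h", "qty": 10},
    ]
    # acme: 100*0.05 + 5000*0.0001 = 5.5; beta: 10*0.05 = 0.5
    ```

    It is this same metering data, exposed to both parties, that provides the transparency NIST calls for—and that FinOps practices later build on.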

    Cloud Computing as an OPEX-Based Expense

    Beyond NIST’s technological and functional definition, it is useful to consider that cloud computing—especially in the B2B (Business-to-Business) context discussed in this book—represents an operational expense (OPEX) rather than a capital expenditure (CAPEX).

    The field of FinOps has emerged to address the necessary integration between technology, finance, and treasury operations. The recurring cost, calculated on a monthly basis, introduces challenges in budget planning and financial management for organizations. This disrupts the traditional model in which IT expenses were typically categorized as capital investments (CAPEX) within long-term budget plans.

    This shift requires organizations to adopt service models that can fully leverage the benefits of cloud computing’s adaptability while ensuring cost predictability.

    This change also demands scalable architectures, both at the infrastructure and application levels, as well as data models oriented toward secure data sharing based on access rights. These aspects, while beneficial, introduce complexity in cost forecasting and financial planning.

    Cloud computing is not a one-size-fits-all solution. It should be interpreted and adopted only after fully understanding its potential and limitations, which is the objective of this section of the book.

    Further Considerations

    More than two decades after NIST first defined the essential characteristics of cloud computing, these principles still largely hold true in today’s market.

    Yet, the increasing complexity of cloud services often makes dynamic scaling a challenge, particularly when dealing with full-fledged cloud-based IT ecosystems.

    This difficulty stems from various factors, primarily related to the management of cloud resource configuration and distribution. Consequently, achieving precise and immediate cost predictability for scalability remains elusive.

    Public cloud models, in particular, tend to simplify scaling up while making scaling down more complex unless managed through automated systems with predictive controls.

    Many organizations still find themselves integrating traditional IT systems with cloud services, resulting in hybrid ecosystems rather than purely cloud-native solutions. This adds an intermediate layer of complexity, impacting Total Cost of Ownership (TCO) and Return on Investment (ROI), as these environments still follow OPEX models.

    Moreover, many companies opt for multi-cloud strategies, not necessarily to duplicate environments, but to take advantage of specialized SaaS or PaaS services like Microsoft 365, Google Workspace, Google Cloud BigQuery, or Microsoft Azure Fabric.

    In these scenarios, services cannot always be replicated across different cloud providers. High availability and geographical reliability are guaranteed by contracts with a single provider.

    Over time, regulations have introduced mandatory measures for cloud ecosystems hosting core and sensitive applications. Businesses must ensure service continuity by replicating services across multiple clouds to mitigate risks such as provider bankruptcy, prolonged cyberattacks, or service outages.

    This has led to the need for further classification of cloud resources, independent of the service model, to assist in corporate strategy planning:

    • Cloud resources are generally not portable or transferable across different cloud providers.
    • What can be transferred is the configuration—the software defining the cloud resources—provided the ecosystem follows a cloud-native operational model (as described in the book’s second section).
    • Applications can also be transferred, but only if they have been designed to be compatible with cloud-native principles.

    Navigating cloud adoption is a challenging but feasible journey. Much like an expedition, success requires careful preparation, endurance, and a well-charted map of the landscape.

    Having a guide can be invaluable.

    There are multiple paths to cloud adoption. Some are narrow, requiring technical expertise to reach peak efficiency, while others are more accessible but still yield tangible results in terms of efficiency and effectiveness.

    Understanding the cloud, mapping its capabilities, and assessing an organization’s actual potential is crucial in choosing a realistic path to achieving cloud computing success.


    Holistic Vision

    The NIST definition of cloud computing, with its essential characteristics, continues to serve as a compass more than two decades after it was first introduced. While technology has evolved, and the cloud has become more layered and complex, these principles still form the backbone of how organizations approach adoption.

    Beyond the technicalities of on-demand resources, elasticity, and measured services, cloud computing is also a matter of culture and economics. The shift from CAPEX to OPEX redefines how businesses plan, invest, and innovate. FinOps practices, hybrid strategies, and multi-cloud ecosystems are not exceptions but the natural evolution of NIST’s foundational vision.

    Seen holistically, the essential characteristics of cloud computing are less about the mechanics of servers and storage, and more about trust, adaptability, and transparency. They remind us that the cloud is not simply infrastructure: it is a shared environment where resilience, scalability, and financial sustainability converge.

    In this light, adopting the cloud is less a technical migration and more an expedition into a dynamic ecosystem. Success depends not only on technology but on preparation, governance, and the ability to align financial strategies with digital ambitions. The NIST framework remains the map — but every organization must chart its own path across the terrain.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation