Tag: cloud-native ecosystems

  • Cloud Adoption – Driving Digital Transformation with Strategies and Innovation


The adoption of high-value, high-impact technologies such as AI and cloud should always follow a roadmap aligned with the organization’s maturity level and operational context.

    It is essential that decision-makers have a clear vision and motivation to ensure the successful introduction of new technologies.

    Forward-thinking organizations should also consider the post-adoption phase—what happens after achieving an initial goal, which in most cases will be intermediate or exploratory in a first adoption phase.

    A consolidation phase should be planned, where service adjustments identified during the adoption phase, but not initially foreseen, can be implemented.

    Simultaneously, expansion or evolution phases can be considered.

In the case of cloud, it is now common to think in terms of consolidation or expansion phases. However, in some cases where expected results are not met, cloud services may even be decommissioned.

    In contrast, AI adoption is still in an early stage for many organizations, with some in a standby phase, waiting to observe results from other experiences.


    Cloud Adoption – Driving Digital Transformation with Strategies and Innovation

    The Cloud Adoption Process.

    Cloud adoption typically unfolds in three phases.

The first phase is the actual adoption, where the organization defines objectives for which the cloud is deemed central. The motivations behind a cloud adoption project can vary widely: experimental, tactical, or part of a long-term strategy.

    Cycle of cloud adoption phases

    The key factor determining the success of the adoption phase is the realistic definition of expected outcomes.
    This success factor is strongly influenced by the organization’s posture toward the adoption project. Has the company already gained experience at the technical, administrative, and managerial levels?
    This, in turn, depends on the organization’s cloud maturity level and its awareness that cloud requires a different service model from traditional IT.
    A sensible approach to cloud adoption involves clear goal setting. If your primary objective is cost reduction, you have chosen the most challenging goal—sometimes even unfeasible within a cloud adoption process.
    If this is your goal, there are only two possibilities: either you have been using a well-configured cloud-native information ecosystem for some time, or you have little experience with cloud.

    To properly assess cloud adoption, organizations should answer five key questions:
    • Why?
    • What?
    • Who?
    • When?
    • Where?

    Each response must be precise and well-defined.
    Cloud adoption should not be seen as a strategy in itself but rather as a tactical step that becomes strategic over time.
    Consider the analogy of home renovation. If you decide to renovate your entire home inside and out, you may need to temporarily live elsewhere while the construction takes place, ensuring that work follows the agreed-upon project plan.
    For most organizations, such a scenario is impractical.
    Instead, cloud adoption is more akin to renovating one room at a time—accepting temporary inconveniences such as noise, dust, interruptions, and the presence of external workers in the house.
    The transition from an on-premises to a cloud ecosystem is similar: careful planning and incremental changes ensure a smoother process and a better final outcome.

    Diagram showing the impact of cloud adoption on an IT ecosystem, transitioning from a local setup to a hybrid ecosystem where component A1 moves to the cloud.

    Cloud adoption reshapes the ecosystem: applications migrate to the cloud, evolving traditional architectures into hybrid ecosystems.

The figure above provides a highly schematic representation of the state transition of an ecosystem due to cloud adoption.

    At time t0, the ecosystem operates in a traditional configuration, executing two processes, A and B. Each process relies on specific services:

    • Process A utilizes services A1 and A2.
    • Process B utilizes services B1 and B2.

    Process B was introduced after process A, and service B1 has a functional and operational dependency on service A1 (e.g., data flows, APIs, functions, or other dependencies).

    For specific business or technical reasons, the decision is made to migrate service A1 to the cloud.

    By the end of the adoption process, at time t1, the final ecosystem has transitioned into a hybrid model, where A1 now resides in the cloud.

But what happened to the ecosystem during the transition from t0 to t1?

    Step-by-step diagram of the cloud adoption transformation phase, showing how an ecosystem evolves over time (t₀ to t₁) into a hybrid ecosystem with components migrated to the cloud.

The transformation phase of cloud adoption: applications gradually migrate, reshaping the ecosystem into a hybrid model over time.

In the figure above, the cloud adoption process is depicted through key milestones, capturing the ecosystem’s transformation.

    If, instead of A1, service A2 were migrated first, the complexity of the adoption process would increase significantly.

    Service A2 has multiple dependencies. Moving it first would extend the adoption timeline due to the additional integrations required between cloud-based and on-premises components.
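As a rough illustration of this reasoning, the sketch below counts how many integrations would have to cross the cloud/on-premises boundary if a single service were migrated first. The service names and the dependency map are hypothetical (including the extra consumer A3) and do not reproduce the exact topology of the figure.

```python
# Hypothetical dependency map: each service lists the services it depends on.
# A1 is a peripheral service; A2 is a core service consumed by several others.
dependencies = {
    "A1": [],
    "A2": [],
    "B1": ["A1", "A2"],
    "B2": ["A2"],
    "A3": ["A2"],  # hypothetical additional consumer of the core service
}

def cross_boundary_links(candidate: str) -> int:
    """Count links that would span cloud and on-premises if only `candidate` moved."""
    outgoing = len(dependencies[candidate])
    incoming = sum(candidate in deps for deps in dependencies.values())
    return outgoing + incoming

for service in ("A1", "A2"):
    print(f"Migrating {service} first -> {cross_boundary_links(service)} cross-boundary integrations")
# A1 -> 1, A2 -> 3: moving the core service first implies far more hybrid integrations.
```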

    Interestingly, in real-world scenarios, organizations often face the challenge of migrating A2 rather than A1. Core services like A2 are frequently used by multiple other services and may need to be made accessible to cloud-based services already in place.

    A common example is a data warehouse, a data mart, or a dataset generated from a complex SQL view or procedure executed in real-time.

    Typically, the more valuable the data, the more it becomes entangled within multiple application layers, incoming and outgoing data flows, and business logic layers.

    Older hosting technologies tend to accumulate these layers over time, making migration increasingly complex.

    Not all cases follow this pattern, but in many situations, organizations prioritize immediate cost and time savings, leading to layered and entangled legacy systems.

    If an organization lacks cloud maturity and experience, it is advisable to start with peripheral scenarios before tackling core components.

    However, budget constraints often mean that smaller, peripheral projects receive less funding, limiting their ability to serve as meaningful test cases.

    Eventually, the need arises for data and services to be shared across the organization. This is where different cloud adoption scenarios emerge, which we will explore in the next chapter.

    A recommended best practice is to include one or two low-risk migration projects (such as A1) in the early phases of a larger cloud adoption initiative. This allows organizations to gain experience and refine their migration process before addressing more complex cases.

    The Next Phase After Cloud Adoption: Consolidation

    After the adoption phase, if the process has been successful, the organization enters the consolidation phase. Around the newly implemented cloud ecosystem, service extensions begin to emerge, generally aimed at improving efficiency, optimizing costs, and enhancing overall effectiveness.

    A service built in the cloud has a smoother evolution: if designed with a cloud-native approach, it will be easier to extend and enhance over time.

    If this phase also yields satisfactory results, the next step is expansion.

    At this point, the organization may embark on a full-fledged race towards the cloud, often accompanied—or even preceded—by the adoption of artificial intelligence. This dynamic is reminiscent of the gold rush in the American West: exciting, full of opportunities, but also highly delicate from an IT and FinOps perspective.

    In this phase, the cloud-native ecosystem may evolve into a hybrid or multi-cloud model, a scenario that, while representing a natural evolution, introduces new risks and complexities.

    The initial core of the cloud ecosystem expands with new resource clusters, designed to meet emerging business needs. At the same time, other business areas begin exploring the cloud, creating additional resource clusters. In these early stages, each cluster typically consists of only a few dozen resources, allocated according to the operational needs of each process.

    This is the moment when an organization can take a strategic step and decide to migrate entire business processes to the cloud ecosystem, firmly establishing cloud adoption. However, managing expansion across multiple business lines requires a high level of cloud maturity and a strong grasp of the FinOps framework to maintain cost control and ensure operational sustainability.

    In general, the cloud adoption journey can be classified as either digital transformation or innovation, depending on the nature of the business being migrated.

    A Special Case: Cloud-Specialized Services

For years, marketing and communication departments have been using tools such as Google Analytics. For these users, extending their infrastructure with a service like BigQuery is a natural step, often quickly leading to integration with Google’s Gemini AI. Once inside this ecosystem, alternatives become increasingly limited, and the trajectory of technological evolution is, in practice, guided by the service provider.

    Cloud Adoption Models

    Cloud adoption strategies can be classified into different categories, each describing how organizations migrate or adopt applications and infrastructure in the cloud.

    There are various ways to represent these models, and the following classification does not claim to be exhaustive or definitive for all possible cloud adoption strategies (or tactics).

    Rehost (Lift and Shift)

    The Rehost strategy involves migrating existing applications and infrastructure to the cloud without significantly modifying their architecture. This approach is quick and allows resources to be moved from on-premises data centers to the cloud with minimal changes.

    Operational Example:

    • Moving a legacy application to a virtual machine on AWS EC2 or Azure VM without altering its code.
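As an illustration only, rehosting a single server can be as simple as recreating it in the cloud from a machine image. The boto3 sketch below assumes a pre-built image, region, instance size, and tag, all of which are hypothetical values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical image of the legacy server
    InstanceType="t3.large",           # sized to mirror the on-premises machine
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "legacy-app-rehost"}],
    }],
)
print(response["Instances"][0]["InstanceId"])  # the rehosted VM, unchanged inside
```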

    Advantages:

    • Fast migration times.
    • Low initial complexity.

    Disadvantages:

    • Limited efficiency gains.
    • Higher operational costs.
    • Does not fully leverage cloud-native capabilities like auto-scaling and managed services.

    Refactor (Replatform)

    The Refactor or Replatform strategy involves optimizing or modifying parts of an application to better leverage cloud services, without fully rewriting it. Minor changes to the code or infrastructure can improve efficiency and scalability.

    Operational Example:

    • Migrating an application to a managed database service like Amazon RDS or Azure SQL, eliminating the need for on-premises database management.
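The point of a replatform is that most of the application stays untouched; often only configuration changes. A minimal sketch, assuming a PostgreSQL workload and the psycopg2 driver, where the only change is the host the application points at (the managed endpoint name is hypothetical):

```python
import psycopg2

# Before: a self-managed database on premises.
# conn = psycopg2.connect(host="db01.intranet.local", dbname="orders",
#                         user="app", password="***")

# After: the same driver and the same SQL, pointed at a managed endpoint.
conn = psycopg2.connect(
    host="orders-prod.abc123xyz.eu-west-1.rds.amazonaws.com",  # hypothetical endpoint
    dbname="orders",
    user="app",
    password="***",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM orders")  # unchanged application query
    print(cur.fetchone()[0])
```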

    Advantages:

    • Improved performance and scalability.
    • Lower operational costs compared to Rehost.
    • No need for a complete rewrite.

    Disadvantages:

    • More complex than Rehost.
    • Requires additional effort for adaptation.

    Repurchase (Drop and Shop)

    With the Repurchase strategy, an organization replaces its legacy applications with ready-to-use SaaS (Software as a Service) solutions. This means abandoning existing infrastructure and directly adopting cloud-native solutions.

    Operational Example:

    • Replacing an internal CRM system with a SaaS solution like Salesforce.

    Advantages:

    • Significant reduction in management and maintenance costs.
    • Immediate access to modern solutions with automatic updates.

    Disadvantages:

    • Loss of customization and control over the application.
    • Data migration challenges.

    Rebuild (Re-architect)

    The Rebuild strategy involves completely rewriting an application to fully exploit cloud-native capabilities. This approach allows rethinking architecture using microservices, containers, and serverless technologies.

    Operational Example:

    • Transforming a monolithic application into a microservices-based architecture deployed on Kubernetes (EKS/AKS) or using AWS Lambda serverless functions.
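For instance, a single step extracted from the monolith might become a small serverless function. Below is a minimal sketch of an AWS Lambda handler in Python; the event shape and the order-validation purpose are hypothetical.

```python
import json

def lambda_handler(event, context):
    """Hypothetical order-validation step carved out of a monolithic application."""
    order = json.loads(event.get("body", "{}"))
    if not order.get("items"):
        return {"statusCode": 400, "body": json.dumps({"error": "empty order"})}
    total = sum(item["price"] * item["quantity"] for item in order["items"])
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```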

    Advantages:

    • Maximum benefit from cloud scalability, flexibility, and resilience.
    • Complete modernization of the application.

    Disadvantages:

    • Long and costly development process.
    • Requires specialized skills and significant resources.

    Retire

    With the Retire strategy, an organization decommissions or removes obsolete applications or infrastructure. Sometimes, during migration planning, certain applications are found to be redundant and can be eliminated.

    Operational Example:

    • Decommissioning an old application that is no longer in use or has been replaced by a more efficient solution.

    Advantages:

    • Cost reduction from eliminating maintenance of unused systems.
    • Simplification of the IT landscape.

    Disadvantages:

    • Possible resistance from teams still relying on the retired application.
    • Potential loss of historical data.

    Retain (Hybrid)

    The Retain strategy involves keeping certain applications or data on-premises due to security, compliance, or operational dependencies on legacy systems. Organizations adopting this approach often manage a hybrid infrastructure, using both cloud and on-premises resources.

    Operational Example:

• Keeping an ERP system on-premises while migrating less sensitive applications to the cloud.

    Advantages:

    • Flexibility in maintaining critical applications on-premises.
    • Compliance with security and regulatory requirements.

    Disadvantages:

    • Increased management complexity.
    • Higher operational costs.
    • Challenges in integrating cloud and on-premises data.

    New Application (Cloud-native Development)

    With this strategy, new applications are developed directly in the cloud, following a cloud-native approach from the start. This model takes full advantage of PaaS (Platform as a Service) and SaaS capabilities.

    Operational Example:

    • Building a new application using AWS Lambda, DynamoDB, and S3, eliminating the need for physical servers.

    Advantages:

    • Maximum flexibility and scalability.
    • Optimal use of modern cloud technologies.

    Disadvantages:

    • Requires cloud-native development expertise.
    • High initial investment in development.

    Evaluating Cloud Adoption Strategies: Benefits and Risks

    These strategies allow organizations to gradually adopt the cloud according to their operational and technological needs. Each approach has its benefits and challenges, and the choice depends on factors such as cost, complexity, internal expertise, and business objectives.

    Each cloud adoption strategy presents distinct benefits and risks. The selection depends on an organization’s specific needs, technological maturity, regulatory constraints, and balance between initial costs and long-term benefits. Companies must carefully evaluate which strategy to adopt based on their priorities, capabilities, and business goals.

    Table – Analysis of Benefits and Risks for Cloud Adoption Strategies

Strategy | Benefits | Risks
Rehost (Lift and Shift) | Fast migration, low initial costs, simple implementation | Limited efficiency, higher operational costs, limited cloud benefits
Refactor (Replatform) | Optimized performance, lower operational costs, improved scalability | Longer migration times, higher initial investment, need for new skills
Repurchase (Drop and Shop) | Simplified complexity, automatic updates, predictable costs | Loss of customization, training costs, vendor lock-in risk
Rebuild (Re-architect) | Full cloud benefits, improved performance, high scalability | High initial costs, operational risks, long implementation times
Retire | Cost savings, simplified infrastructure, increased focus on core services | Loss of historical data, resistance to change, potential operational impact
Retain (Hybrid) | Flexibility, security and compliance, control over sensitive data | Increased complexity, higher costs, data integration challenges
New Application (Cloud-native Development) | Maximizes cloud advantages, accelerated innovation, DevOps compatibility | Requires cloud-native development expertise, high initial investment in development

    Success Cases for Different Cloud Adoption Scenarios

    Below are some publicly known success stories, each representing a specific cloud adoption strategy on a particular cloud provider.

    Rehost (Lift and Shift) – Netflix (AWS)

Netflix initially migrated its on-premises infrastructure to AWS using a lift-and-shift approach, moving applications without significant modifications. This transition allowed Netflix to enhance scalability and disaster recovery while reducing operational overhead. Over time, Netflix evolved its architecture to leverage more cloud-native services, but the initial move provided the foundation for its current highly resilient, global streaming platform.

See more on https://aws.amazon.com/solutions/case-studies/netflix/

    Refactor (Replatform) – Coca-Cola (Google Cloud)

    Coca-Cola leveraged Google Cloud’s Kubernetes Engine (GKE) to refactor and optimize its vending machine order management system. By migrating its microservices architecture to a managed Kubernetes environment, Coca-Cola improved service reliability, enhanced real-time analytics, and achieved better cost efficiency through auto-scaling and optimized infrastructure usage.

    See more on https://cloud.google.com/customers/coca-cola


    Repurchase (Drop and Shop) – Royal Dutch Shell (Microsoft Azure)

Shell opted for a SaaS-based approach by transitioning its legacy ERP systems to Microsoft Dynamics 365. This move eliminated the need for complex on-premises infrastructure management, providing Shell with a more agile and integrated business platform that supports predictive analytics, automation, and streamlined global operations.

    See more on https://customers.microsoft.com/en-us/story/royaldutchshell-energy-azure-dynamics365

    Rebuild (Re-architect) – Capital One (AWS)

    Capital One undertook a full application re-architecture by adopting microservices, serverless computing, and AI-driven automation on AWS. The company replaced monolithic banking applications with cloud-native services utilizing AWS Lambda, Amazon DynamoDB, and Amazon SageMaker for AI-driven fraud detection. This strategy resulted in improved security, better operational efficiency, and enhanced customer experience.

    See more on https://aws.amazon.com/solutions/case-studies/capital-one


    Retire – Dropbox (AWS to private Infrastructure)

    Dropbox originally hosted its storage services on AWS but later decided to decommission parts of its cloud-based infrastructure in favor of an in-house solution called Magic Pocket. This transition allowed Dropbox to optimize its storage architecture, reduce dependency on third-party providers, and significantly cut operational costs while maintaining high-performance scalability.

    See more on https://www.wired.com/2016/03/epic-story-dropboxs-exodus-amazon-cloud-empire/

    Retain (Hybrid) – Volkswagen (Microsoft Azure + on-premises)

    Volkswagen adopted a hybrid cloud strategy by keeping critical manufacturing and vehicle telemetry data on-premises while shifting other workloads to Microsoft Azure. This approach enabled Volkswagen to comply with strict data sovereignty regulations while taking advantage of Azure’s AI and analytics services for predictive maintenance, supply chain optimization, and autonomous vehicle development.

    See more on https://customers.microsoft.com/en-us/story/volkswagen-groupmanufacturing-azure

    New Application (Cloud-native Development) – Airbnb (AWS)

    Airbnb was built from the ground up as a cloud-native platform using AWS services. By leveraging AWS EC2 for compute, Amazon RDS for database management, and Amazon S3 for storage, Airbnb ensured high scalability and global availability. Over time, it integrated AI and big data analytics to optimize search, pricing strategies, and fraud detection, making its infrastructure a benchmark for digital platform scalability and efficiency.

    See more on https://aws.amazon.com/solutions/case-studies/airbnb


    Conclusion

    Cloud adoption is not a one-size-fits-all journey but rather a progressive transformation shaped by each organization’s context, priorities, and maturity. The strategies explored — from rehosting to cloud-native development — highlight that every choice carries both opportunities and trade-offs. Success depends less on the technology itself and more on the clarity of vision, the ability to balance risks and benefits, and the willingness to foster cultural and organizational change.

    Adopting the cloud means embracing new operating models, strengthening governance and compliance, and developing the skills needed to manage complexity. Organizations that approach this transformation holistically — considering people, processes, and technology together — are better equipped to unlock the full potential of the cloud.

    Ultimately, cloud adoption is not an end point but a continuous journey. As ecosystems evolve, hybrid and multi-cloud models will become increasingly common, enabling flexibility, resilience, and innovation at scale. By aligning strategy with execution, and innovation with responsibility, organizations can transform cloud adoption from a technical migration into a true driver of digital transformation.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • How the Cloud is Built and How It Works | Essential Guide to Cloud Infrastructure & Digital Transformation


    In today’s digital world, the cloud is often perceived as an abstract concept, hidden behind the simplicity of a web interface. Yet, behind every click, there is a vast and complex infrastructure made of data centers, high-speed connections, and advanced virtualization technologies. In this article, adapted from my book Exploring Cloud-Native Ecosystems, we’ll explore the physical and logical foundations of the cloud to understand how it is truly built and how it works.


    How the Cloud is Built and How It Works | Essential Guide to Cloud Infrastructure & Digital Transformation

    The widespread adoption of cloud computing, as detailed in my post Cloud Adoption, would not have been possible without several enabling industrial factors:

    • The expansion of a stable, high-speed, and highly available global network infrastructure.
    • The exponential growth of computational capacity per unit of physical space, along with a reduction in equivalent energy consumption.
    • The evolution of computational models.

    We have seen that the cloud can be described through its service models and distribution models, presenting itself as a ready-to-use service for consumers.

    We have also seen how cloud resources, and therefore the entire cloud, can be summarized into a few key elements: computational power, data storage, and data transport.

    Moreover, we have seen that these characteristics are enabled by specific electronic devices.

    In reality, the cloud consists of all these components—just on a much larger scale.

Whether public or private, cloud services are delivered through a vast network of data centers distributed worldwide, managed directly by the cloud providers.

Each data center contains enormous stacks of computing units, such as the DGX SuperPOD (though not all of them 😊).

    What is Inside a Cloud Data Center?

    Illuminated server racks in data center

A cloud data center is a facility that can span vast physical dimensions, as shown in the figure above.

    Inside a cloud data center, we find rows of specialized servers neatly stored inside rack enclosures—tall, standardized metal cabinets designed to house multiple computing units in a compact and organized manner.

Unlike traditional office computers, which typically have keyboards, monitors, and user interfaces for direct interaction, cloud servers are headless, meaning they lack direct input/output devices. Instead, they are designed for remote management and automated operation, ensuring maximum efficiency and scalability.

    Each server rack contains:

    • Motherboards with powerful multi-core processors (CPUs & GPUs) optimized for parallel workloads.
    • High-speed RAM (memory modules) to handle intensive data processing.
    • Storage devices (HDDs, SSDs, or NVMe drives) that provide ultra-fast access to data.
    • Network interface cards (NICs) that allow high-speed communication with other servers.
    • Redundant power supply units (PSUs) to ensure continuous operation.

    To enable seamless operation across thousands of machines, these rack-mounted servers are interconnected through high-speed data buses, forming a massively parallel computing environment.

    Key technologies enabling communication within a cloud data center include:

1. Backplane Bus Systems: Each rack has an integrated backplane, a high-speed communication backbone that interconnects all servers within the same cabinet.
2. High-Speed Network Switching: Servers are connected via fiber-optic networking switches, enabling low-latency data exchange between different racks and clusters.
3. Software-Defined Networking (SDN): Instead of relying on traditional manual network configurations, cloud providers use software-defined networking, which allows dynamic traffic routing and load balancing across the entire data center.
4. Inter-Rack Optical Links: Since cloud computing requires extreme bandwidth, data is transmitted using fiber-optic cables inside the data center, connecting racks at speeds of 100 Gbps or higher.
5. Distributed Storage Systems: Cloud servers don’t store data locally like personal computers. Instead, they access a distributed storage layer that spans multiple racks and even multiple data centers, ensuring redundancy and fault tolerance.

    How These Servers Work Together

    Each server in a rack is not an isolated unit but part of a cluster, working together to handle massive computational workloads. Cloud data centers are architected using the concept of hyperscale computing, meaning:

    • Workloads are dynamically distributed across multiple physical machines.
    • A single task (e.g., processing an AI model or serving a website) may run across dozens or even hundreds of servers simultaneously.
    • If one server fails, its workload is automatically shifted to another available machine, ensuring continuous service availability.

    The Role of Virtualization and Containers

Bare physical servers are rarely assigned to a single task. Through virtualization, a hypervisor partitions each physical machine into multiple virtual machines, each with its own share of CPU, memory, and storage, so that many isolated workloads can run on the same hardware. Containers take this further by packaging an application together with its dependencies while sharing the host operating system kernel, which makes them lighter and faster to start than full virtual machines. Orchestration platforms such as Kubernetes then schedule virtual machines and containers across the cluster, making it possible to distribute, scale, and restart workloads automatically when demand changes or a server fails.

    The Importance of Rack Density & Cooling

    Because cloud data centers must pack thousands of high-performance servers into a limited space, rack density is a critical factor. Modern high-density racks can house:

    • 40 to 60 blade servers per rack
    • Up to 10,000 CPU cores per data hall

    This extreme density generates massive amounts of heat, requiring advanced cooling technologies, including:

    • Liquid cooling solutions that circulate coolant to dissipate heat.
    • Hot aisle / cold aisle configurations to optimize airflow and prevent overheating.
    • AI-powered energy management to dynamically adjust cooling based on real-time workloads.

    Geographical Distribution of the Cloud.

    The geographical distribution of data centers is a key factor in service quality. Over time, alongside massive data centers, edge data centers and modular data centers have been introduced.

    A modular data center can be expanded over time by adding new units to increase computing power. This strategy is widely used by cloud providers offering public cloud services in newly developing areas, ensuring low-latency service for a limited set of cloud resources.

However, as you might expect, the computing power of a modular container-based data center cannot match that of a large-scale data center.

    The geographical distribution of cloud providers’ data centers follows a two-tiered structure:

• Consumers see only the service delivery areas (referred to as regions).
    • Each region consists of multiple redundant data centers providing high availability at the regional level.

    Cloud providers do not disclose the exact physical location of data centers, mainly for security reasons.

However, users can explore the regional maps published by each cloud provider.

    Regions, once created, gradually expand with additional cloud resources over time.

    The time required to establish a new region depends on the regulatory frameworks of the host country where the data centers for that region are located.

    Due to legislative constraints, data centers must first comply with national regulations before adhering to international standards.

    As a result, each cloud region is effectively tied to data centers within a single country.

    The creation of a new region does not immediately guarantee the availability of all cloud resources present in a long-established region.

    The cloud resource availability map for each region enables the analysis of two critical factors:

    1. Cost control – Identifying available resources within a specific region helps optimize expenses, reducing unnecessary data transfers and avoiding unexpected costs.
    2. Legal risk assessment – If a required cloud resource is unavailable in the designated national region or outside the compliance perimeter dictated by regulations, it may introduce regulatory and compliance risks.

    Moreover, data traffic between different regions, even when hosted within the same public or private cloud, can lead to higher operational costs, making strategic regional resource planning essential for both financial efficiency and regulatory compliance.
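As a back-of-the-envelope illustration only (both the traffic volume and the per-gigabyte rate below are assumptions, not any provider's price list), even modest inter-region traffic adds up over a year:

```python
# Hypothetical figures for illustration only.
gb_per_month = 5_000   # data exchanged between two regions each month
usd_per_gb = 0.02      # assumed inter-region transfer rate

monthly = gb_per_month * usd_per_gb
print(f"~${monthly:,.2f} per month, ~${monthly * 12:,.2f} per year in inter-region traffic")
```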

    What Is the Cloud Made Of?

    What materials are used in cloud computing?

    From a materials science perspective, a server farm consists of various materials used in electronic components, network infrastructure, and cooling systems.

    Key Materials Used in Cloud Infrastructure:

    1. Metals and Minerals:

    • Silicon – Used for semiconductors and processor chips.
    • Copper – Used in wiring and circuit boards due to its high electrical conductivity.
    • Aluminum – Used for server chassis and heat sinks.
    • Gold – Used in connector plating to prevent corrosion.
    • Nickel & Cobalt – Used in batteries and electronic components.
    • Rare Earth Elements – Used in hard disk magnets and high-performance electronics.

    2. Cooling Systems:

    • Water – Used in liquid cooling systems for data centers.
    • Plastic Pipes – Used for cooling distribution systems.
    • Refrigerants – Special chemical compounds used in high-efficiency air conditioning.

    3. Power and Storage Technologies:

    • Lead & Sulfuric Acid – Used in UPS backup batteries (Uninterruptible Power Supply).
    • Lithium – Used in modern lithium-ion batteries for energy storage.
    • Ferromagnetic Materials – Used in transformers and voltage regulators.

    4. Structural and Environmental Materials:

    • Concrete & Steel – Used to construct data center buildings.
    • Thermal Insulation Materials – Used to maintain temperature stability.
    • Lightweight Alloys – Used for server racks.

    5. Sustainable Energy Materials:

    • Solar Panels – Made from silicon and other semiconductors to provide renewable energy.
    • Eco-friendly Materials – Used in new green data centers to minimize environmental impact.

    These materials are essential for constructing and operating cloud data centers, which house thousands of servers running in a stable and energy-efficient environment.

    The Critical Role of Communication Infrastructure in the Cloud

    One of the key challenges of cloud computing is its underlying communication infrastructure.

    In today’s world, the widespread availability of broadband connections has enabled millions of people to continue working remotely during the COVID-19 pandemic. It is clear that without high-speed, large-scale connectivity, this transition would not have been possible.

    I live in Italy, in a town where broadband has been deployed, but it has not yet reached every street—a “no man’s land” where no one intervenes. As a result, my neighbor, just 50 meters away, has full broadband access, while my family does not. (That said, with 60 Mbps download speed, we don’t face too many issues! 😊)

    The Cloud’s Dependency on Communication Infrastructure

    Public cloud services rely heavily on data transport capabilities—both in terms of infrastructure capacity and global and local network integration.

    At the lower layers of the ISO/OSI stack, we find the telecommunications carriers that facilitate global data exchanges.

    Let’s take, for example, data transmission across the Atlantic Ocean, which connects Europe and the United States.

    This massive undersea communication backbone is built on fiber-optic submarine cables, utilizing Dense Wavelength Division Multiplexing (DWDM) technology. DWDM allows multiple data channels to travel through the same fiber, using different wavelengths, significantly boosting bandwidth efficiency.

    Cloud Providers and Network Connectivity

    To ensure seamless and reliable connectivity, cloud service providers leverage a mix of:

    • Global network providers
    • Data transport service providers
    • Internet connectivity providers

    Many cloud vendors implement hybrid network solutions, combining their own private infrastructure with the existing telecommunications networks of local providers.

    A prime example is the MAREA cable, a joint project between Microsoft, Facebook, and Telxius. MAREA is one of the most powerful transatlantic cables, boasting a data transport capacity of 160 terabits per second.

    The Strategic Importance of Interconnection Infrastructures

    Global network of submarine internet cables

    These interconnection infrastructures are not just essential for commercial cloud services—they are strategic assets for national security as well.

    Most of these critical network infrastructures are designed and managed by private companies. However, governments retain some level of control over their operation, particularly when it comes to critical security configurations.

    For a deeper dive into the role of submarine cables in global internet connectivity, you can check out GeoPop’s Italian-language YouTube video: CAVI SOTTOMARINI – la fibra ottica del mondo passa in fondo agli oceani, altro che satelliti – Ep-1 (youtube.com)

    Global network of submarine internet cables (from https://www.submarinecablemap.com/)


    Conclusion

    The cloud is not magic, but the outcome of decades of technological progress and cultural transformation. By uncovering its inner workings—from industrial enablers to global networks—we gain the tools to navigate digital transformation with awareness. The better we understand its foundations, the better we can design, govern, and innovate our future cloud-native ecosystems.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • Essential Cloud Distribution Models


    Cloud distribution models—private, public, community, and hybrid—define how cloud infrastructures are deployed and governed. Beyond service models, these classifications are crucial for compliance, security, and organizational strategy. Learn how NIST definitions shape adoption paths and why hybrid solutions dominate modern ecosystems.


    Cloud Distribution Models

    Beyond the service model of a cloud resource, understanding the cloud distribution model is crucial, as it plays a key role in the application of industry-specific regulations and national or continental security policies.

The NIST SP 800-145 (16) document provides definitions for the different cloud service distribution models.

    Private Cloud

    A cloud infrastructure that is exclusively used by a single organization composed of multiple consumers across various operational locations or branches. It may be owned, operated, and managed by the organization itself, a third party, or a combination of both. The infrastructure can exist either on-premises or off-premises.

    Community Cloud

    A cloud infrastructure that is exclusively used by a specific community of consumers from distinct organizations that share common interests and service objectives (e.g., operational missions, security requirements, policies, or compliance regulations). Ownership, operation, and management can be carried out by one or more organizations within the community, a third party, or a combination of both. The infrastructure may be located on or off the premises of the participating organizations.

    Public Cloud

    A cloud infrastructure that is made available for open use by any individual or business consumer. Ownership, operation, and management may be carried out by a commercial, academic, or governmental organization, or a combination thereof. This infrastructure is located at the cloud provider’s premises.

    Hybrid Cloud

    A cloud infrastructure that combines two or more distinct cloud infrastructures (private, community, or public), which remain unique entities but are connected through standardized or proprietary technology that enables data and application portability. Examples include load balancing across geographically distributed environments, high availability management, and disaster recovery planning for core business services.

    Considerations on Cloud Distribution Models

    Public cloud is often the first model that comes to mind when discussing cloud computing.

    However, it is important to recognize that there are no inherent technological differences that distinguish cloud distribution models at their core; the primary differences lie in contractual agreements.

    In public cloud models, there is a clear distinction between the provider (supplier) and the consumer (client), whereas this distinction becomes increasingly blurred in other distribution models.

    Fundamentally, a public cloud is characterized by the fact that a data center is not contractually dedicated to a single client. Even large enterprises that request dedicated cloud farms adjacent to their data centers still operate in a shared cloud environment.

    Conversely, a private cloud is designed to ensure the highest level of segregation. However, in practice, data must eventually traverse public infrastructure—such as global fiber-optic backbones—to enable communication, even in strictly controlled environments.

    Modern data centers introduce the concept of edge computing, providing localized computing and storage resources closer to the end user. These edge data centers offer limited local capacity while ensuring direct integration with major fiber and satellite communication carriers.

    Despite the high level of isolation an edge data center may provide, it cannot truly be classified as a private cloud if it economically relies on shared communication bandwidth provided by major carriers. Essentially, data transport follows the same principle as cargo transportation: whether by rail, ship, or aircraft, multiple clients share the infrastructure.

    Given these complexities, hybrid cloud solutions have become the most common approach in cloud adoption strategies, allowing organizations to combine multiple cloud models based on evolving needs.

    From the author’s perspective, any cloud distribution model should meet all the requirements defined by NIST to be properly classified as cloud computing.

    One key aspect to focus on is the responsibility matrix associated with each cloud distribution model, which will be further explored in the chapter on cloud regulations.

    The history of cloud computing offers a broad and detailed overview of the key milestones in the development of this technology. While not exhaustive, it provides an interpretation of innovation as a driving force.

We can divide this history into distinct phases.


Conclusion – A Holistic Vision

    Understanding cloud distribution models is more than an academic exercise. It represents a key step in aligning technology with governance, compliance, and business resilience.

    • Public cloud pushes scalability and global reach, but also requires careful risk management.
    • Private cloud promises control and segregation, though it inevitably intersects with shared infrastructures.
    • Community cloud shows the strength of collective approaches, where compliance and missions converge.
    • Hybrid cloud emerges as the pragmatic solution, balancing innovation with regulation and providing flexibility in uncertain times.

    In practice, the choice of a distribution model is rarely absolute. Organizations evolve, regulations tighten, and infrastructures adapt. What matters is not only selecting a model but building an ecosystem capable of integrating them all.

From a cloud-native perspective, distribution models are not silos: they are complementary dimensions of the same continuum. Recognizing this helps enterprises navigate complexity with confidence, ensuring that security, compliance, and innovation can coexist in a sustainable way.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • The Encoding of data


This is a wide post; for a good reading experience, use a tablet, a large smartphone, or a PC.


    The Encoding Of Data

The word data (the plural of datum) is also one of the most frequently used terms in IT jargon, sometimes to the point of losing its true meaning and occasionally taking on an almost “romanticized” interpretation, often distant from reality.

    A cloud-native ecosystem is built around the data lifecycle.

Let’s start with the basics: a robust definition first, then what data encoding means, and finally a first look at a data lake scenario.

Definition of “data” in a cloud-native ecosystem

    The most general definition of the word data, in the context of the cloud and particularly within an informational ecosystem, is as follows:

    Any recorded information used to describe events, objects, or entities, which can be collected, stored, processed, and shared to support decision-making, operational, or analytical processes.

    The way data is managed within an informational ecosystem defines its quality, form, and security.
    This leads to a discussion of the data lifecycle, which is integrated into the lifecycle of the informational ecosystem.

    As we will see later in the book, the operations related to managing the lifecycle of an informational ecosystem, implemented through the DevOps framework, also encompass the data lifecycle.

    The simplest way to represent data is tied to its possible representation in the physical world, particularly within the electronic circuits of our motherboards.

    Let us attempt to provide a basic representation of data at the physical level. Experts in the field are kindly asked to bear with this extreme simplification.

    Data can be said to be represented by an electrical voltage in a specific electronic component within a circuit, called a transistor.

    At a given moment, if a voltage (measured in volts) is present across the transistor, the value of the data is considered to be 1; in the absence of voltage, it is considered 0.

    In reality, this process is much more complex, but for our purposes, this simplification suffices.

    At its most basic physical and electronic level, data can assume only two values: 0 or 1.

    The basic unit of digital information is called a bit.
    A bit can assume only two values, typically represented as 0 or 1, which correspond to two distinct states, such as on/off or true/false, within the binary system used by computers.

    Now, for those willing to follow along, we will attempt to understand how the entire description of our world can be based on just two values: 1 and 0.

    The Encoding of Data

    The definitions of “bit” and “byte” originated around the mid-20th century.

    The term bit, short for “binary digit,” was coined in 1948 by mathematician and engineer Claude Shannon, considered the father of information theory. Shannon introduced the concept in his famous paper “A Mathematical Theory of Communication” (a must-read), describing how information could be represented and manipulated using sequences of bits, or binary digits (0 and 1).

    The term byte was coined by Werner Buchholz in 1956 during the development of the IBM 7030 Stretch computer. Initially, a byte represented a variable group of bits, but it was later standardized as a sequence of 8 bits. The byte thus became the basic unit of information storage in modern computer systems.

    To better understand what a data unit represented by bits and an archive containing them might look like, imagine a household chest of drawers used for storing clothing—a compact and practical solution.

    In one drawer, you might store socks; in another, undergarments; and in yet another, scarves, and so forth.

    The Simplest Form of Data: The Bit

    The simplest form of data is the bit. A bit can take only two values: 0 or 1.

    If you wanted to represent four types of clothing, you could decide to encode them using two bits. Placing two bits side by side, each of which can assume two values, yields four possible combinations:

    • 00
    • 01
    • 10
    • 11

    To these four combinations, you could assign specific meanings, such as types of clothing:

    • 00: Socks
    • 01: Undergarments
    • 10: Scarves
    • 11: Shirts

    With this, you can encode the type of clothing—that is, the type of data—but you still lack information about quantity or position.

    Representing Quantity

    To include quantity, you could align additional bits, deciding that their combination of 0 and 1 represents a numerical value. You would need to establish a calculation rule or standard so that whoever writes or reads the data interprets the value consistently.

    For instance, you might define a standard using a table:

    Table 1 – A bit coding standard

Bit 1 | Bit 2 | Bit 3 | Bit 4 | Value
0 | 0 | 0 | 0 | 0
1 | 0 | 0 | 0 | 1
1 | 1 | 0 | 0 | 2
1 | 1 | 1 | 0 | 3

    With four bits, you can achieve 2⁴ = 16 possible combinations, allowing you to assign 16 different meanings.

    One widely used standard over time is represented in the following table:

    Table 2 – Binary encoding as a power of 2

Value | Bit 1 | Bit 2 | Bit 3 | Bit 4
0 | 0 | 0 | 0 | 0
1 | 0 | 0 | 0 | 1
2 | 0 | 0 | 1 | 0
3 | 0 | 0 | 1 | 1
4 | 0 | 1 | 0 | 0
5 | 0 | 1 | 0 | 1
6 | 0 | 1 | 1 | 0
7 | 0 | 1 | 1 | 1
8 | 1 | 0 | 0 | 0
9 | 1 | 0 | 0 | 1
10 | 1 | 0 | 1 | 0
11 | 1 | 0 | 1 | 1
12 | 1 | 1 | 0 | 0
13 | 1 | 1 | 0 | 1
14 | 1 | 1 | 1 | 0
15 | 1 | 1 | 1 | 1

By carefully interpreting the table, you can see that the leftmost bit (Bit 1) is the most significant bit (MSB), because changing its value alters the overall number the most.

Conversely, the rightmost bit (Bit 4) is the least significant bit (LSB), because changing its value alters the overall number only slightly.

    This binary numeric encoding has been used in computers since the first practical application of the Von Neumann cycle.
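The table above can be reproduced in a few lines of Python, which also makes the different weight of each bit position explicit:

```python
# Reproduce Table 2: every 4-bit pattern and its decimal value.
for value in range(16):
    print(f"{value:2d} -> {value:04b}")

# Flipping the most significant bit changes the value by 8;
# flipping the least significant bit changes it by only 1.
n = 0b0011          # 3
print(n ^ 0b1000)   # 11 (MSB flipped)
print(n ^ 0b0001)   # 2  (LSB flipped)
```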

    Combining Standards for Comprehensive Representation

    Returning to our chest of drawers, suppose you want to indicate both the type and quantity of clothing. By adding another four bits, as in the numeric encoding table, you can represent up to 15 items for each type of clothing.

    For example:

110011 means 3 shirts (11 = shirts, 0011 = 3).

Expanding further, you can also add the location of the items. Imagine the chest of drawers has four levels, each with a single drawer; two bits are enough to encode the drawer. As before, a shared standard has to be established:

Bits | Drawer
00 | First
01 | Second
10 | Third
11 | Fourth

The drawer becomes the first two bits of the sequence:

01110011 means 3 shirts in the second drawer (01 = second drawer, 11 = shirts, 0011 = 3).

Now consider the following sequences:

01110011
10001100
00010101
11101011

In which drawer are my three shirts?

A chest of drawers (with socks) invented by Old (ChatGPT Dall-e)
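The whole convention (drawer, type of clothing, quantity) fits in a few lines of Python; the dictionaries below simply mirror the tables defined above.

```python
DRAWERS  = {"00": "first", "01": "second", "10": "third", "11": "fourth"}
CLOTHING = {"00": "socks", "01": "undergarments", "10": "scarves", "11": "shirts"}

def decode(bits: str) -> str:
    """Interpret an 8-bit record as <drawer><clothing type><quantity>."""
    drawer, kind, qty = bits[:2], bits[2:4], bits[4:]
    return f"{int(qty, 2)} {CLOTHING[kind]} in the {DRAWERS[drawer]} drawer"

for record in ["01110011", "10001100", "00010101", "11101011"]:
    print(record, "->", decode(record))
# The first record answers the question: 3 shirts in the second drawer.
```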

The First Data Protocols

    The first widely recognized protocol for data communication is the Telegraphy Code, which laid the groundwork for encoding and transmitting information.
    However, when we refer specifically to the modern era of digital data, the Transmission Control Protocol (TCP) and Internet Protocol (IP) stand out as foundational elements.

    Early Data Communication Standards

1. Morse Code (1837): Developed by Samuel Morse, it was one of the earliest methods to encode and transmit data as sequences of dots and dashes over telegraph systems. While not a protocol in the modern sense, it established the concept of encoding information for transmission.
2. Baudot Code (1870): Created by Émile Baudot, this telegraphy code was a more advanced system for encoding text into binary-like sequences of signals. It represents an early attempt at creating standardized data representation.

    The Rise of Modern Protocols

1. ASCII (American Standard Code for Information Interchange, 1963): Developed as a character encoding standard for text, allowing different systems to communicate text data reliably.
2. ARPANET’s NCP (Network Control Protocol, 1970s): Predecessor to TCP/IP, NCP was the first protocol suite used to manage data communication on ARPANET, the forerunner of the modern internet.
3. TCP/IP (1980s): The Transmission Control Protocol (TCP) ensures reliable data transmission by managing packet sequencing, error correction, and acknowledgments, while the Internet Protocol (IP) governs the routing of data packets across networks. Together, TCP/IP became the backbone of modern data exchange, enabling the internet to flourish (a minimal sketch follows below).
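To make TCP's role concrete, here is a minimal, self-contained sketch using Python's standard socket API: a tiny echo server and a client exchange a few bytes over a reliable TCP connection on the local machine (the port number is arbitrary).

```python
import socket
import threading
import time

def echo_server(port: int) -> None:
    """A tiny TCP server: accept one connection and echo back whatever it receives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))   # TCP delivers the bytes reliably and in order

threading.Thread(target=echo_server, args=(5050,), daemon=True).start()
time.sleep(0.2)                             # give the server a moment to start listening

with socket.create_connection(("127.0.0.1", 5050)) as client:
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))                # b'hello over TCP/IP'
```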

    Key Contributions of TCP/IP:

    • Standardization: Provided a universal framework for data transmission across heterogeneous systems.
    • Scalability: Supported the rapid growth of interconnected networks.
    • Interoperability: Allowed devices from different manufacturers to communicate seamlessly.

    Significance Today

    The first protocols, while rudimentary, laid the conceptual foundation for modern data communication.

    Today, TCP/IP and its derivatives (like HTTP, FTP, and DNS) remain essential for data exchange in the cloud, IoT, and AI ecosystems.

    These protocols demonstrate how early innovations continue to influence the digital infrastructure of the modern world.

    The screen from which you are reading this document uses a highly sophisticated encoding system based on standards in which the data stream stored in an archive is transmitted in sequences of bits to the screen. The screen translates this stream into an interpretative form suitable for the human eye and brain. These could be characters, still images, or sequences of images; they are all sequences of bits.

    It is clear that those who store the data and those who represent the data must use the same standard, which in this case is called a protocol.

    Similarly, data storage from my mind as I write to the computer happens using a tool—the keyboard—that interprets the key I press and transforms it into a sequence of bits. The same operation is performed by your smartphone’s camera, albeit with a different protocol.

    The fact remains that all this implies two things: data is transferred, and during the transfer, a common protocol must be defined among the writer, the storage medium, and the reader.

    For example, “This sentence translated into bits would become:”

    01010100 01101000 01101001 01110011 00100000 01110011 01100101 01101110 01110100 01100101 01101110 01100011 01100101 00100000 01110100 01110010 01100001 01101110 01110011 01101100 01100001 01110100 01100101 01100100 00100000 01101001 01101110 01110100 01101111 00100000 01100010 01101001 01110100 01110011 00100000 01110111 01101111 01110101 01101100 01100100 00100000 01100010 01100101 01100011 01101111

    In the box, the binary code representation includes additional spaces between sequences to make it more interpretable and comparable with the words of the sentence.

    As a pure sequence of bits, it becomes:

    010101000110100001101001011100110010000001110011011001010110111001110100011001010110111001100011011001010010000001110100011100100110000101101110011100110110110001100001011101000110010101100100001000000110100101101110011101000110111100100000011000100110100101110100011100110010000001110111011011.

    The translation was carried out using an international standard based on the ASCII format (American Standard Code for Information Interchange).

    In modern systems, storing and transmitting data requires shared standards to interpret electrical signals that encode binary data. Your display, for instance, converts stored binary sequences into visual content, such as text or images, using specific protocols.

The ASCII (American Standard Code for Information Interchange) standard is one such example, where each character corresponds to a unique 7- or 8-bit code. Similarly, Unicode extends this to support characters from multiple languages and symbols, using up to 32 bits per character.

    These standards form the backbone of how data is encoded, stored, transmitted, and represented across digital systems, enabling seamless communication and functionality in modern ecosystems.

    The Weight of Data

Each character in the sentence is encoded as a set of either 8 or 16 bits (plain ASCII or its international extension, known as Unicode).

    A set of 8 bits in sequence is called a byte.

    Since writing in binary format is cumbersome, a different notation called Hexadecimal was introduced over time.

The same sentence in hexadecimal notation becomes much more compact:

546869732073656e74656e6365207472616e736c6174656420696e746f206269747320776f756c64206265636f6d65

The sentence occupies 47 bytes.
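These representations are easy to reproduce: the snippet below derives the spaced binary form, the compact hexadecimal form, and the byte count directly from the sentence.

```python
sentence = "This sentence translated into bits would become"

encoded = sentence.encode("ascii")                    # one byte per character
print(" ".join(f"{byte:08b}" for byte in encoded))    # spaced binary form
print(encoded.hex())                                  # compact hexadecimal form
print(len(encoded), "bytes")                          # 47 bytes
```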

    By the end, the book might take up around 700,000 characters/bytes, including spaces. However, this will depend on the encoding adopted in the publication format (e.g., EPUB, MOBI, KFD).

A keyboard must encode more than 256 characters, the maximum number of combinations of 1s and 0s in 8 bits, because it also has to handle special characters and various typographic forms (fonts).

To handle the encoding of special characters, such as those specific to different languages, a standard called Unicode was developed. Each character can be encoded with 8 to 32 bits, that is, up to 4 bytes.

We refer to this as a multi-standard system because various methods of interpreting bits have emerged. Thankfully, these are converging into three main standards: UTF-8, UTF-16, and UTF-32.
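The practical difference between these standards is how many bytes a given character occupies, which is easy to verify:

```python
for ch in ["A", "è", "€", "🙂"]:
    print(ch,
          len(ch.encode("utf-8")),      # 1, 2, 3 and 4 bytes respectively
          len(ch.encode("utf-16-le")),  # 2, 2, 2 and 4 bytes
          len(ch.encode("utf-32-le")))  # always 4 bytes per character
```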

    Any type of information becomes data once it is encoded into sequences of bytes through standards or rules.

    We have understood that data can be represented as sequences of bits, which in computing terms are sometimes referred to as bit strings or byte strings (in the latter case, multiply the number by eight to get the bit count).

    Just as there is a conversion table for computational power, there is one for bytes, which allows us to compactly represent large numbers:

    • 1 byte = 8 bits
    • 1 kilobyte (KB) = 1,024 bytes
    • 1 megabyte (MB) = 1,024 kilobytes = 1,024 × 1,024 = 1,048,576 bytes
    • 1 gigabyte (GB) = 1,024 megabytes = 1,024 × 1,048,576 = 1,073,741,824 bytes
    • 1 terabyte (TB) = 1,024 gigabytes
    • 1 petabyte (PB) = 1,024 terabytes
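
    As a small illustration of how these factors of 1,024 are applied, here is a hypothetical helper function (a sketch, not a standard utility) that converts a raw byte count into the binary units listed above:

```python
def human_readable(num_bytes: int) -> str:
    """Convert a raw byte count into binary units (1 KB = 1,024 bytes)."""
    units = ["bytes", "KB", "MB", "GB", "TB", "PB"]
    value = float(num_bytes)
    for unit in units:
        if value < 1024 or unit == units[-1]:
            return f"{value:.2f} {unit}"
        value /= 1024

print(human_readable(1_073_741_824))  # 1.00 GB
```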

    Now, when someone tells you that your smartphone has 1 GB of storage space, you’ll know it can hold 1,073,741,824 bytes (roughly a billion characters), and you’ll also be able to compare it with other devices.

    We’ve come to realize that all the information in an informational ecosystem can be represented as sequences of bits and interpreted through shared protocols and standards.

    Could we enumerate them all?

    If the ecosystem were static, perhaps yes, within a finite time.

    Unfortunately, information in a digital ecosystem is constantly transforming; it is created, sometimes it disappears, it changes type while retaining meaning, and protocols and standards evolve. In fact, informational ecosystems struggle to eliminate information and tend to accumulate it, often for regulatory reasons.

    A roll of punched paper tape

    Data storage

    Where Are Data Stored? And With What Devices?

    At the beginning of the history of computing, data were stored on paper. Yes, paper punched with holes, each hole corresponding to a 1.

    Even the response was sometimes printed, including on punched tape.

    I recall a scene from the British sci-fi series U.F.O., in which Commander Ed Straker (played by Ed Bishop) often read computer responses printed on sheets of paper.

    Later, data storage transitioned to tape archives, used both for providing input to the computer and for saving information. During calculations, information was maintained in bit form by specific electrical devices, which were initially mechanical, then electromechanical (relays), and finally electronic (first vacuum tubes, then transistors).

    In fact, the advent of the transistor was essential for both digital and analog electronics. Invented in 1947 by physicists John Bardeen, Walter Brattain, and William Shockley, it revolutionized technology and paved the way for the computer era.

    Even today, one of the backup systems available is a magnetic tape device.

    Given the high initial cost of maintaining data, a distinction was made between transient data and persistent data.

    Persistent data refers to information that is not lost when the computer is powered off.

    Nowadays, when we turn off one of our devices, it doesn’t fully shut down. To ensure a complete shutdown, we would have to remove all the onboard batteries and, in some cases, even the CPUs. However, in the early days, powering off the computer would result in the complete loss of all data.

    Today, thanks to the high redundancy present at all stages of data transmission, losing data has become much less likely. Nevertheless, storing data persistently still incurs a significantly higher cost than storing it temporarily, and it also comes with the energy cost of keeping bits set to 1.

    For this reason, our devices are equipped with two types of storage commonly referred to as RAM (Random Access Memory) and hard drives (Hard Disk, HD).

    There are also read-only or write-once memories, such as ROM (Read-Only Memory) chips, which have near-zero energy maintenance costs.

    RAM has limited storage space compared to hard drives.

    Over the past 50 years, consumer-grade storage has evolved while adhering to this combinatory model.
    New forms of storage have emerged, but they remain highly expensive and are reserved for niche scenarios.

    The two storage modes address two primary needs and represent a trade-off: read/write speed and storage capacity.

    Physical data storage continues to innovate, employing different technologies with varying storage costs per byte.

    Temporary storage systems are also referred to as short-term memory, while persistent ones are considered long-term memory.

    Even long-term memory degrades over time. For data that needs to persist for more than a year, more robust solutions must be used to prevent wear. Unfortunately, these second-tier storage systems are significantly slower and more expensive per byte handled.

    RAM has followed a development model similar to what Moore’s Law predicts, with steady improvements in performance and capacity.

    Technologies like DDR (Double Data Rate) memory have increased speed and efficiency.

    Recent innovations, such as 3D NAND and other advanced technologies, continue to enhance memory density and performance.

    However, the rate of growth in RAM density is slowing as chip miniaturization approaches physical limits.

    Emerging memory technologies, such as Phase-Change Memory (PCM) or Magnetoresistive RAM (MRAM), could provide alternative solutions to overcome these limitations.

    Table 3 – Data Storage Technologies provides an overview (not exhaustive) of the main technologies in use today for persistently storing data—our byte strings.


    Table 3 – Data Storage Technologies

    | Storage | Technology Name | Description | Characteristics | Usage |
    |---|---|---|---|---|
    | Local | HDD | Mechanical hard drives with rotating platters | Higher capacity for cost, slower than SSDs | Long-term storage, backups |
    | Local | SSD | Solid-state drives based on flash memory | High speed, no moving parts, more expensive capacity | High-performance storage, used in computers and servers |
    | Local | NAS | Network-connected storage device | Centralized, file sharing, easy to manage | Backup and file sharing for small and medium-sized businesses |
    | Local | SAN | Network of storage devices connected via a dedicated network | High performance, scalable, more expensive and complex | Managing large volumes of data in large enterprises |
    | Cloud | Public Cloud | Storage provided by third parties over the Internet | High scalability, remote access, pay-per-use model | Backup and globally accessible storage |
    | Cloud | Private Cloud | Cloud infrastructure dedicated to a single organization | Greater control and security compared to public cloud | Storage for sensitive or regulated data |
    | Local | Magnetic Tape | Technology that uses magnetic tapes for storage | Very low cost per byte, slow access time | Long-term storage with infrequent access |
    | Local | 3D NAND Memory | Flash memory stacked vertically for greater capacity and performance | Higher density and performance compared to traditional NAND | High-end SSD storage for enhanced performance |

    Table 4 – Wear Resistance of Storage Media

    | Storage Technology | Speed | Security | Cost | Wear Resistance |
    |---|---|---|---|---|
    | HDD | 2 | 2 | 1 | 1 |
    | SSD | 5 | 3 | 2 | 2 |
    | NAS | 3 | 3 | 3 | 3 |
    | SAN | 4 | 4 | 4 | 4 |
    | Magnetic Tape | 1 | 4 | 4 | 5 |

    Legend
    Evaluation criteria range from 1 to 5, with 1 = low and 5 = high.

    Storage technology can be chosen based on the storage objective, balancing characteristics such as speed, capacity, scalability, redundancy, and relative costs.

    In an informational ecosystem, data continuously move up and down the layers of the ISO/OSI stack. This results in logical and application-level differences in how our data are organized, even when using the same physical storage medium.

    It’s important to consider that, apart from specialized informational systems, a typical system has a RAM capacity that is orders of magnitude smaller than its long-term storage capacity, and that persistent storage, while much larger, is significantly slower. This disparity is primarily due to production costs, which are tied to differing engineering approaches.

    The engineering of long-term storage devices is simpler (though still highly sophisticated), which makes them less expensive per byte but bulkier than short-term memory devices.

    Thus, short-term memory manages transient information, while long-term memory handles persistent information.

    Where is this ebook information stored?

    The information in this book is likely stored on the same device you’re using to read it, in long-term memory. However, the page you’re currently reading is probably held in short-term memory until you turn to the next or previous page.

    The program you’re using to read it transfers sequences of bytes to the hardware (the graphics card), which displays them on your screen using a specific standard.

    Over the years, models and algorithms have been developed to organize data in both short-term and long-term memory, aiming to address various user challenges. Writing data generally costs more processing time (lower throughput) than reading it. This transfer cost is greater in long-term memory, but it also applies to high-speed short-term memory.

    Even transferring data across the various buses of a motherboard has a cost, as does transmitting data through external communication channels of your device.

    All information passes through communication channels that use different technologies but must still obey physical laws governing the transport of information. Bandwidth, latency, noise, and signal energy loss are the continuous challenges faced by electronic and telecommunications engineering.

    These physical laws can be explained by comparing a data flow to a liquid flow. For instance, the capacity of a pipe indicates the maximum number of liters per second it can sustain. Under normal gravity and pressure conditions, this capacity cannot be exceeded. If the pipe is widened, turbulence may occur; if too many pipes are placed together, space constraints arise, and they cannot be infinite.

    Additionally, if there isn’t enough stored potential energy, when you open the faucet the water might not arrive immediately or in large quantity. A relatable example: when you return home after a long time away and reopen the main water valve (usually located near the water meter), it takes time for the water to reach the kitchen faucet. That delay is an example of latency.

    Applied to your data packets (made of bytes), this means there is a definite time they take to reach you. If that time exceeds the interval at which the data is updated, critical information is lost.

    This is why, for instance, a bank might build a cloud farm near its service center (and not the other way around). It’s also one of the main reasons to develop an entire cloud-native ecosystem, ensuring minimal latency even in multiregional architectures.

    Saving 1 GB of data from your smartphone onto another device takes time. This time depends on many factors, which, for better or worse, consistently affect the flow of information.
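
    As a purely illustrative sketch, an ideal transfer time can be estimated as size divided by bandwidth; the link speeds below are assumed nominal rates and ignore latency, protocol overhead, and contention:

```python
# Ideal transfer time = size / bandwidth (no overhead, no retransmissions).
size_bits = 1 * 1024**3 * 8  # 1 GB expressed in bits

links = {  # nominal rates, chosen only for illustration
    "Bluetooth (~2 Mbit/s)": 2e6,
    "Wi-Fi (~100 Mbit/s)": 100e6,
    "Gigabit Ethernet (1 Gbit/s)": 1e9,
}

for name, rate in links.items():
    print(f"{name}: about {size_bits / rate:.0f} seconds")
```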

    The cloud is designed to achieve better efficiency in transferring information compared to traditional ecosystems, but only if the data remains within the cloud itself.

    Keep in mind that the cloud provider is contractually responsible for many of the key characteristics of data services: speed, preservation, high availability, etc. Depending on the service model you’ve purchased, you’ll receive guarantees with specific response times.

    The cloud was born to manage data effectively and efficiently with a reduced Total Cost of Ownership (TCO), provided the data remains within the same cloud.

    This is a crucial factor to consider in the adoption process.

    The Emergence of Data Lakes

    One of the cloud’s evolving strengths in recent years is the concept of a data lake.

    If we think back to our chest-of-drawers example, it represents a data set modeled by its structure (drawers, levels, and types of clothing). For a long time, programmers developed algorithms aimed at addressing two different needs: fast data writing and fast data reading (especially recently written data).

    Various algorithms have been created to make reading related data more efficient. It was discovered that efficient data reading depends on how related information is stored. This led to algorithms designed to store information about, for instance, the state of the chest at a given time, enabling fast querying of related data. These specialized algorithms operate on a data model guided by context.

    Would the same algorithm work equally well in a different context, such as organizing a party? Over time, it was discovered that there is no universal algorithm that fits all scenarios, despite claims from advocates of one approach or another.

    Here, we are essentially discussing the engines behind databases, algorithms initially developed to respond to specific software and hardware contexts that have since become de facto standards (much like binary encoding).

    The cloud has changed this landscape, enabling a different approach. The data lake concept, which is applicable only in the cloud, promises to decouple data storage structures (the chest with socks and shirts) from the context, which is considered only during queries.

    A data lake introduces an intermediary layer between the storage world and the world of searching and reading archived data.
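
    To make the idea of decoupling storage from context more concrete, here is a deliberately tiny Python sketch of the schema-on-read principle behind data lakes. The folder name, file name, and record fields are invented for illustration; a real data lake relies on object storage, columnar formats, and query engines rather than a local directory:

```python
import json
import pathlib

# Toy sketch of "schema-on-read": raw records are stored as-is,
# and a structure (the "drawers") is imposed only at query time.
lake = pathlib.Path("lake")
lake.mkdir(exist_ok=True)

# Ingest: heterogeneous records land in the lake without a fixed schema.
records = [
    {"type": "sock", "drawer": 1, "color": "black"},
    {"type": "shirt", "drawer": 3, "size": "M"},
    {"type": "invoice", "amount": 120.0, "currency": "EUR"},
]
(lake / "raw.jsonl").write_text(
    "\n".join(json.dumps(r) for r in records), encoding="utf-8"
)

# Query: only now do we decide what counts as "clothing".
rows = [json.loads(line) for line in (lake / "raw.jsonl").read_text().splitlines()]
clothing = [r for r in rows if "drawer" in r]
print(clothing)
```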

    Currently, each provider characterizes these data lake resources in its own specialized way, which raises concerns about vendor lock-in.

    In response to the lock-in challenge, open solutions for data lake architectures have started to emerge in recent years. These aim to address vendor lock-in by providing standardized, interoperable frameworks that allow organizations to build and manage data lakes across different cloud providers. Open data lake solutions promote portability, flexibility, and collaboration, enabling businesses to maintain control over their data while leveraging the benefits of a cloud-native approach.

    It is essential to standardize and open up data lake architectures to ensure portability across different cloud providers.

    I will discuss the challenges of data lake adoption in a future post.

    Human vs. AI Data Interpretation

    A final consideration is the difference between human and AI-driven data interpretation.

    For human users, structured information is essential, and data is often represented through abstractions such as pages, chapters, and formatted documents.

    However, AI does not require these abstractions. Instead, AI processes data based on context and meaning rather than its visual or structural representation.

    For example, a machine-learning model does not perceive a document in terms of pages and chapters; instead, it processes semantic relationships and extracts insights directly from unstructured data.


    Holistic Vision

    From the very first spark of a voltage in a transistor to the massive architectures of today’s cloud-native ecosystems, data has always been the common thread. What begins as a simple binary choice—0 or 1—evolves through protocols, standards, and storage systems into the vast informational flows that power our world.

    Encoding is not just a technical process: it is the language through which humans and machines agree to describe reality. From Shannon’s definition of the bit to ASCII and Unicode, to TCP/IP and the protocols that sustain the Internet, each layer of encoding adds meaning, reliability, and universality.

    Storage, too, has mirrored our collective journey. From punched tape to RAM, SSDs, and data lakes, every new step has been driven by the need to preserve information, ensure resilience, and make it accessible at scale. The cloud amplifies this paradigm, reducing latency, lowering costs, and enabling organizations to build ecosystems where data does not merely survive but thrives as a living resource.

    In this sense, encoding is more than a technical detail: it is the foundation of trust in digital ecosystems. Without a shared standard, communication collapses; without persistent storage, memory disappears; without scalable protocols, innovation stalls.

    The holistic vision is clear:

    Data encoding, storage, and transmission are not isolated layers of technology but interconnected dimensions of a single living ecosystem.

    They enable organizations to transform raw signals into knowledge, knowledge into decisions, and decisions into actions.

    Cloud-native ecosystems are the natural evolution of this continuum—where encoding is not static but adaptive, where storage expands dynamically, and where protocols evolve to keep pace with the complexity of human creativity and artificial intelligence.

    Ultimately, to understand data encoding is to understand how we, as a society, give shape, order, and meaning to the information age.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • Key references on cloud-native ecosystems

    Key references on cloud-native ecosystems


    This page is a living bibliography for Exploras.cloud and the book Exploring Cloud-Native Ecosystems.
    Its purpose is to give readers direct access to the sources, frameworks, and organizations mentioned in the book and blog, while also offering extended context for further exploration.
    Unlike a static list, this page will be continuously updated: each reference may grow with notes, links, and commentary over time.

    Reference Table

    | # | Reference | Extended Description |
    |---|---|---|
    | 1 | Emory Goizueta Business School. Ramnath K. Chellappa. Website | One of the first to formally define “cloud computing” (1997), emphasizing economics as a driver for computing boundaries. His work bridges IT, economics, and digital business strategy. |
    | 2 | Wikipedia. Analytical Engine | Charles Babbage’s 1837 design for a programmable mechanical computer. It introduced memory, arithmetic logic, and conditional branching—ideas that anticipate modern computing. |
    | 3 | Wikipedia. George Stibitz | Built early relay-based digital computers (1937), demonstrating remote computation—precursor to networked and cloud-based computing. |
    | 4 | Wikipedia. Howard Hathaway Aiken | Creator of the Harvard Mark I (1944), one of the first automatic calculators. Pioneered large-scale computer engineering. |
    | 5 | Wikipedia. John von Neumann | Proposed the “stored-program” model that underpins most computer architectures. His contributions define modern computing logic. |
    | 6 | Wikipedia. Von Neumann architecture | Describes a computer design where instructions and data share memory. Still the basis of most CPUs today. |
    | 7 | MIT OpenCourseWare. The von Neumann Model | A video course explaining von Neumann’s architecture in a didactic way. Useful for foundational understanding. |
    | 8 | Wikipedia. History of cloud computing | Outlines the shift from mainframes and distributed computing to modern cloud. Traces milestones in virtualization, SaaS, and IaaS. |
    | 9 | Rackspace | Early managed hosting provider, instrumental in developing commercial IaaS solutions and co-founding OpenStack. |
    | 10 | Akamai Technologies | Pioneer in Content Delivery Networks (CDNs), enabling global scale, speed, and resilience—key for cloud adoption. |
    | 11 | Salesforce. History | Introduced SaaS at scale (1999), proving the viability of subscription-based enterprise software. |
    | 12 | Wikipedia. AWS | Founded 2006, AWS revolutionized IT with elastic infrastructure and pay-as-you-go pricing. |
    | 13 | Abandy, Roosevelt. The History of Microsoft Azure | Chronicles Azure’s launch (2010) and its evolution into a leading cloud platform. |
    | 14 | Google. Announcing App Engine for Business | Official blog post introducing Google App Engine for enterprise workloads. |
    | 15 | Wikipedia. Microsoft Azure | Entry describing Azure services, history, and growth. |
    | 16 | NIST. SP 800-145 – Definition of Cloud Computing | Canonical definition of cloud computing (2011): essential for regulatory, policy, and academic work. |
    | 17 | Meier, Reto. History of Google Cloud | Annotated narrative of Google Cloud’s growth, strategy, and milestones. |
    | 18 | Microsoft. Ten Years of Microsoft 365 | Reflects on Microsoft’s SaaS transformation through Office 365 and Teams. |
    | 19 | Wikipedia. OSI Model | Conceptual framework for networking protocols, fundamental to understanding modern internet and cloud communication. |
    | 20 | Wikipedia. Internet Protocol Suite | Basis of the internet (TCP/IP), providing transport and application standards for all cloud ecosystems. |
    | 21 | European Commission. Maritime Data Framework | EU project applying digital frameworks to maritime data—an example of sectoral digital ecosystems. |
    | 22 | EU. ESG rating activities | EU resources on environmental, social, and governance (ESG) standards. Increasingly tied to cloud sustainability. |
    | 23 | Green-Cloud EU Strategy | Policy initiative for greener, sustainable cloud adoption in Europe. |
    | 24 | AWS. Netflix Case Study | Case study showing how Netflix scales globally using AWS infrastructure. |
    | 25 | Google Cloud. Coca-Cola Case Study | Describes Coca-Cola’s modernization via Google Cloud for data-driven marketing. |
    | 26 | Microsoft Azure. Royal Dutch Shell | Explains Shell’s adoption of Azure for energy transition and digital platforms. |
    | 27 | AWS. Capital One Case Study | Bank using AWS for secure, regulated workloads and innovation. |
    | 28 | Wired. Dropbox’s Exodus from AWS | Narrative on Dropbox’s decision to exit AWS and build its own infrastructure. |
    | 29 | Microsoft Azure. Volkswagen Manufacturing | Azure case study: digital manufacturing and Industry 4.0. |
    | 30 | AWS. Airbnb Case Study | Airbnb’s use of AWS to scale a global marketplace. |
    | 31 | Wikipedia. DevOps | Collaborative methodology bridging development and operations. Core to cloud-native culture. |
    | 32 | Kim, Behr, Spafford. The Phoenix Project. (2018, IT Revolution) | Influential novel about DevOps transformation in a struggling IT org. |
    | 33 | Axelos. What is ITIL | Overview of ITIL, the global framework for IT Service Management. |
    | 34 | Tefertiller, Jeffrey. ITIL 4: The New Frontier. (2021) | Explains ITIL 4’s innovations and alignment with agile, DevOps, and value streams. |
    | 35 | ISO. ISO/IEC 27001:2022 | Standard for Information Security Management Systems (ISMS), essential in cloud governance. |
    | 36 | EU. Fighting Cybercrime | Article outlining the EU’s evolving cybersecurity regulations. |
    | 37 | MIT OCW. NP-Complete Problems | Lecture notes introducing NP-complete problems, critical to computational theory. |
    | 38 | DORA. Get Better at Getting Better | Site of DevOps Research and Assessment (DORA), creators of key DevOps performance metrics. |
    | 39 | Kim, Humble, Debois, Willis. The DevOps Handbook. | Definitive handbook on DevOps culture, tools, and leadership. |
    | 40 | J.R. Storment & Mike Fuller. Cloud FinOps. | Foundational book on financial operations in cloud environments. |
    | 41 | NIST | The U.S. National Institute of Standards and Technology, setting essential frameworks for cloud, cybersecurity, and digital trust. |
    | 42 | NIST. SP 800-192 – Access Control Policies | Framework for testing and verifying access control policies. |
    | 43 | NIST. SP 800-207 – Zero Trust Architecture | Core reference on Zero Trust, published 2020. |
    | 44 | NIST. SP 800-59 – National Security Systems | Guidance for classifying systems as National Security Systems. |
    | 45 | NIST. SP 800-63 – Digital Identity Guidelines | Framework for authentication, identity assurance, and federation. |
    | 46 | Terraform. Landing Zones Framework | Cloud Adoption Framework for Terraform landing zones: governance, hierarchy, and automation. |
    | 47 | DORA State of DevOps Report | Annual industry-leading survey analyzing DevOps performance metrics. |


    Holistic Vision

    Cloud service models are more than layers of technology — they represent choices in how organizations design their informational ecosystems. Each model shapes not only cost and scalability, but also governance, compliance, and the ability to innovate.

    Seen holistically, IaaS, PaaS, and SaaS are not rigid categories but strategic levers in the architecture of an information system. The real challenge is balancing speed with resilience, abstraction with control, efficiency with responsibility.

    Ultimately, the question is not “Which model is best?” but “Which model best aligns with our people, processes, and long-term vision?”
    In this way, service models become part of a larger ecosystem — one that connects technology with organizational culture, regulatory frameworks, and human creativity.


