Author: Mauro Giuliano

  • Cloud Portability and Specialized Resources (Lock-In)


    “Cloud portability is a key factor in reducing vendor lock-in and enabling freedom of choice across cloud providers. From containerized applications to storage virtualization and data abstraction, organizations can design architectures that work across AWS, Azure, and Google Cloud. This post explores the challenges of specialized resources and the strategies that make true interoperability possible.”


    Cloud Portability and Specialized Resources (Lock-In)

    Many public cloud providers have started introducing specialized cloud resources that are native to a specific cloud platform and not portable to other cloud environments.

    This chapter focuses specifically on public cloud because, in the author’s view, the introduction of proprietary cloud-native specialized resources undermines one of the fundamental potential benefits of cloud computing: portability.

    In a private cloud or traditional data center, the model typically involves deploying applications on virtualized host systems based on well-known operating systems, using service models comparable to IaaS or PaaS.

    With the widespread adoption of containerization, which is dominant in cloud-native ecosystems, portability—the ability to run the same container across different cloud environments—has become relatively feasible, especially when Platform Engineering, Infrastructure as Code (IaC), and DevOps practices are applied.

    In this model, the source code inside the container remains agnostic of the lower ISO/OSI layers through which it executes. Ideally, developers writing the source code should also be unaware of these underlying layers.

    This approach enables portability via automated DevOps deployment pipelines, which are discussed in detail later in this book.

    However, SaaS solutions have always been cloud-native and inherently lack portability. Examples include Microsoft 365 (formerly Office 365) and Google Workspace, which enable integration and interoperability but do not facilitate migration between cloud providers.

    For example, if an enterprise builds micro-automation and computational processes around Microsoft 365 or Google Workspace, data migration complexity increases significantly, leading to potential loss of information.

    AWS S3 as a Notable Exception

    AWS S3, which originated as a native storage service within the Amazon cloud ecosystem, has evolved into a widely adopted storage standard.

Thanks to third-party, S3-compatible storage implementations and client libraries, the S3 storage interface can now be used outside of Amazon's cloud environment, making it one of the rare cloud resources that transcends provider boundaries.
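To make this tangible, here is a minimal sketch, assuming the boto3 library and an S3-compatible object store (for example, a self-hosted or third-party service). The endpoint URL, credentials, and bucket name are placeholders, not real resources.

```python
# A minimal sketch, assuming boto3 and an S3-compatible object store.
# The endpoint URL, credentials, and bucket name below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.internal:9000",  # non-AWS, S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# The same S3 API calls work whether the backend is AWS S3 or a compatible store.
s3.put_object(Bucket="portability-demo", Key="hello.txt", Body=b"portable bytes")
obj = s3.get_object(Bucket="portability-demo", Key="hello.txt")
print(obj["Body"].read())
```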

    Cloud Providers and Data Lock-In

    Today, the primary focus in cloud computing revolves around data management, and major cloud providers actively work to keep data within their own ecosystems.

    The reason is simple: data lifecycle management drives the highest consumption of fundamental cloud resources such as compute, storage, and data transfer.

    With the rise of Generative AI (Gen AI) services, both data consumption and cloud dependency have increased. As a result, cloud providers now offer highly specialized SaaS/PaaS cloud-native resources.

Notable examples of such cloud-specific services include Google BigQuery on GCP and Microsoft Fabric on Azure.

Currently, there is no portability between these solutions in either direction. This lack of interoperability results in vendor lock-in, forcing businesses to commit to a specific cloud ecosystem.

    Cloud Resource Classification: Beyond Service and Distribution Models

    To accurately classify cloud resources, portability must be considered in addition to service model and distribution model.

    Can Modern Architectures Reduce Lock-In?

    The answer is yes, but not for all scenarios.

In a future post, we will explore architectural strategies that can minimize dependence on a single cloud provider.

We can, however, anticipate one core strategy for enabling cloud portability: virtualizing the ISO/OSI storage layers.

    While containers enable software portability, ensuring data portability requires a similar approach.

    The key is to virtualize the ISO/OSI layers responsible for data storage and adopt an abstraction model that decouples data storage from data lifecycle management.

    By implementing layered abstractions, organizations can design virtualized ecosystems where both software and data operate independently from the underlying cloud infrastructure.
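As an illustration of such an abstraction, the following sketch defines a small storage interface that application code can depend on. The class and method names are purely illustrative; a provider-specific implementation (for example, one backed by an S3-compatible store) could satisfy the same interface without touching the application logic.

```python
# A minimal sketch of a storage abstraction, using typing.Protocol.
# Class and method names are illustrative, not a standard API.
from typing import Protocol


class ObjectStore(Protocol):
    def write(self, key: str, data: bytes) -> None: ...
    def read(self, key: str) -> bytes: ...


class LocalStore:
    """Filesystem-backed implementation (e.g., private cloud or on-premises)."""

    def __init__(self, root: str) -> None:
        self.root = root

    def write(self, key: str, data: bytes) -> None:
        with open(f"{self.root}/{key}", "wb") as f:
            f.write(data)

    def read(self, key: str) -> bytes:
        with open(f"{self.root}/{key}", "rb") as f:
            return f.read()


def archive_report(store: ObjectStore, report: bytes) -> None:
    # Application code depends only on the abstraction, not on a provider SDK;
    # an S3-compatible or provider-specific implementation can be swapped in.
    store.write("reports/latest.bin", report)
```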


    Holistic Vision

    Cloud portability is more than a technical choice—it is a strategic foundation for building resilient and future-proof ecosystems. Specialized resources may offer innovation, but they also increase dependency and risk. By combining containerization, storage abstraction, and modern DevOps practices, organizations can strike a balance between leveraging advanced cloud-native services and preserving the freedom to evolve across providers. In this perspective, portability is not only about moving workloads—it is about safeguarding autonomy, enabling innovation, and ensuring that both human and AI-driven systems can thrive in a truly interoperable digital ecosystem.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • The Encoding of data




    The Encoding Of Data

The word data (and related terms such as data points) is one of the most frequently used terms in IT jargon, sometimes to the point of losing its true meaning and occasionally taking on an almost "romanticized" interpretation, often distant from reality.

    A cloud-native ecosystem is built around the data lifecycle.

Let's start with the basics: first a robust definition of data, then what data encoding means, and finally a first look at a data lake scenario.

Definition of "data" in a cloud-native ecosystem

    The most general definition of the word data, in the context of the cloud and particularly within an informational ecosystem, is as follows:

    Any recorded information used to describe events, objects, or entities, which can be collected, stored, processed, and shared to support decision-making, operational, or analytical processes.

    The way data is managed within an informational ecosystem defines its quality, form, and security.
    This leads to a discussion of the data lifecycle, which is integrated into the lifecycle of the informational ecosystem.

    As we will see later in the book, the operations related to managing the lifecycle of an informational ecosystem, implemented through the DevOps framework, also encompass the data lifecycle.

    The simplest way to represent data is tied to its possible representation in the physical world, particularly within the electronic circuits of our motherboards.

    Let us attempt to provide a basic representation of data at the physical level. Experts in the field are kindly asked to bear with this extreme simplification.

    Data can be said to be represented by an electrical voltage in a specific electronic component within a circuit, called a transistor.

    At a given moment, if a voltage (measured in volts) is present across the transistor, the value of the data is considered to be 1; in the absence of voltage, it is considered 0.

    In reality, this process is much more complex, but for our purposes, this simplification suffices.

    At its most basic physical and electronic level, data can assume only two values: 0 or 1.

    The basic unit of digital information is called a bit.
    A bit can assume only two values, typically represented as 0 or 1, which correspond to two distinct states, such as on/off or true/false, within the binary system used by computers.

    Now, for those willing to follow along, we will attempt to understand how the entire description of our world can be based on just two values: 1 and 0.

    The Encoding of Data

    The definitions of “bit” and “byte” originated around the mid-20th century.

The term bit, short for "binary digit," was introduced in print in 1948 by mathematician and engineer Claude Shannon, considered the father of information theory (Shannon credited the coinage to his colleague John W. Tukey). Shannon presented the concept in his famous paper "A Mathematical Theory of Communication" (a must-read), describing how information could be represented and manipulated using sequences of bits, or binary digits (0 and 1).

    The term byte was coined by Werner Buchholz in 1956 during the development of the IBM 7030 Stretch computer. Initially, a byte represented a variable group of bits, but it was later standardized as a sequence of 8 bits. The byte thus became the basic unit of information storage in modern computer systems.

    To better understand what a data unit represented by bits and an archive containing them might look like, imagine a household chest of drawers used for storing clothing—a compact and practical solution.

    In one drawer, you might store socks; in another, undergarments; and in yet another, scarves, and so forth.

    The Simplest Form of Data: The Bit

    The simplest form of data is the bit. A bit can take only two values: 0 or 1.

    If you wanted to represent four types of clothing, you could decide to encode them using two bits. Placing two bits side by side, each of which can assume two values, yields four possible combinations:

    • 00
    • 01
    • 10
    • 11

    To these four combinations, you could assign specific meanings, such as types of clothing:

    • 00: Socks
    • 01: Undergarments
    • 10: Scarves
    • 11: Shirts

    With this, you can encode the type of clothing—that is, the type of data—but you still lack information about quantity or position.

    Representing Quantity

    To include quantity, you could align additional bits, deciding that their combination of 0 and 1 represents a numerical value. You would need to establish a calculation rule or standard so that whoever writes or reads the data interprets the value consistently.

    For instance, you might define a standard using a table:

    Table 1 – A bit coding standard

Bit 1 | Bit 2 | Bit 3 | Bit 4 | Value
0 | 0 | 0 | 0 | 0
1 | 0 | 0 | 0 | 1
1 | 1 | 0 | 0 | 2
1 | 1 | 1 | 0 | 3

    With four bits, you can achieve 2⁴ = 16 possible combinations, allowing you to assign 16 different meanings.

    One widely used standard over time is represented in the following table:

    Table 2 – Binary encoding as a power of 2

Value | Bit 1 | Bit 2 | Bit 3 | Bit 4
0 | 0 | 0 | 0 | 0
1 | 0 | 0 | 0 | 1
2 | 0 | 0 | 1 | 0
3 | 0 | 0 | 1 | 1
4 | 0 | 1 | 0 | 0
5 | 0 | 1 | 0 | 1
6 | 0 | 1 | 1 | 0
7 | 0 | 1 | 1 | 1
8 | 1 | 0 | 0 | 0
9 | 1 | 0 | 0 | 1
10 | 1 | 0 | 1 | 0
11 | 1 | 0 | 1 | 1
12 | 1 | 1 | 0 | 0
13 | 1 | 1 | 0 | 1
14 | 1 | 1 | 1 | 0
15 | 1 | 1 | 1 | 1

By carefully interpreting the table, you can see that the leftmost bit (Bit 1) is the most significant bit (MSB), because changing its value alters the overall number the most.

Conversely, the rightmost bit (Bit 4) is the least significant bit (LSB), because changing its value alters the overall number the least.

    This binary numeric encoding has been used in computers since the first practical application of the Von Neumann cycle.
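For readers who like to verify the rule behind Table 2, here is a tiny sketch (plain Python, using only the conventions in the table above) showing that each bit position weighs a power of two:

```python
# Each bit position in Table 2 weighs a power of two: the leftmost bit (MSB)
# weighs the most, the rightmost bit (LSB) the least.
bits = "1011"
value = sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))
print(value)         # 11 (1*8 + 0*4 + 1*2 + 1*1)
print(int(bits, 2))  # same result using Python's built-in binary parser
```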

    Combining Standards for Comprehensive Representation

    Returning to our chest of drawers, suppose you want to indicate both the type and quantity of clothing. By adding another four bits, as in the numeric encoding table, you can represent up to 15 items for each type of clothing.

    For example:

110011 means 3 shirts (11 = shirts, 0011 = 3).

    Expanding further, you can also add the location of the items. Imagine the chest has four drawers, each represented by two bits:

Bits | Drawer
00 | First
01 | Second
10 | Third
11 | Fourth

    Now, the first two bits of your sequence encode the drawer:

01110011 means 3 shirts in the second drawer (01 = second drawer, 11 = shirts, 0011 = 3).

Now let's restate the location encoding with my own clothing. Imagine the chest of drawers has four levels, each with a single drawer; two bits are enough to encode the level.

    As before, I need to establish a shared standard with you:

    00: First level
    01: Second level
    10: Third level
    11: Fourth level

    I decide to include the level as the first two bits of the previous sequence, resulting in something like the following:

    01110011
    10001100
    00010101
    11101011

On which level are my three shirts?
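For those who prefer to let the machine answer, here is a minimal sketch that decodes the sequences above using exactly the conventions we agreed on (two bits for the level, two bits for the clothing type, four bits for the quantity); the dictionary names are illustrative only.

```python
# Decoding the sequences above with the conventions agreed in this chapter:
# 2 bits for the level, 2 bits for the clothing type, 4 bits for the quantity.
LEVELS = {"00": "first", "01": "second", "10": "third", "11": "fourth"}
CLOTHING = {"00": "socks", "01": "undergarments", "10": "scarves", "11": "shirts"}


def decode(sequence: str) -> str:
    level, kind, qty = sequence[:2], sequence[2:4], sequence[4:]
    return f"{int(qty, 2)} {CLOTHING[kind]} on the {LEVELS[level]} level"


for seq in ["01110011", "10001100", "00010101", "11101011"]:
    print(seq, "->", decode(seq))
# 01110011 -> 3 shirts on the second level (the answer to the question above)
```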

Figure: A chest of drawers (with socks), image generated with ChatGPT (DALL·E).

The First Data Protocols

The first widely recognized protocols for data communication were the telegraph codes, which laid the groundwork for encoding and transmitting information.
However, when we refer specifically to the modern era of digital data, the Transmission Control Protocol (TCP) and Internet Protocol (IP) stand out as foundational elements.

    Early Data Communication Standards

    1. Morse Code (1837):
      1. Developed by Samuel Morse, it was one of the earliest methods to encode and transmit data as sequences of dots and dashes over telegraph systems.
      2. While not a protocol in the modern sense, it established the concept of encoding information for transmission.
    2. Baudot Code (1870):
      1. Created by Émile Baudot, this telegraphy code was a more advanced system for encoding text into binary-like sequences of signals.
      2. It represents an early attempt at creating standardized data representation.

    The Rise of Modern Protocols

    1. ASCII (American Standard Code for Information Interchange, 1963):
      1. Developed as a character encoding standard for text, allowing different systems to communicate text data reliably.
    2. ARPANET’s NCP (Network Control Protocol, 1970s):
      1. Predecessor to TCP/IP, NCP was the first protocol suite used to manage data communication on ARPANET, the forerunner of the modern internet.
    3. TCP/IP (1980s):
      1. Transmission Control Protocol (TCP): Ensures reliable data transmission by managing packet sequencing, error correction, and acknowledgments.
      2. Internet Protocol (IP): Governs the routing of data packets across networks.
      3. Together, TCP/IP became the backbone of modern data exchange, enabling the internet to flourish

    Key Contributions of TCP/IP:

    • Standardization: Provided a universal framework for data transmission across heterogeneous systems.
    • Scalability: Supported the rapid growth of interconnected networks.
    • Interoperability: Allowed devices from different manufacturers to communicate seamlessly.

    Significance Today

    The first protocols, while rudimentary, laid the conceptual foundation for modern data communication.

    Today, TCP/IP and its derivatives (like HTTP, FTP, and DNS) remain essential for data exchange in the cloud, IoT, and AI ecosystems.

    These protocols demonstrate how early innovations continue to influence the digital infrastructure of the modern world.

    The screen from which you are reading this document uses a highly sophisticated encoding system based on standards in which the data stream stored in an archive is transmitted in sequences of bits to the screen. The screen translates this stream into an interpretative form suitable for the human eye and brain. These could be characters, still images, or sequences of images; they are all sequences of bits.

    It is clear that those who store the data and those who represent the data must use the same standard, which in this case is called a protocol.

Similarly, storing data from my mind into the computer as I write happens through a tool—the keyboard—which interprets the key I press and transforms it into a sequence of bits. The same operation is performed by your smartphone's camera, albeit with a different protocol.

    The fact remains that all this implies two things: data is transferred, and during the transfer, a common protocol must be defined among the writer, the storage medium, and the reader.

    For example, “This sentence translated into bits would become:”

    01010100 01101000 01101001 01110011 00100000 01110011 01100101 01101110 01110100 01100101 01101110 01100011 01100101 00100000 01110100 01110010 01100001 01101110 01110011 01101100 01100001 01110100 01100101 01100100 00100000 01101001 01101110 01110100 01101111 00100000 01100010 01101001 01110100 01110011 00100000 01110111 01101111 01110101 01101100 01100100 00100000 01100010 01100101 01100011 01101111

    In the box, the binary code representation includes additional spaces between sequences to make it more interpretable and comparable with the words of the sentence.

    As a pure sequence of bits, it becomes:

    010101000110100001101001011100110010000001110011011001010110111001110100011001010110111001100011011001010010000001110100011100100110000101101110011100110110110001100001011101000110010101100100001000000110100101101110011101000110111100100000011000100110100101110100011100110010000001110111011011.

    The translation was carried out using an international standard based on the ASCII format (American Standard Code for Information Interchange).

    In modern systems, storing and transmitting data requires shared standards to interpret electrical signals that encode binary data. Your display, for instance, converts stored binary sequences into visual content, such as text or images, using specific protocols.

The ASCII (American Standard Code for Information Interchange) standard is one such example, where each character corresponds to a unique code that fits in a single byte. Unicode extends this to support characters from multiple languages and symbols, using up to 32 bits (4 bytes) per character.

    These standards form the backbone of how data is encoded, stored, transmitted, and represented across digital systems, enabling seamless communication and functionality in modern ecosystems.

    The Weight of Data

Each character in the sentence is encoded as a set of 8 bits; international extensions of ASCII, such as Unicode, may use 16 bits or more per character.

    A set of 8 bits in sequence is called a byte.

    Since writing in binary format is cumbersome, a different notation called Hexadecimal was introduced over time.

The same sequence in hexadecimal becomes much more compact:

546869732073656e74656e6365207472616e736c6174656420696e746f206269747320776f756c64206265636f6d65

The sentence occupies 47 bytes.
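The whole exercise can be reproduced with a few lines of standard Python; nothing here is specific to any library, and the sentence is the same one used above.

```python
# Reproducing the example above with the Python standard library only.
text = "This sentence translated into bits would become"

raw = text.encode("ascii")                 # the byte string: one byte per character
bits = " ".join(f"{b:08b}" for b in raw)   # binary form, grouped per character
hexa = raw.hex()                           # the same bytes in hexadecimal

print(len(raw))    # 47 bytes
print(bits[:35])   # 01010100 01101000 01101001 01110011
print(hexa[:16])   # 546869732073656e
```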

    By the end, the book might take up around 700,000 characters/bytes, including spaces. However, this will depend on the encoding adopted in the publication format (e.g., EPUB, MOBI, KFD).

A keyboard encodes more than 256 characters (the maximum number of combinations of 1s and 0s in 8 bits) because it must also encode special characters and various typographic forms (fonts).

To handle the encoding of special characters, such as those specific to different languages, a standard called Unicode was developed. Each character can be encoded with 8 to 32 bits, that is, up to 4 bytes.

We refer to this as a multi-standard system because various methods of interpreting bits have emerged. Thankfully, these are converging into three main standards: UTF-8, UTF-16, and UTF-32.
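A small sketch makes the difference between these standards visible: the same character occupies a different number of bytes depending on the encoding chosen (plain Python, no external libraries; the sample characters are arbitrary).

```python
# The same character needs a different number of bytes depending on the standard.
for ch in ["A", "è", "€"]:
    print(
        ch,
        len(ch.encode("utf-8")),      # "A" -> 1 byte, "è" -> 2 bytes, "€" -> 3 bytes
        len(ch.encode("utf-16-le")),  # 2 bytes each (all three are in the Basic Multilingual Plane)
        len(ch.encode("utf-32-le")),  # 4 bytes each, always
    )
```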

    Any type of information is considered data that is encoded into sequences of bytes through standards or rules.

We have understood that data can be represented as sequences of bits, which in computing terms are sometimes referred to as bit strings or byte strings (in the latter case, multiply the number by eight to get the bit count).

    Just as there is a conversion table for computational power, there is one for bytes, which allows us to compactly represent large numbers:

• 1 byte = 8 bits
• 1 kilobyte (KB) = 1,024 bytes
• 1 megabyte (MB) = 1,024 kilobytes = 1,024 × 1,024 = 1,048,576 bytes
• 1 gigabyte (GB) = 1,024 megabytes = 1,024 × 1,048,576 = 1,073,741,824 bytes
• 1 terabyte (TB) = 1,024 gigabytes
• 1 petabyte (PB) = 1,024 terabytes
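As a quick check of the table above (plain Python, binary multiples):

```python
# A quick check of the conversion table above, using binary (1,024-based) multiples.
KB = 1_024
MB = 1_024 * KB
GB = 1_024 * MB
print(KB, MB, GB)  # 1024 1048576 1073741824
```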

Now, when someone tells you that your smartphone has 1 GB of storage space, you'll know it can hold 1,073,741,824 bytes (roughly a billion characters), and you'll also be able to compare it with other devices.

    We’ve come to realize that all the information in an informational ecosystem can be represented as sequences of bits and interpreted through shared protocols and standards.

    Could we enumerate them all?

    If the ecosystem were static, perhaps yes, within a finite time.

    Unfortunately, information in a digital ecosystem is constantly transforming; it is created, sometimes it disappears, it changes type while retaining meaning, and protocols and standards evolve. In fact, informational ecosystems struggle to eliminate information and tend to accumulate it, often for regulatory reasons.

Figure: a roll of white tape with holes, known as punched tape, an early form of data storage.

    Where Are Data Stored? And With What Devices?

At the beginning of the history of computing, data were stored on paper. Yes, paper punched with holes, each hole corresponding to a 1.

    Even the response was sometimes printed, including on punched tape.

    I recall a scene from the British sci-fi series U.F.O., in which Commander Ed Straker (played by Ed Bishop) often read computer responses printed on sheets of paper.

Later, data storage transitioned to tape archives, used both for providing input to the computer and for saving information. During calculations, information was maintained in bit form by specific devices, which initially were mechanical, then electromechanical (relays), and eventually electronic (first vacuum tubes, then transistors).

    In fact, the advent of the transistor was essential for both digital and analog electronics. Invented in 1947 by physicists John Bardeen, Walter Brattain, and William Shockley, it revolutionized technology and paved the way for the computer era.

    Even today, one of the backup systems available is a magnetic tape device.

    Given the high initial cost of maintaining data, a distinction was made between transient data and persistent data.

    Persistent data refers to information that is not lost when the computer is powered off.

    Nowadays, when we turn off one of our devices, it doesn’t fully shut down. To ensure a complete shutdown, we would have to remove all the onboard batteries and, in some cases, even the CPUs. However, in the early days, powering off the computer would result in the complete loss of all data.

    Today, thanks to the high redundancy present at all stages of data transmission, losing data has become much less likely. Nevertheless, storing data persistently still incurs a significantly higher cost than storing it temporarily, and it also comes with the energy cost of keeping bits set to 1.

    For this reason, our devices are equipped with two types of storage commonly referred to as RAM (Random Access Memory) and hard drives (Hard Disk, HD).

    There are also write-once memories, such as ROMs (Read-Only Memory), which have near-zero energy maintenance costs.

    RAM has limited storage space compared to hard drives.

    Over the past 50 years, consumer-grade storage has evolved while adhering to this combinatory model.
    New forms of storage have emerged, but they remain highly expensive and are reserved for niche scenarios.

    The two storage modes address two primary needs and represent a trade-off: read/write speed and storage capacity.

Modern physical data storage continuously innovates, employing different technologies with varying storage costs per byte.

    Temporary storage systems are also referred to as short-term memory, while others are considered long-term memory.

    Even long-term memory degrades over time. For data that needs to persist for more than a year, more robust solutions must be used to prevent wear. Unfortunately, these second-tier storage systems are significantly slower and more expensive per byte handled.

    RAM has followed a development model similar to what Moore’s Law predicts, with steady improvements in performance and capacity.

    Technologies like DDR (Double Data Rate) memory have increased speed and efficiency.

    Recent innovations, such as 3D NAND and other advanced technologies, continue to enhance memory density and performance.

    However, the rate of growth in RAM density is slowing as chip miniaturization approaches physical limits.

    Emerging memory technologies, such as Phase-Change Memory (PCM) or Magnetoresistive RAM (MRAM), could provide alternative solutions to overcome these limitations.

    Table 3 – Data Storage Technologies provides an overview (not exhaustive) of the main technologies in use today for persistently storing data—our byte strings.


    Table 3 – Data Storage Technologies

Storage | Technology Name | Description | Characteristics | Usage
Local | HDD | Mechanical hard drives with rotating platters | Higher capacity for cost, slower than SSDs | Long-term storage, backups
Local | SSD | Solid-state drives based on flash memory | High speed, no moving parts, more expensive capacity | High-performance storage, used in computers and servers
Local | NAS | Network-connected storage device | Centralized, file sharing, easy to manage | Backup and file sharing for small and medium-sized businesses
Local | SAN | Network of storage devices connected via a dedicated network | High performance, scalable, more expensive and complex | Managing large volumes of data in large enterprises
Cloud | Public Cloud | Storage provided by third parties over the Internet | High scalability, remote access, pay-per-use model | Backup and globally accessible storage
Cloud | Private Cloud | Cloud infrastructure dedicated to a single organization | Greater control and security compared to public cloud | Storage for sensitive or regulated data
Local | Magnetic Tape | Technology that uses magnetic tapes for storage | Very low cost per byte, slow access time | Long-term storage with infrequent access
Local | 3D NAND Memory | Flash memory stacked vertically for greater capacity and performance | Higher density and performance compared to traditional NAND | High-end SSD storage for enhanced performance

    Table 4 – Wear Resistance of Storage Media

Storage Technology | Speed | Security | Cost | Wear Resistance
HDD | 2 | 2 | 1 | 1
SSD | 5 | 3 | 2 | 2
NAS | 3 | 3 | 3 | 3
SAN | 4 | 4 | 4 | 4
Magnetic Tape | 1 | 4 | 4 | 5

    Legend
    Evaluation criteria range from 1 to 5, with 1 = low and 5 = high.

    Storage technology can be chosen based on the storage objective, balancing characteristics such as speed, capacity, scalability, redundancy, and relative costs.

In an informational ecosystem, data continuously move across many layers of the ISO/OSI stack. This results in logical and application-level differences in how our data are organized, even when using the same physical storage medium.

It's important to consider that, apart from specialized informational systems, a typical system has a RAM capacity many times smaller than its long-term storage capacity, and second-tier persistent storage is larger still but significantly slower. This disparity is primarily due to production costs, which are tied to differing engineering approaches.

    While the engineering of long-term storage devices is simpler—though still highly sophisticated—it is less expensive but bulkier compared to short-term memory devices.

    Thus, short-term memory manages transient information, while long-term memory handles persistent information.

    Where is this ebook information stored?

The information in my book is likely stored on the same device you're using to read it, in long-term memory. However, the page you're currently reading is probably stored in short-term memory until you turn to the next or previous page.

    The program you’re using to read it transfers sequences of bytes to the hardware (the graphics card), which displays them on your screen using a specific standard.

Over the years, models and algorithms have been developed to organize data in both short-term and long-term memory, aiming to address various user challenges. Writing data generally costs more processing time (lower throughput) than reading it. This transfer cost is greater in long-term memory but also applies to high-speed short-term memory.

    Even transferring data across the various buses of a motherboard has a cost, as does transmitting data through external communication channels of your device.

    All information passes through communication channels that use different technologies but must still obey physical laws governing the transport of information. Bandwidth, latency, noise, and signal energy loss are the continuous challenges faced by electronic and telecommunications engineering.

    These physical laws can be explained by comparing a data flow to a liquid flow. For instance, the capacity of a pipe indicates the maximum number of liters per second it can sustain. Under normal gravity and pressure conditions, this capacity cannot be exceeded. If the pipe is widened, turbulence may occur; if too many pipes are placed together, space constraints arise, and they cannot be infinite.

    Additionally, if there isn’t enough stored potential energy, when you open the faucet, the water might not arrive immediately or in large quantities. This delay is analogous to bandwidth latency. A relatable example is when you return home after a long time away and reopen the main water valve (usually located near the water meter). It takes time for the water to reach the kitchen faucet if the valve is outside your house. This is an example of bandwidth latency.

Applied to your data packets (made of bytes), this means there is a specific time they take to reach you. If this delay exceeds the interval at which the data is updated, critical information can be lost.

    This is why, for instance, a bank might build a cloud farm near its service center (and not the other way around). It’s also one of the main reasons to develop an entire cloud-native ecosystem, ensuring minimal latency even in multiregional architectures.

Saving 1 GB of data from your smartphone onto another device takes time. This time depends on many factors, which, for better or worse, consistently affect the flow of information.

    The cloud is designed to achieve better efficiency in transferring information compared to traditional ecosystems, but only if the data remains within the cloud itself.

    Keep in mind that the cloud provider is contractually responsible for many of the key characteristics of data services: speed, preservation, high availability, etc. Depending on the service model you’ve purchased, you’ll receive guarantees with specific response times.

    The cloud was born to manage data effectively and efficiently with a reduced Total Cost of Ownership (TCO), provided the data remains within the same cloud.

    This is a crucial factor to consider in the adoption process.

    The Emergence of Data Lakes

    One of the cloud’s evolving strengths in recent years is the concept of a data lake.

    If we think back to our chest-of-drawers example, it represents a data set modeled by its structure (drawers, levels, and types of clothing). For a long time, programmers developed algorithms aimed at addressing two different needs: fast data writing and fast data reading (especially recently written data).

    Various algorithms have been created to make reading related data more efficient. It was discovered that efficient data reading depends on how related information is stored. This led to algorithms designed to store information about, for instance, the state of the chest at a given time, enabling fast querying of related data. These specialized algorithms operate on a data model guided by context.

    Would the same algorithm work equally well in a different context, such as organizing a party? Over time, it was discovered that there is no universal algorithm that fits all scenarios, despite claims from advocates of one approach or another.

    Here, we are essentially discussing the engines behind databases, algorithms initially developed to respond to specific software and hardware contexts that have since become de facto standards (much like binary encoding).

    The cloud has changed this landscape, enabling a different approach. The data lake concept, which is applicable only in the cloud, promises to decouple data storage structures (the chest with socks and shirts) from the context, which is considered only during queries.

    A data lake introduces an intermediary layer between the storage world and the world of searching and reading archived data.
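As a simple way to picture this intermediary layer, consider the following sketch: the query logic is written once, and the storage location is just a parameter. It assumes pandas (plus a filesystem driver such as s3fs for object-storage URIs); the paths and column names are illustrative only, not a specific product's API.

```python
# A sketch of the intermediary-layer idea: the query is written once, and the
# storage location is just a parameter. Assumes pandas (plus s3fs for object
# storage URIs); paths and column names are illustrative, not a product API.
import pandas as pd


def top_customers(dataset_uri: str) -> pd.DataFrame:
    # The same query works on a local file or an S3-compatible bucket, because
    # the storage layer is abstracted behind the URI.
    orders = pd.read_parquet(dataset_uri)
    return (
        orders.groupby("customer_id")["amount"]
        .sum()
        .sort_values(ascending=False)
        .head(10)
        .reset_index()
    )


# top_customers("./lake/orders.parquet")
# top_customers("s3://my-data-lake/orders/")
```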

Currently, these data lake capabilities are specializations of particular cloud providers, raising concerns about vendor lock-in.

In response to the lock-in challenge, open solutions for data lake architectures have started to emerge in recent years. These aim to address vendor lock-in by providing standardized, interoperable frameworks that allow organizations to build and manage data lakes across different cloud providers. Open data lake solutions promote portability, flexibility, and collaboration, enabling businesses to maintain control over their data while leveraging the benefits of a cloud-native approach.

    It is essential to standardize and open up data lake architectures to ensure portability across different cloud providers.

I will discuss the challenges of data lake adoption in a future post.

    Human vs. AI Data Interpretation

    A final consideration is the difference between human and AI-driven data interpretation.

    For human users, structured information is essential, and data is often represented through abstractions such as pages, chapters, and formatted documents.

    However, AI does not require these abstractions. Instead, AI processes data based on context and meaning rather than its visual or structural representation.

For example, a machine-learning model does not perceive a document in terms of pages and chapters—instead, it processes semantic relationships and extracts insights directly from unstructured data.


    Holistic Vision

From the very first spark of a voltage in a transistor to the massive architectures of today’s cloud-native ecosystems, data has always been the common thread. What begins as a simple binary choice—0 or 1—evolves through protocols, standards, and storage systems into the vast informational flows that power our world.

    Encoding is not just a technical process: it is the language through which humans and machines agree to describe reality. From Shannon’s definition of the bit to ASCII and Unicode, to TCP/IP and the protocols that sustain the Internet, each layer of encoding adds meaning, reliability, and universality.

    Storage, too, has mirrored our collective journey. From punched tape to RAM, SSDs, and data lakes, every new step has been driven by the need to preserve information, ensure resilience, and make it accessible at scale. The cloud amplifies this paradigm, reducing latency, lowering costs, and enabling organizations to build ecosystems where data does not merely survive but thrives as a living resource.

    In this sense, encoding is more than a technical detail: it is the foundation of trust in digital ecosystems. Without a shared standard, communication collapses; without persistent storage, memory disappears; without scalable protocols, innovation stalls.

    The holistic vision is clear:

    Data encoding, storage, and transmission are not isolated layers of technology but interconnected dimensions of a single living ecosystem.

    They enable organizations to transform raw signals into knowledge, knowledge into decisions, and decisions into actions.

    Cloud-native ecosystems are the natural evolution of this continuum—where encoding is not static but adaptive, where storage expands dynamically, and where protocols evolve to keep pace with the complexity of human creativity and artificial intelligence.

    Ultimately, to understand data encoding is to understand how we, as a society, give shape, order, and meaning to the information age.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • Key references on cloud-native ecosystems


    This post is a living bibliography for cloud-native ecosystems, continuously updated with references, frameworks, standards, and case studies.


    Key references on cloud-native ecosystems

    This page is a living bibliography for Exploras.cloud and the book Exploring Cloud-Native Ecosystems.
    Its purpose is to give readers direct access to the sources, frameworks, and organizations mentioned in the book and blog, while also offering extended context for further exploration.
    Unlike a static list, this page will be continuously updated: each reference may grow with notes, links, and commentary over time.

    Reference Table

# | Reference | Extended Description
1 | Emory Goizueta Business School. Ramnath K. Chellappa. Website | One of the first to formally define “cloud computing” (1997), emphasizing economics as a driver for computing boundaries. His work bridges IT, economics, and digital business strategy.
2 | Wikipedia. Analytical Engine | Charles Babbage’s 1837 design for a programmable mechanical computer. It introduced memory, arithmetic logic, and conditional branching—ideas that anticipate modern computing.
3 | Wikipedia. George Stibitz | Built early relay-based digital computers (1937), demonstrating remote computation—precursor to networked and cloud-based computing.
4 | Wikipedia. Howard Hathaway Aiken | Creator of the Harvard Mark I (1944), one of the first automatic calculators. Pioneered large-scale computer engineering.
5 | Wikipedia. John von Neumann | Proposed the “stored-program” model that underpins most computer architectures. His contributions define modern computing logic.
6 | Wikipedia. Von Neumann architecture | Describes a computer design where instructions and data share memory. Still the basis of most CPUs today.
7 | MIT OpenCourseWare. The von Neumann Model | A video course explaining von Neumann’s architecture in a didactic way. Useful for foundational understanding.
8 | Wikipedia. History of cloud computing | Outlines the shift from mainframes and distributed computing to modern cloud. Traces milestones in virtualization, SaaS, and IaaS.
9 | Rackspace | Early managed hosting provider, instrumental in developing commercial IaaS solutions and co-founding OpenStack.
10 | Akamai Technologies | Pioneer in Content Delivery Networks (CDNs), enabling global scale, speed, and resilience—key for cloud adoption.
11 | Salesforce. History | Introduced SaaS at scale (1999), proving the viability of subscription-based enterprise software.
12 | Wikipedia. AWS | Founded 2006, AWS revolutionized IT with elastic infrastructure and pay-as-you-go pricing.
13 | Abandy, Roosevelt. The History of Microsoft Azure | Chronicles Azure’s launch (2010) and its evolution into a leading cloud platform.
14 | Google. Announcing App Engine for Business | Official blog post introducing Google App Engine for enterprise workloads.
15 | Wikipedia. Microsoft Azure | Entry describing Azure services, history, and growth.
16 | NIST. SP 800-145 – Definition of Cloud Computing | Canonical definition of cloud computing (2011): essential for regulatory, policy, and academic work.
17 | Meier, Reto. History of Google Cloud | Annotated narrative of Google Cloud’s growth, strategy, and milestones.
18 | Microsoft. Ten Years of Microsoft 365 | Reflects on Microsoft’s SaaS transformation through Office 365 and Teams.
19 | Wikipedia. OSI Model | Conceptual framework for networking protocols, fundamental to understanding modern internet and cloud communication.
20 | Wikipedia. Internet Protocol Suite | Basis of the internet (TCP/IP), providing transport and application standards for all cloud ecosystems.
21 | European Commission. Maritime Data Framework | EU project applying digital frameworks to maritime data—an example of sectoral digital ecosystems.
22 | EU. ESG rating activities | EU resources on environmental, social, and governance (ESG) standards. Increasingly tied to cloud sustainability.
23 | Green-Cloud EU Strategy | Policy initiative for greener, sustainable cloud adoption in Europe.
24 | AWS. Netflix Case Study | Case study showing how Netflix scales globally using AWS infrastructure.
25 | Google Cloud. Coca-Cola Case Study | Describes Coca-Cola’s modernization via Google Cloud for data-driven marketing.
26 | Microsoft Azure. Royal Dutch Shell | Explains Shell’s adoption of Azure for energy transition and digital platforms.
27 | AWS. Capital One Case Study | Bank using AWS for secure, regulated workloads and innovation.
28 | Wired. Dropbox’s Exodus from AWS | Narrative on Dropbox’s decision to exit AWS and build its own infrastructure.
29 | Microsoft Azure. Volkswagen Manufacturing | Azure case study: digital manufacturing and Industry 4.0.
30 | AWS. Airbnb Case Study | Airbnb’s use of AWS to scale a global marketplace.
31 | Wikipedia. DevOps | Collaborative methodology bridging development and operations. Core to cloud-native culture.
32 | Kim, Behr, Spafford. The Phoenix Project. (2018, IT Revolution) | Influential novel about DevOps transformation in a struggling IT org.
33 | Axelos. What is ITIL | Overview of ITIL, the global framework for IT Service Management.
34 | Tefertiller, Jeffrey. ITIL 4: The New Frontier. (2021) | Explains ITIL 4’s innovations and alignment with agile, DevOps, and value streams.
35 | ISO. ISO/IEC 27001:2022 | Standard for Information Security Management Systems (ISMS), essential in cloud governance.
36 | EU. Fighting Cybercrime | Article outlining the EU’s evolving cybersecurity regulations.
37 | MIT OCW. NP-Complete Problems | Lecture notes introducing NP-complete problems, critical to computational theory.
38 | DORA. Get Better at Getting Better | Site of DevOps Research and Assessment (DORA), creators of key DevOps performance metrics.
39 | Kim, Humble, Debois, Willis. The DevOps Handbook. | Definitive handbook on DevOps culture, tools, and leadership.
40 | J.R. Storment & Mike Fuller. Cloud FinOps. | Foundational book on financial operations in cloud environments.
41 | NIST | The U.S. National Institute of Standards and Technology, setting essential frameworks for cloud, cybersecurity, and digital trust.
42 | NIST. SP 800-192 – Access Control Policies | Framework for testing and verifying access control policies.
43 | NIST. SP 800-207 – Zero Trust Architecture | Core reference on Zero Trust, published 2020.
44 | NIST. SP 800-59 – National Security Systems | Guidance for classifying systems as National Security Systems.
45 | NIST. SP 800-63 – Digital Identity Guidelines | Framework for authentication, identity assurance, and federation.
46 | Terraform. Landing Zones Framework | Cloud Adoption Framework for Terraform landing zones: governance, hierarchy, and automation.
47 | DORA State of DevOps Report | Annual industry-leading survey analyzing DevOps performance metrics.


    Holistic Vision

    Cloud service models are more than layers of technology — they represent choices in how organizations design their informational ecosystems. Each model shapes not only cost and scalability, but also governance, compliance, and the ability to innovate.

    Seen holistically, IaaS, PaaS, and SaaS are not rigid categories but strategic levers in the architecture of an information system. The real challenge is balancing speed with resilience, abstraction with control, efficiency with responsibility.

    Ultimately, the question is not “Which model is best?” but “Which model best aligns with our people, processes, and long-term vision?”
    In this way, service models become part of a larger ecosystem — one that connects technology with organizational culture, regulatory frameworks, and human creativity.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • Cloud Service Models


In this post, we introduce the definition of service models according to NIST. The concept of a service model is central to planning a cloud adoption process. In 2011, NIST classified three service models for cloud resource management:

• Software as a Service (SaaS)
• Platform as a Service (PaaS)
• Infrastructure as a Service (IaaS)

We will also look at extended service models, such as Business Process as a Service (BPaaS), at the end of the post.

    Cloud Service Models Explained

    In this chapter, we introduce the definition of service models according to NIST. The concept of a service model is central to planning a cloud adoption process. In 2011, NIST classified three service models for cloud resource management:

    • Software as a Service (SaaS)
    • Platform as a Service (PaaS)
    • Infrastructure as a Service (IaaS)

    Software as a Service (SaaS)

    The consumer can use the provider’s applications running on a cloud infrastructure.

    These applications are accessible from various client devices through a thin client interface, such as a web browser (e.g., web-based email) or an application program interface (API).

    The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application functionalities, except for limited user-specific application settings.

    Platform as a Service (PaaS)

    The capability provided to the consumer is to deploy onto the cloud infrastructure applications created or acquired using programming languages, libraries, services, and tools supported by the provider.

    The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and, possibly, configuration settings for the application hosting environment.

    Infrastructure as a Service (IaaS)

    The capability provided to the consumer is to provision processing, storage, networking, and other fundamental computing resources where the consumer can deploy and run arbitrary software, which can include operating systems and applications.

    The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications and, possibly, limited control of selected networking components (e.g., host firewalls).

    Service Model Insights

    Service models implicitly introduce a responsibility matrix dictated by contractual agreements specific to each model.

    A comparative analysis of service models and the ISO/OSI stack helps clarify the impact of the service model on the cloud adoption journey.

    • IaaS: The cloud provider typically ensures service availability up to layer 4 of the ISO/OSI stack (Transport layer).
    • PaaS: The guaranteed level varies between layer 4 and layer 5, sometimes extending to layer 6.
    • SaaS: Services are typically maintained at layer 6, and in specific cases where no user interaction occurs through a web or mobile application, it can reach layer 7.

    The division of responsibility between the cloud provider and the consumer plays a crucial role in defining the service model.

    Another key aspect to consider is scalability expectations in the cloud. Opting for an IaaS model does not necessarily take full advantage of the cloud’s scalability capabilities. In fact, scalability responsibility falls on the hosted content within the IaaS service, which may result in classical scalability limitations, potentially leading to higher operational costs rather than reduced expenses.

    While it is relatively easy to classify cloud resources managed through IaaS models, this becomes more complex with PaaS and SaaS models.

    Many complex cloud resources are delivered via hybrid service models, such as PaaS/IaaS or PaaS/SaaS.

    For example:

    • Recognizable SaaS solutions include Salesforce CRM, Microsoft 365, and Google Workspace, all of which offer complex suites of SaaS-based solutions.

    Below, we provide a deeper insight into each service model.

    IaaS

    Classic IaaS services include basic network resources (firewalls, VPNs, etc.).

    Virtual machines (VMs) not managed by the cloud provider also fall under IaaS. This is one of the most comparable cloud services to traditional IT system management, making it one of the most considered options in lightweight cloud adoption strategies.

    The cloud provider guarantees hardware operation, including both physical and logical security.

    The security responsibility is typically covered up to layer 4 of the ISO/OSI stack.

    The consumer is responsible for configuring and managing the operating system and any additional application software installed on the VM.

A hybrid case between IaaS and PaaS is a VM with a managed operating system. Here, the provider manages OS updates and security patches autonomously but does not enforce version changes until official support ends. However, the provider does not assume responsibility for third-party applications installed by the consumer.

    IaaS/PaaS and PaaS

    PaaS is the first service model that truly defines cloud computing, though finding purely PaaS-classifiable resources can be challenging.

    Many cloud resources use hybrid service models (IaaS/PaaS), including Kubernetes and Red Hat OpenShift.

    Some databases are offered as PaaS services, where the consumer only defines:

    • DDL (Data Definition Language): Schema structures, stored procedures, functions, and user permissions.
    • DML (Data Manipulation Language): CRUD (Create, Read, Update, Delete) operations.

    Notable PaaS database services include Azure Database and AWS Aurora.
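To picture the DDL/DML boundary described above, here is a minimal sketch using SQLite as a stand-in engine (illustrative only; a PaaS database would expose a network endpoint and credentials, while the provider operates the engine itself).

```python
# Illustrative only: in a PaaS database the consumer works at the DDL/DML level,
# while the provider operates the engine. SQLite is used here as a stand-in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")  # DDL
conn.execute("INSERT INTO orders (amount) VALUES (?)", (42.0,))            # DML
print(conn.execute("SELECT SUM(amount) FROM orders").fetchone())           # (42.0,)
```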

    Other pure PaaS services include Azure Functions and AWS Lambda.

    In a pure PaaS model, the consumer relinquishes control over the cloud resource’s state, which is contractually managed by the provider. The responsibility matrix is clear: the consumer is only responsible for the functional and application layer.
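As an illustration of this responsibility split, here is a minimal function written against the AWS Lambda Python handler convention; the event fields are illustrative, and an equivalent could be written for Azure Functions.

```python
# A minimal function following the AWS Lambda Python handler convention.
# The event fields are illustrative; the platform manages OS, scaling, and runtime.
import json


def lambda_handler(event, context):
    # Only the functional/application layer is the consumer's responsibility here.
    name = event.get("name", "cloud-native reader")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```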

    SaaS

    SaaS is one of the most widely adopted cloud service models in the public cloud.

    Consider the early free webmail providers, available for more than two decades.

    SaaS encompasses social media services and widely used B2C sales platforms, some of which are expanding into enterprise solutions (e.g., WhatsApp for Business).

    The boundary between SaaS and PaaS is sometimes ambiguous, and even cloud providers themselves may struggle to classify services technically, often relying on marketing terminology instead.

    Examples of Paid SaaS Services

    Azure:

    • Microsoft 365: A well-known SaaS application that allows users to access cloud-based productivity tools such as email, calendars, and document management.
    • Microsoft Dynamics.
    • Microsoft Business Central.
    • Microsoft Teams.
• Azure OpenAI Service: Provides access to OpenAI models (the technology behind ChatGPT), one of the most widespread AI-driven prompt services.
    • Microsoft Copilot: A productivity acceleration service leveraging OpenAI technology.

    AWS:

• AWS Elastic Beanstalk: A plug-and-play platform supporting multiple programming languages and environments (strictly a PaaS offering, illustrating the SaaS/PaaS ambiguity discussed above).

    Google Cloud Platform (GCP):

    • Google Mail: One of the most widely used free and paid email services.
    • Google Workspace: A suite of productivity tools used worldwide.
• Google Gemini: Emerging as an AI-integrated service within Google’s productivity offerings, with promising integration into BigQuery.
• BigQuery: Google’s serverless, multi-cloud data warehouse, offered as a SaaS solution. It integrates machine learning algorithms to provide real-time insights into business processes.

    This classification helps organizations understand which service models best fit their operational and strategic needs when adopting cloud computing.

    Ambiguity in Service Model Interpretation

    In personal-use solutions, many cloud services function as pure SaaS, offering a fully managed experience with minimal user-specific customization.

    However, in enterprise environments, these services evolve into hybrid models, providing organizations with extensive customization and control over service configurations.

    While activating a solution like Microsoft 365 or Google Workspace may be simple, its enterprise-level deployment introduces significant complexity. Advanced setup demands strong networking and security expertise to ensure full-service functionality while maintaining corporate compliance.

    Additionally, enterprise versions often offer API-based integrations, allowing deep customization and automation. As a result, responsibility for service management shifts from solely the provider to a shared responsibility model, where both the provider and the consumer play key roles in governance and maintenance.

    Extending Cloud Service Models

Figure: From the cloud service model to the extended cloud service model.

    In the initial chapters of this section, we introduced fundamental concepts regarding cloud service and distribution models.

    IaaS, PaaS, and SaaS—delivered via public cloud providers—are the primary service models offered by major cloud vendors.

    However, hybrid cloud and cloud-native solutions are evolving, creating new service opportunities.

    For example, as a software provider, I may have shifted from installing my product on customers’ local machines to offering it as a web service, with only lightweight local applications mimicking the traditional desktop experience.

    In parallel, we have defined the key roles in the cloud supply chain and the cloud consumption chain, introducing the responsibility matrix (RACI) as a simplified representation of service responsibility management.

    The cloud is continuously evolving, introducing increasingly complex cloud resources designed to address specific consumer needs—sometimes creating new demands through marketing strategies.

    For example, instead of offering my software as a web service, I might distribute it as a cloud marketplace product:

    • The customer downloads my product, which is automatically deployed in their cloud environment.
    • The software runs within their cloud but sends me usage analytics on how the customer interacts with my service.

    Alternatively, I might offer a dedicated cloud ecosystem for each client, running on my own cloud infrastructure.

    As these hybrid models emerge, cloud providers are launching more specialized solutions to prevent customer migration based on cost optimization alone.

    At the center of this strategy is data.

    If I offer a unique service that no competitor provides—even at an acceptable price—my customers will find it very difficult to switch to another provider.

    To reinforce customer retention, public cloud providers are introducing hybrid PaaS/SaaS services that are native to their specific cloud platform and not easily replicable on other clouds.

    Impact on IT Operations and Organizational Restructuring

    We have already seen how cloud consumption structures introduce specialized roles, leading to a reorganization of IT operations.

    Figure 30 attempts to illustrate how the cloud service market is adapting to this growing complexity.

In the figure, the term “hyperscaler” is used to represent a primary public cloud provider.

    Hyperscalers collaborate with telecommunications providers, ensuring the first level of service—data transport, which corresponds to the lower layers of the ISO/OSI model.

    Above this, we find:

    • The three primary cloud service models (IaaS, PaaS, SaaS).
    • Infrastructure as Code (IaC), enabling automated cloud ecosystem lifecycle management.
    • DevSecOps operations, which integrate with the Cloud Architecture COE and FinOps COE to mediate service deployment.

    What If an Organization Has a Low Cloud Maturity Level?

    Some enterprises may be unable to fully implement cloud-native operations, due to:

    • Low cloud maturity
    • Cultural gaps that cannot be easily addressed within the budget of a single business project

    In these cases, cloud services allow organizations to outsource the creation and management of a dedicated cloud ecosystem tailored to a specific business project.

    This is known as Business Process as a Service (BPaaS).

    BPaaS allows businesses to adopt cloud services incrementally, while preparing for future cloud-native transformations.

    Holistic Vision

    Choosing among IaaS, PaaS, and SaaS is not a purely technical decision; it’s an orchestration of technology, organization, finance, risk, and portability. The “right” model is the one that best aligns these lenses for a specific business outcome—now and as it evolves.

    The big picture

    • Value speed vs. control: PaaS/SaaS accelerate delivery by abstracting infrastructure; IaaS maximizes control at the cost of more operational burden.
    • Shared responsibility ≠ zero responsibility: As you move from IaaS → PaaS → SaaS, what you manage changes, but governance and data accountability remain yours.
    • Data gravity rules: Application code can be portable; data creates inertia. Plan for data lifecycle, residency, and egress from day one.

    Four lenses to decide

    1. Architecture & Ops
      • IaaS: bespoke environments, full control over OS/network; demands solid IaC, GitOps, and runbooks.
      • PaaS: managed runtimes/databases, faster releases; design to provider SLOs/limits and use 12‑factor practices.
      • SaaS: consume capabilities (e.g., email, CRM, analytics) with minimal ops; integration and identity become first‑class work.
2. Organization & Roles
      • Define a RACI: who’s Accountable for uptime, security, backups, cost?
      • Create a Platform Team (even small) to offer “golden paths” and guardrails for IaaS/PaaS users.
3. Risk, Compliance & Security
      • Treat identity (IAM), encryption, logging, and backup/restore as non‑negotiable baselines across all models.
      • Map data classification to service choice (e.g., regulated data may require private networking, KMS ownership, or specific regions).
4. FinOps & Unit Economics (a worked cost example follows this list)
      • IaaS: variable costs + idle risk → rightsize, schedule, and autoscale.
      • PaaS: managed scaling; watch per‑request/GB‑second style pricing.
      • SaaS: per‑seat/per‑usage; model over 3–5 years and include egress/integration costs.
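The FinOps lens above often comes down to simple arithmetic. The sketch below compares a hypothetical per-seat SaaS subscription with a hypothetical per-request PaaS model over 36 months; every price and volume is invented purely for illustration.

```python
# Back-of-the-envelope 36-month cost comparison; every figure here is hypothetical.
MONTHS = 36

# SaaS: per-seat pricing plus a one-off integration effort.
saas_seats, saas_price_per_seat = 200, 25.0          # 200 users at 25 per seat per month
saas_integration_one_off = 15_000.0
saas_total = saas_seats * saas_price_per_seat * MONTHS + saas_integration_one_off

# PaaS: per-request pricing plus a share of platform-team staff time.
monthly_requests = 30_000_000
price_per_million_requests = 3.5
paas_staff_share_per_month = 2_000.0
paas_total = (monthly_requests / 1_000_000 * price_per_million_requests
              + paas_staff_share_per_month) * MONTHS

print(f"SaaS 36-month TCO: {saas_total:>12,.0f}")
print(f"PaaS 36-month TCO: {paas_total:>12,.0f}")
```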

    When to prefer each model (rule of thumb)

    • SaaS for commodity capabilities (email, docs, ITSM, CRM, analytics UI). Demand: export formats, APIs, clear exit plan.
    • PaaS for differentiated apps where speed matters (functions, containers, managed DBs). Demand: SLOs, scaling policy, quotas.
    • IaaS for specialized workloads (legacy, low‑level tuning, strict isolation). Demand: automation (IaC), hardening, cost guards.

    Guardrails that travel with you

• Policy‑as‑code (tagging, budgets, SCPs/OPA), secrets management, immutable builds, observability SLOs, DR patterns (RTO/RPO defined); a minimal policy check sketch follows this list.
    • Standardize on containers + IaC even when using PaaS, to keep future options open.
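Policy-as-code is usually written in a dedicated language such as OPA's Rego or enforced through SCPs; the sketch below expresses the same idea in plain Python, checking an invented resource inventory against a required-tags rule, just to show the shape of such a guardrail.

```python
# Plain-Python illustration of a policy-as-code check: required tags on every resource.
# Real deployments would express this in OPA/Rego, SCPs, or a CI policy step instead.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def violations(resources: list[dict]) -> list[str]:
    """Return a human-readable message for every resource missing a required tag."""
    problems = []
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            problems.append(f"{res['name']}: missing tags {sorted(missing)}")
    return problems

# Hypothetical inventory, e.g. exported from an IaC plan or a cloud asset API.
inventory = [
    {"name": "vm-frontend", "tags": {"owner": "web-team", "environment": "prod"}},
    {"name": "db-orders", "tags": {"owner": "data-team", "cost-center": "42", "environment": "prod"}},
]

for problem in violations(inventory):
    print("POLICY VIOLATION:", problem)
```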

    Reducing vendor lock‑in (pragmatically)

    • Favor open interfaces (e.g., S3‑compatible storage, OpenAPI for services).
• Decouple domain logic from provider SDKs behind adapters (see the adapter sketch after this list).
    • Keep data export pipelines and schema ownership internal.
    • Track a short reversibility doc: what would it take to move in 90 days?
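As a small illustration of the first two points, the sketch below hides object storage behind a thin adapter. Because the interface is S3-compatible, the same code can target AWS S3 or an S3-compatible service such as MinIO or Ceph simply by changing configuration; bucket names and endpoints are illustrative.

```python
# Object storage behind a thin adapter: domain code depends on ObjectStore,
# not on boto3 or on a specific provider. Endpoints and bucket names are illustrative.
from typing import Protocol
import boto3

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class S3CompatibleStore:
    """Works against AWS S3 or any S3-compatible service (MinIO, Ceph RGW, ...)."""

    def __init__(self, bucket: str, endpoint_url: str | None = None):
        # endpoint_url=None -> AWS S3; otherwise an S3-compatible endpoint.
        self._s3 = boto3.client("s3", endpoint_url=endpoint_url)
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

# Swapping providers is a configuration change, not a code change:
aws_store = S3CompatibleStore(bucket="my-bucket")
onprem_store = S3CompatibleStore(bucket="my-bucket", endpoint_url="http://minio.internal:9000")
```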

    Decision checklist (copy/paste)

    • Exit strategy (export formats, data volumes, notice periods)?
    • Data class & residency? Encryption & key ownership?
    • Required SLOs (latency, availability) and RTO/RPO?
    • Who is Accountable for uptime, security, and cost?
    • 12‑month run rate vs. 36‑month TCO (incl. egress & staff time)?
    • What business KPI does this service impact?


    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • NIST Definition of Cloud Computing: Essential Characteristics

    NIST Definition of Cloud Computing: Essential Characteristics

    More than two decades after NIST first defined the essential characteristics of cloud computing, these principles continue to shape how organizations adopt the cloud. Understanding them is the first step toward building scalable, resilient, and cost-efficient digital ecosystems.


    NIST Definition of Cloud Computing: Essential Characteristics

    The essential characteristics define the cloud as a service that is directly manageable by the customer, available across a wide geographical area, and structured with organized resources.

    The concept of cloud consumption is introduced from the perspective of the buyer, who is identified as a consumer of “resources” or “services” provided by the cloud provider. The commonly used terminology refers to “cloud provider” and “cloud consumer.”

    Cloud computing, as an IT service, has distinctive features that set it apart from other IT services.

    The NIST (National Institute of Standards and Technology) (20) is a U.S. government agency that develops standards, guidelines, and best practices to support technological innovation and enhance the security and reliability of information systems. Founded in 1901, its goal is to promote industrial competitiveness and scientific progress through the adoption of shared standards.

    In this article, we will rely on NIST publications to understand the meaning of cloud computing.

    NIST has provided formal definitions of cloud computing through descriptions of certain essential properties. A cloud service must possess these characteristics to be classified as such.

    On-Demand Self-Service

A consumer can unilaterally configure and utilize computing capabilities, such as server time and network storage, based on their needs, autonomously and without requiring human interaction with each cloud service provider.
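In practice, self-service means provisioning through an API call rather than a ticket. A minimal sketch, assuming AWS and the boto3 SDK purely as an example (region, AMI ID, and instance type are placeholders):

```python
# On-demand self-service illustrated with boto3: the consumer provisions compute
# directly through the provider's API, with no human interaction on the provider side.
import boto3

ec2 = boto3.client("ec2", region_name="eu-south-1")  # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```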

    Broad Network Access

    The functionalities are available over the network and can be accessed through standard mechanisms that promote usability across various heterogeneous devices (e.g., mobile phones, tablets, laptops, and workstations). This ensures ease of access and a wide availability of resources.

    Resource Pooling and Utilization

    The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, where different physical and virtual resources are dynamically assigned and reassigned based on consumer demand.

    NIST also specifies that cloud consumers typically do not have control or detailed knowledge of the exact location of the provided resources. However, they may be able to specify higher-level attributes such as the country, state, or data center where resources are hosted. Examples of cloud resources include storage, processing, memory, and network bandwidth.

    Elasticity and Scalability of Cloud Resources

    In some cases, provisioning and releasing functionalities can be performed elastically and automatically, allowing rapid scaling up and down based on demand.

    From the consumer’s perspective, cloud resources appear to be highly scalable and can be allocated based on the required consumption at any given moment (just-in-time upscaling/downscaling).
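The scaling decision itself can be expressed very simply. The sketch below is a provider-agnostic toy policy, not the algorithm of any specific autoscaler; thresholds and limits are invented for illustration.

```python
# Toy elasticity policy: resize the replica count so utilization approaches a target.
# Real autoscalers (e.g., Kubernetes HPA or a provider's autoscaling service) work similarly.
def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.60, min_r: int = 2, max_r: int = 20) -> int:
    """Return the new replica count, bounded between min_r and max_r."""
    if cpu_utilization <= 0:
        return min_r
    proposal = round(current * cpu_utilization / target)
    return max(min_r, min(max_r, proposal))

print(desired_replicas(current=4, cpu_utilization=0.90))  # -> 6: scale up
print(desired_replicas(current=4, cpu_utilization=0.30))  # -> 2: scale down
```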

    Measured Service

    Cloud systems automatically control and optimize resource usage by leveraging a metering capability. At an appropriate level of abstraction relevant to the type of service (e.g., storage, processing, bandwidth, and active user accounts), resource usage can be monitored, controlled, and reported, ensuring transparency for both the provider and the consumer.
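Metering is what turns measured usage into a monthly invoice. The sketch below shows the underlying arithmetic with invented unit prices and quantities; real providers expose the same idea through their billing and cost-management interfaces.

```python
# Measured service in miniature: metered usage x unit price = monthly charge.
# All unit prices and quantities are invented for illustration.
usage = {"storage_gb_month": 500, "compute_hours": 720, "egress_gb": 150}
unit_price = {"storage_gb_month": 0.023, "compute_hours": 0.0416, "egress_gb": 0.09}

line_items = {k: usage[k] * unit_price[k] for k in usage}
for item, cost in line_items.items():
    print(f"{item:>18}: {cost:8.2f}")
print(f"{'monthly total':>18}: {sum(line_items.values()):8.2f}")
```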

    Cloud Computing as an OPEX-Based Expense

    Beyond NIST’s technological and functional definition, it is useful to consider that cloud computing—especially in the B2B (Business-to-Business) context discussed in this book—represents an operational expense (OPEX) rather than a capital expenditure (CAPEX).

    The field of FinOps has emerged to address the necessary integration between technology, finance, and treasury operations. The recurring cost, calculated on a monthly basis, introduces challenges in budget planning and financial management for organizations. This disrupts the traditional model in which IT expenses were typically categorized as capital investments (CAPEX) within long-term budget plans.

    This shift requires organizations to adopt service models that can fully leverage the benefits of cloud computing’s adaptability while ensuring cost predictability.

    This change also demands scalable architectures, both at the infrastructure and application levels, as well as data models oriented toward secure data sharing based on access rights. These aspects, while beneficial, introduce complexity in cost forecasting and financial planning.

    Cloud computing is not a one-size-fits-all solution. It should be interpreted and adopted only after fully understanding its potential and limitations, which is the objective of this section of the book.

    Further Considerations

    More than two decades after NIST first defined the essential characteristics of cloud computing, these principles still largely hold true in today’s market.

    Yet, the increasing complexity of cloud services often makes dynamic scaling a challenge, particularly when dealing with full-fledged cloud-based IT ecosystems.

    This difficulty stems from various factors, primarily related to the management of cloud resource configuration and distribution. Consequently, achieving precise and immediate cost predictability for scalability remains elusive.

    Public cloud models, in particular, tend to simplify scaling up while making scaling down more complex unless managed through automated systems with predictive controls.

    Many organizations still find themselves integrating traditional IT systems with cloud services, resulting in hybrid ecosystems rather than purely cloud-native solutions. This adds an intermediate layer of complexity, impacting Total Cost of Ownership (TCO) and Return on Investment (ROI), as these environments still follow OPEX models.

    Moreover, many companies opt for multi-cloud strategies, not necessarily to duplicate environments, but to take advantage of specialized SaaS or PaaS services like Microsoft 365, Google Workspace, Google Cloud BigQuery, or Microsoft Azure Fabric.

    In these scenarios, services cannot always be replicated across different cloud providers. High availability and geographical reliability are guaranteed by contracts with a single provider.

    Over time, regulations have introduced mandatory measures for cloud ecosystems hosting core and sensitive applications. Businesses must ensure service continuity by replicating services across multiple clouds to mitigate risks such as provider bankruptcy, prolonged cyberattacks, or service outages.

    This has led to the need for further classification of cloud resources, independent of the service model, to assist in corporate strategy planning:

    • Cloud resources are generally not portable or transferable across different cloud providers.
    • What can be transferred is the configuration—the software defining the cloud resources—provided the ecosystem follows a cloud-native operational model (as described in the book’s second section).
    • Applications can also be transferred, but only if they have been designed to be compatible with cloud-native principles.

    Navigating cloud adoption is a challenging but feasible journey. Much like an expedition, success requires careful preparation, endurance, and a well-charted map of the landscape.

    Having a guide can be invaluable.

    There are multiple paths to cloud adoption. Some are narrow, requiring technical expertise to reach peak efficiency, while others are more accessible but still yield tangible results in terms of efficiency and effectiveness.

    Understanding the cloud, mapping its capabilities, and assessing an organization’s actual potential is crucial in choosing a realistic path to achieving cloud computing success.


    Holistic Vision

    The NIST definition of cloud computing, with its essential characteristics, continues to serve as a compass more than two decades after it was first introduced. While technology has evolved, and the cloud has become more layered and complex, these principles still form the backbone of how organizations approach adoption.

    Beyond the technicalities of on-demand resources, elasticity, and measured services, cloud computing is also a matter of culture and economics. The shift from CAPEX to OPEX redefines how businesses plan, invest, and innovate. FinOps practices, hybrid strategies, and multi-cloud ecosystems are not exceptions but the natural evolution of NIST’s foundational vision.

    Seen holistically, the essential characteristics of cloud computing are less about the mechanics of servers and storage, and more about trust, adaptability, and transparency. They remind us that the cloud is not simply infrastructure: it is a shared environment where resilience, scalability, and financial sustainability converge.

    In this light, adopting the cloud is less a technical migration and more an expedition into a dynamic ecosystem. Success depends not only on technology but on preparation, governance, and the ability to align financial strategies with digital ambitions. The NIST framework remains the map — but every organization must chart its own path across the terrain.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • Computational Capacity Evolution

    Computational Capacity Evolution

    The computational capacity evolution is marked by continuous miniaturization, coupled with exponential increases in computational capacity and energy efficiency. This journey has taken us from room-sized mainframes to modern smartwatches, passing through electronic calculators, portable personal computers, and mobile phones.


    Computational Capacity Evolution

    At the dawn of the computing era, around the 1920s, computational machines required significant physical space.
    Computation was not shared but confined to the physical location where the calculations were performed.
    Data storage was an arduous task.

A good example, portrayed in the film The Imitation Game (set in the 1940s), is the work of Alan Turing and Tommy Flowers on early codebreaking machines. In fact, there were two: the electromechanical Bombe, designed by Turing, which simulated Enigma itself in order to recover its daily settings (effectively acting as a test environment for Enigma!), and the electronic Colossus, built by Flowers, which attacked the German Lorenz cipher rather than Enigma.

Fast-forwarding 10 years (keeping Moore’s Law in mind), we arrive at 1951, when UNIVAC I was introduced. UNIVAC I was the first American computer specifically designed from the outset for business and administrative use, enabling the rapid execution of relatively simple arithmetic and data transfer operations compared to the complex numerical calculations required by scientific computers.

UNIVAC I consumed approximately 125,000 watts of power. This computer used 6,103 vacuum tubes, weighed 7.6 tons, and could execute about 1,905 operations per second with a clock speed of 2.25 MHz.

    It required approximately 35.5 square meters (382 square feet) of space.

    It could read 7,200 decimal digits per second (as it did not use binary numbers), making it by far the fastest business machine ever built at the time.

UNIVAC I featured a central processing unit (CPU), memory, input/output devices, and a separate console for operators (Von Neumann approves!).

    Let’s now take a brief journey through the years to explore the evolution of computing devices.

    Mainframes (1960s–1970s):

    • Weight and Dimensions: Mainframes occupied entire rooms and weighed several tons.
    • Energy Consumption: They could consume up to 1 megawatt (1,000,000 watts) of power.
    • Computational Capacity: Measured in millions of instructions per second (MIPS); for example, the Cray-1 in 1976 achieved 160 MIPS.

    Electronic Calculators (1970s):

    • Weight and Dimensions: Early electronic calculators weighed around 1–2 kg and were the size of a book.
    • Energy Consumption: Powered by batteries or mains electricity, consuming only a few watts.
    • Computational Capacity: Limited to basic arithmetic operations, with speeds in the range of a few operations per second.

    Portable Personal Computers (1980s–1990s):

    • Weight and Dimensions: Early laptops weighed between 4 and 7 kg, with significant thickness.
    • Energy Consumption: Battery-powered with limited autonomy, consuming tens of watts.
    • Computational Capacity: CPUs with clock speeds between 4.77 MHz (e.g., IBM 5155, 1984) and 100 MHz, capable of executing hundreds of thousands of instructions per second.

    Mobile Phones (2000s):

    • Weight and Dimensions: Devices weighed between 100 and 200 grams, easily portable.
    • Energy Consumption: Rechargeable batteries with capacities between 800 and 1500 mAh, consuming only a few watts.
    • Computational Capacity: Processors with clock speeds between 100 MHz and 1 GHz, capable of millions of instructions per second, supporting applications beyond simple voice communication.

    Smartwatches (2010–Present):

    • Weight and Dimensions: Devices weigh between 30 and 50 grams, with screens just a few square centimeters.
    • Energy Consumption: Batteries with capacities between 200 and 400 mAh, optimized for minimal energy consumption.
    • Computational Capacity: Processors with clock speeds between 1 and 2 GHz, capable of running complex applications, health monitoring, and advanced connectivity.

    Computational Capacity Today

    Before diving into today’s computational capacity, we need to establish a standard measurement scale for computational performance: FLOPS (Floating Point Operations Per Second) is the basic unit of measurement for floating-point operations performed in one second.

1. Kiloflops (KFLOPS): 1 Kiloflop = 10³ FLOPS; the computational capacity of computers from the 1960s.
2. Megaflops (MFLOPS): 1 Megaflop = 10⁶ FLOPS; the computational capacity of computers from the 1980s.
3. Gigaflops (GFLOPS): 1 Gigaflop = 10⁹ FLOPS; the computational capacity of mid-range CPUs and GPUs in the early 2000s.
4. Teraflops (TFLOPS): 1 Teraflop = 10¹² FLOPS; the computational capacity of modern GPUs and advanced supercomputers.
5. Petaflops (PFLOPS): 1 Petaflop = 10¹⁵ FLOPS; achieved by supercomputers in 2008, such as the Roadrunner.
6. Exaflops (EFLOPS): 1 Exaflop = 10¹⁸ FLOPS; the computational capacity reached by the most advanced supercomputers, such as Frontier (2022).
7. Zettaflops (ZFLOPS, currently theoretical): 1 Zettaflop = 10²¹ FLOPS; considered the future of computing, necessary for fully simulating complex systems like the human brain.
8. Yottaflops (YFLOPS, currently theoretical): 1 Yottaflop = 10²⁴ FLOPS; a hypothetical level of computation for technologies yet to be realized.

    The Apple A15 Bionic chip weighs just a few milligrams, has a clock speed of 3.1 GHz, can execute 15.8 trillion arithmetic operations per second (TOPS), and consumes only 6 watts (TDP).

UNIVAC I, by comparison, used 6,103 vacuum tubes, weighed 7.6 tons, and could execute approximately 1,905 operations per second with a clock speed of 2.25 MHz.
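Using only the figures quoted above, a quick back-of-the-envelope calculation puts the two machines side by side.

```python
# Order-of-magnitude comparison using only the figures quoted in the text.
univac_ops_per_s, univac_watts = 1_905, 125_000
a15_ops_per_s, a15_watts = 15.8e12, 6

speedup = a15_ops_per_s / univac_ops_per_s
efficiency_gain = (a15_ops_per_s / a15_watts) / (univac_ops_per_s / univac_watts)

print(f"Raw speed:    ~{speedup:.1e}x faster")          # roughly 8.3e9 times
print(f"Ops per watt: ~{efficiency_gain:.1e}x better")  # roughly 1.7e14 times
```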

The A15 Bionic is not the fastest chip in the world.

The NVIDIA Blackwell B200 is currently among the most powerful chips in the world, with a peak of roughly 20 Petaflops for low-precision AI workloads, meaning 20,000 trillion operations per second.

    It consumes 1,000 watts (TDP).

    Modern technology has applied techniques that enable multiple boards or motherboards to function simultaneously, utilizing specially designed high-speed buses.

    This represents an application of Moore’s Law, not only in the miniaturization of individual chips but also in their aggregation into systems of interconnected boards dedicated to computation.

    However, certain physical factors limit scalability, with heat dissipation being the most critical constraint.

    Over the years, the ability to aggregate computational power and store the resulting data has become a defining factor in the technological and economic strength of nations and entire continents. This aggregation is especially impactful in the scientific domain globally.

    For example, consider the computational capacity required at CERN in Geneva to process the high-energy particle collisions conducted in its laboratories, or the processing power needed to create the first image of a black hole. On a more routine level, computational capacity is essential for weather forecasting, stock market predictions, and autonomous vehicle navigation.

    Computational capacity originally emerged as a concept of a central computer receiving instructions from terminals and distributing calculation results to various devices (terminals, printers).

    This model is far from obsolete. It remains highly relevant today, especially in the operation of the world’s few supercomputers, which have evolved into super data centers (vast server farms).

    Owning one of these large data centers has also become a matter of geopolitical positioning for nations. The presence or absence of such infrastructure enables countries to lead in critical scientific and military fields.

Today, NVIDIA offers a data center solution with the DGX SuperPOD system: a setup composed of 127 DGX B200 systems, each hosting 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs.

    Let’s hypothesize the energy consumption based on the previously mentioned data for a single Blackwell B200 chip!
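Taking that invitation literally, and using only the figures already quoted in the text (1,000 watts per B200, 72 Blackwell chips per system, 127 systems), a rough lower bound for the GPU power draw alone can be sketched as follows; CPUs, networking, and cooling are ignored.

```python
# Rough lower bound for the GPU power draw of the hypothesized configuration,
# using only the figures quoted in the text (CPUs, networking, cooling excluded).
systems, gpus_per_system, watts_per_gpu = 127, 72, 1_000

total_gpus = systems * gpus_per_system           # 9,144 Blackwell GPUs
gpu_power_mw = total_gpus * watts_per_gpu / 1e6  # about 9.1 megawatts

print(f"{total_gpus} GPUs, ~{gpu_power_mw:.1f} MW for the GPUs alone")
```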

    This configuration reaches computational capacities measured in Exaflops.

    Currently, the world’s leading modern supercomputing data centers are oriented towards providing computational capacities measured in hundreds of Exaflops (1 Exaflop = 1,000,000,000,000,000,000 FLOPS). This progress has been achieved starting from the 1950s to today.

    A similar trend can also be observed in data storage capacity.

    High cloud capacity in Italy

    In Italy, some significant advancements are underway. The “Leonardo” supercomputer, located at the Tecnopolo in Bologna and managed by the CINECA consortium, was inaugurated in November 2022. Leonardo is an Atos BullSequana XH2000 system, equipped with nearly 14,000 Nvidia Ampere GPUs and a peak capacity of 250 Petaflops.

    Additionally, the Italian startup iGenius has announced a collaboration with NVIDIA to build “Colosseum”, one of the world’s largest supercomputers based on the NVIDIA DGX SuperPOD. This data center, located in southern Italy, will house approximately 80 NVIDIA GB200 NVL72 servers, each equipped with 72 “Blackwell” chips.
The project, expected to be operational by mid-2025, aims to develop open-source artificial intelligence models for highly regulated sectors such as banking and healthcare (see: Italian startup iGenius and Nvidia to build major AI system | Reuters).

    A further example of innovation in sustainable high-performance computing comes from Intacture – Trentin Data Mine, an Italian project designed to combine massive computational power with radical energy efficiency. Unlike traditional supercomputing centers, Intacture is built inside a former mining facility, taking advantage of natural cooling conditions and renewable energy sources. The initiative aims to create one of the most energy-efficient computing farms in Europe, capable of supporting artificial intelligence workloads, scientific simulations, and financial modeling while minimizing its carbon footprint. This approach not only reduces operational costs but also demonstrates how the next generation of data centers can merge ecological responsibility with technological excellence. More details about the project can be found here: https://www.intacture.com.


    Holistic Vision

    The story of computational capacity is, at its core, the story of humanity’s relentless pursuit of efficiency, speed, and scale. From the massive machines that filled entire rooms in the 1940s to today’s chips weighing only a few grams yet performing trillions of operations per second, the trajectory has been nothing short of extraordinary. Each leap forward has redefined not only how we compute but also how we live, work, and organize society.

    Yet this progress comes with new responsibilities. The growth of computational capacity now intersects with global challenges such as sustainability, energy consumption, and geopolitical competition. Supercomputers and AI data centers have become strategic assets, as vital as oil reserves or transportation networks once were.

    Looking ahead, the race toward exascale and beyond—to zetta- and yotta-scale computing—will demand not only technical ingenuity but also bold choices in energy management and international collaboration. Projects like Italy’s Leonardo, the upcoming Colosseum, and sustainable initiatives such as Intacture – Trentin Data Mine highlight the dual imperative of power and responsibility: to build computing capacity that is both transformative and sustainable.

    The journey from UNIVAC I to NVIDIA’s Blackwell and beyond is far from over. It is a reminder that the future of computing will not only be measured in FLOPS, but also in how wisely we harness that power for the benefit of humanity.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • The Essential History of Cloud Computing

    The Essential History of Cloud Computing

    The history of cloud computing offers a broad and detailed overview of the key milestones in the development of this technology. While not exhaustive, it provides an interpretation of innovation as a driving force.



    The History of Cloud Computing

    Understanding the evolution of cloud computing is essential for grasping its impact on modern business practices. The development of this technology not only reflects advancements in computer science but also parallels changes in user expectations and the digital economy. Each phase in the timeline of cloud computing reveals how technology adapts to meet the growing demands for efficiency, flexibility, and scalability.

    For instance, during the introduction of time-sharing systems, businesses began to recognize the benefits of resource sharing and centralized processing. This was a pivotal moment, as it set the stage for later developments in cloud infrastructure.

    This evolution can be further illustrated by the rise of personal computing in the 1980s, which changed how organizations thought about computing resources and accessibility.

    The period from 1995 to 2000 saw the emergence of the first cloud providers, highlighting the shift from traditional IT models to more dynamic, service-oriented approaches. Companies like Salesforce not only changed the sales software landscape but also demonstrated the viability of delivering enterprise applications via the internet.

    Moreover, the introduction of Amazon Web Services (AWS) in 2006 marked a significant milestone in cloud computing, as it paved the way for other companies to develop their cloud offerings and shifted the market towards a service-oriented architecture.

    By formally defining cloud computing, NIST helped standardize services in the industry, which fostered greater interoperability and trust between providers and consumers alike.

    The era of maturity from 2010 to 2020 further solidified cloud computing’s presence in the enterprise with innovations like Kubernetes, which enabled companies to manage and deploy microservices efficiently.

Cloud computing has evolved significantly over the decades, with various pioneers, technological advancements, and market shifts shaping its current landscape. As we delve deeper into the history of this transformative technology, we will explore its origins, key players, and the innovations that have propelled it into the mainstream.

    As organizations increasingly adopted hybrid cloud strategies, they were able to leverage both public and private cloud resources, optimizing their operational efficiency and leading to new business models.

    We can divide this history into distinct periods:

• Precursors to Cloud Computing (1960s–1980s)
• The Network and Virtualization Era (1990s)
• The First Cloud Providers (1995–2000)
• The Cloud Becomes Standardized (2011)
• The Era of Maturity (2010–2020)
• The AI Era (Post-2020)

    Precursors to Cloud Computing (1960s–1980s)

    Time-Sharing and Mainframes : Introduced in the 1960s, time-sharing represented a breakthrough in resource sharing, allowing multiple users to access a centralized mainframe. This model laid the foundation for modern cloud computing.

    Virtual Machines (VMs) : In the 1970s, IBM developed the first versions of virtual machines, enabling the creation of multiple independent environments on a single physical hardware system.

    The Network and Virtualization Era (1990s)

    VPNs and Hosting : Telecommunications companies began offering Virtual Private Networks (VPNs) to improve network efficiency. At the same time, providers like GoDaddy started offering web hosting services.

    The Term “Cloud” : Coined in 1997 by Ramnath K. Chellappa, “cloud computing” was introduced to describe a computing model defined more by economic logic than by technological constraints.

    The First Cloud Providers (1995–2000)

    Rackspace and Salesforce : During this period, pioneers like Rackspace and Salesforce entered the market, laying the groundwork for cloud service models.

    Amazon Web Services (AWS) : AWS revolutionized the market in 2006 with services like S3 (Simple Storage Service) and EC2 (Elastic Compute Cloud), introducing the pay-as-you-go model.

    Google App Engine : In 2008, Google entered the market with a PaaS (Platform as a Service) offering tailored to developers.

    Microsoft Azure : In 2010, Microsoft launched Azure, initially focused on PaaS but later expanding to include IaaS (Infrastructure as a Service).

    The Cloud Becomes Standardized (2011)

    NIST Definition of Cloud Computing : The National Institute of Standards and Technology (NIST) published document 800-145, officially defining service models (IaaS, PaaS, SaaS) and essential cloud characteristics.

    The Era of Maturity (2010–2020)

    Kubernetes and Container Orchestration : With the launch of Kubernetes in 2014, supported by the CNCF (Cloud Native Computing Foundation), cloud-native became a standard model for deploying scalable applications.

    Hybrid and Multi-Cloud Models : Companies like IBM and VMware promoted hybrid and multi-cloud approaches, enabling organizations to combine public and private cloud resources.

    The AI Era (Post-2020)

    AI Integration : The integration of artificial intelligence into the cloud (e.g., Amazon SageMaker, Google Vertex AI, Microsoft OpenAI) has significantly expanded the capabilities of cloud platforms.


    From Cloud Foundations to Intelligent Horizons

    The story of cloud computing has always been one of adaptation, resilience, and innovation. As we continue to explore this journey, it is essential to note how the rise of artificial intelligence reshapes the very fabric of digital ecosystems. The fusion of these technologies is not merely about improving efficiency; it challenges existing paradigms and introduces new dynamics that will inevitably transform how cloud services are designed, delivered, and experienced. This evolution raises questions that reach far beyond technology itself, encompassing ethics, governance, and the future of work.

    The AI Era (Post-2020) signifies a convergence of cloud computing and artificial intelligence, where companies are now using cloud platforms not just for storage but for sophisticated analytics, machine learning, and automation. This integration enables businesses to make data-driven decisions faster than ever before.

    The History of Cloud Computing has profound implications for today’s businesses, allowing them to scale operations, innovate faster, and respond to market changes with unprecedented agility. By learning from the past, organizations can prepare for future advancements and leverage cloud technology to its fullest potential.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • Understanding OSI Model and Internet Protocol Foundation

    Understanding OSI Model and Internet Protocol Foundation

The OSI model and Internet Protocol are the foundation on which the digital world—and therefore the cloud—has been built. When we talk about networks, it is easy to think of cables, routers, and servers. Yet, behind every connection lies an invisible architecture that makes communication possible. The OSI model, also known as the ISO/OSI stack, and the Internet Protocol Suite that succeeded it in practice are not simply abstract diagrams.

    In this chapter, we will take our first steps into this architecture. The ISO/OSI model, developed as a universal language to describe how information flows across networks, offers a way to interpret the cloud and classify the services it provides. Understanding its layers is like learning the anatomy of digital communication: each level has a specific role, each interaction contributes to the seamless experience of exchanging data.

    While the OSI model remains a conceptual framework, the TCP/IP suite—leaner and more pragmatic—has become the actual backbone of the Internet. Together, they represent the theoretical and practical sides of the same story: how humanity has managed to interconnect billions of devices, services, and people.

    By exploring these models, we not only gain technical knowledge but also acquire a lens through which we can better understand the cloud-native ecosystems described throughout this book. This journey begins here—with the architecture of communication itself.


    Understanding OSI Model and Internet Protocol Foundation

    The ISO/OSI stack model can be considered a “tool” for interpreting the cloud and classifying the services offered.

    For this reason, we are providing an early description of the reference model.

    The ISO/OSI (Open Systems Interconnection) model is a conceptual framework that defines how networks transmit data from sender to receiver (18).

It is organized as a layered stack of protocols that reduces the implementation complexity of networked communication systems.

    The OSI model contains seven layers arranged conceptually from bottom to top.

    Below is a brief description of each:

    • Physical Layer: This layer handles the transmission and reception of raw, unstructured data from one device to another. It deals with details such as electrical voltage, data transmission speed, etc.
    • Data Link Layer: This layer transforms raw bits into structured data frames for transport across the network.
    • Network Layer: This layer manages the transport of data from a source to a destination, or more specifically, from one IP address to another.
    • Transport Layer: This layer ensures that messages are delivered error-free, in sequence, and without loss or duplication.
    • Session Layer: This layer establishes, manages, and terminates connections between local and remote applications.
• Presentation Layer: This layer translates data between application and network formats (handling, for example, encryption and data representation), giving applications independence from such details.
    • Application Layer: This layer provides networking services to user applications.

    Each layer represents a separate level of abstraction and performs a defined function.

    The layers are designed to create internationally standardized protocols.

    Each layer corresponds to a specific function within network communication.

    We use communication systems daily that adhere to and implement the protocols defined by the ISO/OSI stack.

    Over time, the OSI model has effectively been replaced by a simplified version.

    The Internet Protocol Suite Based on TCP/IP

    The TCP/IP model is a streamlined version of the OSI model. (19)

    While the OSI model has seven layers, the TCP/IP model consists of only four.

    Here is a brief description of each layer in the TCP/IP model:

    • Application Layer: This layer corresponds to the application, presentation, and session layers of the OSI model. It is where data is generated by the user’s application.
    • Transport Layer: This layer corresponds to the OSI transport layer. Data is encoded for transmission across the Internet using the UDP or TCP protocol.
• Network Layer: This layer corresponds to the OSI network layer. Each packet receives an IP header indicating where it should be delivered.
    • Network Access Layer: This layer corresponds to the OSI physical and data link layers. The data packet is formatted and prepared for transport and routing through the network.
Table: layer-by-layer comparison between the ISO/OSI model and the TCP/IP suite.

    Both models, OSI and TCP/IP, provide data communication services, enabling users to send and receive information from their IP address using services made available by their Internet Service Provider (ISP). However, the TCP/IP model is more practical and is based on standardized protocols, while the OSI model serves as a comprehensive, protocol-independent framework designed to encompass various network communication methods.
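To see the TCP/IP layers at work, the short sketch below performs a plain HTTP request over a TCP socket, with comments mapping each step to the layer it exercises; example.com is used only as a well-known public test host.

```python
# A plain HTTP request over a TCP socket, annotated with the TCP/IP layers involved.
import socket

HOST, PORT = "example.com", 80  # a well-known public test host

# Application layer: compose an HTTP/1.1 request (the data the user's application generates).
request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()

# Transport layer: a TCP connection provides ordered, reliable delivery.
with socket.create_connection((HOST, PORT), timeout=10) as conn:
    conn.sendall(request)

    # Network and network-access layers: IP addressing, routing, and framing
    # happen inside the operating system and the network, invisible to this code.
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```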

    The Secure Building: an ISO/OSI metaphor

    Understanding the ISO/OSI stack can be challenging for non-technical individuals. This metaphor helps illustrate its structure and importance for security in a relatable way.

    Imagine that network communication is like a package that needs to be delivered to a skyscraper with multiple floors. Each floor has a specific function and security mechanisms to ensure that only the correct packages arrive at their destination without being intercepted or tampered with.

    Ground Floor – Roads and Physical Access (Physical Layer)
    It represents the network of roads and entry doors of the building. If someone cuts the cables or blocks the doors, no package can enter or exit.

    Reception – Access Control (Data Link Layer)
    Here, incoming packages are checked. If a package has an incorrect label or comes from an unauthorized sender, it is blocked.

    Internal Mailroom – Routing Between Floors (Network Layer)
    It decides where the package should be delivered inside the building. If the route is incorrect, the package might end up in the wrong place.

    Elevators with Controlled Access (Transport Layer)
    The elevators transport packages to the correct floors. If an unauthorized person tries to access them, the system blocks them.

    Floor Reception – Identification and Authorization (Session Layer)
    It controls who can access which rooms and establishes temporary connections between offices.

    Office Encryption and Formatting (Presentation Layer)
    Here, the packages are opened, and their content is formatted to be readable and secure. If a document is encrypted, the correct key is needed to read it.

    Company Departments – Actual Work (Application Layer)
    This is the level where data is used by people and business systems, such as email, browsers, or communication apps.

    Security Value

    Each layer adds a protection mechanism, just like in a well-designed building. If a breach occurs at a lower level, the other levels help mitigate its effects.

    If an attacker wants to enter the system, they must bypass multiple barriers, just like a thief trying to break into a well-guarded skyscraper.

    Firewalls, encryption, access control, and data management correspond to security measures on the different floors.


    Holistic Vision

    Looking at the ISO/OSI stack and the Internet Protocol Suite through the lens of cloud-native ecosystems allows us to see beyond mere technicality. These models are not just diagrams in textbooks — they are living architectures that still shape how we design, secure, and evolve digital infrastructures.

    Each layer, whether physical or logical, represents both a boundary and an opportunity: a boundary that protects data integrity, and an opportunity to innovate by building new services on top of stable foundations. The transition from the conceptual OSI model to the pragmatic TCP/IP suite reflects the continuous dialogue between theory and practice, vision and implementation.

    In the broader context of cloud-native ecosystems, understanding protocols means understanding trust, resilience, and scalability. Protocols are the silent contracts that allow billions of interactions to occur every second, enabling businesses, communities, and individuals to connect and thrive.

    By exploring these frameworks holistically, we can appreciate how the invisible scaffolding of communication — from copper wires to encrypted packets — underpins not only today’s Internet but also the evolving landscape of cloud-native services. And just as every archway opens into a wider landscape, so too does this knowledge open the path toward innovation, security, and sustainable growth in the digital era.


    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation