Category: Introduction to the cloud

This section provides a foundational overview of cloud computing, tracing its origins, core principles, and transformative impact on modern IT. It explains the fundamental service models (IaaS, PaaS, SaaS), deployment options (public, private, hybrid, multi-cloud), and the paradigm shift from traditional infrastructures to scalable, on-demand digital ecosystems. Whether you are new to the topic or looking for a structured refresher, this introduction offers the essential context to understand how cloud technologies are reshaping businesses, operations, and innovation worldwide.

  • Computational Capacity Evolution

    The evolution of computational capacity has been marked by continuous miniaturization, coupled with exponential gains in performance and energy efficiency. This journey has taken us from room-sized mainframes to modern smartwatches, passing through electronic calculators, portable personal computers, and mobile phones.


    Computational Capacity Evolution

    At the dawn of the computing era, in the 1940s, computational machines required significant physical space. Computation was not shared but confined to the physical location where the calculations were performed, and data storage was an arduous task.

    A good example, dramatized in the film The Imitation Game (set in the 1940s), is the work of Alan Turing and Tommy Flowers on the first code-breaking machines. In fact, there were two: the Bombe, an electromechanical device designed by Turing that helped decrypt Enigma traffic by simulating the machine's rotor settings, and Colossus, the electronic computer built by Flowers to break the German Lorenz cipher.

    Fast-forwarding 10 years (keeping Moore’s Law in mind), we arrive at 1951, when UNIVAC I was introduced. UNIVAC I was the first American computer specifically designed from the outset for business and administrative use, enabling the rapid execution of relatively simple arithmetic and data transfer operations compared to the complex numerical calculations required by scientific computers.

    UNIVAC I consumed approximately 125,000 watts of power. This computer used 6,103 vacuum tubes, weighed 7.6 tons, and could execute about 1,905 operations per second with a clock speed of 2.25 MHz.

    It required approximately 35.5 square meters (382 square feet) of space.

    It could read 7,200 decimal digits per second (as it did not use binary numbers), making it by far the fastest business machine ever built at the time.

    UNIVAC I featured a central processing unit (CPU), memory, input/output devices, and a separate console for operators (Von Neumann approves!).

    Let’s now take a brief journey through the years to explore the evolution of computing devices.

    Mainframes (1960s–1970s):

    • Weight and Dimensions: Mainframes occupied entire rooms and weighed several tons.
    • Energy Consumption: They could consume up to 1 megawatt (1,000,000 watts) of power.
    • Computational Capacity: Measured in millions of instructions per second (MIPS); for example, the Cray-1 in 1976 achieved 160 MIPS.

    Electronic Calculators (1970s):

    • Weight and Dimensions: Early electronic calculators weighed around 1–2 kg and were the size of a book.
    • Energy Consumption: Powered by batteries or mains electricity, consuming only a few watts.
    • Computational Capacity: Limited to basic arithmetic operations, with speeds in the range of a few operations per second.

    Portable Personal Computers (1980s–1990s):

    • Weight and Dimensions: Early laptops weighed between 4 and 7 kg, with significant thickness.
    • Energy Consumption: Battery-powered with limited autonomy, consuming tens of watts.
    • Computational Capacity: CPUs with clock speeds between 4.77 MHz (e.g., IBM 5155, 1984) and 100 MHz, capable of executing hundreds of thousands of instructions per second.

    Mobile Phones (2000s):

    • Weight and Dimensions: Devices weighed between 100 and 200 grams, easily portable.
    • Energy Consumption: Rechargeable batteries with capacities between 800 and 1500 mAh, consuming only a few watts.
    • Computational Capacity: Processors with clock speeds between 100 MHz and 1 GHz, capable of millions of instructions per second, supporting applications beyond simple voice communication.

    Smartwatches (2010–Present):

    • Weight and Dimensions: Devices weigh between 30 and 50 grams, with screens just a few square centimeters.
    • Energy Consumption: Batteries with capacities between 200 and 400 mAh, optimized for minimal energy consumption.
    • Computational Capacity: Processors with clock speeds between 1 and 2 GHz, capable of running complex applications, health monitoring, and advanced connectivity.

    Computational Capacity Today

    Before diving into today’s computational capacity, we need to establish a standard measurement scale for computational performance: FLOPS (Floating Point Operations Per Second) is the basic unit of measurement for floating-point operations performed in one second.

    1. Kiloflops (KFLOPS):
      1. 1 Kiloflop = 10³ FLOPS
      2. Computational capacity of computers from the 1960s.
    2. Megaflops (MFLOPS):
      1. 1 Megaflop = 10⁶ FLOPS
      2. Computational capacity of computers from the 1980s.
    3. Gigaflops (GFLOPS):
      1. 1 Gigaflop = 10⁹ FLOPS
      2. Computational capacity of mid-range CPUs and GPUs in the early 2000s.
    4. Teraflops (TFLOPS):
      1. 1 Teraflop = 10¹² FLOPS
      2. Computational capacity of modern GPUs and advanced supercomputers.
    5. Petaflops (PFLOPS):
      1. 1 Petaflop = 10¹⁵ FLOPS
      2. Achieved by supercomputers in 2008, such as the Roadrunner.
    6. Exaflops (EFLOPS):
      1. 1 Exaflop = 10¹⁸ FLOPS
      2. Computational capacity reached by the most advanced supercomputers, such as Frontier (2022).
    7. Zettaflops (ZFLOPS) (currently theoretical):
      1. 1 Zettaflop = 10²¹ FLOPS
      2. Considered the future of computing, necessary for fully simulating complex systems like the human brain.
    8. Yottaflops (YFLOPS) (currently theoretical):
      1. 1 Yottaflop = 10²⁴ FLOPS
      2. A hypothetical level of computation for technologies yet to be realized.
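
    To make this scale concrete, here is a minimal Python sketch (not taken from the book; the helper name and the sample values are illustrative) that expresses a raw FLOPS figure with the largest matching prefix from the list above.

    ```python
    # Prefixes of the FLOPS scale, from plain FLOPS up to the still-theoretical yottaflops.
    UNITS = [
        ("FLOPS", 1e0), ("KFLOPS", 1e3), ("MFLOPS", 1e6), ("GFLOPS", 1e9),
        ("TFLOPS", 1e12), ("PFLOPS", 1e15), ("EFLOPS", 1e18),
        ("ZFLOPS", 1e21), ("YFLOPS", 1e24),
    ]

    def humanize_flops(value: float) -> str:
        """Express `value` (in FLOPS) with the largest prefix that fits."""
        name, scale = UNITS[0]
        for unit_name, unit_scale in UNITS:
            if value >= unit_scale:
                name, scale = unit_name, unit_scale
        return f"{value / scale:.2f} {name}"

    # Milestones mentioned above (order-of-magnitude figures).
    print(humanize_flops(1e15))  # Roadrunner, 2008 -> "1.00 PFLOPS"
    print(humanize_flops(1e18))  # Frontier, 2022   -> "1.00 EFLOPS"
    ```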

    The Apple A15 Bionic chip weighs just a few milligrams, has a clock speed of 3.1 GHz, can execute 15.8 trillion arithmetic operations per second (TOPS), and consumes only 6 watts (TDP).

    UNIVAC I, by comparison, used 6,103 vacuum tubes, weighed 7.6 tons, and could execute approximately 1,905 operations per second with a clock speed of 2.25 MHz.
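
    Using only the figures quoted above, a quick back-of-the-envelope sketch (purely illustrative) shows how far both raw speed and energy efficiency have moved between UNIVAC I and the A15 Bionic.

    ```python
    # Figures as quoted in the text above.
    univac_ops_per_s = 1_905        # operations per second
    univac_power_w   = 125_000      # watts

    a15_ops_per_s = 15.8e12         # 15.8 trillion operations per second
    a15_power_w   = 6               # watts (TDP)

    speedup    = a15_ops_per_s / univac_ops_per_s
    efficiency = (a15_ops_per_s / a15_power_w) / (univac_ops_per_s / univac_power_w)

    print(f"Raw speed-up:      ~{speedup:.1e}x")     # ~8.3e+09x
    print(f"Ops-per-watt gain: ~{efficiency:.1e}x")  # ~1.7e+14x
    ```

    In other words, on these figures the A15 executes roughly eight billion times more operations per second while delivering about fourteen orders of magnitude more operations per watt.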

    The A15 Bionic is not the fastest chip in the world.

    The NVIDIA Blackwell B200 is currently the most powerful chip in the world, with a peak speed of 20 Petaflops, meaning 20,000 trillion floating-point operations per second.

    It consumes 1,000 watts (TDP).

    Modern technology has applied techniques that enable multiple boards or motherboards to function simultaneously, utilizing specially designed high-speed buses.

    This represents an application of Moore’s Law, not only in the miniaturization of individual chips but also in their aggregation into systems of interconnected boards dedicated to computation.

    However, certain physical factors limit scalability, with heat dissipation being the most critical constraint.

    Over the years, the ability to aggregate computational power and store the resulting data has become a defining factor in the technological and economic strength of nations and entire continents. This aggregation is especially impactful in the scientific domain globally.

    For example, consider the computational capacity required at CERN in Geneva to process the high-energy particle collisions conducted in its laboratories, or the processing power needed to create the first image of a black hole. On a more routine level, computational capacity is essential for weather forecasting, stock market predictions, and autonomous vehicle navigation.

    Computational capacity was originally organized around a central computer that received instructions from terminals and distributed the results of its calculations to various devices (terminals, printers).

    This model is far from obsolete. It remains highly relevant today, especially in the operation of the world’s few supercomputers, which have evolved into super data centers (vast server farms).

    Owning one of these large data centers has also become a matter of geopolitical positioning for nations. The presence or absence of such infrastructure enables countries to lead in critical scientific and military fields.

    Today, NVIDIA offers a data center solution with the DGX SuperPOD system: a setup composed of 127 DGX GB200 systems, each hosting 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs.

    Let’s hypothesize the energy consumption based on the previously mentioned data for a single Blackwell B200 chip!
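
    As a rough, purely illustrative estimate based on the per-chip figures quoted earlier (1,000 watts and 20 Petaflops per Blackwell B200), the GPUs of such a configuration alone would draw on the order of nine megawatts, and their combined peak throughput would approach two hundred Exaflops; host CPUs, networking, and cooling are deliberately ignored here.

    ```python
    # Back-of-the-envelope estimate using only figures already quoted in the text.
    systems_in_pod  = 127
    gpus_per_system = 72
    watts_per_gpu   = 1_000   # B200 power draw, as quoted above
    pflops_per_gpu  = 20      # B200 peak, as quoted above

    total_gpus   = systems_in_pod * gpus_per_system      # 9,144 GPUs
    total_mw     = total_gpus * watts_per_gpu / 1e6      # ~9.1 MW, GPUs only
    total_eflops = total_gpus * pflops_per_gpu / 1_000   # ~183 Exaflops peak

    print(f"GPUs:  {total_gpus}")
    print(f"Power: {total_mw:.1f} MW (GPUs only)")
    print(f"Peak:  {total_eflops:.0f} EFLOPS")
    ```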

    This configuration reaches computational capacities measured in Exaflops.

    Currently, the world’s leading supercomputing data centers are oriented towards providing computational capacities measured in hundreds of Exaflops (1 Exaflop = 1,000,000,000,000,000,000 FLOPS), a level of progress achieved in the span from the 1950s to today.

    A similar trend can also be observed in data storage capacity.

    High cloud capacity in Italy

    In Italy, some significant advancements are underway. The “Leonardo” supercomputer, located at the Tecnopolo in Bologna and managed by the CINECA consortium, was inaugurated in November 2022. Leonardo is an Atos BullSequana XH2000 system, equipped with nearly 14,000 Nvidia Ampere GPUs and a peak capacity of 250 Petaflops.

    Additionally, the Italian startup iGenius has announced a collaboration with NVIDIA to build “Colosseum”, one of the world’s largest supercomputers based on the NVIDIA DGX SuperPOD. This data center, located in southern Italy, will house approximately 80 NVIDIA GB200 NVL72 servers, each equipped with 72 “Blackwell” chips.
    The project, expected to be operational by mid-2025, aims to develop open-source artificial intelligence models for highly regulated sectors such as banking and healthcare (see: Italian startup iGenius and Nvidia to build major AI system | Reuters).

    A further example of innovation in sustainable high-performance computing comes from Intacture – Trentin Data Mine, an Italian project designed to combine massive computational power with radical energy efficiency. Unlike traditional supercomputing centers, Intacture is built inside a former mining facility, taking advantage of natural cooling conditions and renewable energy sources. The initiative aims to create one of the most energy-efficient computing farms in Europe, capable of supporting artificial intelligence workloads, scientific simulations, and financial modeling while minimizing its carbon footprint. This approach not only reduces operational costs but also demonstrates how the next generation of data centers can merge ecological responsibility with technological excellence. More details about the project can be found here: https://www.intacture.com.


    Holistic Vision

    The story of computational capacity is, at its core, the story of humanity’s relentless pursuit of efficiency, speed, and scale. From the massive machines that filled entire rooms in the 1940s to today’s chips weighing only a few grams yet performing trillions of operations per second, the trajectory has been nothing short of extraordinary. Each leap forward has redefined not only how we compute but also how we live, work, and organize society.

    Yet this progress comes with new responsibilities. The growth of computational capacity now intersects with global challenges such as sustainability, energy consumption, and geopolitical competition. Supercomputers and AI data centers have become strategic assets, as vital as oil reserves or transportation networks once were.

    Looking ahead, the race toward exascale and beyond—to zetta- and yotta-scale computing—will demand not only technical ingenuity but also bold choices in energy management and international collaboration. Projects like Italy’s Leonardo, the upcoming Colosseum, and sustainable initiatives such as Intacture – Trentin Data Mine highlight the dual imperative of power and responsibility: to build computing capacity that is both transformative and sustainable.

    The journey from UNIVAC I to NVIDIA’s Blackwell and beyond is far from over. It is a reminder that the future of computing will not only be measured in FLOPS, but also in how wisely we harness that power for the benefit of humanity.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation

  • The Essential History of Cloud Computing

    This history of cloud computing offers a broad overview of the key milestones in the technology’s development. While not exhaustive, it presents innovation as the driving force behind each phase.



    The History of Cloud Computing

    Understanding the evolution of cloud computing is essential for grasping its impact on modern business practices. The development of this technology not only reflects advancements in computer science but also parallels changes in user expectations and the digital economy. Each phase in the timeline of cloud computing reveals how technology adapts to meet the growing demands for efficiency, flexibility, and scalability.

    For instance, during the introduction of time-sharing systems, businesses began to recognize the benefits of resource sharing and centralized processing. This was a pivotal moment, as it set the stage for later developments in cloud infrastructure.

    This evolution can be further illustrated by the rise of personal computing in the 1980s, which changed how organizations thought about computing resources and accessibility.

    The period from 1995 to 2000 saw the emergence of the first cloud providers, highlighting the shift from traditional IT models to more dynamic, service-oriented approaches. Companies like Salesforce not only changed the sales software landscape but also demonstrated the viability of delivering enterprise applications via the internet.

    Moreover, the introduction of Amazon Web Services (AWS) in 2006 marked a significant milestone in cloud computing, as it paved the way for other companies to develop their cloud offerings and shifted the market towards a service-oriented architecture.

    By formally defining cloud computing, NIST helped standardize services in the industry, which fostered greater interoperability and trust between providers and consumers alike.

    The era of maturity from 2010 to 2020 further solidified cloud computing’s presence in the enterprise with innovations like Kubernetes, which enabled companies to manage and deploy microservices efficiently.

    Cloud computing has evolved significantly over the decades, with various pioneers, technological advancements, and market shifts shaping its current landscape. As we delve deeper into the history of this transformative technology, we will explore its origins, key players, and the innovations that have propelled it into the mainstream.

    As organizations increasingly adopted hybrid cloud strategies, they were able to leverage both public and private cloud resources, optimizing their operational efficiency and leading to new business models.

    We can divide this history into distinct periods:

    • Precursors to Cloud Computing (1960s–1980s)
    • The Network and Virtualization Era (1990s)
    • The First Cloud Providers (1995–2000)
    • The Cloud Becomes Standardized (2011)
    • The Era of Maturity (2010–2020)
    • The AI Era (Post-2020)

    Precursors to Cloud Computing (1960s–1980s)

    Time-Sharing and Mainframes : Introduced in the 1960s, time-sharing represented a breakthrough in resource sharing, allowing multiple users to access a centralized mainframe. This model laid the foundation for modern cloud computing.

    Virtual Machines (VMs) : In the 1970s, IBM developed the first versions of virtual machines, enabling the creation of multiple independent environments on a single physical hardware system.

    The Network and Virtualization Era (1990s)

    VPNs and Hosting : Telecommunications companies began offering Virtual Private Networks (VPNs) to improve network efficiency. At the same time, providers like GoDaddy started offering web hosting services.

    The Term “Cloud” : Coined in 1997 by Ramnath K. Chellappa, “cloud computing” was introduced to describe a computing model defined more by economic logic than by technological constraints.

    The First Cloud Providers (1995–2000)

    Rackspace and Salesforce : During this period, pioneers like Rackspace and Salesforce entered the market, laying the groundwork for cloud service models.

    Amazon Web Services (AWS) : AWS revolutionized the market in 2006 with services like S3 (Simple Storage Service) and EC2 (Elastic Compute Cloud), introducing the pay-as-you-go model.

    Google App Engine : In 2008, Google entered the market with a PaaS (Platform as a Service) offering tailored to developers.

    Microsoft Azure : In 2010, Microsoft launched Azure, initially focused on PaaS but later expanding to include IaaS (Infrastructure as a Service).

    The Cloud Becomes Standardized (2011)

    NIST Definition of Cloud Computing : The National Institute of Standards and Technology (NIST) published document 800-145, officially defining service models (IaaS, PaaS, SaaS) and essential cloud characteristics.

    The Era of Maturity (2010–2020)

    Kubernetes and Container Orchestration : With the launch of Kubernetes in 2014, supported by the CNCF (Cloud Native Computing Foundation), cloud-native became a standard model for deploying scalable applications.

    Hybrid and Multi-Cloud Models : Companies like IBM and VMware promoted hybrid and multi-cloud approaches, enabling organizations to combine public and private cloud resources.

    The AI Era (Post-2020)

    AI Integration : The integration of artificial intelligence into the cloud (e.g., Amazon SageMaker, Google Vertex AI, Azure OpenAI Service) has significantly expanded the capabilities of cloud platforms.


    From Cloud Foundations to Intelligent Horizons

    The story of cloud computing has always been one of adaptation, resilience, and innovation. As we continue to explore this journey, it is essential to note how the rise of artificial intelligence reshapes the very fabric of digital ecosystems. The fusion of these technologies is not merely about improving efficiency; it challenges existing paradigms and introduces new dynamics that will inevitably transform how cloud services are designed, delivered, and experienced. This evolution raises questions that reach far beyond technology itself, encompassing ethics, governance, and the future of work.

    The AI Era (Post-2020) signifies a convergence of cloud computing and artificial intelligence, where companies are now using cloud platforms not just for storage but for sophisticated analytics, machine learning, and automation. This integration enables businesses to make data-driven decisions faster than ever before.

    The History of Cloud Computing has profound implications for today’s businesses, allowing them to scale operations, innovate faster, and respond to market changes with unprecedented agility. By learning from the past, organizations can prepare for future advancements and leverage cloud technology to its fullest potential.



    References

    This article is an excerpt from the book

    Cloud-Native Ecosystems

    A Living Link — Technology, Organization, and Innovation