The explosive growth of cloud computing systems has enabled the design and realization of novel digital services. One ripple effect of this growth is a dramatic increase in data: by 2020, about 1.7 megabytes of new information will be created every second for every person on the planet, and the accumulated digital universe of data will grow from 4.4 zettabytes today to around 44 zettabytes, or 44 trillion gigabytes. Meanwhile, internet-connected things are also growing rapidly in number. According to Gartner estimates, the number of function-dedicated objects (i.e., excluding general-purpose devices such as smartphones, tablets and computers) is expected to reach 14.2 billion in 2019 and 25 billion by 2021, most of them both producing and requiring the collection and sharing of data. Within this context, new opportunities arise from the wealth of data generated, processed and stored, while significant challenges are also imposed on the data-aware architecture and data-centric operation of the underlying Cloud infrastructures.
Additionally, while enormous progress has been made on optimizing computational units over the last 70 years, the same cannot be said for data storage and movement. This has led to unbalanced, inefficient computing systems, with as much as 95-99% of the “real estate” dedicated to units that simply store and move data. A single memory access costs two to three orders of magnitude more energy than a complex arithmetic operation, and moving data also reduces performance and increases security vulnerabilities by exposing data to the outside world for longer durations. These drawbacks imply that methods to reduce data movement are key to creating systems that offer higher performance, energy efficiency, reliability and security. Facilitated by the decreasing cost of RAM and the now-common 64-bit operating systems, which allow a much larger memory space to be addressed, in-memory computing (or in-memory processing) is one approach for reducing data movement and accelerating computation. It consists of processing data held in main memory (RAM) rather than repeatedly moving it between slower storage and the CPU, thus benefiting from much lower access latencies and higher transfer speeds. In-memory computing is therefore especially useful when large amounts of data are to be processed, such as in the Cloud domain.
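To make the contrast concrete, the following minimal Python sketch compares re-reading a data set from disk for every query with keeping the working set resident in RAM and answering queries from memory. The file name, record layout and query are hypothetical and serve only to illustrate the latency argument, not any particular in-memory platform.

```python
# Illustrative sketch (not from the text): repeated disk reads vs. a
# working set kept resident in RAM. Dataset and query are hypothetical.
import csv
import os
import tempfile
import time

# Create a small hypothetical dataset on disk.
rows = [{"id": str(i), "value": str(i * 3)} for i in range(100_000)]
path = os.path.join(tempfile.mkdtemp(), "records.csv")
with open(path, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "value"])
    writer.writeheader()
    writer.writerows(rows)

def sum_from_disk() -> int:
    # Disk-oriented processing: every query re-reads and re-parses the file.
    with open(path, newline="") as f:
        return sum(int(r["value"]) for r in csv.DictReader(f))

# In-memory processing: load once, keep the working set in RAM,
# then answer subsequent queries from the resident structure.
with open(path, newline="") as f:
    resident = [int(r["value"]) for r in csv.DictReader(f)]

def sum_in_memory() -> int:
    return sum(resident)

for fn in (sum_from_disk, sum_in_memory):
    t0 = time.perf_counter()
    for _ in range(10):
        fn()
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```

On typical hardware the memory-resident variant answers repeated queries far faster, since it avoids the I/O and parsing cost that dominates the disk-based path; in-memory databases and caches generalize exactly this trade-off.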
Given the importance of such a new general infrastructure, Europe should play a crucial role in it through a series of decisions, moves and investments that advance the concept. For example, Europe should invest in tools and technologies that make it possible to create secure and safe solutions for this new infrastructure. Furthermore, it should develop systems that can be adequately understood, and are therefore easily explainable, as well as solutions using mature technology nodes (above 10 nm). In addition, Europe should build on its strengths, extending its lead in solutions related to Intelligence at the Edge, Cognitive Cyber-Physical Systems and the use of Collective Data, and in energy-efficient, sustainable, long-lifetime ICT. ICT domains should also be considered as a Continuum, encouraging collaboration between ICT at the edge and cloud/ICT initiatives. It is natural that research on post-CMOS technologies continues, but a link to existing ICT technologies should be maintained.
Once all the building blocks of the system are available, their integration and orchestration into a coherent cloud system is a challenge, and so is their dynamic adaptation to their environment. This is especially crucial for ecosystems where ICT platforms increasingly form a continuum, ranging from ultra-edge (microcontrollers linked to sensors or actuators) to edge, concentrators, micro-servers, servers, and Cloud or HPC. In that view, a system is itself a component of a larger system, or system-of-systems. Due to the complexity and size of these systems, and the heterogeneity of the systems and their providers, interoperability is key. Standardization could of course play an important role, but de facto approaches are likely to win out owing to their rapid introduction and acceptance. In addition to such static approaches, creating nodes that are dynamic and “intelligent”, able to communicate with their peers and exchange capabilities and interface formats, will enable easy-to-build systems, as sketched below. However, this still entails the challenge of defining and ensuring quality of service (QoS) across various configurations and situations. This was also addressed in several roadmaps, which note a strong and continuing shift toward centralization of IT infrastructure, with companies and organizations increasingly placing their products (e.g., services and applications) in private and public Clouds.
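As an illustration of the capability-exchange idea mentioned above, the sketch below shows two hypothetical peer nodes advertising their capabilities and supported interface formats and negotiating what one can delegate to the other, and over which format. The descriptor fields, node names and values are assumptions made for illustration, not an established protocol or standard.

```python
# Hypothetical sketch of capability exchange between peer nodes in an
# edge-to-cloud continuum; descriptor fields and formats are illustrative.
from dataclasses import dataclass

@dataclass
class NodeDescriptor:
    name: str
    capabilities: set[str]        # e.g. {"image-inference", "storage"}
    interface_formats: set[str]   # e.g. {"json", "protobuf"}

def negotiate(a: NodeDescriptor, b: NodeDescriptor) -> dict:
    """Return the capabilities node `a` could delegate to `b` and one
    mutually supported interface format, if any."""
    shared_formats = a.interface_formats & b.interface_formats
    return {
        "delegable": sorted(b.capabilities - a.capabilities),
        "format": next(iter(sorted(shared_formats)), None),
    }

edge = NodeDescriptor("ultra-edge-sensor", {"sampling"}, {"json"})
server = NodeDescriptor("micro-server",
                        {"sampling", "image-inference", "storage"},
                        {"json", "protobuf"})

print(negotiate(edge, server))
# {'delegable': ['image-inference', 'storage'], 'format': 'json'}
```

In such a scheme, QoS expectations could be attached to the exchanged descriptors, but defining and enforcing them across heterogeneous configurations remains exactly the open challenge noted above.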