Edge Computing for RIS

  •  Reconfigurable Intelligent Surfaces

RIS technology emerges as a key player in shaping the future of wireless communications. At 6G frequencies, signals are highly likely to be absorbed, reflected, or scattered by common urban and rural elements such as buildings, hills, and vehicles, so the environment can become hostile to signal transmission. In such a scenario, maintaining a direct line-of-sight (LOS) between the emitter or base station (BS) and the users is crucial, and this is precisely where RIS technology becomes a key feature of the 6G era. This is accomplished by effectively establishing a virtual LOS: an RIS can be strategically placed in the radio channel between the transmitter and the receiver, and its cell configuration is adjusted to purposefully reflect the signal toward the user's receiver.
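As a rough illustration of this redirection principle, the sketch below computes the per-element phase profile an anomalous reflector would need in order to steer a plane wave arriving from a given incidence angle toward a target user, following the generalized Snell's law. The element count, spacing, and carrier frequency are arbitrary example values, not parameters of any specific deployment.

```python
import numpy as np

def ris_phase_profile(n_elements, spacing, wavelength, theta_in_deg, theta_out_deg):
    """Continuous phase shift (radians) per RIS element that steers a plane wave
    arriving from theta_in toward theta_out (generalized Snell's law).
    All names and values here are illustrative, not taken from the white paper."""
    theta_in = np.deg2rad(theta_in_deg)
    theta_out = np.deg2rad(theta_out_deg)
    positions = np.arange(n_elements) * spacing   # element positions along the surface
    # Linear phase gradient that converts the incident angle into the desired one
    phase = -2 * np.pi / wavelength * positions * (np.sin(theta_out) - np.sin(theta_in))
    return np.mod(phase, 2 * np.pi)               # wrap into [0, 2*pi)

# Example: a 64-element RIS at 28 GHz with half-wavelength spacing,
# redirecting a signal arriving broadside (0 deg) toward a user at 35 deg.
wavelength = 3e8 / 28e9
phases = ris_phase_profile(64, wavelength / 2, wavelength, 0.0, 35.0)
print(phases[:4])
```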
RIS technology not only brings the advantage of preventing signals from being blocked by obstacles; it also has the potential to establish a secure network by simultaneously increasing the received signal power for the intended user and minimizing any information leakage to potential eavesdroppers. At the same time, tracking the position of the target user ensures uninterrupted communications even as users move around. Furthermore, there are additional beamforming applications in which the incident signal is divided into multiple beams and redirected towards multiple users. For instance, the potential benefits that RIS could bring to multicast networks or IoT networks have been explored in the literature. RIS applications are thus almost limitless.
An RIS can be defined as an array functioning as an antenna, typically built using either metamaterials or conventional patch-array antennas equipped with rapid electronic-switching capabilities. These arrays have the capacity to control electromagnetic waves by enabling anomalous reflection, refraction, polarization transformation, and various other functionalities. In this context, our focus is on RIS configured as anomalous reflective and/or refractive surfaces capable of tailoring the propagation environment by directing signals to desired directions through reflection and/or refraction. Depending on the RIS application and the throughput required, various hardware configurations and operational modes come into play. In terms of cell architecture, RIS can be continuous, in which the finite surface is made up of a virtually infinite number of elements, or discrete, where a limited number of independent elements are configured to achieve the desired phase shift.

The number of elements is closely related to the resolution achieved in the target angle by the RIS device and depends on the number of phase shifts each cell can perform. The simplest cell is a binary cell, which allows two phase shifts, 0° and 180°, coded in a single bit. In any case, the availability of more phase-shift levels implies better resolution at the cost of higher complexity in the computational problem of RIS configuration.
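The trade-off between phase-shift levels and resolution can be made concrete with a short quantization sketch: the ideal, continuous phase of each cell is snapped to the nearest level that a b-bit cell can realize. The function and the example phase values below are illustrative only.

```python
import numpy as np

def quantize_phases(phases, n_bits):
    """Snap ideal continuous phase shifts to the nearest level a b-bit RIS cell can
    realize: 1 bit -> {0, 180} deg, 2 bits -> {0, 90, 180, 270} deg, and so on.
    Illustrative helper, not code from the study."""
    levels = 2 ** n_bits
    step = 2 * np.pi / levels
    codes = np.round(phases / step).astype(int) % levels   # integer code programmed per cell
    return codes, codes * step                              # (bit codes, realized phases)

# Ideal linear phase gradient across 16 elements (example values)
ideal = np.mod(np.linspace(0, 6 * np.pi, 16), 2 * np.pi)
codes_1bit, real_1bit = quantize_phases(ideal, n_bits=1)    # binary cells: coarse control
codes_3bit, real_3bit = quantize_phases(ideal, n_bits=3)    # 8 levels: finer resolution
wrapped_err = np.angle(np.exp(1j * (real_3bit - ideal)))    # phase error, wrapped to [-pi, pi)
print(codes_1bit, codes_3bit, np.max(np.abs(wrapped_err)))
```

With more bits per cell, the maximum residual phase error per element shrinks (roughly halving for each extra bit), which is the resolution gain paid for with a larger configuration search space.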

 

  • Why On The Edge

With the rapid evolution of technology, the increasing number of data-transmitting devices, including IoT devices, and the resulting substantial growth in the volume of data sent to the cloud for processing, edge computing has emerged as a pivotal paradigm. Instead of sending large amounts of data to a central server, data are processed locally, right where sensor or actuator devices are deployed. Consequently, edge devices, placed close to data sources and end users, play a crucial role in processing and analyzing data locally, thus mitigating the challenges posed by latency, bandwidth, and privacy concerns.
This shift towards edge computing is also a consequence of the current state of data science, which demands the processing of vast quantities of data during both the learning and inference processes for artificial neural networks (ANNs). In this context, edge computing holds the potential to enhance performance significantly, enabling efficient AI computational acceleration through edge devices suitable for AI processing such as central processing units (CPUs), graphical processing units (GPUs), tensor processing units (TPUs), FPGAs, or dedicated application-specific integrated circuits (ASICs). A clear example of this is the emergence of embedded GPU-based technologies, also referred to as neural processing units (NPUs), that several smartphone manufacturers are integrating into their devices to process data with AI algorithms on the edge.

This study proposes a novel approach to compute RIS configurations from data derived from target angles, in which a signal must be redirected using an RIS device whose configuration is inferred by AI algorithms. This derived information can contain large volumes of data and, furthermore, the computational load intensifies as the size of the target RIS increases. Consequently, sending all these data to be processed on a server and having the RIS configuration sent back to the device or devices modifying the RIS setup could consume significant data bandwidth and introduce notable latency. As a result, this approach might not be efficient in meeting real-time requirements. Considering all this, the use of edge devices becomes essential to mitigate latency and reduce data bandwidth effectively.
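To make the idea concrete, the following sketch shows one possible, deliberately simplified supervised setup: a small fully connected network maps a pair of target angles to a per-cell binary RIS configuration. The layer sizes, the number of RIS elements, and the randomly generated training labels are placeholders, not the network or data used in this study; in practice, the labels would come from an electromagnetic model or an optimization routine.

```python
import numpy as np
import tensorflow as tf

N_ELEMENTS = 64   # illustrative RIS size, not the one used in the study

# Placeholder dataset: (incidence, reflection) angle pairs -> per-cell binary states.
# Real labels would be produced by an electromagnetic model or an optimizer.
angles = np.random.uniform(-60, 60, size=(10000, 2)).astype("float32")
configs = np.random.randint(0, 2, size=(10000, N_ELEMENTS)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N_ELEMENTS, activation="sigmoid"),  # one on/off probability per cell
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(angles, configs, epochs=5, batch_size=256, verbose=0)

# Inference on the edge device: target angles in, thresholded 1-bit configuration out.
predicted = (model.predict(np.array([[0.0, 35.0]], dtype="float32")) > 0.5).astype(int)
print(predicted)
```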

 


 

  • Target Edge Devices

Numerous devices have been explored in the literature to enhance the performance of edge computing. These devices are designed with the aim of optimizing various aspects of edge computing, such as latency reduction, enhanced processing capabilities, or improved energy efficiency. GPUs are among the devices most commonly used to compute AI on the edge. GPUs were originally developed and architected to process images and videos. Comprising multiple parallel processors, GPUs facilitate parallelization, i.e., breaking down complex problems into smaller tasks that can be computed simultaneously. This feature makes GPUs suitable for AI training and inference, where vast amounts of data and calculations are needed, and the parallel computing capacity significantly speeds up the process.
In recent years, GPUs have played a pivotal role in accelerating AI tasks.
However, GPUs imply higher power consumption than other devices specifically aimed at AI, such as TPUs, or devices with a hardware configuration specifically designed for the task, such as FPGAs or ASICs. For this reason, along with the booming interest in AI, Google developed a device specifically intended to run deep learning (DL) models with an exceptional degree of efficiency.
These devices are known as TPUs, which comprise arrays of multiplication units. Initially designed for cloud computing, the first versions from Google, TPU1 and TPU2, were enormous servers to compute data in a data center. However, the evolving trend towards edge computing has driven the evolution to edge TPUs, designed to meet power consumption and size requirements while delivering high-performance acceleration. One such example of these devices is Google Coral, which has been chosen to implement the neural network developed in this study, thereby enabling a comparison with other target devices.
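Deploying such a network on a Coral Edge TPU requires full-integer quantization before the model is compiled for the device. Continuing the training sketch from the previous section, the snippet below shows a typical TensorFlow Lite int8 conversion flow; the file name and calibration settings are illustrative assumptions.

```python
import tensorflow as tf

# `model` and `angles` refer to the training sketch above (illustrative names).
# A representative dataset is needed so the converter can calibrate int8 ranges.
def representative_data():
    for sample in angles[:200]:
        yield [sample.reshape(1, 2)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("ris_config_int8.tflite", "wb") as f:
    f.write(converter.convert())

# The quantized model is then compiled offline for the Edge TPU, e.g.:
#   edgetpu_compiler ris_config_int8.tflite
```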
The final devices considered in this study are FPGAs. FPGAs are reconfigurable devices that provide the capability to implement customized hardware designs. Due to their inherent flexibility, they can be applied to a wide range of fields, and, notably, recent studies have positioned them as key components in the realm of AI. Developing tailored hardware to compute the target NN and the required operations within an FPGA brings the benefit of optimizing and parallelizing the computation according to the design limits and the capacity of the target hardware device. The flexible architecture of FPGA devices not only offers the advantage of optimizing NN architectures, but also enables the implementation of the additional features required in the final implementation. For instance, the development on FPGAs of digital control systems for reconfigurable antennas has been explored in the literature. This approach opens up the possibility of implementing the RIS-cell control system along with the AI optimization algorithm to configure each RIS cell according to the desired redirection.
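As a simple illustration of what such a control path might involve, the sketch below packs per-cell 1-bit states into fixed-width control words of the kind an FPGA register interface could expose to drive the RIS cells. The register width and bit ordering are assumptions made for illustration, not a specification from the study.

```python
import numpy as np

def pack_cell_states(states, word_bits=32):
    """Pack per-cell 1-bit states into fixed-width control words, resembling the
    register layout an FPGA-based RIS controller might expose.
    Width and bit order are illustrative assumptions."""
    words = []
    for start in range(0, len(states), word_bits):
        chunk = states[start:start + word_bits]
        word = 0
        for bit, state in enumerate(chunk):
            word |= (int(state) & 1) << bit   # LSB-first within each word
        words.append(word)
    return words

states = np.random.randint(0, 2, 64)          # e.g. the thresholded network output
print([hex(w) for w in pack_cell_states(states)])
```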
In a recent white paper, our colleague Alberto Martín of eesy-innovation GmbH, together with many other researchers, delves into the fascinating world of Reconfigurable Intelligent Surfaces (RIS)—a game-changer for 6G communication systems. These surfaces act as intelligent mirrors, manipulating wireless signals to enhance network performance.
You can find the full article here.