The Demo sessions of ICC 2025 are a high-profile, leading-edge forum for researchers and engineers in the field of communications. The Demo sessions create a unique opportunity to showcase recent developments in the field through tangible demonstrations of systems, applications, services, and solutions. They are also an opportunity to engage with a highly skilled and innovative audience and to discuss emerging technologies and recent research prototypes with key thought leaders. We welcome demonstrations showcasing prototypes, innovative applications, groundbreaking ideas, and novel concepts related to communications technologies, principles, and concepts.
DEMO 1: RaaS in heterogeneous data control planes
DEMO 2: Hybrid Quantum-Classical Benders’ Decomposition (HQC-Bend) Open-Source Software Package
DEMO 3: OAI/O-RAN Experimentation in the Upper Mid-Band (6-24 GHz)
DEMO 4: Multi-user Wireless sub-Terahertz Backhaul
DEMO 5: Artificial Intelligence-Powered Radio Frequency Impairment Recognition for Satellite Communications
DEMO 6: Prototype Demo of Semantic Image Wireless Transmission System
DEMO 1: RaaS in heterogeneous data control planes
The recent trend of applying Software-Defined Networking (SDN) principles to mainstream network architectures (e.g., Enterprise, Service Provider, and Data Center) has dramatically promoted programmability and automation, resulting in significant operational scalability. Additionally, the introduction of a packet-based core in 4G and later telecommunication architectures has harmonised the interoperability of mobile networks with the rest of the IP-based infrastructure. Furthermore, architectures such as O-RAN also accommodate closed-loop orchestration using NetApps.
State-of-the-art SDN offerings related to control-data plane decoupling, management, and orchestration, such as Cisco SD-WAN, LF Nephio, and even Cloud Networking options, have successfully demonstrated the value proposition of leveraging virtualisation and a micro-service-based approach. However, in the context of routing, the routing protocols are still packaged within the network functions.
Looking ahead, the era of Intent-Based, on-demand and service-specific network provisioning, which leverages the current Network Slicing technology, holds great promise. It presents a compelling use case for a customisable, automated, and platform-agnostic implementation of Policy-Based Routing.
The motivation for Routing as a Service (RaaS) is to build a service layer on top of the present infrastructures and provision them by injecting routes computed using customisable routing logic (e.g., metrics and pathfinding algorithms), leveraging standard network programmability options such as RESTCONF.
This exhibition aims to demonstrate the capabilities of RaaS through an intent-based route configuration in a hybrid SDN environment within a single administrative domain defined by a custom routing logic. The platform consists of the following two components.
1. The RaaS Client interfaces with the customer and the underlying network controllers. It receives routing logic as intent from the customer, and topology and network state as telemetry from the downstream network controllers. Combining the state and intent, it requests the RaaS Server to compute routes and conveys the response to the controllers.
2. The RaaS Server processes route requests from the RaaS Client.
The two components are microservices and communicate over a private message bus. We shall exhibit the RaaS capabilities by deploying it over a simulated hybrid SDN infrastructure in GNS3 using Open vSwitch and Cisco IOSv routers.
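The route-computation step described above can be sketched in Python. Everything here (the function names, the telemetry/intent shapes, and the toy topology) is an illustrative assumption rather than the actual RaaS API; the sketch runs Dijkstra's algorithm over controller-reported telemetry using a customer-supplied metric:

```python
import heapq

# Hypothetical sketch of the RaaS Server's route computation: names and
# message shapes are illustrative, not the actual RaaS interfaces.
def compute_routes(topology, intent, src, dst):
    """Dijkstra over links weighted by the customer-supplied metric."""
    metric = intent["metric"]            # e.g. lambda link: link["delay"]
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for link in topology.get(node, []):
            nd = d + metric(link)
            if nd < dist.get(link["to"], float("inf")):
                dist[link["to"]] = nd
                prev[link["to"]] = node
                heapq.heappush(pq, (nd, link["to"]))
    path, node = [], dst                 # walk predecessors back to src
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path))

# Telemetry reported by downstream controllers (illustrative topology).
topology = {
    "R1": [{"to": "R2", "delay": 5}, {"to": "R3", "delay": 1}],
    "R2": [{"to": "R4", "delay": 1}],
    "R3": [{"to": "R4", "delay": 1}],
}
intent = {"metric": lambda link: link["delay"]}   # customer-defined logic
print(compute_routes(topology, intent, "R1", "R4"))  # ['R1', 'R3', 'R4']
```

In the actual platform, the computed path would be translated into route injections pushed to the controllers (e.g., via RESTCONF) rather than printed.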
Demonstrator(s):
Professor Tasos Dagiuklas is a leading researcher and expert in the field of smart Internet technologies. He is the leader of the Smart Internet Technologies Hub (SITHub) research group at London South Bank University. Tasos received the Engineering Degree from the University of Patras, Greece, in 1989, the M.Sc. from the University of Manchester, UK, in 1991 and the Ph.D. from the University of Essex, UK, in 1995, all in Electrical Engineering. He has been a principal investigator, co-investigator, project and technical manager, coordinator and focal person of more than 30 international R&D and capacity-training projects in the areas of Fixed-Mobile Convergence, 4G/5G/6G networking technologies, VoIP and multimedia networking. His research interests lie in the fields of systems beyond 5G and 6G networking technologies, edge cloud computing and video analytics.
Saptarshi Ghosh is a Future Network Technologist in Orchestration at Digital Catapult with over five years of experience in B5G technologies. He received his M.E. in Software Engineering from Jadavpur University, India (2016), M.Sc. in Smart Networks from the University of the West of Scotland, UK (2017) and PhD in Computer Science and Informatics from London South Bank University, UK (2021), with GATE, Erasmus-Mundus and Marie Skłodowska-Curie Fellowships, respectively. Saptarshi has contributed to research on network softwarisation, automation, orchestration and intelligence through several EU/UK projects funded by Horizon 2020, Innovate UK, Erasmus+, DSTL-DASA, EPSRC, and DSIT, and has obtained industrial certifications such as JNCIA (DevOps) and CCNP (Enterprise Infrastructure). Formerly, he was associated with London South Bank University as a sessional lecturer and Senior Research Fellow. His research domains include network orchestration, knowledge-defined networking, IP routing, 6G self-organised networking and graph theory.
Supported by
DEMO 2: Hybrid Quantum-Classical Benders’ Decomposition (HQC-Bend) Open-Source Software Package
The objective of this exhibition is to present the Hybrid Quantum-Classical Multi-Cuts Benders' Decomposition (HQC-Bend) algorithm, implemented as an open-source Python package. The algorithm uses quantum annealing from D-Wave, the Canadian quantum computing company, to solve the master problem, while efficient classical solvers like Gurobi handle the subproblems. This hybrid approach significantly improves computational efficiency by allowing multiple cuts per iteration, thus accelerating convergence. The package can automatically run the entire Benders' decomposition process for mixed-integer linear programming (MILP) models built in Gurobi, requiring minimal user intervention. In addition, the software offers multiple solver options for the master problem and cut-adding methods for subproblems, providing users with customizable solutions. With comprehensive information on the solution process for effective debugging and research, the package is valuable for both academic and industrial use. The exhibition aims to showcase how integrating Canadian quantum technology enhances classical methodologies, demonstrating the algorithm's performance and versatility.
The demo will showcase the Hybrid Quantum-Classical Multi-Cuts Benders' Decomposition (HQC-Bend) algorithm, emphasizing its advanced features and performance capabilities. We will demonstrate its application using various MILP models, such as transportation optimization, facility location, and resource allocation. The demo harnesses D-Wave’s quantum annealing technology, a groundbreaking innovation from the Canadian quantum computing leader. D-Wave has pioneered quantum annealing techniques, allowing the HQC-Bend algorithm to solve the master problem efficiently by leveraging quantum sampling to add multiple cuts in a single iteration, thus significantly accelerating convergence. Classical solvers like Gurobi are integrated to handle subproblems, demonstrating the flexibility of combining quantum and classical computing for optimized results. Attendees will witness the entire Benders' decomposition process running automatically, with outputs including data, visual plots, and comprehensive records. This highlights the package’s user-friendliness and detailed solution-tracking capabilities. The demo will also illustrate how users can customize the algorithm by selecting different solver combinations and cut-adding methods to optimize performance for specific problem types. This exhibition aims to demonstrate how integrating D-Wave’s Canadian quantum technology enhances classical optimization, providing an efficient, cutting-edge tool for both academic research and industrial applications.
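To make the division of labour between master and subproblem concrete, here is a minimal, solver-free sketch of the Benders loop on a toy problem. It is not the HQC-Bend package: the master is solved by brute-force enumeration where HQC-Bend would hand it to D-Wave's annealer (whose quantum sampling can return several candidate solutions, and hence several cuts, per iteration), and the subproblem oracle stands in for a Gurobi LP returning optimality cuts. The toy instance and all names are our own illustrative choices.

```python
import itertools

C = (3.0, 2.0)                      # master (first-stage) cost c^T y
PIECES = [(10.0, -4.0, -5.0),       # q(y) = max over affine pieces
          (2.0, -1.0, 0.0)]         # each piece: a0 + a1*y0 + a2*y1

def subproblem(y):
    """Oracle: value of q(y) plus the active piece as a Benders cut."""
    vals = [a0 + a1 * y[0] + a2 * y[1] for a0, a1, a2 in PIECES]
    k = max(range(len(PIECES)), key=lambda i: vals[i])
    return vals[k], PIECES[k]       # cut: theta >= a0 + a1*y0 + a2*y1

def solve_master(cuts):
    """Tiny master: enumerate binary y (the quantum annealer's job in HQC-Bend)."""
    best = None
    for y in itertools.product((0, 1), repeat=2):
        theta = max(a0 + a1 * y[0] + a2 * y[1] for a0, a1, a2 in cuts)
        obj = C[0] * y[0] + C[1] * y[1] + theta
        if best is None or obj < best[0]:
            best = (obj, y)
    return best

def benders(max_iter=10):
    """Iterate master/subproblem until lower and upper bounds meet."""
    cuts, upper = [(0.0, 0.0, 0.0)], float("inf")
    for _ in range(max_iter):
        lower, y = solve_master(cuts)                    # lower bound
        q, cut = subproblem(y)
        upper = min(upper, C[0] * y[0] + C[1] * y[1] + q)  # upper bound
        if upper - lower < 1e-9:
            return upper, y
        cuts.append(cut)
    return upper, y

print(benders())   # optimum of the toy instance
```

HQC-Bend automates this loop for arbitrary Gurobi-built MILPs, logging bounds and cuts at each iteration.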
Demonstrator(s):
Zhongqi Zhao
Zhongqi Zhao (Graduate Student Member, IEEE) received the B.S. degree in electronic engineering from Beijing Jiaotong University in 2018, the B.S. degree in mathematics and the B.S. and M.S. degrees in electrical engineering from the University of Minnesota in 2018 and 2020, respectively. He is a Ph.D. student at the University of Houston. His research interests include quantum computing, optimization methods, complex system operations, and communication networks.
Mingze Li
Mingze Li (Graduate Student Member, IEEE) received his B.S. degree in computer science from Tianjin University, China, in 2017, and a master’s degree in computer engineering from San Jose State University in 2020. He is currently a Ph.D. student at the University of Houston. His research interests include quantum computing and optimization of complex energy systems.
Lei Fan
Lei Fan (Senior Member, IEEE) received the Ph.D. in operations research from the Industrial and Systems Engineering Department, University of Florida. He is an Assistant Professor with the Engineering Technology Department and the Electrical and Computer Engineering Department, University of Houston. His research interests include optimization algorithms, complex network systems, and quantum computing.
Zhu Han
Zhu Han (Fellow, IEEE) received the B.S. degree in electronic engineering from Tsinghua University in 1997, and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of Maryland, College Park, in 1999 and 2003, respectively. He is currently a John and Rebecca Moores Professor with the Electrical and Computer Engineering Department and the Computer Science Department, University of Houston. His research interests include game theory, wireless resource allocation and management, quantum computing, data science, and smart grid. Dr. Han received an NSF CAREER Award in 2010. He is also the winner of the 2021 IEEE Kiyo Tomiyasu Award (an IEEE Field Award), an AAAS Fellow since 2019, and an ACM Fellow since 2024.
Supported by:
DEMO 3: OAI/O-RAN Experimentation in the Upper Mid-Band (6-24 GHz)
The NYU/Politecnico di Milano/Pi-Radio team will provide a live demonstration of the world’s first O-RU that operates at FR3. Traffic such as iPerf and YouTube will be exchanged between an OAI O-RU and an OAI UE passing through an OAI 5G core network. The OAI O-RU comprises a powerful computer, the Analog Devices sub-6 GHz O-RU (Kerberos), and a Pi-Radio FR3 box. The OAI UE comprises a powerful laptop and a USRP B210 that connects to Pi-Radio’s FR3 front-end using SMA cables. The OAI 5G core network is created using Docker images on the same PC that runs the OAI O-RU.
Demonstrator(s):
Marco Mezzavilla (Senior Member, IEEE) received the B.Sc., M.Sc., and Ph.D. degrees in Electrical Engineering from the University of Padua, Italy. He held visiting research positions at the NEC Network Laboratories in Heidelberg, at the Centre Tecnològic de Telecomunicacions de Catalunya in Barcelona, and at Qualcomm Research in San Diego. He joined Politecnico di Milano as an Associate Professor in 2024 after a 10-year research tenure at New York University (NYU), where he led several research projects on upper mid-band, mmWave, and sub-THz radio access technologies for next-generation wireless systems. His research interests include communication protocols, wireless prototyping, cybersecurity, and robotics. He is a co-founder of Pi-Radio, a spin-off of NYU that develops frontier software-defined radios.
Supported by:
DEMO 4: Multi-user Wireless sub-Terahertz Backhaul
Five distinct software-defined radios (SDRs), each configured with unique bandwidths, latencies, and channel conditions, are assigned sub-6 GHz frequency bands. These SDRs establish one-way wireless communication links to a stationary base station (FEX-1). At FEX-1, the incoming signals are aggregated and upconverted to the sub-THz frequency range (140–220 GHz). The upconverted signal is then transmitted to the core network (FEX-2), where it undergoes downconversion. The downconverted signal is analyzed using a high-rate-capable spectrum analyzer and an SDR. These tools separate the aggregated signals, enabling the visualization and examination of individual user data. All RF equipment and instruments used in this setup are part of PolyGrames, the advanced microwave and terahertz research laboratory at Polytechnique Montréal.
Our demonstration showcases the use of the sub-THz band for high-capacity wireless backhaul transmission. All operations comply with FCC regulations for sub-6 GHz bands. To ensure safety and security, isolator cones are employed, and all equipment is RFI shielded and grounded.
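The frequency-division aggregation and separation chain can be illustrated with a toy baseband simulation. The slot plan, bandwidths, and ideal brick-wall filtering below are our own simplifications; the actual demo performs the final up/downconversion to 140–220 GHz in analog hardware at FEX-1 and FEX-2.

```python
import numpy as np

fs, n = 1.024e6, 4096                      # sample rate, block size
t = np.arange(n) / fs
freqs = np.fft.fftfreq(n, 1 / fs)
slots = [-300e3, 0.0, 300e3]               # bin-aligned slot per SDR user

rng = np.random.default_rng(1)

def bandlimited_user(bw=50e3):
    """Toy baseband user signal confined to |f| < bw."""
    spec = np.zeros(n, complex)
    mask = np.abs(freqs) < bw
    spec[mask] = rng.normal(size=mask.sum()) + 1j * rng.normal(size=mask.sum())
    return np.fft.ifft(spec)

users = [bandlimited_user() for _ in slots]

# FEX-1: shift each user to its slot and sum into one aggregate signal,
# which the analog front-end then upconverts to the sub-THz carrier.
aggregate = sum(u * np.exp(2j * np.pi * f * t) for u, f in zip(users, slots))

# FEX-2 (after downconversion): de-shift one slot and low-pass filter
# to separate that user's data from the aggregate.
def extract(agg, f, bw=100e3):
    spec = np.fft.fft(agg * np.exp(-2j * np.pi * f * t))
    spec[np.abs(freqs) > bw] = 0           # ideal brick-wall low-pass
    return np.fft.ifft(spec)
```

Because the slots are spaced wider than the user bandwidths, each user is recovered essentially exactly in this idealised model; in the live setup, a spectrum analyzer and an SDR perform the separation on real hardware-impaired signals.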
Demonstrator(s):
Gunes Karabulut Kurt, IEEE Senior Member, is a Canada Research Chair (Tier 1) in New Frontiers in Space Communications and Associate Professor at Polytechnique Montréal, Montréal, QC, Canada. From 2010 to 2021, she was a professor at Istanbul Technical University. She is a Marie Curie Fellow and has received the Turkish Academy of Sciences Outstanding Young Scientist (TUBA-GEBIP) Award in 2019. She received her Ph.D. degree in electrical engineering from the University of Ottawa, ON, Canada. Her research interests include multi-functional space networks, space security, and wireless testbeds.
Supported by:
DEMO 5: Artificial Intelligence-Powered Radio Frequency Impairment Recognition for Satellite Communications
The rapid expansion of satellite communications, driven by decreasing launch costs and advances in small satellite technology, has led to unprecedented orbital congestion. With companies deploying thousands of low Earth orbit satellites, the wireless spectrum faces severe contention issues, threatening network reliability and performance.
Signal impairments, environmental factors, and inter-satellite interference pose significant operational challenges. Impairments can degrade satellite links, directly impacting network viability. Current systems attempt to manage impairments through spectrum monitoring and predefined thresholds, but struggle with the dynamic, unpredictable nature of modern satellite networks.
Effective impairment recognition enables adaptive responses, such as changing waveforms, error correction, or frequencies, to maintain link quality. Deep learning offers a promising solution for impairment recognition, particularly when processing complex baseband IQ signals. Unlike traditional analytical approaches, which rely on static rules, deep learning approaches can provide a more nuanced understanding of the sources of an impairment and can therefore enable more robust response actions. While offering improved performance, deep learning models can also offer flexibility through continual learning and deployment efficiency.
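As an illustration of the kind of IQ-domain processing involved, the sketch below generates a clean QPSK burst and a tone-jammed one, stacks I and Q into the (2, N) tensor a 1-D CNN would consume, and uses a simple spectral-peak heuristic as a stand-in for the trained network. All signal parameters, names, and the threshold are our own illustrative choices, not Qoherent's model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk_burst(n):
    """Random unit-power QPSK baseband symbols."""
    return np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))

def to_model_input(iq):
    """Stack I and Q into the (2, N) real tensor a 1-D CNN would consume."""
    return np.stack([iq.real, iq.imag])

def toy_detector(iq, threshold=20.0):
    """Stand-in classifier: flags a strong narrowband interferer via its
    dominant spectral peak. The real demo instead infers the impairment
    class with a deep network trained on labelled captures."""
    spec = np.abs(np.fft.fft(iq)) ** 2
    return "tone_interference" if spec.max() / spec.mean() > threshold else "clean"

n = 1024
clean = qpsk_burst(n)
jammed = clean + 3.0 * np.exp(2j * np.pi * 100 * np.arange(n) / n)  # CW jammer
print(to_model_input(clean).shape)                  # (2, 1024)
print(toy_detector(clean), toy_detector(jammed))
```

A trained model replaces the heuristic with learned features, which is what makes recognition of subtler impairments (collisions, bleedovers) possible on live captures.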
Qoherent completed a project with the support of the European Space Agency and the Canadian Space Agency to develop impairment recognition technology with deep learning, which will be showcased in this demonstration with live inference.
The objective of the exhibition is to showcase one of the models Qoherent developed for RF impairment recognition, whereby a model is trained to recognize events such as signal collisions, bleedovers, jamming and other scenarios, directly from time series signal IQ data captured from a software radio. Qoherent will also share information on other AI-driven wireless communications & sensing projects and will invite conversations with potential academic partners that are interested in the areas of AI, software-defined radio, and space. The technology also has merit for 5G non-terrestrial networking (NTN) and communications technologies.
We will be showcasing live inference of one of our impairment recognition models – specifically a model trained to recognize different “human-caused” impairments that can occur on spectrum.
There will be two or more SDRs transmitting (an incumbent and two interferers) and a receiver that is observing with a deep learning model. The model will be inferring the impairment scenario that is present. An example of the current prototype is shown below.
The primary purpose is to showcase the model and the end-to-end software tools that were used to produce and deploy it. These tools can be used for other use-cases by the wireless research community without too much effort. A version of the model running directly on a 5G gNodeB will also be showcased.
It is possible, although not yet something we can commit to, that the demo will run on a live srsRAN-based gNodeB, in which case it will also showcase our methodology for building a system for Joint Communications and Sensing using open-source 5G. In this case, one or more commercial handsets would be utilized to showcase 5G service while the inference system operates.
Demonstrator(s):
Ashkan Beigi is the founder and CEO of Qoherent and works with scientists and engineers to build the next generation of wireless technologies with machine learning. Qoherent leads multiple projects in this area in collaboration with the Canadian Space Agency, the European Space Agency, and the Department of Defense, in partnership with National Instruments/Ettus, Xanadu, SRS, Signalcraft, and academic institutions. Ashkan has delivered multiple talks and demos on AI for O-RAN and 5G in venues including IEEE MILCOM and the srsRAN Workshop. Ashkan's career focus has been the business development of test and measurement solutions, including RF instrumentation and software-defined radios. His career has spanned several industries, including test & measurement, energy, mining, and consumer electronics. Ashkan has a Bachelor’s degree in Engineering Physics & Management from McMaster University in Hamilton, Ontario, Canada, and a Master of Business Administration from the Schulich School of Business at York University, in Toronto, Ontario, Canada.
Supported by:
DEMO 6: Prototype Demo of Semantic Image Wireless Transmission System
This demo aims to evaluate the applicability of semantic communication systems in real-world scenarios, and demonstrate how the proposed techniques can further enhance communication performance under practical conditions. Detailed descriptions are provided below:
The proposed semantic communication-based image transmission scheme demonstrates improvements in peak signal-to-noise ratio (PSNR)-channel bandwidth ratio (CBR) performance compared to conventional communication schemes where source coding and channel coding are performed separately (e.g., JPEG + LDPC), even in real-world communication environments.
Additionally, when compared to existing semantic communication systems designed entirely with neural network-based encoder/decoders, this demo shows that the proposed system achieves equivalent or superior image transmission performance with significantly fewer model parameters.
Furthermore, this demo highlights that the proposed system can adapt to changes in channel quality and available communication resources while minimizing performance degradation. This is achieved without incurring additional communication or computational costs, ensuring efficiency and robustness.
The demo consists of three stages: compressing the image to be transmitted on a PC, transmitting and receiving the image over a real wireless channel using an RF device connected to the PC, and reconstructing the image on the PC.
In the first stage, image compression is performed in a local desktop environment and transmitted to the RF transmitter via a wired connection. The image is compressed in parallel into text prompts and features. Features are encoded using a trained Variational Autoencoder (VAE) into independent Gaussian vectors and hyperpriors containing distribution information of the Gaussian vectors. The text prompts and features share a limited power budget, with features robustly encoded against variations in rate and channel conditions using the SoftCast method. Hyperpriors and text are mapped into channel inputs through entropy coding, channel coding, and QAM modulation. Real-valued features are paired and directly mapped to the channel as complex values.
In the second stage, the channel inputs are transmitted to the receiver through an RF device over a real channel. The RF transmitter converts the compressed image into OFDM-based RF signals, which are transmitted via an antenna as electromagnetic waves. These electromagnetic signals propagate through the air, are received by the antenna of the RF receiver, and are conveyed to the receiver PC via a wired connection.
Finally, in the third stage, the receiver PC decodes the features using LMMSE estimation to reconstruct a low-quality image. The features and text prompt information are simultaneously input into a multi-modal large-language model (LLM) to reconstruct an image containing semantic information.
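The SoftCast-style analog mapping of the features and the LMMSE decoding step can be sketched numerically. The feature variances, power budget, and noise level below are illustrative assumptions rather than the demo's actual parameters, and the real-to-complex pairing and OFDM framing are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

lam = np.array([4.0, 1.0, 0.25])   # per-feature Gaussian variances (assumed)
P = 3.0                            # total transmit power budget (assumed)
sigma2 = 0.1                       # channel noise variance (assumed)

# SoftCast-style power allocation: gains proportional to lam**(-1/4),
# rescaled so the expected transmit power meets the budget.
g = lam ** -0.25
g *= np.sqrt(P / np.sum(g ** 2 * lam))

n_frames = 20000
x = rng.normal(scale=np.sqrt(lam), size=(n_frames, 3))       # VAE features
y = g * x + rng.normal(scale=np.sqrt(sigma2), size=x.shape)  # AWGN channel

# Per-feature LMMSE estimate: x_hat = g*lam / (g^2*lam + sigma2) * y.
x_hat = (g * lam) / (g ** 2 * lam + sigma2) * y
mse = np.mean((x - x_hat) ** 2, axis=0)
print(mse)   # empirical MSE per feature, below the prior variances lam
```

Because the mapping is linear and analog, reconstruction quality degrades gracefully with channel noise, which is what lets the system adapt to channel-quality changes without re-encoding; the multi-modal LLM then refines the LMMSE output using the text prompts.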
Demonstrator(s):
Namyoon Lee received his Ph.D. from The University of Texas at Austin in 2014. He worked at the Samsung Advanced Institute of Technology (2008–2011) and Intel Labs (2015–2016). From 2016 to 2022, he was an assistant and then associate professor at POSTECH, and he is now a professor at Korea University. He has received numerous awards, including the 2016 IEEE ComSoc Asia-Pacific Outstanding Young Researcher Award and the 2021 IEEE-IEIE Joint Award for Young Engineer and Scientist. He has been an Associate Editor for the IEEE Transactions on Wireless Communications and the IEEE Transactions on Communications since 2021. His research focuses on advanced MIMO and short-blocklength channel coding.
Supported by: