Colosseum is an open-access, publicly available large-scale wireless testbed for experimental research via virtualized and softwarized waveforms and protocol stacks on a fully programmable, "white-box" platform. Through 256 state-of-the-art software-defined radios and a Massive Channel Emulator core, Colosseum can model virtually any scenario, enabling the design, development, and testing of solutions at scale under a variety of deployments and channel conditions. These radio-frequency scenarios are reproduced through high-fidelity FPGA-based emulation with finite impulse response (FIR) filters, which model the taps of the desired wireless channels and apply them to the signals generated by the radio nodes, faithfully mimicking the conditions of real-world wireless environments. In this paper, we describe the architecture of Colosseum and its experimentation and emulation capabilities. We then demonstrate the effectiveness of Colosseum for experimental research at scale through exemplary use cases involving prevailing wireless technologies (e.g., cellular and Wi-Fi) in spectrum sharing and unmanned aerial vehicle scenarios. A roadmap of future Colosseum updates concludes the paper.
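The FIR-based emulation described above amounts to convolving each transmit stream with a vector of channel taps. A minimal sketch of that principle (the tap profile, tone, and noise level below are illustrative assumptions, not Colosseum's actual channel models):

```python
import numpy as np

# Sketch of FIR-based channel emulation: each tx->rx pair is a tap vector,
# and the emulated received signal is the transmit samples convolved with
# those taps, plus noise. All numbers here are made up for illustration.

rng = np.random.default_rng(1)
tx = np.exp(2j * np.pi * 0.05 * np.arange(256))        # unit-power test tone
taps = np.array([0.8, 0.0, 0.35 + 0.2j, 0.1])          # multipath tap profile
rx = np.convolve(tx, taps)[: tx.size]                  # FIR channel response
rx += (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size)) * 0.01

power = float(np.mean(np.abs(rx) ** 2))
print("emulated rx power:", round(power, 2))
```

In hardware, the same convolution runs per tap in real time on the FPGA fabric, with tap vectors swapped to reproduce different scenarios.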

### Related Content

This paper focuses on Service Level Agreement (SLA)-based end-to-end Quality of Service (QoS) maintenance across a wireless-optical integrated network. We use a Long Term Evolution (LTE)-based Spectrum Access System (SAS) in the wireless network, while the optical network comprises an Ethernet Passive Optical Network (EPON). The proposal targets a learning-based intelligent SAS in which opportunistic allocation of any available bandwidth is performed after meeting the SLA requirements. Such opportunistic allocation is particularly beneficial for nomadic users with varying QoS requirements. The opportunistic allocation is carried out with the help of a Vickrey-Clarke-Groves (VCG) auction. The proposal allows the users of the integrated network to decide the payment they want to make in order to opportunistically obtain bandwidth. Learning automata are used by the users to intelligently converge to the optimal payment value based on the network load. The payments made by the users are later used by the optical network units of the EPON to prepare the bids for the auction. The proposal has been verified through extensive simulations.
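For intuition, when bidders each want one unit of spare bandwidth, a VCG auction over k identical units reduces to the multi-unit Vickrey rule: the top-k bidders win and each pays the highest losing bid (its externality). A toy sketch of that special case, not the paper's exact mechanism (the `onu*` names are hypothetical):

```python
# Toy VCG (Vickrey) auction for k identical spare-bandwidth units,
# unit demand per bidder -- an illustrative sketch only.

def vcg_unit_demand(bids, k):
    """bids: {bidder: bid}; k identical units; each bidder wants one unit.
    Returns {winner: payment}: winners are the top-k bidders, each paying
    the highest losing bid (the externality it imposes on the rest)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = ranked[:k]
    losing = ranked[k][1] if len(ranked) > k else 0.0
    return {name: losing for name, _ in winners}

payments = vcg_unit_demand({"onu1": 5.0, "onu2": 3.0, "onu3": 8.0, "onu4": 2.0}, k=2)
print(payments)  # onu3 and onu1 win, each pays 3.0 (the highest losing bid)
```

Because payment depends on the other bids rather than one's own, truthful bidding is a dominant strategy, which is what makes the learned payment values meaningful.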

The continually growing traffic demands resulting from advanced technologies in 5G and 6G systems, which offer services with intensive requirements such as IoT and virtual reality applications, have led to significant performance expectations of data center networks (DCNs). More specifically, DCNs are expected to meet high-bandwidth connectivity, high throughput, low latency, and high scalability requirements. However, current wired DCN architectures introduce large cabling requirements and limit the ability to reconfigure data centers as they expand. To that end, wireless technologies such as Optical Wireless Communication (OWC) have been proposed as a viable and cost-effective solution to meet the aforementioned requirements. This paper proposes the use of infrared (IR) OWC systems that employ Wavelength Division Multiplexing (WDM) to enhance DCN communication in the downlink direction, i.e., from Access Points (APs) in the ceiling, connected to spine switches, to receivers attached to the top of the racks, representing leaf switches. The proposed systems utilize Angle Diversity Transmitters (ADTs) mounted on the room ceiling to facilitate inter-rack communication. Two different optical receiver types are considered, namely Angle Diversity Receivers (ADRs) and Wide Field-of-View Receivers (WFOVRs). The simulation (i.e., channel modeling) results show that the proposed data center links achieve data rates of up to 15 Gbps.
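The channel modeling behind such ceiling-to-rack links typically starts from the Lambertian line-of-sight gain model for IR links (Kahn-Barry). A sketch of that standard computation, with all geometry and power numbers being illustrative assumptions rather than values from the paper:

```python
import math

def los_channel_gain(d, phi, psi, half_angle, area, fov):
    """Lambertian line-of-sight gain for an IR OWC link: `half_angle` is
    the transmitter half-power semi-angle, `area` the photodetector area
    (m^2), `fov` the receiver field of view; `phi`/`psi` are the radiance
    and incidence angles. Returns 0 outside the field of view."""
    if psi > fov:
        return 0.0
    m = -math.log(2) / math.log(math.cos(half_angle))      # Lambertian order
    return (m + 1) * area / (2 * math.pi * d**2) * math.cos(phi)**m * math.cos(psi)

# Illustrative ceiling-AP-to-rack geometry (all numbers are assumptions):
gain = los_channel_gain(d=2.0, phi=math.radians(10), psi=math.radians(10),
                        half_angle=math.radians(20), area=1e-4, fov=math.radians(25))
p_rx_dbm = 10 * math.log10(gain * 0.1 * 1e3)  # 100 mW transmit power, in dBm
print(f"channel gain = {gain:.3e}, received power = {p_rx_dbm:.1f} dBm")
```

With ADTs and narrow-FOV receivers, the design goal is to keep each rack inside exactly one beam's footprint, which this per-link budget lets one check.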

The recently commercialized fifth-generation (5G) wireless communication networks achieved many improvements, including air-interface enhancement, spectrum expansion, and network densification, through several key technologies such as massive multiple-input multiple-output (MIMO), millimeter-wave communications, and ultra-dense networking. Despite the deployment of 5G commercial systems, wireless communication still faces many challenges in enabling connected intelligence and a myriad of applications such as the industrial Internet-of-things, autonomous systems, brain-computer interfaces, digital twins, the tactile Internet, etc. Therefore, it is urgent to start research on sixth-generation (6G) wireless communication systems. Among the candidate technologies for such systems, cell-free massive MIMO, which combines the advantages of distributed systems and massive MIMO, is considered a key solution to enhance wireless transmission efficiency and has become an international research frontier. In this paper, we present a comprehensive study on cell-free massive MIMO for 6G wireless communication networks, especially from the signal processing perspective. We focus on enabling physical-layer technologies for cell-free massive MIMO, such as user association, pilot assignment, transmitter and receiver design, as well as power control and allocation. Furthermore, some current and future research problems are highlighted.
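Pilot assignment, one of the physical-layer problems listed above, is often attacked with greedy heuristics: reuse a pilot only among users that are far apart, so pilot contamination stays low. A minimal illustrative sketch of one such heuristic (not any specific scheme from the survey; user positions are made up):

```python
import math

def greedy_pilot_assignment(user_pos, num_pilots):
    """Greedy pilot assignment sketch: each user takes the pilot whose
    current co-pilot users are farthest away, reducing contamination.
    Illustrative heuristic only; ties keep the lowest pilot index."""
    assignment = {}
    for u, pos in user_pos.items():
        best_pilot, best_dist = None, -1.0
        for p in range(num_pilots):
            sharers = [user_pos[v] for v, q in assignment.items() if q == p]
            # Distance to the nearest user already on this pilot (inf if free)
            d = min((math.dist(pos, s) for s in sharers), default=math.inf)
            if d > best_dist:
                best_pilot, best_dist = p, d
        assignment[u] = best_pilot
    return assignment

users = {"u0": (0, 0), "u1": (1, 0), "u2": (50, 50), "u3": (51, 50)}
print(greedy_pilot_assignment(users, num_pilots=2))
```

Here the two user clusters end up sharing pilots only across clusters, which is the intended contamination-avoidance behavior.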

Computational drug repositioning technology is an effective tool to accelerate drug development. Although this technique has been widely used and successful in recent decades, many existing models still suffer from multiple drawbacks, such as the massive number of unvalidated drug-disease associations and the use of the inner product in the matrix factorization model. The limitations of these works are mainly due to two reasons: first, previous works used negative sampling techniques to treat unvalidated drug-disease associations as negative samples, which is invalid in real-world settings; second, the inner product lacks modeling of the crossover information between dimensions of the latent factors. In this paper, we propose a novel PUON framework to address the above deficiencies, which models the joint distribution of drug-disease associations using validated and unvalidated associations without employing negative sampling techniques. PUON also models the cross-dimension information of the drug and disease latent factors using the outer product operation. For a comprehensive comparison, we considered 7 popular baselines. Extensive experiments on two real-world datasets showed that PUON achieved the best performance on 6 popular evaluation metrics.
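The outer-product idea above is easy to see in isolation: instead of collapsing two latent factors into a single inner-product score, their pairwise products form a K x K interaction map that a downstream model can score. A small sketch (shapes and random values are illustrative, not PUON's actual architecture):

```python
import numpy as np

# Outer-product interaction between a drug latent factor u and a disease
# latent factor v: the map u_i * v_j keeps all cross-dimension terms,
# whereas the inner product u.v only keeps the diagonal ones.

rng = np.random.default_rng(0)
K = 4
u = rng.standard_normal(K)          # drug latent factor (illustrative)
v = rng.standard_normal(K)          # disease latent factor (illustrative)

interaction_map = np.outer(u, v)    # (K, K) cross-dimension interactions
inner = float(u @ v)                # the inner product is just its trace

print(interaction_map.shape, np.isclose(np.trace(interaction_map), inner))
```

Since the inner product equals the trace of the outer product, the outer-product representation strictly generalizes it, at the cost of K^2 rather than K interaction terms.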

Data protection is a severe constraint in the heterogeneous IoT era. This article presents a hardware-software co-simulation of AES-128 encryption and decryption for IoT edge devices using the Xilinx System Generator (XSG). A VHDL implementation of the AES-128 algorithm is done in ECB and CTR modes using loop-unrolled and FSM-based architectures. It is found that the AES-CTR FSM-based architecture outperforms the loop-unrolled architecture, with lower power consumption and area. For performing the hardware-software co-simulation on the Zedboard and the Kintex UltraScale KCU105 evaluation platform, Xilinx Vivado 2016.2 and MATLAB 2015b are used. Hardware emulation is done successfully for grayscale images. To give a practical example of the usage of the proposed framework, we apply it to biomedical images (CT scan images) as a case study. Security analysis in terms of histogram, correlation, information entropy analysis, and keyspace analysis using exhaustive search and key sensitivity tests is also performed, successfully encrypting and decrypting the images.
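CTR mode's structure, which underlies the hardware result above, turns a block cipher into a stream cipher: XOR the data with E_k(nonce || counter), so the same circuit encrypts and decrypts. A self-contained sketch of that structure; the keyed-hash "block cipher" below is a stand-in so the example runs without an AES library, and is NOT real AES:

```python
import hashlib

BLOCK = 16  # AES block size in bytes

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in for the AES-128 block encryption E_k(.) -- NOT real AES,
    just a keyed PRF so the CTR-mode structure can be shown self-contained."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ctr_mode(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """CTR mode: XOR the data with keystream blocks E_k(nonce || counter).
    The same routine performs both encryption and decryption."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        counter_block = nonce + i.to_bytes(8, "big")   # 8-byte nonce + counter
        keystream = toy_block_cipher(key, counter_block)
        chunk = data[i:i + BLOCK]
        out.extend(b ^ k for b, k in zip(chunk, keystream))
    return bytes(out)

key, nonce = b"0123456789abcdef", b"\x00" * 8
pixels = bytes(range(32))                      # stand-in for image data
ct = ctr_mode(key, nonce, pixels)
assert ctr_mode(key, nonce, ct) == pixels      # decryption == encryption in CTR
print(ct.hex()[:16], "...")
```

This symmetry is why a single CTR datapath suffices in hardware, and why no padding or decryption-direction key schedule is needed, unlike ECB.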

A critical aspect of the manufacturing process is the visual quality inspection of manufactured components for defects and flaws. Human-only visual inspection can be very time-consuming and laborious, and is a significant bottleneck especially for high-throughput manufacturing scenarios. Given significant advances in the field of deep learning, automated visual quality inspection can lead to highly efficient and reliable detection of defects and flaws during the manufacturing process. However, deep learning-driven visual inspection methods often necessitate significant computational resources, which limits throughput and acts as a bottleneck to widespread adoption for enabling smart factories. In this study, we investigated the utilization of a machine-driven design exploration approach to create TinyDefectNet, a highly compact deep convolutional network architecture tailored for high-throughput manufacturing visual quality inspection. TinyDefectNet comprises just ~427K parameters and has a computational complexity of ~97M FLOPs, yet matches the detection accuracy of a state-of-the-art architecture for the task of surface defect detection on the NEU defect benchmark dataset. As such, TinyDefectNet achieves the same level of detection performance at 52x lower architectural complexity and 11x lower computational complexity. Furthermore, TinyDefectNet was deployed on an AMD EPYC 7R32 and achieved 7.6x faster throughput using the native TensorFlow environment and 9x faster throughput using the AMD ZenDNN accelerator library. Finally, an explainability-driven performance validation strategy was conducted to ensure that correct decision-making behaviour was exhibited by TinyDefectNet, improving trust in its usage by operators and inspectors.

Next-generation wireless systems are rapidly evolving from communication-only systems to multi-modal systems with integrated sensing and communications. In this paper, a novel joint sensing and communication framework is proposed for enabling wireless extended reality (XR) at terahertz (THz) bands. To gather rich sensing information and achieve higher line-of-sight (LoS) availability, THz-operated reconfigurable intelligent surfaces (RISs) acting as base stations are deployed. The sensing parameters are extracted by leveraging THz's quasi-opticality and opportunistically utilizing uplink communication waveforms. This enables the use of the same waveform, spectrum, and hardware for both sensing and communication purposes. The environmental sensing parameters are then derived by exploiting the sparsity of THz channels via tensor decomposition. Hence, a high-resolution indoor map is derived so as to characterize the spatial availability of communications and the mobility of users. Simulation results show that in the proposed framework, the resolution and data rate of the overall system are positively correlated, thus allowing a joint optimization between these metrics with no tradeoffs. Results also show that the proposed framework improves system reliability in static and mobile settings. In particular, the highest reliability gains of 10% are achieved in a walking-speed mobile environment compared to communication-only systems with beam tracking.

Recent advances in algorithm-hardware co-design for deep neural networks (DNNs) have demonstrated their potential in automatically designing neural architectures and hardware designs. Nevertheless, it is still a challenging optimization problem due to the expensive training cost and the time-consuming hardware implementation, which make exploration of the vast design space of neural architectures and hardware designs intractable. In this paper, we demonstrate that our proposed approach is capable of locating designs on the Pareto frontier. This capability is enabled by a novel three-phase co-design framework with the following new features: (a) decoupling DNN training from the design space exploration of hardware architecture and neural architecture, (b) providing a hardware-friendly neural architecture space by considering hardware characteristics in constructing the search cells, and (c) adopting a Gaussian process to predict accuracy, latency, and power consumption, avoiding time-consuming synthesis and place-and-route processes. In comparison with the manually designed ResNet101, InceptionV2, and MobileNetV2, we achieve up to 5% higher accuracy with up to 3x speedup on the ImageNet dataset. Compared with other state-of-the-art co-design frameworks, our found network and hardware configuration achieve 2%-6% higher accuracy, 2x-26x smaller latency, and 8.5x higher energy efficiency.
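The surrogate idea in feature (c) is that a Gaussian process fitted to a few measured design points can predict metrics for unmeasured ones, so synthesis and place-and-route need only run for the points actually selected. A minimal posterior-mean sketch (the kernel choice, 1-D design knob, and latency numbers are all illustrative assumptions):

```python
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential kernel between the row-vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, Xs, noise=1e-6):
    """Posterior mean of a zero-mean GP: K(Xs,X) [K(X,X) + sI]^-1 y.
    A surrogate like this can rank candidate (architecture, hardware)
    points without running synthesis/place-and-route for each one."""
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xs, X) @ np.linalg.solve(K, y)

# Illustrative 1-D design knob -> latency data (all numbers made up)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([5.0, 3.0, 2.5, 4.0])          # "measured" latencies (ms)
mean = gp_predict(X, y, np.array([[1.5]]))
print(f"predicted latency at knob=1.5: {mean[0]:.2f} ms")
```

In a real co-design loop the inputs would be multi-dimensional encodings of the network and hardware configuration, with one GP per metric (accuracy, latency, power).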

The generation of tailored light with multi-core fiber (MCF) lensless microendoscopes is widely used in biomedicine. However, the computer-generated holograms (CGHs) used for such applications are typically produced by iterative algorithms, which demand high computational effort, limiting advanced applications like in vivo optogenetic stimulation and fiber-optic cell manipulation. The random and discrete distribution of the fiber cores induces strong spatial aliasing in the CGHs; hence, an approach that can rapidly generate tailored CGHs for MCFs is in high demand. We demonstrate a novel phase-encoder deep neural network (CoreNet), which can generate accurate tailored CGHs for MCFs at near video rate. Simulations show that CoreNet speeds up the computation time by two orders of magnitude and increases the fidelity of the generated light field compared to conventional CGH techniques. For the first time, tailored CGHs generated in real time are loaded on the fly onto the phase-only SLM for dynamic light field generation through the MCF microendoscope in experiments. This paves the way for real-time cell rotation and several further applications that require real-time, high-fidelity light delivery in biomedicine.
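The iterative baseline that such a network replaces is typically Gerchberg-Saxton-style phase retrieval: alternate between the hologram plane (where the field must be phase-only) and the image plane (where the target amplitude is enforced). A free-space sketch using an FFT as the propagator; the MCF-specific core layout and aliasing handling from the paper are not modeled:

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=50, seed=0):
    """Classic iterative CGH baseline (Gerchberg-Saxton): keep unit
    amplitude in the hologram plane, enforce the target amplitude in the
    image plane, and iterate. FFT stands in for optical propagation."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iters):
        field = np.fft.fft2(np.exp(1j * phase))               # to image plane
        field = target_amp * np.exp(1j * np.angle(field))     # enforce target
        phase = np.angle(np.fft.ifft2(field))                 # back, keep phase
    return phase

# Illustrative target: a bright square on a dark background
target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0
cgh = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * cgh)))
recon /= recon.max()
corr = float(np.corrcoef(recon.ravel(), target.ravel())[0, 1])
print("correlation with target:", round(corr, 2))
```

Each iteration costs two FFTs, and tens of iterations are common per frame, which is exactly the per-frame cost a single network forward pass avoids.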

Network Virtualization is one of the most promising technologies for future networking and is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and different components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist simultaneously on the same shared physical infrastructure. A crucial problem in Network Virtualization is Virtual Network Embedding, which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related issues for resource allocation by an Internet Provider (InP) to efficiently embed virtual networks requested by Virtual Network Operators (VNOs) who share the infrastructure provided by the InP. To achieve that goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding by enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests compared to prior algorithms.
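To make the embedding problem concrete, here is a deliberately simple two-stage greedy sketch: map each virtual node to the substrate node with the most spare CPU, then route each virtual link over a BFS shortest path. This is an illustrative baseline, not the paper's coordinated algorithm, and it ignores link bandwidth accounting:

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest substrate path by hop count, or None if unreachable."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v); prev[v] = u; q.append(v)
    return None

def embed(vnr, sub_cpu, sub_adj):
    """Greedy node mapping (largest demand first, most spare CPU first),
    then shortest-path link mapping. Returns (node_map, link_map) or None."""
    cpu, node_map = dict(sub_cpu), {}
    for vnode, demand in sorted(vnr["nodes"].items(), key=lambda kv: -kv[1]):
        cands = [n for n, c in cpu.items() if c >= demand and n not in node_map.values()]
        if not cands:
            return None                     # request rejected
        best = max(cands, key=lambda n: cpu[n])
        node_map[vnode] = best
        cpu[best] -= demand
    link_map = {}
    for (a, b) in vnr["links"]:
        path = bfs_path(sub_adj, node_map[a], node_map[b])
        if path is None:
            return None
        link_map[(a, b)] = path
    return node_map, link_map

sub_cpu = {"A": 10, "B": 6, "C": 8}
sub_adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
vnr = {"nodes": {"x": 5, "y": 4}, "links": [("x", "y")]}
print(embed(vnr, sub_cpu, sub_adj))
```

The weakness of such two-stage schemes is that node mapping ignores the link cost it induces (here, x and y land on A and C, forcing a two-hop path), which is the motivation for coordinated node-and-link embedding.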

Abrar S. Alhazmi, Sanaa H. Mohamed, T. E. H. El-Gorashi, Jaafar M. H. Elmirghani · Nov 30
Hengtao He, Xianghao Yu, Jun Zhang, S. H. Song, Khaled B. Letaief · Nov 30
Xinxing Yang, Genke Yang, Jian Chu · Nov 29
Mohammad Javad Shafiee, Mahmoud Famouri, Gautam Bathla, Francis Li, Alexander Wong · Nov 29
Christina Chaccour, Walid Saad, Omid Semiari, Mehdi Bennis, Petar Popovski · Nov 28
Hongxiang Fan, Martin Ferianc, Zhiqiang Que, He Li, Shuanglong Liu, Xinyu Niu, Wayne Luk · Nov 24
Jiawei Sun, Jiachen Wu, Nektarios Koukourakis, Robert Kuschmierz, Liangcai Cao, Juergen Czarske · Nov 24
Duc-Lam Nguyen, HyungHo Byun, Naeon Kim, Chong-Kwon Kim · Jan 30, 2018
