Your search:
Scholar name: Fang Juan (方娟)
Abstract:
Serving the ever-growing demand for computation, storage, and networking resources from multiple tenants in cloud computing is an important mission of Data Center Networks (DCNs). In this paper, we study the dynamic request updating problem, and our objective is to maximize the elasticity of cloud-based DCNs while achieving rapid response to tenants. We use virtual clusters under the hose communication model to denote requests. Instead of using heuristic algorithms as existing work does, this paper introduces a novel two-stage dynamic request updating framework with an elastic resource scheduling strategy. In the first stage, we propose a multi-tenant fast initial provisioning scheme to realize real-time response and analyze its optimality and complexity. In the second stage, we provide a deep reinforcement learning-based dynamic updating strategy to enhance the elasticity of virtual clusters that are in use or scaling. We train a fully connected neural network by creating a new feasible action set to reduce the action space, and it approximates the policy based on a proposed aggressive objective selection method to improve training speed while avoiding the high dimensionality caused by large numbers of tenants and large DCNs. Extensive evaluations demonstrate that our scheme outperforms baselines in terms of both elasticity and efficiency.
Keywords:
Clustering algorithms; Elasticity; Neural networks; dynamic request updating; Data centers; Dynamic scheduling; elastic scheduling; multi-tenant; Cloud computing; Processor scheduling; Data center network; resource provisioning
Citation:
Copy and paste one of the preset citation formats, or use one of the links to import the record into reference management software.
GB/T 7714: Lu, Shuaibing, Wu, Jie, Shi, Jiamei, et al. Towards Dynamic Request Updating With Elastic Scheduling for Multi-Tenant Cloud-Based Data Center Network [J]. IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11(2): 2223-2237.
MLA: Lu, Shuaibing, et al. "Towards Dynamic Request Updating With Elastic Scheduling for Multi-Tenant Cloud-Based Data Center Network." IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING 11.2 (2024): 2223-2237.
APA: Lu, Shuaibing, Wu, Jie, Shi, Jiamei, Fang, Juan, Zhang, Jiayue, Liu, Haiming. Towards Dynamic Request Updating With Elastic Scheduling for Multi-Tenant Cloud-Based Data Center Network. IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11(2), 2223-2237.
Abstract:
The problem of shared node selection and cache placement in wireless networks is challenging due to the difficulty of finding low-complexity optimal solutions. This paper proposes a new approach combining Lyapunov optimization and reinforcement learning (LoRL) to address content sharing in heterogeneous mobile edge computing (MEC) networks with base station (BS) and device-to-device (D2D) communication. Devices in this network can choose to establish D2D links with neighboring devices for content sharing or send content requests directly to the base station. Content access and the energy consumption of shared nodes are modeled as a queuing system. The goal is to assign content-sharing nodes so as to stabilize all queues while maximizing D2D sharing gain and minimizing latency, even when the network state distribution and user sharing costs are unknown. The proposed approach enables edge devices to independently select associated nodes and make caching decisions, thereby minimizing time-averaged network costs and stabilizing the queuing system. Experimental results show that the proposed algorithm converges to the optimal policy and outperforms other policies in terms of total queue backlog trade-off and network cost.
Keywords:
Device-to-device communication; content sharing; Energy consumption; Edge cache; Optimization; Wireless communication; Lyapunov optimization; Costs; Collaboration; Reinforcement learning; deep reinforcement learning
Citation:
GB/T 7714: Teng, Ziyi, Fang, Juan, Liu, Yaqi. Combining Lyapunov Optimization and Deep Reinforcement Learning for D2D Assisted Heterogeneous Collaborative Edge Caching [J]. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21(3): 3236-3248.
MLA: Teng, Ziyi, et al. "Combining Lyapunov Optimization and Deep Reinforcement Learning for D2D Assisted Heterogeneous Collaborative Edge Caching." IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT 21.3 (2024): 3236-3248.
APA: Teng, Ziyi, Fang, Juan, Liu, Yaqi. Combining Lyapunov Optimization and Deep Reinforcement Learning for D2D Assisted Heterogeneous Collaborative Edge Caching. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21(3), 3236-3248.
Abstract:
Edge computing has emerged as a promising paradigm to meet the increasing demands of latency-sensitive and computationally intensive applications. In this context, efficient server deployment and service placement are crucial for optimizing performance and increasing platform profit. This paper investigates the problem of server deployment and service placement in a multi-user scenario, aiming to enhance the profit of Mobile Network Operators while considering constraints related to distance thresholds, resource limitations, and connectivity requirements. We demonstrate that this problem is NP-hard. To address it, we propose a two-stage method to decouple the problem. In stage I, server deployment is formulated as a combinatorial optimization problem within the framework of a Markov Decision Process (MDP). We introduce the Server Deployment with Q-learning (SDQ) algorithm to establish a relatively stable server deployment strategy. In stage II, service placement is formulated as a constrained Integer Nonlinear Programming (INLP) problem. We present the Service Placement with Interior Barrier Method (SPIB) and Tree-based Branch-and-Bound (TDB) algorithms and theoretically prove their feasibility. For scenarios where the number of users changes dynamically, we propose the Distance-and-Utilization Balance Algorithm (DUBA). Extensive experiments validate the exceptional performance of our proposed algorithms in enhancing profit.
Keywords:
Mobile handsets; Symbols; Mobile edge computing; profit-driven optimization; Servers; Clustering algorithms; Heuristic algorithms; Programming; integer nonlinear programming; Costs; Telecommunications; Base stations; Optimization; reinforcement learning
Citation:
GB/T 7714: Fang, Juan, Wu, Shen, Lu, Shuaibing, et al. Enhanced Profit-Driven Optimization for Flexible Server Deployment and Service Placement in Multi-User Mobile Edge Computing Systems [J]. IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11(6): 6194-6206.
MLA: Fang, Juan, et al. "Enhanced Profit-Driven Optimization for Flexible Server Deployment and Service Placement in Multi-User Mobile Edge Computing Systems." IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING 11.6 (2024): 6194-6206.
APA: Fang, Juan, Wu, Shen, Lu, Shuaibing, Teng, Ziyi, Chen, Huijie, Xiong, Neal N. Enhanced Profit-Driven Optimization for Flexible Server Deployment and Service Placement in Multi-User Mobile Edge Computing Systems. IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11(6), 6194-6206.
Abstract:
Modern processors employ data prefetchers to alleviate the impact of long memory access latency. However, current prefetchers are designed for specific memory access patterns and perform poorly on mixed applications with multiple memory access patterns. To address these issues, this paper proposes RL-CoPref, a reinforcement learning (RL)-based coordinated prefetching controller for multiple prefetchers. RL-CoPref takes diverse program context information as input, learns to maximize cumulative rewards, and evaluates prefetch quality based on prefetch hits/misses and memory bandwidth utilization. It can dynamically adjust prefetch activation and prefetch degree, enabling multiple prefetchers to complement each other on mixed applications. Our extensive evaluation, using the ChampSim simulator, demonstrates that RL-CoPref effectively adapts to various workloads and system configurations, optimizing prefetch control. On average, RL-CoPref achieves 76.15% prefetch coverage and a 35.50% IPC improvement, outperforming state-of-the-art individual prefetchers by 5.91-16.54% and SBP, a state-of-the-art (non-RL) prefetch controller, by 4.64%.
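The control loop this abstract describes — reward prefetch hits, penalize wasted bandwidth, and adjust the prefetch degree accordingly — can be illustrated with a small tabular Q-learning controller. This is only a minimal sketch of the idea, not the paper's model: the state buckets, degree set, and reward shaping below are all assumptions.

```python
import random

class PrefetchDegreeController:
    """Tabular Q-learning sketch: choose a prefetch degree per
    (accuracy-bucket, bandwidth-bucket) state, rewarded by prefetch hits
    and penalized by wasted bandwidth. Buckets and rewards are illustrative."""

    DEGREES = [0, 1, 2, 4]  # 0 effectively disables the prefetcher

    def __init__(self, alpha=0.2, gamma=0.9, eps=0.1):
        self.q = {}          # (state, degree) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def _bucket(self, accuracy, bw_util):
        # Discretize continuous feedback into a small state space.
        return (int(accuracy * 4), int(bw_util * 4))

    def choose(self, accuracy, bw_util):
        s = self._bucket(accuracy, bw_util)
        if random.random() < self.eps:
            return random.choice(self.DEGREES)   # explore
        return max(self.DEGREES, key=lambda d: self.q.get((s, d), 0.0))

    def update(self, accuracy, bw_util, degree, hits, wasted, next_acc, next_bw):
        s, s2 = self._bucket(accuracy, bw_util), self._bucket(next_acc, next_bw)
        reward = hits - 0.5 * wasted             # assumed reward shaping
        best_next = max(self.q.get((s2, d), 0.0) for d in self.DEGREES)
        old = self.q.get((s, degree), 0.0)
        self.q[(s, degree)] = old + self.alpha * (reward + self.gamma * best_next - old)
```

After a few updates in which a high degree earns positive reward, `choose` greedily returns that degree for the corresponding state; the real controller additionally gates prefetch activation per prefetcher.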
Keywords:
Data prefetchers; Prefetch activation; Reinforcement learning; Prefetch degree; Memory bandwidth
Citation:
GB/T 7714: Yang, Huijing, Fang, Juan, Su, Xing, et al. RL-CoPref: a reinforcement learning-based coordinated prefetching controller for multiple prefetchers [J]. JOURNAL OF SUPERCOMPUTING, 2024, 80(9): 13001-13026.
MLA: Yang, Huijing, et al. "RL-CoPref: a reinforcement learning-based coordinated prefetching controller for multiple prefetchers." JOURNAL OF SUPERCOMPUTING 80.9 (2024): 13001-13026.
APA: Yang, Huijing, Fang, Juan, Su, Xing, Cai, Zhi, Wang, Yuening. RL-CoPref: a reinforcement learning-based coordinated prefetching controller for multiple prefetchers. JOURNAL OF SUPERCOMPUTING, 2024, 80(9), 13001-13026.
Abstract:
The rapid advancement of mobile edge computing (MEC) networks has enabled the augmentation of the computational power of mobile devices (MDs) by offloading computationally intensive tasks to resource-rich edge nodes. This paper discusses the decision-making process for task offloading and resource allocation among multiple mobile devices connected to a base station. The primary objective is to minimize the time taken to complete tasks while simultaneously reducing energy consumption on the device under a time-varying wireless fading channel. This objective is formulated as an energy-efficiency cost (EEC) minimization problem, which cannot be solved by conventional methods. To address this challenge, we propose a dynamic offloading decision algorithm of dependent tasks (DODA-DT) that adjusts local task execution based on edge node status. The proposed algorithm facilitates fair competition among all devices for edge resources. Additionally, we use a deep reinforcement learning (DRL) algorithm based on an actor-critic learning structure to train the system to quickly identify near-optimal solutions. Numerical simulations demonstrate that the proposed algorithm effectively reduces the total cost of the task in comparison to previous algorithms.
Keywords:
Servers; optimization algorithm; Mobile edge computing; Mobile handsets; Heuristic algorithms; task offloading; Reinforcement learning; deep reinforcement learning; Costs; Deep learning; Task analysis
Citation:
GB/T 7714: Fang, Juan, Qu, Dezheng, Chen, Huijie, et al. Dependency-Aware Dynamic Task Offloading Based on Deep Reinforcement Learning in Mobile-Edge Computing [J]. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21(2): 1403-1415.
MLA: Fang, Juan, et al. "Dependency-Aware Dynamic Task Offloading Based on Deep Reinforcement Learning in Mobile-Edge Computing." IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT 21.2 (2024): 1403-1415.
APA: Fang, Juan, Qu, Dezheng, Chen, Huijie, Liu, Yaqi. Dependency-Aware Dynamic Task Offloading Based on Deep Reinforcement Learning in Mobile-Edge Computing. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21(2), 1403-1415.
Abstract:
The vigorous development of IoT technology has spawned a series of applications that are delay-sensitive or resource-intensive. Mobile edge computing is an emerging paradigm that provides services to users between end devices and traditional cloud data centers. However, with continuously increasing demands, it is nontrivial to maintain a high quality-of-service (QoS) under the erratic activities of mobile users. In this paper, we investigate the service provisioning and updating problem in the multi-user scenario, improving the performance of services under long-term cost constraints. We first decouple the original long-term optimization problem into a per-slot deterministic one by using Lyapunov optimization. Then, we propose two service updating decision strategies that consider the trajectory prediction conditions of users. Based on that, we design an online strategy that utilizes the committed horizon control method to look ahead over multiple predicted slots. We theoretically prove the performance bound of our online strategy in terms of the trade-off between delay and cost. Extensive experiments demonstrate the superior performance of the proposed algorithm.
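The Lyapunov decoupling step mentioned above is commonly realized as drift-plus-penalty: a virtual queue accumulates overspend against the long-term cost budget, and each slot greedily minimizes a weighted sum of delay and queue-scaled cost. The following is a minimal sketch under assumed interfaces (the `(delay, cost)` option tuples, the weight `V`, and the per-slot budget are illustrative, not the paper's exact formulation).

```python
def per_slot_decision(Q, options, V=10.0):
    """Drift-plus-penalty sketch: among candidate placements given as
    (delay, cost) pairs, pick the one minimizing V*delay + Q*cost,
    where Q is a virtual queue enforcing the long-term cost budget."""
    return min(options, key=lambda o: V * o[0] + Q * o[1])

def update_virtual_queue(Q, cost, budget_per_slot):
    # Q grows when the slot overspends the budget and shrinks otherwise,
    # so a large Q steers later slots toward cheaper options.
    return max(Q + cost - budget_per_slot, 0.0)
```

When Q is small the rule favors low delay; as overspending inflates Q, cost dominates the per-slot objective, which is what yields the delay/cost trade-off bound.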
Keywords:
mobile edge computing; Servers; Trajectory; Cost-efficient; Cloud computing; online service provisioning; Optimization; Costs; quality-of-service (QoS); Delays; Quality of service
Citation:
GB/T 7714: Lu, Shuaibing, Wu, Jie, Lu, Pengfan, et al. QoS-Aware Online Service Provisioning and Updating in Cost-Efficient Multi-Tenant Mobile Edge Computing [J]. IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17(1): 113-126.
MLA: Lu, Shuaibing, et al. "QoS-Aware Online Service Provisioning and Updating in Cost-Efficient Multi-Tenant Mobile Edge Computing." IEEE TRANSACTIONS ON SERVICES COMPUTING 17.1 (2024): 113-126.
APA: Lu, Shuaibing, Wu, Jie, Lu, Pengfan, Wang, Ning, Liu, Haiming, Fang, Juan. QoS-Aware Online Service Provisioning and Updating in Cost-Efficient Multi-Tenant Mobile Edge Computing. IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17(1), 113-126.
Abstract:
The dynamic mechanism of joint proactive caching and cache replacement, which places content items close to cache-enabled edge devices ahead of time until they are requested, is a promising technique for enhancing traffic offloading and relieving heavy network loads. However, due to limited edge cache capacity and wireless transmission resources, accurately predicting users' future requests and performing dynamic caching is crucial to effectively utilizing these limited resources. This article investigates joint proactive caching and cache replacement strategies in a general mobile-edge computing (MEC) network with multiple users under a cloud-edge-device collaboration architecture. The joint optimization problem is formulated as a Markov decision process (MDP) with an infinite-horizon average network-load cost, aiming to reduce network load traffic while efficiently utilizing the limited available transport resources. To address this issue, we design an attention-weighted deep deterministic policy gradient (AWD2PG) model, which uses attention weights to allocate the number of channels from server to user, and applies deep deterministic policies on both the user and server sides for cache decision-making, so as to reduce network traffic load and improve network and cache resource utilization. We verify the convergence of the corresponding algorithms and demonstrate the effectiveness of the proposed AWD2PG strategy against benchmark schemes in reducing network load and improving hit rate.
Keywords:
Internet of Things; Servers; Telecommunication traffic; Attention-weighted channel assignment; Load modeling; deep reinforcement learning; Resource management; edge caching; wireless network; Optimization; Wireless communication
Citation:
GB/T 7714: Teng, Ziyi, Fang, Juan, Yang, Huijing, et al. Attention Mechanism-Aided Deep Reinforcement Learning for Dynamic Edge Caching [J]. IEEE INTERNET OF THINGS JOURNAL, 2024, 11(6): 10197-10213.
MLA: Teng, Ziyi, et al. "Attention Mechanism-Aided Deep Reinforcement Learning for Dynamic Edge Caching." IEEE INTERNET OF THINGS JOURNAL 11.6 (2024): 10197-10213.
APA: Teng, Ziyi, Fang, Juan, Yang, Huijing, Yu, Lu, Chen, Huijie, Xiang, Wei. Attention Mechanism-Aided Deep Reinforcement Learning for Dynamic Edge Caching. IEEE INTERNET OF THINGS JOURNAL, 2024, 11(6), 10197-10213.
Abstract:
Recently vision transformer models have become prominent models for a multitude of vision tasks. These models, however, are usually opaque with weak feature interpretability, making their predictions inaccessible to the users. While there has been a surge of interest in the development of post-hoc solutions that explain model decisions, these methods cannot be broadly applied to different transformer architectures, as rules for interpretability have to change accordingly based on the heterogeneity of data and model structures. Moreover, there is no method currently built for an intrinsically interpretable transformer, which is able to explain its reasoning process and provide a faithful explanation. To close these crucial gaps, we propose a novel vision transformer dubbed the eXplainable Vision Transformer (eX-ViT), an intrinsically interpretable transformer model that is able to jointly discover robust interpretable features and perform the prediction. Specifically, eX-ViT is composed of the Explainable Multi-Head Attention (E-MHA) module and the Attribute-guided Explainer (AttE) module with the self-supervised attribute-guided loss. The E-MHA tailors explainable attention weights that are able to learn semantically interpretable representations from tokens in terms of model decisions with noise robustness. Meanwhile, AttE is proposed to encode discriminative attribute features for the target object through diverse attribute discovery, which constitutes faithful evidence for the model predictions. Additionally, we have developed a self-supervised attribute-guided loss for our eX-ViT architecture, which utilizes both the attribute discriminability mechanism and the attribute diversity mechanism to enhance the quality of learned representations. As a result, the proposed eX-ViT model can produce faithful and robust interpretations with a variety of learned attributes.
To verify and evaluate our method, we apply the eX-ViT to several weakly supervised semantic segmentation (WSSS) tasks, since these tasks typically rely on accurate visual explanations to extract object localization maps. Particularly, the explanation results obtained via eX-ViT are regarded as pseudo segmentation labels to train WSSS models. Comprehensive simulation results illustrate that our proposed eX-ViT model achieves comparable performance to supervised baselines, while surpassing the accuracy and interpretability of state-of-the-art black-box methods using only image-level labels.
Keywords:
Explainable; Transformer; Weakly supervised; Attention map
Citation:
GB/T 7714: Yu, Lu, Xiang, Wei, Fang, Juan, et al. eX-ViT: A Novel explainable vision transformer for weakly supervised semantic segmentation [J]. PATTERN RECOGNITION, 2023, 142.
MLA: Yu, Lu, et al. "eX-ViT: A Novel explainable vision transformer for weakly supervised semantic segmentation." PATTERN RECOGNITION 142 (2023).
APA: Yu, Lu, Xiang, Wei, Fang, Juan, Chen, Yi-Ping Phoebe, Chi, Lianhua. eX-ViT: A Novel explainable vision transformer for weakly supervised semantic segmentation. PATTERN RECOGNITION, 2023, 142.
Abstract:
Cache prefetching is a traditional way to reduce memory access latency. In multi-core systems, however, aggressive prefetching may harm overall performance. Previous prefetch throttling strategies usually set thresholds on certain factors and throttle an aggressive prefetcher once a threshold is exceeded. These strategies usually work well in homogeneous multi-core systems but poorly in heterogeneous multi-core systems. This paper considers the performance difference between cores under an asymmetric multi-core architecture. Through an improved hill-climbing method, the prefetch aggressiveness of each core is controlled and the IPC of the cores is improved. Experiments show that, compared with the previous strategy, the average performance of the big core is improved by more than 3%, and the average performance of the little cores is improved by more than 24%.
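The hill-climbing idea described above — keep nudging a core's prefetch degree in the current direction while IPC improves, and reverse when it degrades — can be sketched per core as follows. The degree bounds, unit step, and IPC feedback interface are illustrative assumptions, not the paper's tuned parameters.

```python
def hill_climb_degree(degree, ipc_now, ipc_prev, direction, lo=1, hi=8):
    """One hill-climbing step for a core's prefetch aggressiveness.

    degree:    current prefetch degree
    ipc_now:   IPC measured over the last interval
    ipc_prev:  IPC of the interval before that
    direction: +1 (more aggressive) or -1 (less aggressive)
    Returns the new (degree, direction)."""
    if ipc_now < ipc_prev:
        direction = -direction                 # last move hurt: back off
    degree = min(max(degree + direction, lo), hi)
    return degree, direction
```

Running this loop independently for the big and little cores lets each settle at a different aggressiveness level, which is the asymmetry-aware part of the scheme.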
Keywords:
High-performance computing; Big.LITTLE architecture; Heterogeneous multi-core system; Cache prefetching control
Citation:
GB/T 7714: Fang, Juan, Xu, Yixiang, Kong, Han, et al. A prefetch control strategy based on improved hill-climbing method in asymmetric multi-core architecture [J]. JOURNAL OF SUPERCOMPUTING, 2023, 79(10): 10570-10588.
MLA: Fang, Juan, et al. "A prefetch control strategy based on improved hill-climbing method in asymmetric multi-core architecture." JOURNAL OF SUPERCOMPUTING 79.10 (2023): 10570-10588.
APA: Fang, Juan, Xu, Yixiang, Kong, Han, Cai, Min. A prefetch control strategy based on improved hill-climbing method in asymmetric multi-core architecture. JOURNAL OF SUPERCOMPUTING, 2023, 79(10), 10570-10588.
Abstract:
In an asymmetric multi-core architecture, multiple heterogeneous cores share the last-level cache (LLC). Due to the different memory access requirements among heterogeneous cores, LLC competition is more intense. In this work, we propose a heterogeneity-aware replacement policy for the partitioned cache (HAPC), which reduces mutual interference between cores through cache partitioning and tracks the shared reuse state of each cache block within a partition at runtime, guiding the replacement policy to keep cache blocks shared by multiple cores in multithreaded programs. When updating the reuse state, considering the difference in memory accesses to the LLC by heterogeneous cores, the replacement policy tends to keep cache blocks required by big cores, better improving the LLC access efficiency of big cores. Compared with LRU and SRCP, two state-of-the-art cache replacement algorithms, HAPC significantly improves the performance of big cores when running multithreaded programs, while the impact on little cores is almost negligible, thus improving the overall performance of the system.
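The replacement preference the abstract describes — retain blocks shared by multiple cores and blocks reused by big cores, evict the rest first — can be sketched as a victim-selection rule. The per-block metadata fields (`shared`, `big_core`, `lru_age`) are illustrative assumptions, not HAPC's actual hardware state.

```python
def pick_victim(blocks):
    """Victim-selection sketch for a heterogeneity-aware policy.

    Evict unshared little-core blocks first, then unshared big-core
    blocks, and shared blocks last; within each class the
    least-recently-used block (largest lru_age) goes first."""
    return max(blocks, key=lambda b: (not b["shared"], not b["big_core"], b["lru_age"]))
```

Ranking by a tuple keeps the policy a strict refinement of LRU: the first two components encode the heterogeneity-aware classes, and `lru_age` breaks ties exactly as plain LRU would.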
Keywords:
heterogeneity-aware; asymmetric multi-core; last-level cache; replacement policy
Citation:
GB/T 7714: Fang, Juan, Kong, Han, Yang, Huijing, et al. A Heterogeneity-Aware Replacement Policy for the Partitioned Cache on Asymmetric Multi-Core Architectures [J]. MICROMACHINES, 2022, 13(11).
MLA: Fang, Juan, et al. "A Heterogeneity-Aware Replacement Policy for the Partitioned Cache on Asymmetric Multi-Core Architectures." MICROMACHINES 13.11 (2022).
APA: Fang, Juan, Kong, Han, Yang, Huijing, Xu, Yixiang, Cai, Min. A Heterogeneity-Aware Replacement Policy for the Partitioned Cache on Asymmetric Multi-Core Architectures. MICROMACHINES, 2022, 13(11).