
Query:

Scholar name: Wang Ding

Total: 8 pages of results
Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints SCIE
Journal article | 2024, 11(12), 2408-2422 | IEEE-CAA JOURNAL OF AUTOMATICA SINICA
WoS CC Cited Count: 2

Abstract :

This article develops a novel data-driven safe Q-learning method to design the safe optimal controller which can guarantee constrained states of nonlinear systems always stay in the safe region while providing an optimal performance. First, we design an augmented utility function consisting of an adjustable positive definite control obstacle function and a quadratic form of the next state to ensure the safety and optimality. Second, by exploiting a pre-designed admissible policy for initialization, an off-policy stabilizing value iteration Q-learning (SVIQL) algorithm is presented to seek the safe optimal policy by using offline data within the safe region rather than the mathematical model. Third, the monotonicity, safety, and optimality of the SVIQL algorithm are theoretically proven. To obtain the initial admissible policy for SVIQL, an offline VIQL algorithm with zero initialization is constructed and a new admissibility criterion is established for immature iterative policies. Moreover, the critic and action networks with precise approximation ability are established to promote the operation of VIQL and SVIQL algorithms. Finally, three simulation experiments are conducted to demonstrate the virtue and superiority of the developed safe Q-learning method.
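
The SVIQL construction above (a barrier-augmented utility plus value-iteration Q-learning from offline data) can be sketched on a toy problem. The plant, grids, and barrier weight below are invented for illustration and are not taken from the paper; this is a tabular sketch, not the neural-network implementation the authors describe.

```python
import numpy as np

# Tabular sketch of barrier-augmented value-iteration Q-learning.
# Plant x' = 0.9 x + 0.5 u and safe region |x| < 1 are illustrative choices.
states = np.linspace(-0.9, 0.9, 19)
actions = np.linspace(-1.0, 1.0, 21)
x_max = 1.0

def barrier(x, c=0.01):
    # adjustable positive-definite control-barrier term: blows up as |x| -> x_max
    return c * np.log(x_max**2 / (x_max**2 - x**2))

def step(x, u):
    return np.clip(0.9 * x + 0.5 * u, -0.95, 0.95)

def utility(x, u, x_next):
    # augmented utility: barrier on the current state plus a quadratic
    # form of the next state, mirroring the abstract's construction
    return barrier(x) + x_next**2 + 0.1 * u**2

Q = np.zeros((len(states), len(actions)))   # zero initialization (VIQL variant)
for _ in range(200):
    Q_new = np.empty_like(Q)
    for i, x in enumerate(states):
        for j, u in enumerate(actions):
            xn = step(x, u)
            i_next = int(np.abs(states - xn).argmin())   # nearest grid state
            Q_new[i, j] = utility(x, u, xn) + Q[i_next].min()
    if np.max(np.abs(Q_new - Q)) < 1e-8:
        break
    Q = Q_new

policy = actions[Q.argmin(axis=1)]   # greedy policy over the safe grid
```

Because the barrier term grows without bound near the boundary of the safe region, the greedy policy is pushed toward actions that keep the state interior; at the origin the cheapest action is no control at all.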

Keyword :

Adaptive critic control; Optimal control; Safety; Mathematical models; stabilizing value iteration Q-learning (SVIQL); Heuristic algorithms; Learning systems; adaptive dynamic programming (ADP); control barrier functions (CBF); state constraints; Q-learning; Iterative methods

Cite:

GB/T 7714: Zhao, Mingming, Wang, Ding, Song, Shijie, et al. Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints [J]. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11(12): 2408-2422.
MLA: Zhao, Mingming, et al. "Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints." IEEE-CAA JOURNAL OF AUTOMATICA SINICA 11.12 (2024): 2408-2422.
APA: Zhao, Mingming, Wang, Ding, Song, Shijie, & Qiao, Junfei. Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11(12), 2408-2422.
Adaptive Fuzzy Resilient Fixed-Time Bipartite Consensus Tracking Control for Nonlinear MASs Under Sensor Deception Attacks SCIE
Journal article | 2024 | IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING
WoS CC Cited Count: 7

Abstract :

This paper studies the adaptive fuzzy resilient fixed-time bipartite consensus tracking control problem for a class of nonlinear multi-agent systems (MASs) under sensor deception attacks. Firstly, in order to reduce the impact of unknown sensor deception attacks on the nonlinear MASs, a novel coordinate transformation technique is proposed, which is composed of the states after being attacked. Then, in the case of unbalanced directed topological graph, a partition algorithm (PA) is utilized to implement the bipartite consensus tracking control, which is more widely applicable than the previous control strategies that only apply to balanced directed topological graph. Moreover, the fixed-time control strategy is extended to nonlinear MASs under sensor deception attacks, and the singularity problem that exists in fixed-time control is successfully avoided by employing a novel switching function. The developed distributed adaptive resilient fixed-time control strategy ensures that all the signals in the closed-loop system are bounded and the bipartite consensus tracking control is achieved in fixed time. Finally, the designed control strategy's validity is demonstrated by means of a simulation experiment.
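
The singularity-avoidance idea in the abstract, a switching function that replaces the fractional-power term near the origin, can be illustrated on a scalar integrator. All gains, exponents, and the boundary-layer width below are illustrative and have nothing to do with the paper's multi-agent design.

```python
import numpy as np

def sig(e, a):
    # signed fractional power used in fixed-time control laws
    return np.sign(e) * abs(e) ** a

def settle_time(x0, dt=1e-3, eps=0.05, t_max=10.0):
    # Scalar integrator under a fixed-time law u = -sig(x, 1/2) - sig(x, 3/2),
    # switching to linear feedback inside |x| < eps so the fractional power
    # never has to be evaluated (or differentiated) at the origin.
    x, t = x0, 0.0
    while abs(x) > 1e-2 and t < t_max:
        if abs(x) > eps:
            u = -sig(x, 0.5) - sig(x, 1.5)
        else:
            u = -x / eps**0.5        # linear surrogate inside the boundary layer
        x += u * dt
        t += dt
    return t

t_small = settle_time(1.0)     # settle time from a small initial condition
t_large = settle_time(100.0)   # ...and from one 100x larger
```

Both runs settle well inside the same bound despite the hundredfold difference in initial conditions, which is the hallmark of fixed-time (rather than merely finite-time) convergence.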

Keyword :

sensor deception attacks; Bipartite consensus tracking; fuzzy logic systems; nonlinear MASs; fixed-time control

Cite:

GB/T 7714: Niu, Ben, Shang, Zihao, Zhang, Guangju, et al. Adaptive Fuzzy Resilient Fixed-Time Bipartite Consensus Tracking Control for Nonlinear MASs Under Sensor Deception Attacks [J]. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024.
MLA: Niu, Ben, et al. "Adaptive Fuzzy Resilient Fixed-Time Bipartite Consensus Tracking Control for Nonlinear MASs Under Sensor Deception Attacks." IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING (2024).
APA: Niu, Ben, Shang, Zihao, Zhang, Guangju, Chen, Wendi, Wang, Huanqing, Zhao, Xudong, et al. Adaptive Fuzzy Resilient Fixed-Time Bipartite Consensus Tracking Control for Nonlinear MASs Under Sensor Deception Attacks. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024.
Neural Q-learning for discrete-time nonlinear zero-sum games with adjustable convergence rate SCIE
Journal article | 2024, 175 | NEURAL NETWORKS

Abstract :

In this paper, an adjustable Q-learning scheme is developed to solve the discrete-time nonlinear zero-sum game problem, which can accelerate the convergence rate of the iterative Q-function sequence. First, the monotonicity and convergence of the iterative Q-function sequence are analyzed under some conditions. Moreover, by employing neural networks, the model-free tracking control problem can be overcome for zero-sum games. Second, two practical algorithms are designed to guarantee the convergence with accelerated learning. In one algorithm, an adjustable acceleration phase is added to the iteration process of Q-learning, which can be adaptively terminated with convergence guarantee. In another algorithm, a novel acceleration function is developed, which can adjust the relaxation factor to ensure the convergence. Finally, through a simulation example with the practical physical background, the fantastic performance of the developed algorithm is demonstrated with neural networks.
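
The relaxation-factor idea can be sketched with a relaxed fixed-point iteration on a small random zero-sum game. Here the factor is fixed at a conservative value that keeps the update a guaranteed contraction, whereas the paper adjusts it adaptively; the game itself is randomly generated and purely illustrative.

```python
import numpy as np

gamma = 0.9
rng = np.random.default_rng(0)
S, U, W = 4, 3, 3
r = rng.uniform(0.0, 1.0, size=(S, U, W))    # stage cost (u minimizes, w maximizes)
nxt = rng.integers(0, S, size=(S, U, W))     # deterministic transitions

def bellman(V):
    # zero-sum Bellman operator: minimize over u the worst case over w
    q = r + gamma * V[nxt]                   # shape (S, U, W)
    return q.max(axis=2).min(axis=1)

def solve(eta, tol=1e-10, max_iter=10_000):
    V = np.zeros(S)
    for k in range(max_iter):
        V_new = (1 - eta) * V + eta * bellman(V)   # relaxed update
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, k + 1
        V = V_new
    return V, max_iter

V_plain, n_plain = solve(eta=1.0)   # standard value iteration
V_relax, n_relax = solve(eta=0.8)   # relaxed variant, same fixed point
```

Changing eta changes only the convergence rate, not the limit: both runs land on the unique saddle-point value of the game.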

Keyword :

Adaptive dynamic programming; Optimal tracking control; Neural networks; Q-learning; Zero-sum games; Convergence rate

Cite:

GB/T 7714: Wang, Yuan, Wang, Ding, Zhao, Mingming, et al. Neural Q-learning for discrete-time nonlinear zero-sum games with adjustable convergence rate [J]. NEURAL NETWORKS, 2024, 175.
MLA: Wang, Yuan, et al. "Neural Q-learning for discrete-time nonlinear zero-sum games with adjustable convergence rate." NEURAL NETWORKS 175 (2024).
APA: Wang, Yuan, Wang, Ding, Zhao, Mingming, Liu, Nan, & Qiao, Junfei. Neural Q-learning for discrete-time nonlinear zero-sum games with adjustable convergence rate. NEURAL NETWORKS, 2024, 175.
Evolution-guided value iteration for optimal tracking control SCIE
Journal article | 2024, 593 | NEUROCOMPUTING
WoS CC Cited Count: 2

Abstract :

In this article, an evolution-guided value iteration (EGVI) algorithm is established to address optimal tracking problems for nonlinear nonaffine systems. Conventional adaptive dynamic programming algorithms rely on gradient information to improve the policy, which adheres to the first order necessity condition. Nonetheless, these methods encounter limitations when gradient information is intricate or system dynamics lack differentiability. In response to this challenge, evolutionary computation is leveraged by EGVI to search for the optimal policy without requiring an action network. The competition within the policy population serves as the driving force for policy improvement. Therefore, EGVI can effectively handle complex and non-differentiable systems. Additionally, this innovative method has the potential to enhance exploration efficiency and bolster the robustness of algorithms due to its population-based characteristics. Furthermore, the convergence of the algorithm and the stability of the policy are investigated based on the EGVI framework. Finally, the effectiveness of the established method is comprehensively demonstrated through two simulation experiments.
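
The core EGVI idea, improving the policy by competition within a population rather than by gradients, can be sketched with a small evolution strategy over a scalar feedback gain. The plant (with a nondifferentiable |x| term, where gradient information is unreliable) and all hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def rollout_cost(k, x0=2.0, T=40):
    # tracking cost of feedback u = -k * e on a nondifferentiable plant;
    # the |x| term is what makes gradient-based policy improvement awkward
    x, ref, J = x0, 1.0, 0.0
    for _ in range(T):
        e = x - ref
        u = -k * e
        J += e * e + 0.1 * u * u
        x = 0.8 * x + u + 0.05 * abs(x)
    return J

# (mu, lambda) evolution strategy over the scalar gain: no action network,
# competition inside the population drives the policy improvement
pop = rng.uniform(-1.0, 1.0, size=16)
for gen in range(60):
    costs = np.array([rollout_cost(k) for k in pop])
    elites = pop[np.argsort(costs)[:4]]          # keep the 4 best gains
    pop = np.concatenate([elites, rng.choice(elites, 12)
                          + 0.1 * rng.standard_normal(12)])

best = pop[np.argmin([rollout_cost(k) for k in pop])]
```

Since the elites are carried over unchanged, the best cost in the population is monotonically non-increasing, a population-level analogue of policy improvement.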

Keyword :

Adaptive dynamic programming; Intelligent control; Optimal tracking; Reinforcement learning; Adaptive critic designs; Evolutionary computation

Cite:

GB/T 7714: Huang, Haiming, Wang, Ding, Zhao, Mingming, et al. Evolution-guided value iteration for optimal tracking control [J]. NEUROCOMPUTING, 2024, 593.
MLA: Huang, Haiming, et al. "Evolution-guided value iteration for optimal tracking control." NEUROCOMPUTING 593 (2024).
APA: Huang, Haiming, Wang, Ding, Zhao, Mingming, & Hu, Qinna. Evolution-guided value iteration for optimal tracking control. NEUROCOMPUTING, 2024, 593.
Action-Dependent Heuristic Dynamic Programming With Experience Replay for Wastewater Treatment Processes SCIE
Journal article | 2024, 20(4), 6257-6265 | IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
WoS CC Cited Count: 7

Abstract :

The wastewater treatment process (WWTP) is beneficial for maintaining sufficient water resources and recycling wastewater. A crucial link of WWTP is to ensure that the dissolved oxygen (DO) concentration is continuously maintained at the predetermined value, which can actually be considered as a tracking problem. In this article, an experience replay-based action-dependent heuristic dynamic programming (ER-ADHDP) method is developed to design the model-free tracking controller to accomplish the tracking goal of the DO concentration. First, the online ER-ADHDP controller is regarded as a supplementary controller to conduct the model-free tracking control alongside a stabilizing controller with a priori knowledge. The online ER-ADHDP method can adaptively adjust weight parameters of critic and action networks, thereby continuously ameliorating the tracking result over time. Second, the ER technique is integrated into the critic and action networks to promote the data utilization efficiency and accelerate the learning process. Third, a rational stability result is provided to theoretically ensure the usefulness of the ER-ADHDP tracking design. Finally, simulation experiments including different reference trajectories are conducted to show the superb tracking performance and excellent adaptability of the proposed ER-ADHDP method.
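
The experience-replay mechanism, storing transitions once and reusing them in random minibatches to train the critic, can be sketched in isolation. The scalar plant, stage cost, and learning rate below are illustrative stand-ins; the paper trains critic and action networks on the DO-concentration tracking problem, not this toy.

```python
import numpy as np

rng = np.random.default_rng(2)

class ReplayBuffer:
    # store transitions once, reuse them in random minibatches
    def __init__(self, capacity=1000):
        self.data, self.capacity = [], capacity
    def add(self, transition):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
        self.data.append(transition)
    def sample(self, batch):
        idx = rng.integers(0, len(self.data), size=batch)
        return [self.data[i] for i in idx]

# Collect data from a stable toy plant x' = 0.9 x with stage cost x^2;
# the true cost-to-go is V(x) = x^2 / (1 - 0.81), i.e. w* = 1/0.19.
buf = ReplayBuffer()
x = 1.0
for t in range(200):
    x_next = 0.9 * x
    buf.add((x, x * x, x_next))
    x = x_next
    if abs(x) < 1e-3:
        x = rng.uniform(-1.0, 1.0)    # restart so the data stay informative

# Critic V(x) = w * x^2 trained by semi-gradient TD(0) on replayed minibatches
w, lr = 0.0, 0.1
for step in range(2000):
    for (xs, c, xn) in buf.sample(16):
        td = c + w * xn * xn - w * xs * xs
        w += lr * td * xs * xs
```

Replaying each stored transition many times is what lets the critic weight converge from a short 200-step data record, the data-efficiency point made in the abstract.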

Keyword :

wastewater treatment applications; tracking control; Action-dependent heuristic dynamic programming (ADHDP); adaptive dynamic programming (ADP); adaptive critic control

Cite:

GB/T 7714: Qiao, Junfei, Zhao, Mingming, Wang, Ding, et al. Action-Dependent Heuristic Dynamic Programming With Experience Replay for Wastewater Treatment Processes [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20(4): 6257-6265.
MLA: Qiao, Junfei, et al. "Action-Dependent Heuristic Dynamic Programming With Experience Replay for Wastewater Treatment Processes." IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS 20.4 (2024): 6257-6265.
APA: Qiao, Junfei, Zhao, Mingming, Wang, Ding, & Li, Menghua. Action-Dependent Heuristic Dynamic Programming With Experience Replay for Wastewater Treatment Processes. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20(4), 6257-6265.
Decentralized Optimal Neurocontroller Design for Mismatched Interconnected Systems via Integral Policy Iteration SCIE
Journal article | 2024, 71(2), 687-691 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS

Abstract :

In this brief, the decentralized optimal control problem of continuous-time input-affine nonlinear systems with mismatched interconnections is investigated by utilizing data-based integral policy iteration. Initially, the decentralized mismatched subsystems are converted into the nominal auxiliary subsystems. Then, we derive the optimal controllers of the nominal auxiliary subsystems with a well-defined discounted cost function under the framework of adaptive dynamic programming. In the implementation process, the integral reinforcement learning algorithm is employed to explore the partially or completely unknown system dynamics. It is worth mentioning that the actor-critic structure is adopted based on neural networks, in order to evaluate the control policy and the performance of the control system. Besides, the least squares method is also involved in this online learning process. Finally, a simulation example is provided to illustrate the validity of the developed algorithm.
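
The integral-reinforcement-learning step with least squares can be sketched on a scalar subsystem under a fixed admissible policy. The plant parameters, policy gain, and reinforcement interval below are invented for illustration; only the structure (fit the value function to integrals of measured cost, no model in the regression) follows the abstract.

```python
import numpy as np

# Scalar plant dx/dt = a x + b u under the fixed admissible policy u = -k x.
# For these illustrative numbers the closed loop is dx/dt = -x, and the
# quadratic value V(x) = p x^2 satisfies p = (q + r k^2) / (2 (b k - a)) = 2.5.
a, b, k = 1.0, 1.0, 2.0
q, r = 1.0, 1.0
dt, T = 1e-3, 0.2          # Euler step and IRL reinforcement interval

def simulate_segment(x0):
    # integrate the closed loop over [0, T], accumulating the running cost
    x, cost = x0, 0.0
    for _ in range(int(T / dt)):
        u = -k * x
        cost += (q * x * x + r * u * u) * dt
        x += (a * x + b * u) * dt
    return x, cost

# Least squares on the integral reinforcement identity
#   p x(t)^2 - p x(t+T)^2 = integral over [t, t+T] of (q x^2 + r u^2) ds,
# which uses measured data only; a and b appear nowhere in the regression.
phi, y = [], []
for x0 in np.linspace(0.2, 2.0, 10):
    xT, cost = simulate_segment(x0)
    phi.append(x0 * x0 - xT * xT)
    y.append(cost)
p = np.linalg.lstsq(np.array(phi)[:, None], np.array(y), rcond=None)[0][0]
```

The recovered p matches the analytic Lyapunov value to within the Euler discretization error, which is the sense in which the method handles partially or completely unknown dynamics.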

Keyword :

Heuristic algorithms; Cost function; Interconnected systems; integral policy iteration; Optimal control; mismatched interconnections; Reinforcement learning; decentralized control; Adaptive dynamic programming; data-based online control; Integrated circuit interconnections; Dynamic programming; neural networks

Cite:

GB/T 7714: Wang, Ding, Fan, Wenqian, Liu, Ao, et al. Decentralized Optimal Neurocontroller Design for Mismatched Interconnected Systems via Integral Policy Iteration [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71(2): 687-691.
MLA: Wang, Ding, et al. "Decentralized Optimal Neurocontroller Design for Mismatched Interconnected Systems via Integral Policy Iteration." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS 71.2 (2024): 687-691.
APA: Wang, Ding, Fan, Wenqian, Liu, Ao, & Qiao, Junfei. Decentralized Optimal Neurocontroller Design for Mismatched Interconnected Systems via Integral Policy Iteration. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71(2), 687-691.
Supplementary heuristic dynamic programming for wastewater treatment process control SCIE
Journal article | 2024, 247 | EXPERT SYSTEMS WITH APPLICATIONS

Abstract :

With the rapid development of industry, the amount of wastewater discharge is increasing. In order to improve the efficiency of the wastewater treatment process (WWTP), we often desire that the dissolved oxygen (DO) concentration and the nitrate nitrogen (NO) concentration can be controlled to track set values. However, the wastewater treatment system is a type of unknown nonlinear plant with time-varying dynamics and strong disturbances, which makes this goal difficult to achieve with traditional control methods. To overcome these challenges, a supplementary heuristic dynamic programming (SUP-HDP) control scheme is established by combining the traditional control method and heuristic dynamic programming (HDP). A parallel control structure is constructed in the SUP-HDP control scheme, which not only complements the shortcomings of traditional control schemes in learning and adaptive abilities but also improves the convergence speed and the stability of the learning process of HDP. Besides, the convergence proof of the designed control scheme is provided. The SUP-HDP control scheme is implemented utilizing neural networks. Finally, we validate the effectiveness of the SUP-HDP control method through a benchmark simulation platform for the WWTP. Compared with other control methods, SUP-HDP has better control performance.
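
The parallel structure, a fixed traditional controller running alongside a learned supplementary action, can be sketched with a scalar plant under a constant disturbance. The error-driven update below is a simple stand-in for the HDP component, and the plant and gains are illustrative assumptions.

```python
# Parallel-structure sketch: a fixed proportional controller runs alongside a
# supplementary term adapted online from the tracking error. The plant, gains,
# and the simple error-driven update are stand-ins for the HDP part.
def track(ref=1.0, disturbance=0.4, supplement=True, T=300):
    x, u_s, lr, kp = 0.0, 0.0, 0.05, 1.2
    for _ in range(T):
        e = ref - x
        u = kp * e + (u_s if supplement else 0.0)
        if supplement:
            u_s += lr * e          # supplementary action learns the residual
        x = 0.5 * x + u + disturbance
    return abs(ref - x)

err_base = track(supplement=False)   # proportional control alone: steady offset
err_sup = track(supplement=True)     # parallel supplement removes the offset
```

The fixed controller keeps the loop stable from the first step, while the supplementary term slowly absorbs what the fixed controller cannot, which is the division of labor the abstract describes.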

Keyword :

Neural networks; Wastewater treatment process control; Tracking control; Reinforcement learning; Supplementary heuristic dynamic programming

Cite:

GB/T 7714: Wang, Ding, Li, Xin, Xin, Peng, et al. Supplementary heuristic dynamic programming for wastewater treatment process control [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 247.
MLA: Wang, Ding, et al. "Supplementary heuristic dynamic programming for wastewater treatment process control." EXPERT SYSTEMS WITH APPLICATIONS 247 (2024).
APA: Wang, Ding, Li, Xin, Xin, Peng, Liu, Ao, & Qiao, Junfei. Supplementary heuristic dynamic programming for wastewater treatment process control. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 247.
Model-free intelligent critic design with error analysis for neural tracking control SCIE
Journal article | 2024, 572 | NEUROCOMPUTING

Abstract :

The core of the optimal tracking control problem for nonlinear systems is how to ensure that the controlled system tracks the desired trajectory. The utility functions in previous studies have different properties which affect the final tracking effect of the intelligent critic algorithm. In this paper, we introduce a novel utility function and propose a Q-function based policy iteration algorithm to eliminate the final tracking error. In addition, neural networks are used as function approximators to approximate the performance index and control policy. Considering the impact of the approximation error on the tracking performance, an approximation error bound for each iteration of the novel Q-function is established. Under the given conditions, the approximate Q-function converges to a finite neighborhood of the optimal value. Moreover, it is proved that weight estimation errors of neural networks are uniformly ultimately bounded. Finally, the effectiveness of the algorithm is verified by a simulation example.
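
Why the choice of utility function matters for the final tracking error can be illustrated with a scalar plant: a controller that ignores the steady-state control leaves a persistent offset, while one built around it does not. The plant, gain, and the specific comparison are illustrative assumptions, not the paper's utility or algorithm.

```python
# Illustration of how the utility's treatment of the control term affects the
# final tracking error. Plant x' = 0.9 x + u must hold x at ref; exact tracking
# requires the steady-state control u_ss = 0.1 * ref. A design that drives u
# itself to zero cannot supply u_ss and keeps a steady offset; one centered on
# (u - u_ss) tracks exactly. Numbers are illustrative, not the paper's design.
def final_error(use_feedforward, ref=1.0, T=200):
    x, K = 0.0, 0.5
    u_ss = 0.1 * ref if use_feedforward else 0.0
    for _ in range(T):
        u = u_ss - K * (x - ref)
        x = 0.9 * x + u
    return abs(x - ref)
```

Running `final_error(False)` leaves a visible offset while `final_error(True)` drives the error to numerical zero, the gap a well-chosen utility function is meant to close.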

Keyword :

Optimal tracking control; Policy iteration; Neural networks; Approximation errors; Model-free control; Adaptive dynamic programming

Cite:

GB/T 7714: Gao, Ning, Wang, Ding, Zhao, Mingming, et al. Model-free intelligent critic design with error analysis for neural tracking control [J]. NEUROCOMPUTING, 2024, 572.
MLA: Gao, Ning, et al. "Model-free intelligent critic design with error analysis for neural tracking control." NEUROCOMPUTING 572 (2024).
APA: Gao, Ning, Wang, Ding, Zhao, Mingming, & Hu, Lingzhi. Model-free intelligent critic design with error analysis for neural tracking control. NEUROCOMPUTING, 2024, 572.
Model-Free Optimal Tracking Design With Evolving Control Strategies via Q-Learning SCIE
Journal article | 2024, 71(7), 3373-3377 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS
WoS CC Cited Count: 5

Abstract :

This brief leverages a value-iteration-based Q-learning (VIQL) scheme to tackle optimal tracking problems for nonlinear nonaffine systems. The optimal policy is learned from measured data instead of a precise mathematical model. Furthermore, a novel criterion is proposed to determine the stability of the iterative policy based on measured data. The evolving control algorithm is developed to verify the proposed criterion by employing these stable policies for system control. The advantage of the early elimination of tracking errors is provided by this approach since various stable policies can be employed before obtaining the optimal strategy. Finally, the effectiveness of the developed algorithm is demonstrated by a simulation experiment.
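
The evolving-control idea, that stable iterative policies become available and usable well before value iteration converges, can be sketched with a scalar linear-quadratic example. The plant numbers are illustrative, and the pole check below stands in for the paper's data-based stability criterion.

```python
# Scalar LQ value iteration (a = 2 is open-loop unstable; b = q = r = 1).
# The greedy policy becomes stabilizing after only two iterations, long
# before the value sequence itself converges, which is the point of
# deploying "evolving" stable policies early. Stability is checked via
# the closed-loop pole, standing in for the paper's data-based criterion.
a, b, q, r = 2.0, 1.0, 1.0, 1.0
P, first_stable, converged_at = 0.0, None, None
for k in range(1, 200):
    P_new = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    K = a * b * P_new / (r + b * b * P_new)      # greedy feedback gain
    if first_stable is None and abs(a - b * K) < 1.0:
        first_stable = k
    if abs(P_new - P) < 1e-9:
        converged_at = k
        break
    P = P_new
```

Using the first stable policy while the iteration continues is what allows tracking errors to start shrinking early, before the optimal strategy is available.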

Keyword :

intelligent control; optimal tracking control system; Adaptive dynamic programming; value-iteration-based Q-learning; stability criterion

Cite:

GB/T 7714: Wang, Ding, Huang, Haiming, Zhao, Mingming. Model-Free Optimal Tracking Design With Evolving Control Strategies via Q-Learning [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71(7): 3373-3377.
MLA: Wang, Ding, et al. "Model-Free Optimal Tracking Design With Evolving Control Strategies via Q-Learning." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS 71.7 (2024): 3373-3377.
APA: Wang, Ding, Huang, Haiming, & Zhao, Mingming. Model-Free Optimal Tracking Design With Evolving Control Strategies via Q-Learning. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71(7), 3373-3377.
Prescribed Performance Adaptive Containment Control for Full-State Constrained Nonlinear Multiagent Systems: A Disturbance Observer-Based Design Strategy SCIE
Journal article | 2024 | IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING
WoS CC Cited Count: 16

Abstract :

This paper focuses on the prescribed performance adaptive containment control problem for a class of nonlinear nonstrict-feedback multiagent systems (MASs) with unknown disturbances and full-state constraints. First, the radial basis function neural networks (RBF NNs) technology is employed to approximate the unknown nonlinear functions in the system, and the problem of "explosion of complexity" caused by repeated derivation of virtual controls is solved by using the dynamic surface control (DSC) technology. Then, the nonlinear disturbance observers are designed to estimate the external disturbance, and the barrier Lyapunov functions (BLFs) and the prescribed performance function (PPF) are combined to achieve the control objective of prescribed performance without violating the full-state constraints. The theoretical result shows that all signals in the closed-loop system are semiglobally uniformly ultimately bounded (SGUUB), and the local neighborhood containment errors can converge to the specified boundary. Finally, two simulation examples show the effectiveness of the proposed method. Note to Practitioners-The containment control problem is a hot topic in the field of control, which plays an important role in practical engineering. Especially for this problem of nonlinear MASs, the mathematical models are difficult to obtain accurately. This paper investigates the prescribed performance adaptive containment control problem for the nonlinear nonstrict-feedback MASs, whose model can be extended to more complex engineering applications, such as unmanned aerial vehicle formations and intelligent traffic management. It is worth noting that external disturbances and state constraint problems often exist in practical applications. Therefore, the disturbance observers are designed to compensate for the system disturbances, which can eliminate the impacts of disturbances on the systems. By introducing BLFs, it is ensured that all states of the system are constrained within the specified regions. To sum up, the paper proposes a prescribed performance adaptive containment control strategy, which contributes to the development of containment control for MASs in practical applications.

Keyword :

Nonlinear systems; Complexity theory; adaptive containment control; Consensus control; prescribed performance; disturbance observer; Nonlinear nonstrict-feedback MASs; Multi-agent systems; Explosions; Backstepping; full-state constraints; Disturbance observers

Cite:

GB/T 7714: Sui, Jihang, Liu, Chao, Niu, Ben, et al. Prescribed Performance Adaptive Containment Control for Full-State Constrained Nonlinear Multiagent Systems: A Disturbance Observer-Based Design Strategy [J]. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024.
MLA: Sui, Jihang, et al. "Prescribed Performance Adaptive Containment Control for Full-State Constrained Nonlinear Multiagent Systems: A Disturbance Observer-Based Design Strategy." IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING (2024).
APA: Sui, Jihang, Liu, Chao, Niu, Ben, Zhao, Xudong, Wang, Ding, & Yan, Bocheng. Prescribed Performance Adaptive Containment Control for Full-State Constrained Nonlinear Multiagent Systems: A Disturbance Observer-Based Design Strategy. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024.

Address: BJUT Library, 100 Pingleyuan, Chaoyang District, Beijing 100124, China. Contact: 010-67392185.
Copyright: BJUT Library. Technical support: Beijing Aegean Software Co., Ltd.