
Query:

Scholar name: 王鼎 (Wang Ding)

Model-free intelligent critic design with error analysis for neural tracking control SCIE
Journal Article | 2024, 572 | NEUROCOMPUTING

Abstract:

The core of the optimal tracking control problem for nonlinear systems is ensuring that the controlled system tracks the desired trajectory. The utility functions used in previous studies have different properties that affect the final tracking performance of the intelligent critic algorithm. In this paper, we introduce a novel utility function and propose a Q-function-based policy iteration algorithm to eliminate the final tracking error. In addition, neural networks are used as function approximators for the performance index and the control policy. Considering the impact of the approximation error on tracking performance, an approximation error bound is established for each iteration of the novel Q-function. Under the given conditions, the approximate Q-function converges to a finite neighborhood of the optimal value. Moreover, the weight estimation errors of the neural networks are proved to be uniformly ultimately bounded. Finally, the effectiveness of the algorithm is verified by a simulation example.
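
As a concrete reference point, the following is a minimal sketch of policy iteration carried out through the Q-function blocks of a linear-quadratic toy problem. It is hypothetical: the paper addresses nonlinear tracking with a model-free, neural-network implementation, while this version assumes known linear dynamics (the matrices A, B, Qc, Rc are invented for illustration).

```python
# Minimal sketch of Q-function-based policy iteration on an LQ toy problem.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # assumed plant x_{k+1} = A x + B u
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.eye(1)            # utility U(x, u) = x'Qx + u'Ru

K = np.zeros((1, 2))                      # initial admissible policy u = -Kx
for _ in range(50):
    Acl = A - B @ K
    # Policy evaluation: P solves the Lyapunov equation of the current policy.
    P = solve_discrete_lyapunov(Acl.T, Qc + K.T @ Rc @ K)
    # Q-function blocks: Q(x, u) = [x; u]' H [x; u].
    Hxu = A.T @ P @ B
    Huu = Rc + B.T @ P @ B
    # Policy improvement: u = -inv(Huu) Hxu' x.
    K_new = np.linalg.solve(Huu, Hxu.T)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new
print("converged feedback gain:", K)
```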

Keywords:

Optimal tracking control; Policy iteration; Neural networks; Approximation errors; Model-free control; Adaptive dynamic programming

Cite:

GB/T 7714: Gao, Ning, Wang, Ding, Zhao, Mingming, et al. Model-free intelligent critic design with error analysis for neural tracking control [J]. NEUROCOMPUTING, 2024, 572.
MLA: Gao, Ning, et al. "Model-free intelligent critic design with error analysis for neural tracking control." NEUROCOMPUTING 572 (2024).
APA: Gao, Ning, Wang, Ding, Zhao, Mingming, Hu, Lingzhi. Model-free intelligent critic design with error analysis for neural tracking control. NEUROCOMPUTING, 2024, 572.
A model-free deep integral policy iteration structure for robust control of uncertain systems SCIE
Journal Article | 2024, 55 (8), 1571-1583 | INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE
WoS CC Cited Count: 2

Abstract:

In this paper, we develop an improved data-based integral policy iteration method to address the robust control problem for nonlinear systems. By combining multi-step neural networks with pre-training, the condition for selecting the initial admissible control policy is relaxed even though the system dynamics are unknown. Based on adaptive critic learning, the established algorithm is conducted to attain the optimal controller. The robust control strategy is then derived by adding a feedback gain. Furthermore, the computing error introduced by the matrix inverse operation is taken into account. Finally, two examples are presented to verify the effectiveness of the constructed algorithm.
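
For flavor, here is a bare-bones integral-policy-iteration step in the classical integral reinforcement learning style on an assumed linear system: policy evaluation uses only integrals of simulated trajectory data, while the improvement step below still borrows the input matrix B. The paper's deep multi-step networks, pre-training, and matrix-inversion error analysis are not reproduced, and all numbers are invented.

```python
# One data-based integral-policy-iteration step (toy, assumptions as above).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -2.0]])  # assumed continuous-time dynamics
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K = np.array([[0.5, 0.5]])                # initial admissible policy u = -Kx
T = 0.05                                  # integration window length

def segment(x0):
    """Integrate the state and running cost over one window under u = -Kx."""
    def f(t, z):
        x = z[:2]
        u = -K @ x
        return np.concatenate([A @ x + B @ u, [x @ Q @ x + u @ R @ u]])
    z = solve_ivp(f, (0.0, T), np.concatenate([x0, [0.0]])).y[:, -1]
    return z[:2], z[2]

def phi(x):   # basis for V(x) = x'Px: [x1^2, 2 x1 x2, x2^2]
    return np.array([x[0] ** 2, 2.0 * x[0] * x[1], x[1] ** 2])

rng = np.random.default_rng(0)
rows, rhs = [], []
for _ in range(30):                        # data-based policy evaluation
    x0 = rng.uniform(-1.0, 1.0, 2)
    x1, cost = segment(x0)
    rows.append(phi(x0) - phi(x1))         # V(x0) - V(x1) = integral cost
    rhs.append(cost)
p = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
P = np.array([[p[0], p[1]], [p[1], p[2]]])
K = np.linalg.solve(R, B.T @ P)            # policy improvement
print("improved gain:", K)
```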

Keywords:

Multi-step neural networks; Robust control; Uncertain systems; Integral policy iteration; Adaptive critic learning

Cite:

GB/T 7714: Wang, Ding, Liu, Ao, Qiao, Junfei. A model-free deep integral policy iteration structure for robust control of uncertain systems [J]. INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE, 2024, 55 (8): 1571-1583.
MLA: Wang, Ding, et al. "A model-free deep integral policy iteration structure for robust control of uncertain systems." INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE 55.8 (2024): 1571-1583.
APA: Wang, Ding, Liu, Ao, Qiao, Junfei. A model-free deep integral policy iteration structure for robust control of uncertain systems. INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE, 2024, 55 (8), 1571-1583.
Adaptive critic design with weight allocation for intelligent learning control of wastewater treatment plants SCIE
Journal Article | 2024, 133 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
WoS CC Cited Count: 3

Abstract:

With the deepening of modernization and industrialization, water pollution and scarcity have become more pressing issues. To address them, many wastewater treatment plants have been built to improve the reuse of water resources. However, control of the wastewater treatment process (WWTP) is a complex task due to its highly nonlinear and strongly coupled nature, and it is challenging to develop accurate mechanistic models of the wastewater treatment system. Improving the efficiency of the WWTP is crucial for safeguarding the urban ecological environment. In this paper, adaptive critic with weight allocation (ACWA) is developed to address the optimal control problem in the WWTP. Unlike previous methods for the WWTP, system modeling is not adopted in this paper, which matches the actual physical background of the wastewater treatment system to a great extent. In addition, the actor-critic algorithm from reinforcement learning is used as the basic structure of the ACWA. Notably, a novel weighted action-value function and an advantage function are introduced into the weight-updating process of the action network and the critic network. Experimental results show that the control accuracy of the ACWA is greatly improved compared with previous control methods.
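
As a rough illustration of the actor-critic skeleton, the toy below runs a one-step advantage actor-critic on an invented first-order plant surrogate; the plant, features, and learning rates are assumptions, and the paper's specific weighted action-value function and the wastewater benchmark are not modeled.

```python
# Toy advantage actor-critic on a stand-in setpoint-tracking task.
import numpy as np

rng = np.random.default_rng(0)
setpoint = 2.0                         # e.g., a dissolved-oxygen target
gamma, a_lr, c_lr, sigma = 0.95, 1e-3, 1e-2, 0.2

def step(y, u):                        # assumed first-order plant surrogate
    y_next = 0.9 * y + 0.5 * u
    return y_next, -(y_next - setpoint) ** 2   # reward: negative squared error

def feats(y):                          # shared critic/actor features
    return np.array([1.0, y, y * y])

w = np.zeros(3)                        # critic weights, V(y) = w . feats(y)
theta = np.zeros(3)                    # actor mean,   mu(y) = theta . feats(y)

y = 0.0
for _ in range(20000):
    f = feats(y)
    u = theta @ f + sigma * rng.standard_normal()    # Gaussian exploration
    y_next, r = step(y, u)
    delta = r + gamma * (w @ feats(y_next)) - w @ f  # TD error ~ advantage
    w += c_lr * delta * f                            # critic update
    theta += a_lr * delta * (u - theta @ f) / sigma ** 2 * f  # actor update
    y = y_next
print("output after training ~", round(y, 2), "(target", setpoint, ")")
```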

Keywords:

Adaptive critic design; Actor-critic; Reinforcement learning; Neural networks; Wastewater treatment processes

Cite:

GB/T 7714: Wang, Ding, Ma, Hongyu, Ren, Jin, et al. Adaptive critic design with weight allocation for intelligent learning control of wastewater treatment plants [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133.
MLA: Wang, Ding, et al. "Adaptive critic design with weight allocation for intelligent learning control of wastewater treatment plants." ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 133 (2024).
APA: Wang, Ding, Ma, Hongyu, Ren, Jin, Gao, Ning, Qiao, Junfei. Adaptive critic design with weight allocation for intelligent learning control of wastewater treatment plants. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133.
Reinforcement Learning for Robust Dynamic Event-Driven Constrained Control SCIE
Journal Article | 2024 | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
WoS CC Cited Count: 16

Abstract:

We consider a robust dynamic event-driven control (EDC) problem for nonlinear systems having both unmatched perturbations and input constraints of unknown type; specifically, the constraints imposed on the systems' input could be symmetric or asymmetric. Initially, to tackle such constraints, we construct a novel nonquadratic cost function for the constrained auxiliary system. Then, we propose a dynamic event-triggering mechanism that relies simultaneously on a time-based variable and the system states to cut down the computational load. Meanwhile, we show that the robust dynamic EDC of the original constrained nonlinear systems can be acquired by solving the event-driven optimal control problem of the constrained auxiliary system. After that, we develop the corresponding event-driven Hamilton-Jacobi-Bellman equation and solve it through a single critic neural network (CNN) in the reinforcement learning framework. To relax the persistence-of-excitation condition in tuning the CNN's weights, we incorporate experience replay into the gradient descent method. With the aid of Lyapunov's approach, we prove that the closed-loop auxiliary system and the weight estimation error are uniformly ultimately bounded. Finally, two examples, including a nonlinear plant and a pendulum system, are utilized to validate the theoretical claims.
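
The sketch below shows a dynamic event-triggering rule of the kind the abstract describes, with an internal variable evolving alongside the state-dependent gap (in the spirit of Girard-style dynamic triggering); the plant, gain, and trigger parameters are invented, and the event-driven HJB and critic-network machinery is omitted.

```python
# Dynamic event-triggered state feedback on an assumed linear plant.
import numpy as np

A = np.array([[0.95, 0.1], [0.0, 0.9]])  # assumed discrete-time plant
B = np.array([[0.0], [0.1]])
K = np.array([[1.0, 2.0]])               # assumed stabilizing gain
beta, sig, theta = 0.05, 0.05, 1.0       # trigger parameters (assumed)

x = np.array([1.0, -1.0])
x_hat = x.copy()                          # last transmitted state
eta, events = 1.0, 0                      # internal dynamic variable
for k in range(400):
    e = x_hat - x                         # sampling-induced error
    gap = sig * (x @ x) - e @ e
    if eta + theta * gap < 0.0:           # dynamic trigger condition
        x_hat = x.copy()                  # transmit and reset the error
        events += 1
        gap = sig * (x @ x)               # e = 0 right after the update
    eta = (1.0 - beta) * eta + gap        # update the dynamic variable
    u = -K @ x_hat                        # control held between events
    x = A @ x + B @ u
print(f"{events} transmissions out of 400 steps")
```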

Keywords:

Adaptive critic designs (ACDs); Event-driven control (EDC); Optimal control; Reinforcement learning (RL)

Cite:

GB/T 7714: Yang, Xiong, Wang, Ding. Reinforcement Learning for Robust Dynamic Event-Driven Constrained Control [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024.
MLA: Yang, Xiong, and Ding Wang. "Reinforcement Learning for Robust Dynamic Event-Driven Constrained Control." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024).
APA: Yang, Xiong, Wang, Ding. Reinforcement Learning for Robust Dynamic Event-Driven Constrained Control. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024.
Reinforcement learning control with n-step information for wastewater treatment systems SCIE
Journal Article | 2024, 133 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract:

Wastewater treatment is important for maintaining a balanced urban ecosystem. To ensure successful wastewater treatment, the tracking error between the crucial variable concentrations and the set point needs to be minimized as much as possible. Since multiple biochemical reactions are involved, the wastewater treatment system is a nonlinear system with unknown dynamics. For this class of systems, this paper develops an online action-dependent heuristic dynamic programming (ADHDP) algorithm combined with temporal difference learning with eligibility traces, TD(λ), called ADHDP(λ). By introducing TD(λ), future n-step information is taken into account and the learning efficiency of the ADHDP algorithm is improved. We not only give the implementation process of the ADHDP(λ) algorithm based on neural networks, but also prove the stability of the algorithm under certain conditions. Finally, the effectiveness of the ADHDP(λ) algorithm is verified on two nonlinear systems: a wastewater treatment system and a torsional pendulum system. Simulation results show that the ADHDP(λ) algorithm has higher learning efficiency than the general ADHDP algorithm.
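
For readers unfamiliar with TD(λ), the textbook form that ADHDP(λ) builds on is shown below on a 5-state random walk with eligibility traces; the random-walk task is a standard illustration and has nothing to do with wastewater treatment.

```python
# Standard TD(lambda) value estimation with eligibility traces.
import numpy as np

rng = np.random.default_rng(1)
n = 5                                   # states 0..4, exits at both ends
V = np.zeros(n)
alpha, gamma, lam = 0.1, 1.0, 0.8

for _ in range(2000):
    z = np.zeros(n)                     # eligibility trace
    s = n // 2
    while True:
        s_next = s + (1 if rng.random() < 0.5 else -1)
        done = s_next < 0 or s_next >= n
        r = 1.0 if s_next >= n else 0.0     # reward 1 on the right exit
        target = r if done else r + gamma * V[s_next]
        delta = target - V[s]           # one-step TD error
        z *= gamma * lam                # decay all traces ...
        z[s] += 1.0                     # ... and bump the visited state
        V += alpha * delta * z          # multi-step credit via the trace
        if done:
            break
        s = s_next
print("estimates:", V.round(3))         # true values: 1/6, 2/6, ..., 5/6
```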

Keywords:

Reinforcement learning; Wastewater treatment processes; Temporal difference with λ; Action-dependent heuristic dynamic programming; Online control

Cite:

GB/T 7714: Li, Xin, Wang, Ding, Zhao, Mingming, et al. Reinforcement learning control with n-step information for wastewater treatment systems [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133.
MLA: Li, Xin, et al. "Reinforcement learning control with n-step information for wastewater treatment systems." ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 133 (2024).
APA: Li, Xin, Wang, Ding, Zhao, Mingming, Qiao, Junfei. Reinforcement learning control with n-step information for wastewater treatment systems. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133.
Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints SCIE
Journal Article | 2024, 11 (12), 2408-2422 | IEEE-CAA JOURNAL OF AUTOMATICA SINICA
WoS CC Cited Count: 2

Abstract:

This article develops a novel data-driven safe Q-learning method to design a safe optimal controller that guarantees the constrained states of nonlinear systems always stay in the safe region while providing optimal performance. First, we design an augmented utility function, consisting of an adjustable positive definite control obstacle function and a quadratic form of the next state, to ensure safety and optimality. Second, by exploiting a pre-designed admissible policy for initialization, an off-policy stabilizing value iteration Q-learning (SVIQL) algorithm is presented that seeks the safe optimal policy using offline data within the safe region rather than a mathematical model. Third, the monotonicity, safety, and optimality of the SVIQL algorithm are theoretically proven. To obtain the initial admissible policy for SVIQL, an offline VIQL algorithm with zero initialization is constructed and a new admissibility criterion is established for immature iterative policies. Moreover, critic and action networks with precise approximation ability are established to support the operation of the VIQL and SVIQL algorithms. Finally, three simulation experiments are conducted to demonstrate the virtue and superiority of the developed safe Q-learning method.
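
To make the "augmented utility" idea tangible, the toy below adds a reciprocal barrier term (one CBF-like choice; the paper's adjustable control obstacle function differs) to the stage cost inside tabular value-iteration Q-learning on a discretized scalar plant with an asymmetric state constraint. All dynamics, bounds, and weights are invented, and the paper's neural approximators, offline data, and admissibility criterion are not reproduced.

```python
# Barrier-augmented utility inside tabular value-iteration Q-learning (toy).
import numpy as np

xs = np.linspace(-0.9, 1.9, 57)         # state grid inside the safe set
us = np.linspace(-1.0, 1.0, 21)
lo, hi = -1.0, 2.0                       # asymmetric state constraint (assumed)
gamma, kappa = 0.95, 0.1

def barrier(x):                          # grows unbounded toward either bound
    return kappa * (1.0 / (x - lo) + 1.0 / (hi - x))

def utility(x, u):                       # quadratic cost plus safety penalty
    return x * x + 0.1 * u * u + barrier(x)

Q = np.zeros((len(xs), len(us)))
for _ in range(300):                     # value-iteration backups
    V = Q.min(axis=1)
    for i, x in enumerate(xs):
        for j, u in enumerate(us):
            xn = np.clip(0.8 * x + 0.5 * u, xs[0], xs[-1])  # assumed dynamics
            k = np.abs(xs - xn).argmin()                    # nearest grid point
            Q[i, j] = utility(x, u) + gamma * V[k]
pi = us[Q.argmin(axis=1)]                # greedy (safe) policy on the grid
print("policy near x = 1.5:", pi[np.abs(xs - 1.5).argmin()])
```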

Keywords:

Adaptive critic control; Optimal control; Safety; Mathematical models; Stabilizing value iteration Q-learning (SVIQL); Heuristic algorithms; Learning systems; Adaptive dynamic programming (ADP); Control barrier functions (CBF); State constraints; Q-learning; Iterative methods

Cite:

GB/T 7714: Zhao, Mingming, Wang, Ding, Song, Shijie, et al. Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints [J]. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11 (12): 2408-2422.
MLA: Zhao, Mingming, et al. "Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints." IEEE-CAA JOURNAL OF AUTOMATICA SINICA 11.12 (2024): 2408-2422.
APA: Zhao, Mingming, Wang, Ding, Song, Shijie, Qiao, Junfei. Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11 (12), 2408-2422.
Recent Progress in Reinforcement Learning and Adaptive Dynamic Programming for Advanced Control Applications SCIE
Journal Article | 2024, 11 (1), 18-36 | IEEE-CAA JOURNAL OF AUTOMATICA SINICA
WoS CC Cited Count: 153

Abstract:

Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are presented, surveying the main results for discrete-time and continuous-time systems, respectively. Then, research progress on adaptive critic control within the event-triggered framework and under uncertain environments is discussed, reviewing event-based design, robust stabilization, and game design. Moreover, extensions of ADP for addressing control problems in complex environments have attracted enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance the ADP formulation. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era; they also play a vital role in promoting environmental protection and industrial intelligence.

Keywords:

Complex environment; Optimal control; Data-driven control; Adaptive dynamic programming (ADP); Nonlinear systems; Intelligent control; Advanced control; Event-triggered design; Reinforcement learning (RL); Neural networks

Cite:

GB/T 7714: Wang, Ding, Gao, Ning, Liu, Derong, et al. Recent Progress in Reinforcement Learning and Adaptive Dynamic Programming for Advanced Control Applications [J]. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11 (1): 18-36.
MLA: Wang, Ding, et al. "Recent Progress in Reinforcement Learning and Adaptive Dynamic Programming for Advanced Control Applications." IEEE-CAA JOURNAL OF AUTOMATICA SINICA 11.1 (2024): 18-36.
APA: Wang, Ding, Gao, Ning, Liu, Derong, Li, Jinna, Lewis, Frank L. Recent Progress in Reinforcement Learning and Adaptive Dynamic Programming for Advanced Control Applications. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11 (1), 18-36.
Prescribed Performance Adaptive Containment Control for Full-State Constrained Nonlinear Multiagent Systems: A Disturbance Observer-Based Design Strategy SCIE
Journal Article | 2024 | IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING
WoS CC Cited Count: 16

Abstract:

This paper focuses on the prescribed performance adaptive containment control problem for a class of nonlinear nonstrict-feedback multiagent systems (MASs) with unknown disturbances and full-state constraints. First, radial basis function neural network (RBF NN) technology is employed to approximate the unknown nonlinear functions in the system, and the "explosion of complexity" caused by repeated differentiation of virtual controls is resolved using dynamic surface control (DSC) technology. Then, nonlinear disturbance observers are designed to estimate the external disturbances, and barrier Lyapunov functions (BLFs) are combined with a prescribed performance function (PPF) to achieve the prescribed performance control objective without violating the full-state constraints. The theoretical result shows that all signals in the closed-loop system are semiglobally uniformly ultimately bounded (SGUUB), and the local neighborhood containment errors converge to the specified boundary. Finally, two simulation examples show the effectiveness of the proposed method.

Note to Practitioners: The containment control problem is a hot topic in the field of control and plays an important role in practical engineering. Especially for nonlinear MASs, accurate mathematical models are difficult to obtain. This paper investigates the prescribed performance adaptive containment control problem for nonlinear nonstrict-feedback MASs, whose model can be extended to more complex engineering applications, such as unmanned aerial vehicle formations and intelligent traffic management. It is worth noting that external disturbances and state constraints often exist in practical applications. Therefore, disturbance observers are designed to compensate for the system disturbances, eliminating their impact on the systems. By introducing BLFs, all states of the system are constrained within the specified regions. In summary, the paper proposes a prescribed performance adaptive containment control strategy, which contributes to the development of containment control for MASs in practical applications.
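
The RBF NN approximation that the design leans on can be sketched in a few lines; the snippet below fits Gaussian basis weights to an "unknown" scalar nonlinearity by batch least squares, whereas the paper adapts the weights online through Lyapunov-based laws (the centers, widths, and target function here are arbitrary).

```python
# RBF network approximation of an unknown scalar nonlinearity (illustrative).
import numpy as np

centers = np.linspace(-2.0, 2.0, 9)      # assumed Gaussian RBF centers
width = 0.5

def phi(x):                              # Gaussian basis vector
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

f = lambda x: np.sin(2.0 * x) + 0.3 * x  # the "unknown" function
rng = np.random.default_rng(2)
X = rng.uniform(-2.0, 2.0, 200)          # sampled operating points
Phi = np.stack([phi(x) for x in X])      # design matrix
w, *_ = np.linalg.lstsq(Phi, f(X), rcond=None)   # least-squares weights

x_test = 0.7
print("f(x) =", f(x_test), " approx =", phi(x_test) @ w)
```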

Keywords:

Nonlinear systems; Complexity theory; Adaptive containment control; Consensus control; Prescribed performance; Nonlinear nonstrict-feedback MASs; Multi-agent systems; Explosions; Backstepping; Full-state constraints; Disturbance observers

Cite:

GB/T 7714: Sui, Jihang, Liu, Chao, Niu, Ben, et al. Prescribed Performance Adaptive Containment Control for Full-State Constrained Nonlinear Multiagent Systems: A Disturbance Observer-Based Design Strategy [J]. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024.
MLA: Sui, Jihang, et al. "Prescribed Performance Adaptive Containment Control for Full-State Constrained Nonlinear Multiagent Systems: A Disturbance Observer-Based Design Strategy." IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING (2024).
APA: Sui, Jihang, Liu, Chao, Niu, Ben, Zhao, Xudong, Wang, Ding, Yan, Bocheng. Prescribed Performance Adaptive Containment Control for Full-State Constrained Nonlinear Multiagent Systems: A Disturbance Observer-Based Design Strategy. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024.
Model-Free Optimal Tracking Design With Evolving Control Strategies via Q-Learning SCIE
Journal Article | 2024, 71 (7), 3373-3377 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS
WoS CC Cited Count: 5

Abstract:

This brief leverages a value-iteration-based Q-learning (VIQL) scheme to tackle optimal tracking problems for nonlinear nonaffine systems. The optimal policy is learned from measured data instead of a precise mathematical model. Furthermore, a novel criterion is proposed to determine the stability of the iterative policy from measured data. An evolving control algorithm is developed to verify the proposed criterion by employing these stable policies for system control. This approach offers the advantage of early elimination of tracking errors, since various stable policies can be employed before the optimal strategy is obtained. Finally, the effectiveness of the developed algorithm is demonstrated by a simulation experiment.
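
The "evolving control" idea can be caricatured as follows: as an iterative learner produces a sequence of policies, each one is screened by a data-based stability check and deployed as soon as it passes, instead of waiting for full convergence. The norm-decay rollout test, the plant, and the gain sequence below are all invented stand-ins; the brief derives a formal criterion rather than this heuristic.

```python
# Screening iterative policies with a data-based stability check (toy).
import numpy as np

A = np.array([[1.0, 0.2], [0.0, 1.0]])   # assumed double-integrator-like plant
B = np.array([[0.02], [0.2]])
rng = np.random.default_rng(3)

def rollout_decays(K, steps=200):
    """Heuristic check: does ||x|| shrink under u = -Kx from random starts?"""
    for _ in range(5):
        x = rng.uniform(-1.0, 1.0, 2)
        x0_norm = np.linalg.norm(x)
        for _ in range(steps):
            x = A @ x - B @ (K @ x)
        if np.linalg.norm(x) > 0.5 * x0_norm:
            return False
    return True

# Stand-in for an iterative learner emitting gradually improving gains.
iterates = [np.array([[g, 1.5 * g]]) for g in (-0.3, 0.8, 1.6, 2.4)]
deployed = None
for i, K in enumerate(iterates):
    if rollout_decays(K):
        deployed = K                      # switch to the newest stable policy
        print(f"iterate {i}: stable, deployed gain {K.ravel()}")
    else:
        print(f"iterate {i}: not verified, keeping previous policy")
print("final deployed gain:", deployed.ravel())
```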

Keywords:

Intelligent control; Optimal tracking control system; Adaptive dynamic programming; Value-iteration-based Q-learning; Stability criterion

Cite:

GB/T 7714: Wang, Ding, Huang, Haiming, Zhao, Mingming. Model-Free Optimal Tracking Design With Evolving Control Strategies via Q-Learning [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71 (7): 3373-3377.
MLA: Wang, Ding, et al. "Model-Free Optimal Tracking Design With Evolving Control Strategies via Q-Learning." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS 71.7 (2024): 3373-3377.
APA: Wang, Ding, Huang, Haiming, Zhao, Mingming. Model-Free Optimal Tracking Design With Evolving Control Strategies via Q-Learning. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71 (7), 3373-3377.
Adaptive Fuzzy Resilient Fixed-Time Bipartite Consensus Tracking Control for Nonlinear MASs Under Sensor Deception Attacks SCIE
Journal Article | 2024 | IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING
WoS CC Cited Count: 7

Abstract:

This paper studies the adaptive fuzzy resilient fixed-time bipartite consensus tracking control problem for a class of nonlinear multi-agent systems (MASs) under sensor deception attacks. First, to reduce the impact of unknown sensor deception attacks on the nonlinear MASs, a novel coordinate transformation technique is proposed, constructed from the states after being attacked. Then, for the case of an unbalanced directed topological graph, a partition algorithm (PA) is utilized to implement bipartite consensus tracking control, which is more widely applicable than previous control strategies that apply only to balanced directed topological graphs. Moreover, the fixed-time control strategy is extended to nonlinear MASs under sensor deception attacks, and the singularity problem in fixed-time control is avoided by employing a novel switching function. The developed distributed adaptive resilient fixed-time control strategy ensures that all signals in the closed-loop system are bounded and that bipartite consensus tracking is achieved in fixed time. Finally, the validity of the designed control strategy is demonstrated by a simulation experiment.

Keywords:

Sensor deception attacks; Bipartite consensus tracking; Fuzzy logic systems; Nonlinear MASs; Fixed-time control

Cite:

GB/T 7714: Niu, Ben, Shang, Zihao, Zhang, Guangju, et al. Adaptive Fuzzy Resilient Fixed-Time Bipartite Consensus Tracking Control for Nonlinear MASs Under Sensor Deception Attacks [J]. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024.
MLA: Niu, Ben, et al. "Adaptive Fuzzy Resilient Fixed-Time Bipartite Consensus Tracking Control for Nonlinear MASs Under Sensor Deception Attacks." IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING (2024).
APA: Niu, Ben, Shang, Zihao, Zhang, Guangju, Chen, Wendi, Wang, Huanqing, Zhao, Xudong, et al. Adaptive Fuzzy Resilient Fixed-Time Bipartite Consensus Tracking Control for Nonlinear MASs Under Sensor Deception Attacks. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024.