Home > International Academic Journals > Transportation Rese… > 2014

Accounting for dynamic speed limit control in a stochastic traffic environment: A reinforcement learning approach

Published: 2014-03-21 10:26:33 | Authors: Feng Zhu, Satish V. Ukkusuri


Highlights

•A dynamic network loading model allowing for changes in speed limits.

•Formulate the dynamic speed limit problem as a Markov Decision Process (MDP).

•Apply the R-MART algorithm to solve the problem.

•Incorporate the uncertainties of both demand and supply.

•Case study on the real-world Sioux Falls network.

Keywords

Dynamic speed limit control; Stochastic network; Connected vehicle; Reinforcement learning; Network loading

Abstract

This paper proposes a novel dynamic speed limit control model accounting for uncertain traffic demand and supply in a stochastic traffic network. First, a link based dynamic network loading model is developed to simulate traffic flow propagation while allowing changes in speed limits. Shockwave propagation is well defined and captured by checking the difference between the queue forming end and the dissipation end. Second, the dynamic speed limit problem is formulated as a Markov Decision Process (MDP) and solved by a real time control mechanism. The speed limit controller is modeled as an intelligent agent interacting with the stochastic network environment to assign time dependent link based speed limits. Different metrics, e.g. total network throughput, delay time, and vehicular emissions, are optimized in the modeling framework, and the optimal speed limit scheme is obtained by applying the R-Markov Average Reward Technique (R-MART) based reinforcement learning algorithm. A case study of the Sioux Falls network is constructed to test the performance of the model. Results show that total travel time and emissions (in terms of CO) are reduced by around 18% and 20%, respectively, compared with the base case of no speed limit control.
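The controller described above is an agent that learns average-reward-optimal actions from interaction with the network. The sketch below illustrates the general R-learning (average-reward) update family to which R-MART belongs; the two-state "traffic" MDP, its rewards, and all parameter values are hypothetical stand-ins, not the paper's network model or its exact algorithm.

```python
import random

def r_learning(steps=5000, alpha=0.1, beta=0.01, eps=0.1, seed=0):
    """Toy R-learning demo: state 0 = free flow, state 1 = congested;
    action 1 in state 1 (e.g. lowering the limit) clears the queue."""
    rng = random.Random(seed)
    states, actions = [0, 1], [0, 1]
    Q = {(s, a): 0.0 for s in states for a in actions}
    rho = 0.0  # running estimate of the average reward per step
    s = 0
    for _ in range(steps):
        a_greedy = max(actions, key=lambda x: Q[(s, x)])
        a = rng.choice(actions) if rng.random() < eps else a_greedy

        # Hypothetical transition/reward dynamics of the toy MDP.
        if s == 0:
            s_next, r = 1, 0.0          # congestion forms regardless of action
        elif a == 1:
            s_next, r = 0, 1.0          # control action relieves congestion
        else:
            s_next, r = 1, 0.0          # congestion persists

        best_next = max(Q[(s_next, x)] for x in actions)
        best_here = max(Q[(s, x)] for x in actions)
        # Value update is relative to rho (average-reward criterion).
        Q[(s, a)] += alpha * (r - rho + best_next - Q[(s, a)])
        if a == a_greedy:               # rho updated only on greedy steps
            rho += beta * (r - rho + best_next - best_here)
        s = s_next
    return Q, rho
```

Under the optimal policy the toy system alternates rewards 0 and 1, so the learned average reward `rho` settles near 0.5 and the agent prefers action 1 in the congested state.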

Article Outline

1. Introduction

1.1. Literature review and motivations

1.2. Contributions of the paper

2. Link based dynamic network loading model

2.1. Link representation for a generalized network

2.2. Traffic flow propagation in the main part of a link

2.3. The whole formulation of the link based DNL model

2.4. Link average speed estimation

3. Reinforcement learning for dynamic speed limit control

3.1. Dynamic speed limit problem as Markov Decision Process (MDP)

3.2. State of the speed limit controller

3.3. Actions of the speed limit controller

3.4. Reward function

3.5. R-MART algorithm description

4. Test case study

4.1. Experiment design

4.2. Result analysis

5. Conclusions

Appendix A

References

Figures

Fig. 1.

Effect of speed limits in the fundamental diagram.

Fig. 2.

Link representation of an ordinary link.

Fig. 3.

Link representation of a diverging link and a merging link.

Fig. 4.

Demonstration of queue forming end and dissipation end.

Fig. 5.

Flow-density fundamental diagram for the link based DNL model.

Fig. 6.

Flow chart of the R-MART algorithm.

Fig. 7.

(a) Google map of the Sioux Falls network; (b) link representation of the Sioux Falls network.

Fig. 8.

Demand input variation of origin link.

Fig. 9.

Exit capacity variation of links under speed limit control.

Fig. 10.

Variation of total travel time and emission (in terms of CO) of different simulation runs.

Tables

Table 1. Parameter settings for the test network.
