Lawrence Technological University, USA.
* Corresponding author
University of Cincinnati, USA.

Abstract

With the recent expansion of the self-driving and autonomy field, nearly every vehicle is equipped with some form of driver-assist feature to improve driver comfort. Extending these systems to full autonomy is extremely complicated, since it requires planning safe paths in unstable and dynamic environments. Imitation learning and other path-learning techniques lack generalization and safety assurances. Model selection and obstacle avoidance are two difficult issues in autonomous-vehicle research. With the advent of deep feature representation, Q-learning has evolved into a potent learning framework that can acquire complicated strategies in high-dimensional contexts. This study proposes a deep Q-learning approach that uses experience replay and contextual expertise to address these issues. A path-planning strategy utilizing deep Q-learning on the network edge node is proposed to enhance the driving performance of autonomous vehicles in terms of energy consumption. When connected vehicles maintain the recommended speed, the suggested approach simulates the trajectory using a proportional-integral-derivative (PID) controller. Smooth trajectories and reduced jerk are ensured when the PID controller is employed to track the terminal points. The computational findings demonstrate that, in contrast to traditional techniques, the approach can explore a path in an unknown environment with few iterations and a higher average payoff, and that it converges more quickly to an ideal strategic plan.
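The experience-replay mechanism underlying the approach can be illustrated with a minimal sketch. Here a toy 1-D corridor (states 0 through 5, goal at state 5) and a tabular Q-function stand in for the driving environment and the deep network; the environment, reward, and all hyperparameters below are illustrative assumptions, not the study's actual setup.

```python
import random

def step(state, action):
    """Move left (0) or right (1); reward 1.0 only when the goal is reached."""
    nxt = max(0, min(5, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 5 else 0.0), nxt == 5

def train(episodes=300, alpha=0.1, gamma=0.9, eps=0.2, batch=16, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(6)]   # q[state][action]
    buffer = []                          # replay buffer of past transitions
    for _ in range(episodes):
        s, done = 0, False
        for _ in range(100):             # cap episode length
            if done:
                break
            # epsilon-greedy action selection (random on ties)
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2, r, done = step(s, a)
            buffer.append((s, a, r, s2, done))
            # Experience replay: update on a sampled mini-batch of stored
            # transitions rather than only the latest one, which decorrelates
            # consecutive updates and reuses past experience.
            for bs, ba, br, bs2, bdone in rng.sample(buffer, min(batch, len(buffer))):
                target = br if bdone else br + gamma * max(q[bs2])
                q[bs][ba] += alpha * (target - q[bs][ba])
            s = s2
    return q
```

After training, the greedy policy prefers moving toward the goal in every state; in the deep variant described above, the table is replaced by a neural network trained on the same sampled mini-batches.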
