A Data-Driven Adaptive Scheduling Framework for Vehicle Maintenance Using Deep Reinforcement Learning

Keywords: Data-Driven Adaptive Algorithm; Deep Deterministic Policy Gradient; Dynamic Vehicle Maintenance Scheduling; Simulation Analysis

This paper proposes a data-driven adaptive scheduling method based on the Deep Deterministic Policy Gradient (DDPG) algorithm to address the difficulty traditional vehicle dynamic maintenance scheduling methods have in handling real-time demands, problem complexity, and resource optimization. A mathematical model of vehicle dynamic maintenance scheduling is constructed, defining the state space, action space, and reward function. The DDPG reinforcement learning framework is then used to optimize the scheduling policy through its Actor-Critic structure. Comparative experiments in a simulation environment evaluate the algorithm's performance. The results indicate that the DDPG algorithm achieves an average maintenance response time of 23.4 minutes, approximately 34% shorter than the genetic algorithm; its resource utilization reaches 88.7%, over 13% higher than traditional methods; and the maintenance satisfaction score is 4.6 out of 5. These findings show that the algorithm has clear advantages in multi-objective scheduling optimization and provides a feasible path and technical support for making vehicle dynamic maintenance systems more intelligent.
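To make the abstract's setup concrete, the sketch below shows how the three modeling elements it names (state space, action space, reward function) and a DDPG-style Actor-Critic update might fit together. This is a minimal illustrative toy, not the paper's implementation: the state features (pending requests, crew availability, mean wear), the reward weights, and the use of linear function approximation in place of deep networks are all assumptions made for brevity. The actor update follows the deterministic policy gradient, using the critic's gradient with respect to the action.

```python
import random

class MaintenanceEnv:
    """Toy vehicle-maintenance MDP (illustrative assumptions only).
    State: [pending requests, crew availability, mean wear], each in [0, 1].
    Action: continuous dispatch intensity in [0, 1]."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.state = [self.rng.random() for _ in range(3)]
        return self.state

    def step(self, action):
        a = min(max(action, 0.0), 1.0)
        pending, crews, wear = self.state
        # Dispatching harder clears requests faster (shorter response time)
        # but consumes scarce crew resources; reward trades the two off.
        response_penalty = pending * (1.0 - a)
        resource_penalty = 0.3 * a * (1.0 - crews)
        reward = -(response_penalty + resource_penalty)
        # Simple stochastic transition dynamics.
        pending = min(1.0, max(0.0, pending - 0.5 * a + 0.2 * self.rng.random()))
        crews = min(1.0, max(0.0, crews + 0.1 * (1 - a) - 0.05 * a))
        wear = min(1.0, wear + 0.05 * self.rng.random() - 0.1 * a * wear)
        self.state = [pending, crews, wear]
        return self.state, reward

def actor(theta, s):
    # Deterministic linear policy mu(s) = clip(theta . s) -- stands in for
    # the actor network in full DDPG.
    return min(max(sum(t * x for t, x in zip(theta, s)), 0.0), 1.0)

def train(episodes=200, steps=20, lr_actor=0.01, lr_critic=0.05, gamma=0.95):
    env = MaintenanceEnv()
    theta = [0.0, 0.0, 0.0]            # actor weights
    w = [0.0, 0.0, 0.0]; wa = 0.0      # linear critic Q(s, a) = w.s + wa*a
    rng = random.Random(1)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            a = actor(theta, s) + 0.1 * rng.gauss(0, 1)  # exploration noise
            s2, r = env.step(a)
            # TD target uses the deterministic next action (DDPG-style).
            a2 = actor(theta, s2)
            q = sum(wi * si for wi, si in zip(w, s)) + wa * a
            q2 = sum(wi * si for wi, si in zip(w, s2)) + wa * a2
            delta = r + gamma * q2 - q
            # Critic update (TD learning on the linear Q approximation).
            w = [wi + lr_critic * delta * si for wi, si in zip(w, s)]
            wa += lr_critic * delta * a
            # Deterministic policy gradient: grad_a Q * grad_theta mu = wa * s.
            theta = [ti + lr_actor * wa * si for ti, si in zip(theta, s)]
            s = s2
    return theta

theta = train()
print(theta)
```

In the full method the linear actor and critic would be deep networks with target copies and a replay buffer; the toy keeps only the structure the abstract describes: a continuous action, a reward balancing response time against resource consumption, and alternating critic/actor updates.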