
Comparing Optimal and Adaptive EV Charging in Smart Cities: MILP vs. Reinforcement Learning

Research output: Contribution to conference › Paper › peer-review

Abstract

The coordinated scheduling of electric vehicle (EV) charging is a critical challenge for smart cities, particularly in high-density infrastructure such as Mobility Hubs (MHs). This paper evaluates and compares two prominent approaches to the EV Charging Scheduling Problem (CSP): Mixed-Integer Linear Programming (MILP) and Reinforcement Learning (RL). We formulate a shared problem framework and apply both strategies under two structured scenarios: a small-scale deterministic benchmark and a medium-scale, realistic deployment with higher heterogeneity. Results show that MILP achieves optimal cost and
SoC compliance in tractable cases but struggles with scalability. RL, based on Proximal Policy Optimization (PPO), achieves near-optimal performance while scaling to 100 EVs with minimal computation time. Despite occasional SoC deviations, the RL agent exhibits robust and adaptive behavior under dynamic conditions. This study offers actionable insights for selecting and deploying EV scheduling strategies in real-world urban environments.
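For intuition, a toy instance of the charging scheduling problem in the spirit of the small-scale deterministic benchmark can be sketched as follows. The numbers below (slot prices, per-EV energy demands, a single shared charger) are illustrative assumptions, not the paper's data, and exhaustive enumeration of the binary decision variables stands in for a MILP solver.

```python
from itertools import product

# Hypothetical benchmark: 2 EVs, 4 time slots, one shared charging point.
# Binary decision x[i][t] = 1 if EV i charges in slot t (one energy unit per slot).
PRICES = [0.30, 0.10, 0.20, 0.05]  # assumed electricity price per slot
DEMAND = [2, 1]                    # assumed slots of charging each EV needs (SoC target)
CAPACITY = 1                       # at most one EV may charge per slot

def solve_bruteforce():
    """Enumerate every binary charging plan and keep the cheapest feasible one."""
    n_evs, n_slots = len(DEMAND), len(PRICES)
    best_cost, best_plan = float("inf"), None
    for flat in product([0, 1], repeat=n_evs * n_slots):
        x = [flat[i * n_slots:(i + 1) * n_slots] for i in range(n_evs)]
        # SoC constraint: each EV must receive exactly its demanded energy.
        if any(sum(x[i]) != DEMAND[i] for i in range(n_evs)):
            continue
        # Station capacity constraint in every time slot.
        if any(sum(x[i][t] for i in range(n_evs)) > CAPACITY for t in range(n_slots)):
            continue
        cost = sum(PRICES[t] * x[i][t] for i in range(n_evs) for t in range(n_slots))
        if cost < best_cost:
            best_cost, best_plan = cost, x
    return best_cost, best_plan
```

On this instance the optimum fills the three cheapest slots while skipping the most expensive one; a MILP solver would reach the same plan via branch-and-bound instead of enumeration, which is what makes the approach optimal but hard to scale.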
Original language: English
Pages: 452-459
Number of pages: 6
DOIs
State: Published - 30 Dec 2025
Event: 21st Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN 2025) - Universidad Politécnica de Cataluña, Barcelona, Spain
Duration: 27 Oct 2025 - 31 Oct 2025
Conference number: 21
http://pewasun.upc.edu/PEWASUN2025/

Conference

Conference: 21st Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN 2025)
Abbreviated title: PE-WASUN 2025
Country/Territory: Spain
City: Barcelona
Period: 27/10/25 - 31/10/25
