International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 13
Year of Publication: 2025
Authors: Isha Das, Md. Jisan Ahmed, Abhay Shukla
Isha Das, Md. Jisan Ahmed, Abhay Shukla. Optimizing Solar Microgrid Efficiency via Reinforcement Learning: An Empirical Study Using Real-Time Energy Flow and Weather Forecasts. International Journal of Computer Applications. 187, 13 (Jun 2025), 33-38. DOI=10.5120/ijca2025925190
This paper investigates the use of deep reinforcement learning (DRL) to optimize the energy efficiency of a solar-powered microgrid using real-time energy-flow data and weather forecasts. The study generates a fully synthetic dataset simulating a solar microgrid’s hourly photovoltaic (PV) generation, battery state, load demand, and weather-based solar-irradiance forecasts. Four RL algorithms are applied and compared: Deep Q-Network (DQN), Proximal Policy Optimization (PPO), Advantage Actor-Critic (A2C), and Deep Deterministic Policy Gradient (DDPG). Each agent learns to control battery charging/discharging actions to balance supply and demand, incorporating solar forecasts to handle uncertainty. The methodology covers dataset generation, environment formulation, and RL training procedures, and the paper presents performance metrics (e.g., reward curves, energy utilization) alongside graphical analyses. In the empirical results, PPO and DDPG achieve the highest efficiency under clear conditions, A2C adapts best to sudden changes, and DQN performs robustly but converges more slowly; all DRL agents significantly outperform a rule-based baseline. The study demonstrates that DRL can adaptively manage real-time microgrid operations under weather variability, improving renewable utilization and resilience, and provides a comprehensive evaluation of modern RL methods for smart-energy systems.
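To make the environment formulation concrete, the sketch below shows one plausible way to encode such a microgrid as a gymnasium-style RL environment. The paper does not publish its implementation, so every name (`MicrogridEnv`), dynamic, and reward weight here is a hypothetical illustration: the observation bundles battery state, current PV output, load demand, and a naive one-hour-ahead irradiance forecast, and the action is a continuous battery charge/discharge command.

```python
# Minimal sketch of a solar-microgrid control environment in the style the
# abstract describes. All names, dynamics, and reward weights are illustrative
# assumptions, not the paper's actual formulation.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class MicrogridEnv(gym.Env):
    """Toy hourly solar microgrid with one battery and a synthetic load."""

    def __init__(self, horizon=24, battery_capacity=10.0, max_power=2.0):
        self.horizon = horizon            # hours per episode
        self.capacity = battery_capacity  # kWh
        self.max_power = max_power        # kW charge/discharge limit
        # Observation: [battery charge, current PV, current load, PV forecast]
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(4,), dtype=np.float32)
        # Action in [-1, 1], scaled by max_power; negative = discharge.
        self.action_space = spaces.Box(
            low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

    def _profiles(self, rng):
        t = np.arange(self.horizon)
        # Daylight bell curve for PV, randomly dimmed to mimic cloud cover.
        pv = np.clip(np.sin((t - 6) / 12 * np.pi), 0, None) * 5.0
        pv *= rng.uniform(0.5, 1.0)
        load = 2.0 + rng.normal(0.0, 0.3, self.horizon).clip(-1, 1)  # kW
        forecast = np.roll(pv, -1)  # naive one-hour-ahead forecast
        return pv, load, forecast

    def _obs(self):
        return np.array([self.soc, self.pv[self.t], self.load[self.t],
                         self.fc[self.t]], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.soc = 0.5 * self.capacity
        self.pv, self.load, self.fc = self._profiles(self.np_random)
        return self._obs(), {}

    def step(self, action):
        power = float(action[0]) * self.max_power  # kW; + charge, - discharge
        # Clip by what the battery can actually absorb or supply this hour.
        power = float(np.clip(power, -self.soc, self.capacity - self.soc))
        self.soc += power
        supplied = self.pv[self.t] - power           # net power to the load
        unmet = max(self.load[self.t] - supplied, 0.0)      # shortfall
        curtailed = max(supplied - self.load[self.t], 0.0)  # wasted surplus
        reward = -(1.0 * unmet + 0.1 * curtailed)  # penalize shortfall most
        self.t += 1
        done = self.t >= self.horizon
        obs = self._obs() if not done else np.zeros(4, dtype=np.float32)
        return obs, reward, done, False, {}
```

Training any of the four agents on such an environment then follows the standard library pattern, e.g. with stable-baselines3 (PPO shown; DQN would additionally require a discretized action space):

```python
from stable_baselines3 import PPO

model = PPO("MlpPolicy", MicrogridEnv(), verbose=0)
model.learn(total_timesteps=50_000)
```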