I am trying to model operational decisions in inventory control. The control policy is a base-stock policy with a fixed order-up-to level $S$: a replenishment order is placed at every demand arrival so that the inventory position is restored to $S$. Replenishments arrive after a constant lead time $L$. The cumulative stock-out time is measured over review horizons of length $T$; if it exceeds an upper limit $D$, a penalty cost $C_p$ is incurred. This system behaves much like an M/G/$S$ queue: the stock-out time can be thought of as customer waiting time while all servers are busy. Every $R$ periods (with $R < T$), the inventory level and the pipeline of outstanding orders are monitored, and a decision is made whether or not to expedite an outstanding order (at a cost $C_e$), in order to control the waiting/stock-out time and minimize total cost.
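For concreteness, here is a minimal discrete-event sketch of the dynamics as I understand them (Python). The parameter names mirror the notation above; the demand rate `lam`, the rule passed in as `expedite`, and the choice of expediting the earliest-due outstanding order are all illustrative assumptions on my part, not part of the actual system.

```python
import heapq
import math
import random

def simulate_review_cycle(S, L, T, R, D, lam, expedite, C_p, C_e, seed=None):
    """One review horizon of length T under a base-stock policy with level S.

    `expedite` is a callable (t, on_hand, pipeline) -> bool applied at each
    review epoch (every R) to decide whether to expedite an outstanding order.
    Returns (total_cost, cumulative_stockout_time).
    """
    rng = random.Random(seed)
    t = 0.0
    on_hand = S                      # physical stock; negative means backorders
    pipeline = []                    # min-heap of scheduled order arrival times
    stockout = 0.0                   # cumulative time with on_hand <= 0
    cost = 0.0
    next_demand = rng.expovariate(lam)
    next_review = R

    while t < T:
        # next event: order arrival, demand arrival, review epoch, or end of horizon
        next_arrival = pipeline[0] if pipeline else math.inf
        t_next = min(next_demand, next_arrival, next_review, T)
        if on_hand <= 0:
            stockout += t_next - t   # all 'servers' busy: waiting time accrues
        t = t_next
        if t >= T:
            break
        if t == next_arrival:
            heapq.heappop(pipeline)
            on_hand += 1
        elif t == next_demand:
            on_hand -= 1                         # serve the demand or backorder it
            heapq.heappush(pipeline, t + L)      # order-up-to-S: one order per demand
            next_demand = t + rng.expovariate(lam)
        else:                                    # review epoch
            if pipeline and expedite(t, on_hand, list(pipeline)):
                heapq.heappop(pipeline)          # expedited order arrives immediately
                on_hand += 1
                cost += C_e
            next_review += R

    if stockout > D:
        cost += C_p                              # service-level penalty
    return cost, stockout

# example: a myopic rule that expedites whenever stock is already out
cost, w = simulate_review_cycle(S=5, L=2.0, T=13.0, R=1.0, D=0.5, lam=1.5,
                                expedite=lambda t, oh, p: oh <= 0,
                                C_p=100.0, C_e=10.0, seed=0)
```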
This looks like a time- and state-dependent problem, and I would like to solve the resulting MDP with $Q$-learning. The period $T$ is typically a quarter, i.e. three months, and I plan to simulate demand as a Poisson arrival process. My concern is whether simulating arrivals over such a short horizon gives reliable estimates of the Q-values: am I not overestimating the Q-values this way? I would appreciate some guidance on how to proceed with the implementation.
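This is roughly the $Q$-learning loop I have in mind: each $T$-cycle is one episode, the reviews every $R$ are the decision epochs, and the state is a coarse (review index, on-hand level, accumulated stock-out time) tuple. All discretization choices here (the bucketing, counting a whole review period as stock-out whenever inventory is non-positive, the default parameter values) are my own simplifications for the sketch, not part of the problem statement.

```python
import math
import random
from collections import defaultdict

def run_q_learning(S=5, L=2.0, T=13.0, R=1.0, D=2.0, lam=1.5,
                   C_p=100.0, C_e=10.0,
                   episodes=50_000, alpha=0.1, gamma=1.0, eps=0.1, seed=0):
    rng = random.Random(seed)
    n_steps = int(T / R)                      # decision epochs per episode
    lead_steps = max(1, round(L / R))         # lead time in review periods
    Q = defaultdict(float)                    # Q[(state, action)] -> value

    def greedy(s):
        return max((0, 1), key=lambda a: Q[(s, a)])

    def bucket(k, on_hand, stockout):
        return (k, max(min(on_hand, S), -S), min(int(stockout / R), n_steps))

    for _ in range(episodes):
        on_hand, stockout = S, 0.0
        pipeline = [0] * lead_steps           # pipeline[i] = orders due in i periods
        for k in range(n_steps):
            s = bucket(k, on_hand, stockout)
            a = rng.choice((0, 1)) if rng.random() < eps else greedy(s)
            r = 0.0
            if a == 1 and sum(pipeline) > 0:  # expedite: pull one order in now
                for i in range(lead_steps - 1, -1, -1):
                    if pipeline[i] > 0:
                        pipeline[i] -= 1
                        break
                on_hand += 1
                r -= C_e
            # one review period elapses: due orders arrive, Poisson demand hits
            on_hand += pipeline.pop(0)
            d = _poisson(lam * R, rng)
            on_hand -= d
            pipeline.append(d)                # base stock: reorder every demand
            if on_hand <= 0:
                stockout += R                 # crude: whole period counts as stockout
            if k == n_steps - 1 and stockout > D:
                r -= C_p                      # terminal service-level penalty
            s2 = bucket(k + 1, on_hand, stockout)
            target = r if k == n_steps - 1 else r + gamma * Q[(s2, greedy(s2))]
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q

def _poisson(mu, rng):
    # inverse-transform sampling of a Poisson variate (avoids a numpy dependency)
    x, p = 0, math.exp(-mu)
    c, u = p, rng.random()
    while u > c:
        x += 1
        p *= mu / x
        c += p
    return x
```

Since each episode is short, I would average over many independently simulated cycles; my question, restated against this sketch, is whether the max operator in the update still biases the Q-values upward in this setting.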