The proof given in the post above is not wrong. It just skips some of the steps and writes the final answer directly. Let me go through those steps:

I will simplify some things to avoid complication, but the generality remains the same. For example, I will treat the reward as depending only on the current state, $s$, and the current action, $a$, so $r = r(s,a)$.

First, we define the average reward, where $\mu$ is the stationary state distribution induced by the policy $\pi$:
$$r(\pi) = \sum_s \mu(s)\sum_a \pi(a|s)\sum_{s^{\prime}} P_{ss^{\prime}}^{a} r(s,a) $$
Since $r(s,a)$ does not depend on $s^{\prime}$ and $\sum_{s^{\prime}} P_{ss^{\prime}}^{a} = 1$, the average reward simplifies to:
$$r(\pi) = \sum_s \mu(s)\sum_a \pi(a|s)r(s,a) $$
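To make the two expressions above concrete, here is a small numerical sketch (not from the slides or the book) on a made-up two-state, two-action MDP; every number in it is an assumption chosen only for illustration. It evaluates $r(\pi)$ both with and without the sum over $s^{\prime}$ and confirms they agree.

```python
import numpy as np

# Made-up 2-state, 2-action MDP, purely for illustration:
# P[s, a, s'] are transition probabilities, r[s, a] are expected rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])

# An arbitrary fixed stochastic policy pi[s, a] = pi(a|s).
pi = np.array([[0.7, 0.3],
               [0.4, 0.6]])

# State-to-state transition matrix under pi and its stationary distribution mu
# (left eigenvector of P_pi for eigenvalue 1, normalized to sum to 1).
P_pi = np.einsum('sa,sat->st', pi, P)
evals, evecs = np.linalg.eig(P_pi.T)
mu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
mu = mu / mu.sum()

# Average reward with and without the sum over s'; they agree because
# sum_{s'} P_{ss'}^a = 1 and r(s, a) does not depend on s'.
r_pi_full = np.einsum('s,sa,sat,sa->', mu, pi, P, r)
r_pi_simple = np.einsum('s,sa,sa->', mu, pi, r)
print(r_pi_full, r_pi_simple)  # identical up to floating-point error
```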
My notation may be slightly different from the aforementioned slides, since I am only following Sutton's book on RL. Our objective function is:
$$ J(\theta) = r(\pi) $$
We want to prove that:
$$ \nabla_{\theta} J(\theta) = \nabla_{\theta}r(\pi) = \sum_s \mu(s) \sum_a \nabla_{\theta}\pi(a|s) Q(s,a)$$

Now let's start the proof:
$$\nabla_{\theta}V(s) = \nabla_{\theta} \sum_{a} \pi(a|s) Q(s,a)$$
$$\nabla_{\theta}V(s) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) \nabla_{\theta}Q(s,a)]$$
Substituting the Bellman equation for the differential action value, $Q(s,a) = r(s,a) - r(\pi) + \sum_{s^{\prime}}P_{ss^{\prime}}^{a}V(s^{\prime})$:
$$\nabla_{\theta}V(s) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) \nabla_{\theta}[r(s,a) - r(\pi) + \sum_{s^{\prime}}P_{ss^{\prime}}^{a}V(s^{\prime})]]$$
The reward $r(s,a)$ and the transition probabilities $P_{ss^{\prime}}^{a}$ do not depend on $\theta$, so their gradients vanish:
$$\nabla_{\theta}V(s) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) [- \nabla_{\theta}r(\pi) + \sum_{s^{\prime}}P_{ss^{\prime}}^{a}\nabla_{\theta}V(s^{\prime})]]$$
$$\nabla_{\theta}V(s) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) \sum_{s^{\prime}}P_{ss^{\prime}}^{a}\nabla_{\theta}V(s^{\prime})] - \nabla_{\theta}r(\pi)\sum_{a}\pi(a|s)$$
Now rearrange, using $\sum_{a}\pi(a|s) = 1$ in the last term:
$$\nabla_{\theta}r(\pi) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) \sum_{s^{\prime}}P_{ss^{\prime}}^{a}\nabla_{\theta}V(s^{\prime})] - \nabla_{\theta}V(s)$$
Multiplying both sides by $\mu(s)$ and summing over $s$:
$$\nabla_{\theta}r(\pi) \sum_{s}\mu(s)= \sum_{s}\mu(s) \sum_{a} Q(s,a) \nabla_{\theta} \pi(a|s) + \sum_{s}\mu(s) \sum_a \pi(a|s) \sum_{s^{\prime}}P_{ss^{\prime}}^{a}\nabla_{\theta}V(s^{\prime}) - \sum_{s}\mu(s) \nabla_{\theta}V(s)$$
On the left, $\sum_{s}\mu(s) = 1$; in the middle term on the right, stationarity of $\mu$ gives $\sum_{s}\mu(s)\sum_{a}\pi(a|s)P_{ss^{\prime}}^{a} = \mu(s^{\prime})$, so:
$$\nabla_{\theta}r(\pi) = \sum_{s}\mu(s) \sum_{a} Q(s,a) \nabla_{\theta} \pi(a|s) + \sum_{s^{\prime}}\mu(s^{\prime})\nabla_{\theta}V(s^{\prime}) - \sum_{s}\mu(s) \nabla_{\theta}V(s)$$
The last two terms cancel, and we are there:
$$\nabla_{\theta}r(\pi) = \sum_{s}\mu(s) \sum_{a} Q(s,a) \nabla_{\theta} \pi(a|s)$$
This is the policy gradient theorem for the average-reward formulation (ref. Policy gradient).
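As a sanity check (not part of the original proof), here is a minimal numerical sketch that verifies the identity on a made-up two-state MDP with a tabular softmax policy: it computes the differential values $Q(s,a)$ exactly, evaluates the right-hand side of the theorem, and compares it with a finite-difference gradient of $r(\pi)$. All MDP quantities, the parameterization, and the helper names are assumptions chosen for illustration.

```python
import numpy as np

# Made-up 2-state, 2-action MDP (assumed, for illustration):
# P[s, a, s'] are transition probabilities, r[s, a] are expected rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])
n_states, n_actions = r.shape

def softmax_policy(theta):
    # Tabular softmax parameterization: pi(a|s) = exp(theta[s,a]) / sum_b exp(theta[s,b])
    e = np.exp(theta - theta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def stationary_dist(P_pi):
    # Left eigenvector of P_pi for eigenvalue 1, normalized to a distribution
    evals, evecs = np.linalg.eig(P_pi.T)
    mu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return mu / mu.sum()

def avg_reward_and_values(theta):
    pi = softmax_policy(theta)
    P_pi = np.einsum('sa,sat->st', pi, P)    # P(s'|s) under pi
    mu = stationary_dist(P_pi)
    r_pi = np.sum(pi * r, axis=1)            # expected one-step reward per state
    rho = mu @ r_pi                          # average reward r(pi)
    # Differential state values: (I - P_pi) V = r_pi - rho. The system is singular
    # (V is only defined up to a constant), so take any particular solution via lstsq.
    V = np.linalg.lstsq(np.eye(n_states) - P_pi, r_pi - rho, rcond=None)[0]
    Q = r - rho + np.einsum('sat,t->sa', P, V)
    return rho, mu, pi, Q

theta = np.array([[0.3, -0.2], [0.1, 0.4]])  # arbitrary policy parameters (assumed)
rho, mu, pi, Q = avg_reward_and_values(theta)

# Right-hand side of the theorem: sum_s mu(s) sum_a Q(s,a) grad_theta pi(a|s).
# For tabular softmax, d pi(a|s) / d theta[s,b] = pi(a|s) * (1{a=b} - pi(b|s)).
grad = np.zeros_like(theta)
for s in range(n_states):
    for b in range(n_actions):
        dpi = pi[s] * ((np.arange(n_actions) == b) - pi[s, b])
        grad[s, b] = mu[s] * np.sum(Q[s] * dpi)

# Left-hand side, approximated by central finite differences of r(pi).
eps, fd = 1e-6, np.zeros_like(theta)
for s in range(n_states):
    for b in range(n_actions):
        tp, tm = theta.copy(), theta.copy()
        tp[s, b] += eps
        tm[s, b] -= eps
        fd[s, b] = (avg_reward_and_values(tp)[0] - avg_reward_and_values(tm)[0]) / (2 * eps)

print(np.max(np.abs(grad - fd)))  # should be tiny (finite-difference error only)
```

The additive constant in the differential values does not matter here: since $\sum_a \nabla_{\theta}\pi(a|s) = 0$, any particular solution of the singular linear system gives the same gradient.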

From a math view, if the expression for the expected value of the reward is correct, then that is the correct thing to do. $\mu$ is a function of the state $s$, so a derivative w.r.t. $\theta$ will not affect it. So if what you are claiming is true, then $\mu$ would be a function of $\theta$. So you just need to check whether the expression for the expected reward is correct or not. – DuttaA – 2020-08-27T11:20:38.093