TL;DR: The relative scale of multiple different rewards can be important. However, granting +10 for a win and -1 for a loss in a game will not make the agent learn to win any faster than simply tuning the learning rate would.
> From a given state, if an agent takes a good action I give a positive reward, and if the action is bad, I give a negative reward.
Usually you do not know in advance which actions are "good" and which are "bad"; the reward scheme is based on the immediate, measurable outcome of taking an action from a certain state.
You may already know that, and I will phrase the rest of this answer as if that is what you meant by "good action". However, it is an important distinction, so if you are not sure what the difference is, it would be worth asking a separate question about it.
> So if I give the agent very high positive rewards when it takes a good action, say 100 times the value of the negative rewards, will it help the agent during training?
An ideal reward scheme is based on easy-to-measure outcomes that you care about: reaching a destination, solving a puzzle, the number of items collected, the score in a game, or winning a game against an opponent.
If you have both negative and positive rewards, they usually need to be scaled relative to each other in a way that makes sense for the problem. It is quite common, for instance, to have a small negative reward on every time step if some resource such as time, fuel or money is used up simply by the agent acting without solving the task. In that case it may make sense to have, say, a small -0.1 reward per step and a larger +10 reward for "good action = completing the task", as in the sketch below.
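As a minimal sketch of that kind of scheme (in Python, with an illustrative `reached_goal` flag standing in for whatever end-of-task check your environment provides; the exact values are just examples):

```python
def reward(reached_goal: bool) -> float:
    """Illustrative reward scheme: a small per-step cost plus a large
    completion bonus. Only the relative sizes of the two values matter
    for the learned policy; their common scale can be absorbed into
    the learning rate (see the scaling argument below)."""
    if reached_goal:
        return 10.0   # large positive reward for completing the task
    return -0.1       # small cost for each time step (time/fuel/money)
```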
Scaling up all rewards is similar to increasing the learning rate. There is usually some optimal learning rate at which the agent learns fastest: too high and learning becomes unstable, too low and learning is slow. Given that, you usually just need to get the relative sizes of the rewards correct, then scale the learning rate to get the best learning speed.
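To sketch why, take the standard tabular Q-learning update:

$$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$$

If every reward is multiplied by some constant $c > 0$ and the table starts at zero, then every estimate is simply replaced by $c\,Q(s,a)$, because the whole update scales:

$$c\,Q(s,a) \leftarrow c\,Q(s,a) + \alpha \left[ c\,r + \gamma \max_{a'} c\,Q(s',a') - c\,Q(s,a) \right]$$

The greedy choice $\arg\max_a Q(s,a)$ is therefore unchanged at every step (assuming $\varepsilon$-greedy exploration; a softmax policy would be sensitive to the scale). With function approximation the equivalence is not exact, but the TD errors, and hence the gradient steps, grow by roughly the factor $c$, which is much like multiplying the learning rate $\alpha$ by $c$.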
There is no specific benefit to scaling up positive rewards only, and you should generally do so only when the problem definition allows it. It might appear to improve the speed at which the agent learns when you test it, but most of that effect is the same as you would get from scaling the learning rate.
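You can check the scaling argument empirically. The following toy experiment is my own construction, not from the question: tabular Q-learning on a 5-state chain, run once with the base rewards and once with everything multiplied by 100. The greedy policy comes out identical, and the Q tables differ only by the scale factor (up to floating-point error):

```python
import numpy as np

def q_learning(scale, episodes=200, alpha=0.1, gamma=0.9, seed=0):
    """Tabular Q-learning on a 5-state chain. Actions: 0 = left, 1 = right.
    Reward is scale * +1.0 for reaching the right end, scale * -0.01 per step."""
    rng = np.random.default_rng(seed)
    n_states, n_actions = 5, 2
    q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy; note the argmax does not change if q is rescaled
            a = rng.integers(n_actions) if rng.random() < 0.1 else int(q[s].argmax())
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = scale * (1.0 if s2 == n_states - 1 else -0.01)
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
            s = s2
    return q

q1, q100 = q_learning(scale=1.0), q_learning(scale=100.0)
print(np.allclose(q100, 100.0 * q1))                     # True: values just rescale
print((q1.argmax(axis=1) == q100.argmax(axis=1)).all())  # True: same greedy policy
```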
This is different from training animals, teaching small children, or rewarding humans in general, where you may be advised to use positive rewards and positive signals more often than negative ones. However, that advice is likely related to the problem domain of general survival and generalist learning in living creatures, which most RL does not replicate. It certainly does not extend to the simpler statistical agents built using RL, such as Q-learning.