There is a huge part of game sound that is totally absent from traditional audio post: implementation. It can take up more of the game sound designer's time than the actual "asset creation", as it is called in games.
Example: Generic Gun
When the sound designer creates a gun sound, he doesn't know in advance whether, at the moment of firing, there will be any bullets in the gun, how far the gun will be from the 'listener' (usually, but not always, the camera), whether the player will be in a cave or out in a forest, or whether the player will have attached a silencer...
The game sends data to the sound engine (the part of the game program responsible for playing back sounds in-game) that affects how the gun is heard. That data includes: how far the gun is from the listener (distant gun assets are faded up while the big boom and foley layers are faded down); who is firing the gun (the player's weapon often sounds beefier than enemy weapons, to give the player a sense of power); whether any objects sit between the listener and the gun (obstruction and occlusion effects can be added); and whether the environment matters (reverbs and echoes may be needed).
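The distance-based crossfade described above can be sketched in a few lines. This is a hypothetical helper, not any engine's real API; in practice, middleware like Wwise or FMOD lets the designer author these fade curves graphically against a game parameter. The function name, layer names, and the 100-metre range are all illustrative assumptions.

```python
def gun_layer_gains(distance_m, max_distance_m=100.0):
    """Crossfade the layers of a gun sound by listener distance.

    Hypothetical sketch: real engines drive this with designer-authored
    curves, not hard-coded linear fades.
    """
    # Normalize distance to 0..1 across the audible range.
    t = min(max(distance_m / max_distance_m, 0.0), 1.0)
    return {
        "boom_and_foley": 1.0 - t,  # fade down the close-up layers with distance
        "distant_crack": t,         # fade up the distant gun assets
    }
```

At point-blank range the close layers play at full gain; at the edge of the audible range only the distant assets remain.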
The sound designer creates Sound Cues (also called Sound Events, among other names, depending on the tools being used) that take into consideration all possible combinations of situations and play back something that (hopefully) fits the moment. Even simple Sound Cues can have a dozen or so WAV files associated with them. Complex Cues can have more than 50 WAVs.
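A Sound Cue's branching-and-randomizing logic might look roughly like the sketch below. Everything here is a labeled assumption: the asset file names, the 50-metre distance threshold, and the lowpass-for-occlusion choice are invented for illustration, not taken from any real cue or engine.

```python
import random

def pick_gun_wav(distance_m, is_player, occluded, rng=random):
    """Choose one WAV for a gunshot from a bank of variations.

    Hypothetical cue logic: asset names and thresholds are illustrative.
    """
    # First branch on the situation the game reports...
    if distance_m > 50.0:
        group = ["gun_distant_01.wav", "gun_distant_02.wav", "gun_distant_03.wav"]
    elif is_player:
        group = ["gun_player_01.wav", "gun_player_02.wav"]  # beefier player variants
    else:
        group = ["gun_enemy_01.wav", "gun_enemy_02.wav"]
    # ...then randomize within the group to avoid audible repetition.
    wav = rng.choice(group)
    # Occlusion is handled as a playback effect here, not a separate asset.
    effect = "lowpass" if occluded else None
    return wav, effect
```

Multiply a few branches like these by two or three variations per branch and the dozen-plus WAVs per cue mentioned above add up quickly.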
That, for me, is the biggest difference between game audio and traditional audio post.