In this paper we develop an influence-map-based agent for Pac-Man and compare its performance with a previous implementation. The key difference is that the proposed model limits the area over which the game components exert influence. This restriction proves crucial for the lifetime of the Pac-Man agent, which behaves more \emph{carefully} than its predecessor. A thorough comparison between the two approaches under similar conditions confirms our expectations: the proposed model is competitive with, or better than, the previous one across several outcome measures. The experiments also reveal how important rewards and enemy agents are, and how their influence should be propagated. Overall, the results are promising, showing that limited influence propagation is a valid solution in some cases and resolves several of the problems of the previous agent.