This thesis investigates the effect of communication, as a function of time, on a multi-agent simulation based on a military distillation in which a group of agents learns through reinforcement learning. The original contribution to knowledge is a new model of cooperative learning, developed as an enhanced Q-learning update function that also incorporates learning events communicated by other agents. Further contributions lie in the detailed analysis of simulation results, which establishes evidence of a cause-and-effect relationship between communication and improved performance. The improvement in performance is visualized using surface plot diagrams of the agents' state-action matrices. These diagrams show how group communication reinforces effective actions at an early stage in the simulation.
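The cooperative update can be pictured roughly as follows. This is a minimal sketch only: the class name, the comm_weight parameter, and the broadcast mechanism are illustrative assumptions, not the thesis's actual update function.

```python
import numpy as np

class CooperativeQLearner:
    """Sketch of a Q-learner whose update also accepts learning
    events communicated by other agents (an assumed mechanism)."""

    def __init__(self, n_states, n_actions,
                 alpha=0.1, gamma=0.9, comm_weight=0.5):
        self.Q = np.zeros((n_states, n_actions))  # state-action matrix
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor
        self.comm_weight = comm_weight  # assumed weight for communicated events

    def update(self, s, a, r, s_next, communicated=False):
        # Standard temporal-difference Q-learning update; experiences
        # received from peers are down-weighted by comm_weight.
        weight = self.comm_weight if communicated else 1.0
        td_error = r + self.gamma * self.Q[s_next].max() - self.Q[s, a]
        self.Q[s, a] += weight * self.alpha * td_error

    def broadcast(self, s, a, r, s_next, peers):
        # Share a learning event so peers fold it into their own
        # state-action matrices.
        for peer in peers:
            peer.update(s, a, r, s_next, communicated=True)
```

Under these assumptions, a surface plot of an agent's Q matrix early in a run would show effective state-action pairs being reinforced sooner when peers broadcast their learning events than when each agent learns in isolation.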