The techniques used in general game playing with Artificial Intelligence (AI) have advanced to meet the challenges of the most popular video game and board game domains. Until recently, however, the video game domains used as testbeds have been relatively simple: the most complex console-wide domain solved with deep reinforcement learning is the Atari domain, hardware that is decades behind modern video game platforms. This work explores a more complex domain, the Nintendo Entertainment System (NES), and the difficulties of developing deep reinforcement learning agents for it. To understand these difficulties, we trained agents on NES games while providing little to no expert knowledge to the agents. Based on the challenges we observed, we suggest areas of focus for solving this domain, work that we hope will lead the field toward solving ever more complex environments, both real-world and theoretical. This paper identifies some of the changes in hyperparameters and reward functions needed to solve the NES domain, compared to the more popular Atari domain, while using game pixel data as the only input to the agents' neural networks.
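To make the pixel-only setup concrete, the sketch below shows a common Atari-style observation pipeline (grayscale conversion, downsampling, and frame stacking) adapted to the NES's 240x256 RGB output. This is an illustrative assumption, not the paper's actual preprocessing; the function names, the 84x84 target size, and the 4-frame stack are conventional choices borrowed from the Atari literature.

```python
import numpy as np

def preprocess_frame(frame, out_size=84):
    """Convert an RGB game frame to a small grayscale array in [0, 1].

    frame: uint8 array of shape (H, W, 3), e.g. the NES's 240x256 output.
    Returns a float32 array of shape (out_size, out_size).
    """
    # Luminance-weighted grayscale conversion.
    gray = frame.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    h, w = gray.shape
    # Nearest-neighbour downsample via index selection (keeps the sketch dependency-free).
    rows = np.linspace(0, h - 1, out_size).astype(int)
    cols = np.linspace(0, w - 1, out_size).astype(int)
    return gray[np.ix_(rows, cols)] / 255.0

def stack_frames(frames):
    """Stack the k most recent preprocessed frames into a (k, H, W) tensor,
    giving the network a short temporal window over the game state."""
    return np.stack(frames, axis=0)
```

The stacked tensor would then be the sole input to the agent's convolutional network, consistent with the pixels-only constraint described above.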
Article ID: 2021L20
Publisher: Canadian Artificial Intelligence Association