Analysis and Experimental Research of Model-Free Reinforcement Learning Method
DOI: https://doi.org/10.31649/1997-9266-2019-144-3-40-49

Keywords: artificial intelligence, machine learning, reinforcement learning, Q-learning, learning strategy, intelligent software agent, bot, optimal parameters, learning curves, experimental research

Abstract
This article considers reinforcement learning, a modern machine learning method. In tasks solved through interaction, it is often impractical to obtain examples of desired agent behavior that are both correct and appropriate for every situation, because conditions of uncertainty arise from incomplete information about the environment and from the possible actions of other bots or humans. The software agent must therefore be trained on the basis of its own experience. An important advantage of reinforcement learning is that a bot can be trained «from scratch» by balancing (finding a compromise between) the exploration and exploitation modes and by learning strategies that sacrifice some immediate score for the sake of greater benefit in the future. Research in reinforcement learning can be regarded as part of a broader trend of recent years: the interaction of artificial intelligence with other engineering disciplines. For this reason, reinforcement learning builds on ideas drawn from optimal control theory, stochastic optimization and approximation, while pursuing the common and ambitious goals of artificial intelligence.
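The compromise between exploration and exploitation mentioned above is commonly implemented with an epsilon-greedy rule. A minimal sketch is given below; the names (select_action, q_table, epsilon) and the dictionary-based Q-table are illustrative assumptions, not the implementation used in the article.

```python
import random

def select_action(q_table, state, actions, epsilon=0.1):
    """Epsilon-greedy rule: with probability epsilon try a random action
    (exploration), otherwise take the action with the highest current
    Q-value estimate (exploitation)."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))
```

Decreasing epsilon over time shifts the bot from exploring the environment «from scratch» toward exploiting what it has already learned.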
This work presents the mathematical apparatus of reinforcement learning based on the model-free Q-learning method, demonstrates practical aspects of its application, and develops an effective strategy for training a bot in an artificial environment (a computer video game). The information available to the agent plays the role of the observed variables, while the hidden variables are the long-term estimates of the benefit the agent gains. The reward received by the agent at the next time step is computed as a function of the current state of the environment and the bot's actions. Experimental studies of the method were carried out with the developed software, and the optimal tuning parameters, the learning curves, and the training time of the bot were obtained. The results may be useful for computer systems of various functional purposes; they can be applied in modeling and design, in automatic control and decision-making systems, in robotics, in stock markets, etc.
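A minimal sketch of tabular, model-free Q-learning along the lines described above is shown here, assuming a discrete environment with a reset()/step() interface and a finite action set; this interface, the learning rate alpha, and the discount factor gamma are illustrative assumptions rather than the software developed by the authors.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Model-free Q-learning: the agent learns long-term benefit estimates
    Q(s, a) from its own experience, without a model of the environment."""
    q = defaultdict(float)  # Q-table, all estimates start at 0.0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy choice: exploration vs. exploitation
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)  # reward at the next time step
            best_next = max(q[(next_state, a)] for a in actions)
            # Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```

Learning curves of the kind reported in the article can be obtained by logging the total reward per episode while such a loop runs.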