Commit Graph

14 Commits

Author SHA1 Message Date
Jan Löwenstrom e8f4fa06b6 add environment-specific RNG 2020-04-05 12:52:49 +02:00
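Pinning each environment to its own seeded RNG keeps stochastic transitions reproducible and independent of other random consumers (exploration policy, GUI timing). A minimal sketch of the idea; the base class, field, and constructor below are assumptions, not the repository's actual code:

```java
import java.util.Random;

public abstract class Environment {
    // Assumed field: each environment owns a private, seeded RNG instead of
    // sharing a global Random, so replaying a seed reproduces the same run.
    protected final Random rng;

    protected Environment(long seed) {
        this.rng = new Random(seed);
    }
}
```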
Jan Löwenstrom b0ca634b64 add Every-Visit no-jump results 2020-04-02 17:07:15 +02:00
Jan Löwenstrom f2aa7487af Merge remote-tracking branch 'origin/epsilonBehavior' into epsilonBehavior
# Conflicts:
#	src/main/java/core/algo/EpisodicLearning.java
2020-04-02 15:57:32 +02:00
Jan Löwenstrom 6477251545 add Every-Visit Monte-Carlo 2020-04-02 15:56:11 +02:00
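For context, every-visit Monte-Carlo estimates Q(s, a) as the average return over every occurrence of (s, a) in an episode, not just the first. A sketch under assumed names; the returnSum/returnCount fields echo the ones mentioned in the save/load commit further down, but the signatures here are illustrative:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class EveryVisitMonteCarlo<S, A> {
    private final Map<S, Map<A, Double>> qTable = new HashMap<>();
    private final Map<S, Map<A, Double>> returnSum = new HashMap<>();
    private final Map<S, Map<A, Integer>> returnCount = new HashMap<>();

    void updateFromEpisode(List<S> states, List<A> actions, List<Double> rewards, double gamma) {
        double g = 0.0;
        // Walk the episode backwards, accumulating the discounted return G.
        for (int t = states.size() - 1; t >= 0; t--) {
            g = gamma * g + rewards.get(t);
            S s = states.get(t);
            A a = actions.get(t);
            // Every-visit: update on each occurrence of (s, a), not only the first.
            double sum = returnSum.computeIfAbsent(s, k -> new HashMap<>()).merge(a, g, Double::sum);
            int n = returnCount.computeIfAbsent(s, k -> new HashMap<>()).merge(a, 1, Integer::sum);
            qTable.computeIfAbsent(s, k -> new HashMap<>()).put(a, sum / n);
        }
    }
}
```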
Jan Löwenstrom eca0d8db4d create Dino Sampling state 2020-03-26 19:22:50 +01:00
Jan Löwenstrom 4641f50b79 add convergence results for advanced dino jumping 2020-03-05 13:17:54 +01:00
Jan Löwenstrom 9b54b72a25 add epsilon convergence test; unnecessary multithreaded learning will be removed 2020-03-03 02:52:39 +01:00
Jan Löwenstrom 0e4f52a48e first epsilon decaying method 2020-02-27 15:29:15 +01:00
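A typical first decay schedule multiplies epsilon by a constant factor per episode, clamped at a floor. Whether this commit uses exponential or linear decay isn't visible from the message, so the sketch below is one plausible shape with assumed names:

```java
public class EpsilonDecay {
    private final double epsilonStart; // e.g. 1.0: fully exploratory at first
    private final double epsilonMin;   // e.g. 0.05: never stop exploring entirely
    private final double decayRate;    // e.g. 0.999 per episode

    public EpsilonDecay(double epsilonStart, double epsilonMin, double decayRate) {
        this.epsilonStart = epsilonStart;
        this.epsilonMin = epsilonMin;
        this.decayRate = decayRate;
    }

    // Exponential decay toward epsilonMin as training progresses.
    public double epsilonForEpisode(int episode) {
        return Math.max(epsilonMin, epsilonStart * Math.pow(decayRate, episode));
    }
}
```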
Jan Löwenstrom 77898f4e5a add TD algorithms and started adapting to continuous tasks
- add Q-Learning and SARSA (update rules sketched after this entry)
- more config variables
2020-02-17 13:56:55 +01:00
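The two update rules differ only in the bootstrap target: Q-Learning bootstraps on the greedy successor value (off-policy), SARSA on the action actually taken next (on-policy). A sketch with an assumed nested-map Q-table layout:

```java
import java.util.Map;

class TdUpdates<S, A> {
    // Q-Learning (off-policy): target uses the max over next-state action values.
    double qLearning(Map<S, Map<A, Double>> q, S s, A a, double r, S sNext,
                     double alpha, double gamma) {
        double maxNext = q.get(sNext).values().stream()
                .mapToDouble(Double::doubleValue).max().orElse(0.0);
        double old = q.get(s).get(a);
        return old + alpha * (r + gamma * maxNext - old);
    }

    // SARSA (on-policy): target uses the value of the action actually chosen next.
    double sarsa(Map<S, Map<A, Double>> q, S s, A a, double r, S sNext, A aNext,
                 double alpha, double gamma) {
        double old = q.get(s).get(a);
        return old + alpha * (r + gamma * q.get(sNext).get(aNext) - old);
    }
}
```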
Jan Löwenstrom 195722e98f enhance save/load feature and change thread handling
- saving Monte Carlo did not include returnSum and returnCount, so the state would be wrong after loading. The Learning, EpisodicLearning and MonteCarlo classes all override custom save and load methods, calling super() each time and adding the fields that must be restored at runtime (see the sketch after this entry)
- moved generic episodic behaviour from MonteCarlo to the abstract top-level class
- using AtomicInteger for episodesToLearn
- moved learning-thread handling from controller to model. Learning got one extra learning thread.
- add feature to use custom speed and distance for dino world obstacles
2019-12-29 01:12:11 +01:00
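The save/load fix amounts to each class in the hierarchy serializing its own runtime-critical fields after delegating to super, so nothing is silently dropped. A sketch of that layering; method signatures and field types are assumptions based on the message:

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

class Learning {
    void save(ObjectOutputStream out) throws IOException { /* base fields */ }
    void load(ObjectInputStream in) throws IOException, ClassNotFoundException { /* base fields */ }
}

class MonteCarlo extends Learning {
    private Map<String, Double> returnSum = new HashMap<>();
    private Map<String, Integer> returnCount = new HashMap<>();

    @Override
    void save(ObjectOutputStream out) throws IOException {
        super.save(out); // parent writes its share first
        // The fixed bug: these two were previously omitted, so loaded
        // Q-value averages no longer matched their sums and counts.
        out.writeObject(returnSum);
        out.writeObject(returnCount);
    }

    @SuppressWarnings("unchecked")
    @Override
    void load(ObjectInputStream in) throws IOException, ClassNotFoundException {
        super.load(in);
        returnSum = (Map<String, Double>) in.readObject();
        returnCount = (Map<String, Integer>) in.readObject();
    }
}
```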
Jan Löwenstrom b2c3854b3a change RL-Controller initialization process and action space iterable
- no fake builder pattern anymore, moved the needed fields into the constructor
- add serialVersionUID
- action space extends the Iterable interface to simplify looping over all actions without exposing the actual list (see the sketch after this entry)
2019-12-24 19:38:35 +01:00
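Making the action space Iterable lets callers loop over actions with for-each while the backing collection stays encapsulated. A sketch with assumed names:

```java
import java.util.Iterator;
import java.util.List;

class ActionSpace<A> implements Iterable<A> {
    private final List<A> actions;

    ActionSpace(List<A> actions) {
        this.actions = actions;
    }

    @Override
    public Iterator<A> iterator() {
        // Callers can write `for (A a : actionSpace)` without ever
        // receiving (or mutating) the underlying list.
        return actions.iterator();
    }
}
```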
Jan Löwenstrom 5a4e380faf add dino jumping environment, deterministic/reproducible behaviour and save-and-load feature
- add feature to save and load learning progress (Q-Table) and current episode count
- episode end is now purely decided by the environment instead of the Monte Carlo algo capping it at 10 actions
- using LinkedHashMap in all locations to ensure deterministic behaviour (see the note after this entry)
- fixed a major RNG issue so algorithmic behaviour can be reproduced
- clearing rewardHistory to keep only the last 10k rewards
- added Google dino jump environment
2019-12-22 23:33:56 +01:00
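The LinkedHashMap point matters because HashMap iteration order depends on hash codes and capacity, so any greedy tie-break over actions can differ between runs; insertion order is stable. A small illustration (the action names are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DeterministicIteration {
    public static void main(String[] args) {
        Map<String, Double> q = new LinkedHashMap<>();
        q.put("JUMP", 0.5);    // illustrative action names
        q.put("NOTHING", 0.5);
        // With tied values, max() keeps the first element in encounter
        // order, and LinkedHashMap makes that order the insertion order,
        // so the greedy choice is identical on every run.
        String greedy = q.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();
        System.out.println(greedy); // always JUMP
    }
}
```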
Jan Löwenstrom b1246f62cc add features to GUI to control learning and move learning listener interface to controller
- add metric to display episodes per second
- the view no longer implements the learning listener; the controller does. The controller drives all view actions based on learning events and reacts to view events via viewListener
- add executor service for the learning task (see the sketch after this entry)
- using instanceof to distinguish between episodic learning and TD learning
- add feature to trigger more episodes
- add checkboxes for smoothing graph, displaying last 100 rewards only and drawing environment
- remove history panel from antworld gui
2019-12-22 17:06:54 +01:00
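Running learning on an ExecutorService keeps the Swing event thread free, and the instanceof check separates the episode-driven path from continuous TD learning. A sketch of that wiring; the interfaces below only approximate the repo's actual types:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

interface Learning { void learn(); }          // assumed shape
interface EpisodicLearning extends Learning {
    void learn(int episodes);                 // assumed shape
}

class RLController {
    private final ExecutorService learningExecutor = Executors.newSingleThreadExecutor();

    void startLearning(Learning learning, int episodes) {
        if (learning instanceof EpisodicLearning) {
            // Episodic learners run a requested number of episodes off the EDT.
            EpisodicLearning episodic = (EpisodicLearning) learning;
            learningExecutor.submit(() -> episodic.learn(episodes));
        } else {
            // Continuous (TD-style) learners run until stopped externally.
            learningExecutor.submit(learning::learn);
        }
    }
}
```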
Jan Löwenstrom 34e7e3fdd6 distinguish learning and episodic learning, enable fast-learning without drawing every step to reduce lag
- repainting every step with no time delay will certainly freeze the app, so "fast-learning" disables it, refreshing only the current-episode label (see the sketch after this entry)
- added new abstract class "EpisodicLearning". Maybe just use an interface instead?! Important because TD learning is not episodic and needs another way to represent the rewards received (maybe a mean of the last X rewards or something)
- opening two JFrames, one with learning info and one with the environment
2019-12-21 00:23:09 +01:00
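The fast-learning trade-off in code: skip the expensive environment repaint per step, but still refresh the cheap episode counter. A sketch; the view fields and method names are assumptions:

```java
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

class LearningView {
    private final JPanel environmentPanel = new JPanel();
    private final JLabel episodeLabel = new JLabel();
    private volatile boolean fastLearning;

    void onStep() {
        if (!fastLearning) {
            environmentPanel.repaint(); // full redraw only when visualizing
        }
    }

    void onEpisodeEnd(int episode) {
        // Cheap label update keeps progress visible even in fast mode.
        SwingUtilities.invokeLater(() -> episodeLabel.setText("Episode: " + episode));
    }
}
```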