Non-exponential reward discounting in deep reinforcement learning

dc.contributor.author: Ali, Raja Farrukh
dc.date.accessioned: 2024-04-24T15:59:28Z
dc.date.available: 2024-04-24T15:59:28Z
dc.date.graduationmonth: May
dc.date.published: 2024
dc.description.abstract: The science of sequential decision making, formalized through reinforcement learning (RL), has driven recent technological breakthroughs, from mastering complex games that require strategic thinking to advancing natural language processing. Central to an RL agent's learning is how it treats rewards (the learning signal) and adjusts its policy to maximize cumulative reward. Future rewards are weighed less than immediate ones, and traditional RL methods employ exponential discounting to balance the two. However, studies from neuroscience and psychology have shown that exponential discounting does not accurately reflect the behavior of humans and animals, who instead discount future rewards hyperbolically. This dissertation explores non-exponential discounting, such as hyperbolic discounting, in different facets of deep RL so that agents can mirror the intricate decision-making processes found in humans, and evaluates its impact on agent performance in a variety of settings. First, I revisit hyperbolic discounting and the auxiliary task of learning over multiple horizons in RL agents using off-policy value-based methods, studying its impact on sample efficiency and generalization to new tasks while incorporating architectural and implementation improvements. Second, I introduce a two-parameter discounting model based on generalized hyperbolic discounting in the deep RL setting. With its sensitivity-to-delay parameter, this model enriches temporal decision-making in RL, as demonstrated through empirical evidence. Third, I apply hyperbolic discounting to multi-agent systems, examining its influence on collective decision-making and performance and revealing its potential to improve cooperation among agents.
These contributions highlight the impact of non-exponential discounting on agent performance, linking theory with AI practice, facilitating human-like decision-making, and paving the way for new research directions.
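The discounting schemes contrasted in the abstract can be sketched as weight functions of delay. This is a minimal illustration, not the dissertation's implementation; the parameter names (gamma, k, alpha, beta) and the Loewenstein–Prelec form used for the two-parameter generalized hyperbolic model are assumptions for the example.

```python
def exponential_discount(t, gamma=0.99):
    """Standard RL discounting: weight = gamma ** t."""
    return gamma ** t

def hyperbolic_discount(t, k=0.1):
    """Hyperbolic discounting observed in humans/animals: 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

def generalized_hyperbolic_discount(t, alpha=0.1, beta=1.0):
    """Two-parameter generalized hyperbolic (Loewenstein-Prelec form):
    weight = (1 + alpha*t) ** (-beta / alpha).
    As alpha -> 0 this recovers exponential discounting exp(-beta * t)."""
    return (1.0 + alpha * t) ** (-beta / alpha)

# Relative to exponential discounting, hyperbolic weights fall off faster
# at short delays but retain more weight at long delays (a heavier tail).
for t in (0, 10, 100, 1000):
    print(t,
          exponential_discount(t),
          hyperbolic_discount(t),
          generalized_hyperbolic_discount(t))
```

The extra parameter in the generalized form controls sensitivity to delay, which is what lets the model interpolate between exponential-like and strongly hyperbolic behavior.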
dc.description.advisor: William H. Hsu
dc.description.degree: Doctor of Philosophy
dc.description.department: Department of Computer Science
dc.description.level: Doctoral
dc.identifier.uri: https://hdl.handle.net/2097/44335
dc.language.iso: en_US
dc.subject: Reinforcement Learning
dc.subject: Reward Discounting
dc.subject: Hyperbolic Discounting
dc.title: Non-exponential reward discounting in deep reinforcement learning
dc.type: Dissertation

Files

Original bundle
Name: RajaFarrukhAli2024.pdf
Size: 4.94 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.6 KB
Format: Item-specific license agreed upon to submission