Architectural and algorithmic strategies for generalizable deep reinforcement learning

dc.contributor.author: Nafi, Nasik Muhammad
dc.date.accessioned: 2024-08-09T20:36:42Z
dc.date.available: 2024-08-09T20:36:42Z
dc.date.graduationmonth: August
dc.date.issued: 2024
dc.description.abstract: Generalization in Reinforcement Learning (RL) refers to the ability to transfer a learned policy to previously unseen task variations, specifically, changes in the state and/or reward distributions. In high-dimensional observation settings, current deep RL agents struggle to generalize even to in-distribution but unseen variations, unlike biological agents, which quickly adapt to new conditions by generalizing over previously learned behaviors. To alleviate this problem, this dissertation focuses on efficiently encoding structural assumptions and inductive biases through two facets: the architectural facet, which determines the computational structure and the underlying learned representation, and the algorithmic facet, which defines the optimization process. The dissertation investigates the policy and value networks at a high level of design abstraction, along with the policy and value approximation objectives. The central hypothesis is that robust representation learning and overfitting-free value estimation can facilitate zero-shot generalization to new task instances. First, an attention-based, partially decoupled actor-critic architecture is introduced to enhance generalization by addressing policy-value representation asymmetry while remaining computationally efficient. An additional analysis of sensitivity to the degree of decoupling provides a designer-level model-selection mechanism. Second, to improve resilience to potential observational variations, RL using augmentation-invariant representations is introduced, which learns latent representations through a non-contrastive self-supervised approach with minimal modification to the underlying RL structure. Third, variations in environmental conditions are shown to significantly affect task completion times, or episode lengths, resulting in high variance in value estimation. A novel value approximation technique is therefore introduced that reshapes the value targets to account for possible episode-length variation. Finally, horizon regularization is introduced to estimate an advantage for policy optimization that leverages multiple horizons, avoiding the limitations of the fixed effective horizon imposed by a single discount factor.
dc.description.advisor: William H. Hsu
dc.description.degree: Doctor of Philosophy
dc.description.department: Department of Computer Science
dc.description.level: Doctoral
dc.identifier.uri: https://hdl.handle.net/2097/44459
dc.language.iso: en_US
dc.publisher: Kansas State University
dc.rights: © the author. This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Reinforcement learning
dc.subject: Generalization
dc.subject: Representation learning
dc.subject: Value estimation
dc.subject: Discounting
dc.subject: Policy optimization
dc.title: Architectural and algorithmic strategies for generalizable deep reinforcement learning
dc.type: Dissertation

Files

Original bundle

Name: NasikMuhammadNafi2024.pdf
Size: 13.45 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.6 KB
Description: Item-specific license agreed upon at submission