Author: Nafi, Nasik Muhammad
Date accessioned: 2024-08-09
Date available: 2024-08-09
Date issued: 2024
URI: https://hdl.handle.net/2097/44459

Abstract:
Generalization in reinforcement learning (RL) refers to the ability to transfer a learned policy to previously unseen task variations, specifically shifts in the state and/or reward distributions. In high-dimensional observation settings, current deep RL agents struggle to generalize even over in-distribution but unseen variations, unlike biological agents, which can quickly adapt to new conditions by generalizing over previously learned behaviors. To alleviate this problem, this dissertation focuses on efficiently encoding structural assumptions and inductive bias through two facets: the architectural facet, which determines the computation structure and the underlying learned representation, and the algorithmic facet, which defines the optimization process. The dissertation investigates the policy and value networks at a high level of design abstraction, as well as the policy and value approximation objectives. The central hypothesis is that robust representation learning and value estimation free of overfitting can facilitate zero-shot generalization to new task instances. First, an attention-based, partially decoupled actor-critic architecture is introduced to enhance generalization by addressing policy-value representation asymmetry while remaining computationally efficient. Additional analysis of sensitivity to the degree of decoupling provides a designer-level model selection mechanism. Second, to improve resilience to potential observational variations, RL using augmentation-invariant representations is introduced, learning latent representations through a non-contrastive self-supervised approach with minimal modifications to the underlying RL structure. Third, variations in environmental conditions are shown to significantly impact task completion times or episode lengths, resulting in high variance in value estimation.
A novel value approximation technique is therefore introduced that reshapes the value targets to account for possible episode-length variation. Finally, horizon regularization is introduced to estimate an advantage for policy optimization that leverages multiple horizons, avoiding the limitations of the fixed effective horizon imposed by a single discount factor.

Language: en-US
Rights: © the author. This item is protected by copyright and/or related rights. You are free to use this item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
Rights URI: http://rightsstatements.org/vocab/InC/1.0/
Subjects: Reinforcement learning; Generalization; Representation learning; Value estimation; Discounting; Policy optimization
Title: Architectural and algorithmic strategies for generalizable deep reinforcement learning
Type: Dissertation
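The abstract's final point, that a single discount factor imposes a fixed effective horizon, can be made concrete with a small sketch. This is a generic illustration of the underlying idea, not the dissertation's actual formulation: `effective_horizon`, `discounted_return`, and the `multi_horizon_return` blend over several discount factors are all hypothetical helpers introduced here for exposition.

```python
import numpy as np

# A single discount factor gamma implies an effective planning horizon of
# roughly 1 / (1 - gamma): rewards beyond that horizon contribute little
# to the discounted return, so choosing one gamma commits the agent to
# one timescale.
def effective_horizon(gamma: float) -> float:
    return 1.0 / (1.0 - gamma)

def discounted_return(rewards, gamma: float) -> float:
    """G = sum_t gamma^t * r_t for one episode's reward sequence."""
    discounts = np.power(gamma, np.arange(len(rewards)))
    return float(np.dot(discounts, rewards))

# Hypothetical multi-horizon variant (illustrative only): average returns
# computed under several discount factors instead of committing the
# estimate to a single effective horizon.
def multi_horizon_return(rewards, gammas=(0.9, 0.99, 0.999)) -> float:
    return float(np.mean([discounted_return(rewards, g) for g in gammas]))
```

For a constant reward stream, `gamma = 0.99` yields an effective horizon of about 100 steps, so two episodes that differ only beyond step 100 receive nearly identical single-gamma returns; blending several gammas lets the estimate reflect both short- and long-horizon behavior.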