The aim of Offline Reinforcement Learning (RL) is to derive policies from static trajectory data without real-time environment interactions. Recent studies have explored using the transformer architecture to predict actions based on prior context, treating offline RL as a sequence modeling task. However, this single-task learning approach may undermine the transformer model's attention mechanism, which should ideally allocate varying attention weights across different tokens in the input context for optimal prediction. To address this limitation, we propose reformulating offline RL as a multi-objective optimization problem, extending the prediction targets to include states and returns. We also identify a potential flaw in the trajectory representation used for sequence modeling, which can lead to inaccuracies in modeling the state and return distributions. This flaw arises from the non-smoothness of the action distribution within the trajectory, which is dictated by the behavioral policy. To mitigate this issue, we introduce action-space regions into the trajectory representation. Our experiments on D4RL benchmark locomotion tasks demonstrate that our approach allows for more effective use of the attention mechanism in the transformer model, resulting in performance that matches or surpasses current state-of-the-art methods.
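
As an illustrative sketch only (the abstract does not specify the training objective), the multi-objective reformulation can be read as minimizing a weighted sum of per-timestep prediction losses over actions, next states, and next returns-to-go; the weights $\lambda_a, \lambda_s, \lambda_R$ and the generic loss $\ell$ below are hypothetical placeholders, not quantities defined in the text:
\[
\mathcal{L}(\theta) \;=\; \mathbb{E}_{\tau \sim \mathcal{D}} \Big[ \lambda_a\, \ell\big(\hat{a}_t, a_t\big) \;+\; \lambda_s\, \ell\big(\hat{s}_{t+1}, s_{t+1}\big) \;+\; \lambda_R\, \ell\big(\hat{R}_{t+1}, R_{t+1}\big) \Big],
\]
where $\hat{a}_t$, $\hat{s}_{t+1}$, and $\hat{R}_{t+1}$ denote the transformer's predictions given the prior context of the trajectory $\tau$ drawn from the offline dataset $\mathcal{D}$.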