Lossy video compression is widely used for transmitting and storing video data. Despite the availability of advanced learned compression approaches, the de facto standard remains standardized video codecs such as H.264 or H.265. These codecs must adapt their compression strength to dynamic network bandwidth conditions; rate control modules serve this purpose, maximizing compression while meeting bandwidth constraints and minimizing video distortion. However, existing video codecs and rate control modules aim solely at minimizing video distortion and ignore the impact of compression on downstream deep vision models. In this paper, we introduce an end-to-end learnable deep video codec control that accounts for both bandwidth constraints and downstream vision performance while remaining standard-compliant. We demonstrate the effectiveness of our approach on two common vision tasks (semantic segmentation and optical flow estimation) and on two different datasets. Our deep codec control preserves downstream performance better than the traditional 2-pass average bit rate control while still meeting dynamic bandwidth constraints and remaining standard-compliant.