The increasing interest in autonomous navigation research has led to the introduction of a Driverless Vehicle category in Formula Student events. This study focuses on using Deep Reinforcement Learning for end-to-end control of an autonomous race car in these competitions. Two state-of-the-art RL algorithms were trained in simulation on tracks resembling actual race layouts, targeting a Turtlebot2 platform. The results demonstrate that our approach can effectively learn to race in simulation and then transfer to a real-world racetrack on the physical platform. Additionally, we discuss the limitations of our approach and suggest future directions for applying RL to full-scale autonomous Formula Student racing.