Autonomous Racing
f1tenth Project Group of Technical University Dortmund, Germany
train_q_learning.QLearningTrainingNode Class Reference
Inheritance diagram for train_q_learning.QLearningTrainingNode:
Collaboration diagram for train_q_learning.QLearningTrainingNode:

Public Member Functions

def __init__ (self)
 
def replay (self)
 
def get_epsilon_greedy_threshold (self)
 
def select_action (self, state)
 
def get_reward (self)
 
def get_episode_summary (self)
 
def on_complete_step (self, state, action, reward, next_state)
 
- Public Member Functions inherited from training_node.TrainingNode
def __init__ (self, policy, actions, laser_sample_count, max_episode_length, learn_rate)
 
def on_crash (self, _)
 
def get_episode_summary (self)
 
def on_complete_episode (self)
 
def on_receive_laser_scan (self, message)
 
def on_complete_step (self, state, action, reward, next_state)
 
def check_car_orientation (self)
 
def on_model_state_callback (self, message)
 
- Public Member Functions inherited from reinforcement_learning_node.ReinforcementLearningNode
def __init__ (self, actions, laser_sample_count)
 
def perform_action (self, action_index)
 
def convert_laser_message_to_tensor (self, message, use_device=True)
 
def on_receive_laser_scan (self, message)
 

Public Attributes

 memory
 
 optimization_step_count
 
 net_output_debug_string
 
- Public Attributes inherited from training_node.TrainingNode
 policy
 
 max_episode_length
 
 episode_count
 
 episode_length
 
 total_step_count
 
 cumulative_reward
 
 is_terminal_step
 
 net_output_debug_string
 
 episode_length_history
 
 cumulative_reward_history
 
 state
 
 action
 
 car_position
 
 car_orientation
 
 drive_forward
 
 steps_with_wrong_orientation
 
 episode_start_time_real
 
 episode_start_time_sim
 
 optimizer
 
 episode_result_publisher
 
- Public Attributes inherited from reinforcement_learning_node.ReinforcementLearningNode
 scan_indices
 
 laser_sample_count
 
 actions
 
 drive_parameters_publisher
 

Detailed Description

ROS node to train the Q-Learning model

Definition at line 17 of file train_q_learning.py.
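The reference itself does not show how the node is started. A minimal launch sketch, assuming a conventional rospy entry point (the node name and the standalone script are assumptions, not taken from the package):

    #!/usr/bin/env python
    # Hypothetical entry point; the real script may register the node differently.
    import rospy
    from train_q_learning import QLearningTrainingNode

    if __name__ == '__main__':
        rospy.init_node('q_learning_training')  # node name is an assumption
        node = QLearningTrainingNode()           # sets up replay memory and training state
        rospy.spin()                             # learning happens inside subscriber callbacks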

Constructor & Destructor Documentation

def train_q_learning.QLearningTrainingNode.__init__ (self)

Definition at line 21 of file train_q_learning.py.
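The constructor body is not reproduced in this reference. A sketch of the attributes it plausibly initializes, based on the Member Data section below (the buffer capacity and tuple layout are assumptions):

    # Illustrative only; capacity and layout are not taken from train_q_learning.py.
    from collections import deque

    MEMORY_SIZE = 5000                   # hypothetical replay-buffer capacity

    memory = deque(maxlen=MEMORY_SIZE)   # (state, action, reward, next_state) transitions
    optimization_step_count = 0          # completed replay() optimization steps
    net_output_debug_string = ''         # human-readable dump of the latest network output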

Member Function Documentation

def train_q_learning.QLearningTrainingNode.get_episode_summary (self)

Definition at line 93 of file train_q_learning.py.


def train_q_learning.QLearningTrainingNode.get_epsilon_greedy_threshold (self)

Definition at line 66 of file train_q_learning.py.

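A common way to compute such a threshold is an exponential decay over the total number of training steps; the constants below are illustrative assumptions, not the values used by the node:

    import math

    EPS_START = 1.0      # assumed initial exploration probability
    EPS_END = 0.1        # assumed final exploration probability
    EPS_DECAY = 10000.0  # assumed decay time constant, in steps

    def get_epsilon_greedy_threshold(total_step_count):
        # Exploration probability decays exponentially as training progresses.
        return EPS_END + (EPS_START - EPS_END) * math.exp(-total_step_count / EPS_DECAY)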

def train_q_learning.QLearningTrainingNode.get_reward (self)

Definition at line 82 of file train_q_learning.py.

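The reward criteria are not documented in this reference. Purely as an illustration of the kind of signal such a node might use (not the project's actual reward function), one could reward speed in the intended driving direction and penalize driving the wrong way:

    def example_reward(speed, drive_forward):
        # Hypothetical shaping: positive reward proportional to speed while the car
        # drives in the intended direction, negative otherwise.
        return speed if drive_forward else -speed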

def train_q_learning.QLearningTrainingNode.on_complete_step (self, state, action, reward, next_state)

Definition at line 100 of file train_q_learning.py.

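In Q-learning training nodes of this kind, the step callback typically appends the observed transition to the replay memory and then triggers an optimization step. A standalone sketch (the buffer handling is an assumption; compare replay() below):

    from collections import deque

    memory = deque(maxlen=5000)  # hypothetical replay buffer

    def on_complete_step(state, action, reward, next_state):
        # Store the transition so a later replay() call can sample it for optimization.
        memory.append((state, action, reward, next_state))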

def train_q_learning.QLearningTrainingNode.replay (self)

Definition at line 36 of file train_q_learning.py.

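The optimization step itself is not shown in this reference. A minimal sketch of one DQN-style replay update, assuming a PyTorch policy network and a mean-squared-error loss; the batch size, discount factor and the policy/optimizer objects are placeholders, not taken from the project:

    import random
    import torch
    import torch.nn.functional as F

    BATCH_SIZE = 128  # assumed batch size
    GAMMA = 0.99      # assumed discount factor

    def replay(memory, policy, optimizer):
        # Wait until enough transitions have been collected.
        if len(memory) < BATCH_SIZE:
            return

        batch = random.sample(memory, BATCH_SIZE)
        states, actions, rewards, next_states = zip(*batch)

        states = torch.stack(states)
        next_states = torch.stack(next_states)
        actions = torch.tensor(actions, dtype=torch.int64).unsqueeze(1)
        rewards = torch.tensor(rewards, dtype=torch.float32)

        # Q(s, a) for the actions that were actually taken.
        q_values = policy(states).gather(1, actions).squeeze(1)
        # Bootstrapped target: r + gamma * max_a' Q(s', a').
        with torch.no_grad():
            targets = rewards + GAMMA * policy(next_states).max(1)[0]

        loss = F.mse_loss(q_values, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()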

def train_q_learning.QLearningTrainingNode.select_action (self, state)

Definition at line 70 of file train_q_learning.py.

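Epsilon-greedy selection typically compares a uniform random sample against the threshold above: explore with a random action index, otherwise exploit the policy network's highest-scoring action. A sketch under the same PyTorch assumption as the replay() example:

    import random
    import torch

    ACTION_COUNT = 5  # assumed number of discrete driving actions

    def select_action(state, policy, epsilon):
        # Explore: pick a uniformly random action with probability epsilon.
        if random.random() < epsilon:
            return random.randrange(ACTION_COUNT)
        # Exploit: pick the action with the highest predicted Q-value.
        with torch.no_grad():
            return int(policy(state.unsqueeze(0)).argmax(dim=1).item())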

Member Data Documentation

train_q_learning.QLearningTrainingNode.memory

Definition at line 30 of file train_q_learning.py.

train_q_learning.QLearningTrainingNode.net_output_debug_string

Definition at line 78 of file train_q_learning.py.

train_q_learning.QLearningTrainingNode.optimization_step_count

Definition at line 31 of file train_q_learning.py.


The documentation for this class was generated from the following file:
train_q_learning.py