People gain knowledge of the world and receive feedback through interaction with the world (environment).
Image source: https://storage.googleapis.com/deepmind-media/UCL%20x%20DeepMind%202021/Lecture%201%20-%20introduction.pdf
Following this paradigm, people have proposed methods for reinforcement learning.
Foundation#
First, let's introduce several concepts in reinforcement learning.
- Agent: the decision-maker (decision-making unit).
- Environment: everything the agent interacts with; the agent interacts with the environment to obtain feedback.
The word "feedback" is very interesting and has rich connotations. Specifically, every decision of the agent, or every action, will incur a cost.
For example: It will cause a change in the state of the environment. At this time, the observation of the agent to the environment will change.
Based on the obtained environmental observation, we construct the agent state (Agent State).
For Fully Observable Environments, we can consider Agent State = Observation = Environment State. In general, unless otherwise specified, all environments here are assumed to be fully observable.
The cost incurred by taking an action is not limited to the change in the environment state; actions themselves can also be good or bad. We measure the quality of a decision/action with a reward.
If an action has a positive reward, it means that at least in the short term, this action is good. Conversely, if the reward is negative, it means that the action is not wise.
The following figure intuitively shows the principle of reinforcement learning.
So, what exactly is reinforcement learning?
The goal of reinforcement learning is to optimize the agent's action strategy (policy) through continuous interaction with the environment: by selecting better actions, the agent maximizes the cumulative sum of the rewards received at each step.
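Anticipating the notation introduced below ($\pi$ for the policy, $R_{t+1}$ for the per-step reward, $\gamma$ for a discount factor), this objective can be written roughly as

$$
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} R_{t+1} \right].
$$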
Modeling#
We generally use a Markov Decision Process (MDP) to model the reinforcement learning problem.
Before proceeding, please read The relationship between Markov and Reinforcement Learning (RL) - Zhihu
Here is a schematic diagram of a Markov process.
Image source: https://towardsdatascience.com/markov-chain-analysis-and-simulation-using-python-4507cee0b06e
There are a few points to note:
- A Markov process (Markov Process) is described by a two-tuple $\langle S, P \rangle$. A Markov decision process (Markov Decision Process) is described by a quadruple $\langle S, A, P, R \rangle$. Sometimes a discount factor $\gamma$ for the reward is also included, giving $\langle S, A, P, R, \gamma \rangle$; this will be mentioned later.
- Due to randomness, given the initial state and the final state, the Markov chain (Markov Chain) realized from a Markov process (Markov Process) is not unique.
Important Concepts#
During the interaction between the agent and the environment, at a certain time $t$:
- Based on the observation $O_t$ of the environment and the received reward $R_t$, the agent constructs the agent state $S_t$ (usually, unless otherwise specified, $S_t = O_t$) and decides which action $A_t$ to take (and submits it to the environment).
- The environment receives the action $A_t$ submitted by the agent and bears the "cost" brought by $A_t$; it then provides the updated observation $O_{t+1}$ and reward $R_{t+1}$ to the agent as feedback.
This process continues.
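As a minimal sketch of this interaction loop (in Python, assuming a hypothetical `env` object with Gym-style `reset`/`step` methods and an `actions` attribute; the random action choice is just a placeholder policy):

```python
import random

def run_episode(env, max_steps=100):
    """One agent-environment interaction loop (sketch)."""
    obs = env.reset()                         # initial observation O_0
    total_reward = 0.0
    for t in range(max_steps):
        action = random.choice(env.actions)   # the agent picks A_t (placeholder: random policy)
        obs, reward, done = env.step(action)  # environment returns O_{t+1}, R_{t+1}, and a done flag
        total_reward += reward                # accumulate the feedback
        if done:                              # the episode ends when the environment says so
            break
    return total_reward
```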
The interaction between the agent and the environment produces an interaction trajectory (history), which we denote as $H_t = O_1, R_1, A_1, \ldots, A_{t-1}, O_t, R_t$. This interaction trajectory stores the observation, action, and reward of every interaction.
Based on this sequence $H_t$, we can construct the agent state $S_t = f(H_t)$.
When the environment satisfies the Fully Observable property, we consider Agent State $S_t = O_t$. Therefore, we can replace every $O$ symbol in the equations with the $S$ symbol. (Many materials also use the $S$ symbol directly.)
In that case there is no need to use $H_t$ to construct $S_t$; we can directly use $O_t$ as $S_t$.
We determine what action to take based on the state, and we abstract this into a policy function $\pi$. The policy function takes the state as input and outputs the corresponding action (or a distribution over actions), denoted $\pi(a \mid s)$. Sometimes it is abbreviated as $\pi(s)$.
We assume that the state space $S$ is discrete, containing only $|S|$ different states. Similarly, we assume that the action space $A$ is discrete, containing only $|A|$ different actions.
In this setting, how should we understand the policy function $\pi$?
The system is currently in some state $s$, where $s \in S$.
Conditioned on the state $s$, the policy function $\pi$ decides which action $a \in A$ should be taken.
The policy function can be understood as a kind of composite function. Apart from purely random policies, we can generally view the policy function as consisting of two parts: action evaluation and action selection.
Action evaluation is generally done using Q-values.
Action selection is generally done using argmax or $\epsilon$-greedy.
We will discuss this in more detail later.
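As a small preview, here is a minimal Python sketch of this decomposition, assuming a hypothetical table of Q-values is already available: evaluation looks up $Q(s, a)$, and selection is either greedy (argmax) or $\epsilon$-greedy.

```python
import random

def greedy(Q, s, actions):
    """Action selection: argmax over the Q-values of state s."""
    return max(actions, key=lambda a: Q[(s, a)])

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    """With probability epsilon pick a random action (exploration), otherwise act greedily."""
    if random.random() < epsilon:
        return random.choice(actions)
    return greedy(Q, s, actions)

# Hypothetical Q-values for a single state "s0" and two actions.
Q = {("s0", "stay"): 0.2, ("s0", "move"): 1.3}
print(greedy(Q, "s0", ["stay", "move"]))          # -> "move"
print(epsilon_greedy(Q, "s0", ["stay", "move"]))  # usually "move", occasionally random
```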
Through interaction with the environment, the agent obtains rewards. As mentioned above, the goal of reinforcement learning is to maximize the total reward by selecting actions through interaction (i.e., by finding a better policy).
We define the total reward (also known as the return, or future reward) as

$$
G_t = R_{t+1} + \gamma R_{t+2} + \gamma^{2} R_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}.
$$
A few points to note:
- The total reward, or sum of rewards, arguably should start from $R_1$, so why does the definition start from $R_{t+1}$? Because the values of $R_1, R_2, \ldots, R_t$ have already been received; they are fixed constants and cannot be optimized. Therefore, we focus on the future rewards.
- The $\gamma$ in the equation is the discount factor mentioned above. Usually, the discount factor is restricted to the range $\gamma \in [0, 1]$.
If $\gamma$ is small (close to 0), then as $k$ increases, $\gamma^{k}$ becomes smaller and smaller, i.e., the weight of $R_{t+k+1}$ becomes smaller and smaller. This means we are more inclined to consider short-term effects rather than long-term ones.
If $\gamma$ is close to 1, long-term effects are taken into account more.
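A short sketch of how the discounted return $G_t$ would be computed from a made-up sequence of future rewards $R_{t+1}, R_{t+2}, \ldots$ (the numbers below are purely illustrative):

```python
def discounted_return(rewards, gamma):
    """G_t = sum_k gamma^k * R_{t+k+1}, for a finite list of future rewards."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

future_rewards = [1.0, 0.0, 2.0, 1.0]          # made-up R_{t+1}, R_{t+2}, R_{t+3}, R_{t+4}
print(discounted_return(future_rewards, 0.9))  # 1 + 0 + 0.81*2 + 0.729*1 = 3.349
print(discounted_return(future_rewards, 0.1))  # ~1.021: almost only the immediate reward counts
```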
We define the state value function $V^{\pi}(s)$ (also known as the value function, or simply the value) as the expected cumulative return obtained by starting from state $s$ and following policy $\pi$. The state value function is used to measure how "good" a state is, and is defined as follows:

$$
\begin{aligned}
V^{\pi}(s) &= \mathbb{E}_{\pi}\left[ G_t \mid S_t = s \right] \\
&= \mathbb{E}_{\pi}\left[ R_{t+1} + \gamma G_{t+1} \mid S_t = s \right] \\
&= \mathbb{E}_{\pi}\left[ R_{t+1} + \gamma V^{\pi}(S_{t+1}) \mid S_t = s \right] \\
&= \sum_{a} \pi(a \mid s) \left( R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V^{\pi}(s') \right)
\end{aligned}
$$
The first line of the equation is the definition of the state value function.
The second and third lines of the equation are the recursive form obtained by expanding the return according to the definition, or the form of the Bellman Equation.
The fourth line expands the Bellman Equation; we need to pay attention to the $P(s' \mid s, a)$ and $R(s, a)$ terms in the equation.
$P(s' \mid s, a)$ is the state transition probability, which is specified as part of the Markov Decision Process. Specifically:
When we are in state $s$ and choose some $a$ as the action according to the policy function $\pi$, this causes a change in the observation of the environment, so the state changes as well.
However, the effect caused by action $a$ is not deterministic: we cannot guarantee that state $s$ will always transition to one fixed state $s'$. In other words, $s'$ may equal $s_1$, $s_2$, $s_3$, or some other $s_i$, so the transition corresponds to a probability distribution $P(s' \mid s, a)$. Similarly, the reward corresponds to $R(s, a)$.
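To make the fourth line of the equation concrete, here is a sketch of iterative policy evaluation on a made-up two-state MDP (all names, probabilities, and rewards below are illustrative assumptions, not taken from the text):

```python
# A made-up 2-state, 2-action MDP, purely for illustration.
states, actions = ["s0", "s1"], ["stay", "move"]
P = {  # state transition probabilities P(s' | s, a)
    ("s0", "stay"): {"s0": 0.9, "s1": 0.1},
    ("s0", "move"): {"s0": 0.2, "s1": 0.8},
    ("s1", "stay"): {"s0": 0.1, "s1": 0.9},
    ("s1", "move"): {"s0": 0.7, "s1": 0.3},
}
R = {  # expected immediate rewards R(s, a)
    ("s0", "stay"): 0.0, ("s0", "move"): 1.0,
    ("s1", "stay"): 0.5, ("s1", "move"): -1.0,
}
pi = {  # a fixed stochastic policy pi(a | s)
    "s0": {"stay": 0.5, "move": 0.5},
    "s1": {"stay": 0.8, "move": 0.2},
}
gamma = 0.9

# Iterative policy evaluation: repeatedly apply the Bellman expectation equation
#   V(s) <- sum_a pi(a|s) * ( R(s,a) + gamma * sum_{s'} P(s'|s,a) * V(s') )
V = {s: 0.0 for s in states}
for _ in range(1000):
    V = {
        s: sum(
            pi[s][a] * (R[(s, a)] + gamma * sum(P[(s, a)][s2] * V[s2] for s2 in states))
            for a in actions
        )
        for s in states
    }
print(V)  # approximate V^pi(s) for each state
```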
We define the action value function $Q^{\pi}(s, a)$ as the expected cumulative return obtained by starting from state $s$, taking action $a$, and then following policy $\pi$: $Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[ G_t \mid S_t = s, A_t = a \right]$.
Its Bellman form is similar to that of $V^{\pi}(s)$, so we do not expand it here.
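For reference, the standard relations between $Q^{\pi}$ and $V^{\pi}$ (using the same notation as above) are

$$
\begin{aligned}
Q^{\pi}(s, a) &= R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V^{\pi}(s'), \\
V^{\pi}(s) &= \sum_{a} \pi(a \mid s) \, Q^{\pi}(s, a).
\end{aligned}
$$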
We define the advantage function as the difference between Q and V: $A^{\pi}(s, a) = Q^{\pi}(s, a) - V^{\pi}(s)$.
It represents how much better or worse taking action $a$ in state $s$ is compared to following the current policy $\pi$. The main purpose of the advantage function is to help optimize the policy: it lets the agent see more clearly which actions are advantageous in the current state.
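A quick numeric illustration (with made-up values): if $Q^{\pi}(s, a) = 5$ and $V^{\pi}(s) = 3$, then $A^{\pi}(s, a) = 5 - 3 = 2 > 0$, so action $a$ is better than what policy $\pi$ achieves on average from state $s$; a negative advantage would mean the action is worse than average.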
How to understand the advantage function? - Zhihu
Model-based vs. Model-free
The so-called model consists of the state transition probability $P(s' \mid s, a)$ and the reward function $R(s, a)$.
If the model is known, the setting is model-based, and we can plan under complete information; in other words, we can use dynamic programming algorithms to learn the desired policy.
Knowing the model means that once the state $s$ and action $a$ are determined, we know the state transition probability $P(s' \mid s, a)$ and the corresponding reward $R(s, a)$.
On the contrary, if learning does not depend on the model, it is called model-free. For example, the Policy Gradient method is model-free.
We will discuss this in more detail later.
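As a rough sketch of what "planning with a known model" looks like, the following value iteration example applies the Bellman optimality backup directly using $P$ and $R$ (a made-up two-state, two-action MDP; all numbers are illustrative assumptions). A model-free method would instead have to estimate these effects from sampled interactions.

```python
# A made-up 2-state, 2-action MDP with a known model (indices: states 0..1, actions 0..1).
P = [  # P[s][a][s'] = transition probability
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.1, 0.9], [0.7, 0.3]],
]
R = [  # R[s][a] = expected immediate reward
    [0.0, 1.0],
    [0.5, -1.0],
]
gamma = 0.9

# Value iteration (dynamic programming with the known model):
#   V(s) <- max_a ( R[s][a] + gamma * sum_{s'} P[s][a][s'] * V[s'] )
V = [0.0, 0.0]
for _ in range(1000):
    V = [
        max(R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(2)) for a in range(2))
        for s in range(2)
    ]

# Greedy policy extraction with respect to the converged values.
pi = [
    max(range(2), key=lambda a: R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(2)))
    for s in range(2)
]
print(V, pi)
```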
On-Policy vs. Off-Policy
On-Policy means that the behavior policy during episode sampling and the target policy during policy optimization are the same.
Off-Policy means that the two policies are different.
We will discuss this in more detail later.