We need the tuple (s, a, r, s', done) for updating the DQN, where s and a are respectively the state and action at time t; r is the reward received; s' is the new state at time t+1; and done is a Boolean value that is False while the episode is still in progress and True once it has ended, also referred to as the terminal flag in the literature. This done (terminal) variable ensures that, in the Bellman update, the last state of an episode is handled properly, since we cannot bootstrap with r + γ max_a' Q(s', a') for a terminal state; the target there is just r. One problem in DQNs is that consecutive (s, a, r, s', done) tuples drawn from the same episode are strongly correlated, so training on them directly can overfit.
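As a minimal sketch of how the done flag enters the Bellman target, the snippet below computes the target for a batch of transitions; the names q_net, gamma, and the use of PyTorch are illustrative assumptions, not the exact implementation described here.

```python
import torch

def bellman_targets(rewards, next_states, dones, q_net, gamma=0.99):
    # Hypothetical helper: q_net maps a batch of states to per-action Q-values.
    with torch.no_grad():
        max_next_q = q_net(next_states).max(dim=1).values
    # Multiplying by (1 - done) zeroes the bootstrap term for terminal states,
    # so the target reduces to just r when the episode has ended.
    return rewards + gamma * max_next_q * (1.0 - dones)
```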
To mitigate this issue, a replay buffer is used: each experience tuple (s, a, r, s', done) is stored as it is collected, and a mini-batch of such experiences is sampled uniformly at random from the buffer to update the network, which breaks the correlation between consecutive samples.
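A minimal replay buffer sketch is shown below; the class name, capacity, and batch size are illustrative choices, assuming a fixed-size buffer that discards the oldest experiences once full.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores (s, a, r, s', done) tuples and samples mini-batches at random."""

    def __init__(self, capacity=100_000):
        # Oldest experiences are dropped automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def store(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size=32):
        # Uniform random sampling breaks the temporal correlation
        # between consecutive transitions from the same episode.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```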