Reinforcement learning is based on the concept of an intelligent agent. An agent interacts with its environment by observing some state and then taking an action. As the agent takes actions to move between states, it receives feedback about the goodness of its actions in the form of a reward signal. This reward signal is the reinforcement in reinforcement learning: a feedback loop the agent can use to learn how good its choices are. Of course, rewards can be both positive and negative (punishments).
Imagine a self-driving car as the agent we are building. As it drives down the road, it receives a constant stream of reward signals for its actions. Staying within the lanes would likely lead to a positive reward, while running over pedestrians would result in a very negative reward for the agent.
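To make the feedback loop concrete, here is a minimal sketch in Python. The `Environment` and `Agent` classes are hypothetical toy stand-ins (not any real RL library): the environment returns a new state and a reward for each action, and the agent observes the state, picks an action, and receives the reward as its learning signal.

```python
import random

class Environment:
    """A toy environment: the agent moves along a line and tries to reach position 5."""

    def __init__(self):
        self.state = 0

    def step(self, action):
        # action is -1 (move left) or +1 (move right)
        self.state += action
        # Positive reward for reaching the goal, a small penalty otherwise
        reward = 1.0 if self.state == 5 else -0.1
        done = self.state == 5
        return self.state, reward, done

class Agent:
    """A placeholder agent that acts randomly and ignores the reward."""

    def act(self, state):
        return random.choice([-1, 1])

    def learn(self, state, action, reward):
        # A real agent would update its behavior here using the reward signal
        pass

env = Environment()
agent = Agent()
state = env.state

# The reinforcement learning loop: observe state, act, receive reward, learn
for _ in range(100):
    action = agent.act(state)
    next_state, reward, done = env.step(action)
    agent.learn(state, action, reward)
    state = next_state
    if done:
        break
```

The agent here never actually learns anything; the point is only the shape of the interaction: state in, action out, reward back. Everything that follows in reinforcement learning is about what to put inside `learn` so that the agent's future actions earn more reward.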