The term "backpropagation" comes from the field of artificial intelligence; more precisely, it refers to a method for training neural networks. In a neural network, signals are passed between artificial neurons. Information (images, text, etc.) enters through the neurons of the input layer, and signals then flow through the network until they reach the output layer. Each neuron processes a specific piece of information, evaluates it, and transmits a signal to the next neuron. When an artificial neuron passes on a signal, we say the neuron "fires".
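This forward flow of signals can be sketched in a few lines of Python. The network shape and all weights below are made up purely for illustration: a signal enters the input layer, and each subsequent layer computes new signals from the previous layer's outputs.

```python
import math

def sigmoid(x):
    # Squash a signal into the range (0, 1); values near 1 mean the
    # neuron "fires" strongly, values near 0 mean it stays quiet.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    # Pass the signal layer by layer: each neuron forms a weighted sum
    # of the previous layer's outputs plus a bias, then applies sigmoid.
    signal = inputs
    for layer in layers:
        signal = [sigmoid(sum(w * s for w, s in zip(weights, signal)) + bias)
                  for weights, bias in layer]
    return signal

# Hypothetical tiny network: 2 inputs -> 2 hidden neurons -> 1 output neuron.
# Each neuron is a (weights, bias) pair; the numbers are arbitrary.
hidden_layer = [([0.5, -0.3], 0.1), ([0.8, 0.2], -0.2)]
output_layer = [([1.0, -1.0], 0.0)]
result = forward([1.0, 0.0], [hidden_layer, output_layer])
```

The output is a single number between 0 and 1, which the next section interprets as a probability statement.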
For a neural network to recognize a cat in an image, all relevant image information must be processed in such a way that the output layer produces the result "cat". A simple metaphor: if several dozen neurons each make a probability statement about whether the image element they process belongs to a cat (eye, ear, tail, coat pattern, etc.), these statements combine into an overall probability at the end. An artificial neuron makes such a statement by passing on a signal of a certain strength. At the same time, the incoming signals must be correctly "weighted": each incoming signal to an artificial neuron carries its own "weight".
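A single output neuron combining such weighted evidence might look like the following sketch. The signal values and weights are invented for the cat metaphor; only the mechanism (weighted sum, then a squashing function) reflects how an artificial neuron works.

```python
import math

def neuron(signals, weights, bias=0.0):
    # Each incoming signal carries its own weight; the neuron sums the
    # weighted signals and squashes the total into a probability-like
    # value between 0 and 1.
    z = sum(w * s for w, s in zip(weights, signals)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical evidence from upstream neurons: eye, ear, tail, coat pattern.
# A value near 1 means "this image element looks cat-like".
signals = [0.9, 0.8, 0.7, 0.95]
# Made-up weights: the eye detector counts more than the tail detector.
weights = [1.2, 0.8, 0.5, 1.0]
cat_probability = neuron(signals, weights, bias=-2.0)
```

Changing a weight changes how much the corresponding signal contributes to the final statement, which is exactly what training adjusts.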
Training a neural network essentially means adjusting these weights. If the expected result is not achieved, the weights must be corrected. This is done by calculating, for each weight, the "responsibility" it bears for the deviation between the expected result and the actual result. These deltas are then used to adjust the weights.
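The core idea of assigning each weight a share of the error can be sketched for a single linear neuron (a deliberate simplification; real backpropagation applies the same principle through many layers via the chain rule). The inputs, target, and learning rate below are arbitrary example values.

```python
def train_step(weights, inputs, target, lr=0.1):
    # Forward pass: a linear neuron, i.e. just the weighted sum.
    prediction = sum(w * x for w, x in zip(weights, inputs))
    # Deviation between the actual and the expected result.
    error = prediction - target
    # Each weight's "responsibility" for the error is proportional to the
    # signal it carried (error * input); nudge every weight against it.
    return [w - lr * error * x for w, x in zip(weights, inputs)]

# Repeatedly adjusting the weights shrinks the error step by step.
weights = [0.0, 0.0]
for _ in range(100):
    weights = train_step(weights, inputs=[1.0, 2.0], target=1.0)
prediction = sum(w * x for w, x in zip(weights, [1.0, 2.0]))
```

After enough steps the prediction converges to the target: the weights have absorbed their respective shares of the deviation.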