These are statistical names. In simple terms, we start with a pair of datasets. We suspect that one dataset may be related to the other, so we create a scatterplot, with values from one dataset measured along one axis and values from the other dataset along the other. Each data pair creates a dot on the graph, and we end up with a scattering of dots. Just looking at the way the dots are placed may suggest that the datasets are related, especially if the scattering looks a bit like a rough line, like bees in a swarm or ants in a marching column. You might even be able to see how to draw a line that closely fits the dots. The correlation coefficient (the r in r²) is an indicator of how closely the values in one dataset track the values in the other. This coefficient is derived mathematically (instead of just guessed at) by carrying out arithmetic on the data elements in each set. The closer the coefficient is to 1 or -1, the stronger the linear relationship between the datasets; note, though, that a strong correlation suggests a relationship but does not by itself prove cause and effect. If the coefficient is close to zero, the datasets probably have no linear relationship.
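As a sketch of the arithmetic behind the coefficient, here is a small hand-rolled Pearson r calculation in Python (the height and weight numbers are invented for illustration; a real analysis would more likely use a statistics library):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length datasets."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Sum of products of deviations from the means...
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # ...scaled by the spread of each dataset on its own.
    spread_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    spread_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (spread_x * spread_y)

# Two datasets we suspect are related: heights (cm) and weights (kg).
heights = [150, 160, 170, 180, 190]
weights = [55, 60, 66, 72, 80]
print(round(pearson_r(heights, weights), 3))  # close to 1: strong relationship
```

The result lands very near 1, which is what the scatterplot of these pairs would suggest: the dots line up almost exactly.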
If the correlation coefficient is sufficiently close to 1 or -1, the next step is to work out the equation of the line that best relates the values in one dataset to their matches in the other. This is called linear regression, and the result is the equation of a straight line. That line enables us to predict or estimate pairs of values that weren't included in the original datasets, so it is a useful tool for making predictions, forecasts, or extrapolations.
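A minimal sketch of that line-fitting step, using the standard least-squares formulas for the slope and intercept (the data values here are made up for illustration):

```python
def fit_line(xs, ys):
    """Least-squares slope m and intercept b for the line y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: deviation products divided by the spread of x.
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    # Intercept: the line passes through the point of means.
    b = mean_y - m * mean_x
    return m, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
m, b = fit_line(xs, ys)
# Use the line to estimate a y value for an x not in the original dataset.
print(m * 6 + b)
```

Once m and b are known, any new x can be plugged into m*x + b, which is exactly the prediction or extrapolation step described above.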