Pearson correlation is the usual correlation when nothing further is specified, and it specifically measures linear association.
In everyday usage, people use “correlation” to mean any kind of association, but this is imprecise from the standpoint of statistics. Arrange points symmetrically on a parabola and compute their Pearson correlation; you’ll get zero, despite the obvious relationship.
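A minimal numpy sketch of the parabola point: five points on $y = x^2$, placed symmetrically about zero, have exactly zero Pearson correlation.

```python
import numpy as np

# Points placed symmetrically on the parabola y = x^2.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x ** 2

# Pearson correlation via numpy's correlation-matrix helper.
r = np.corrcoef(x, y)[0, 1]
print(r)  # essentially zero, despite the exact quadratic relationship
```

The positive-slope and negative-slope halves of the parabola cancel each other out in the covariance, which is all that Pearson correlation can see.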
There is also Spearman correlation, which applies Pearson correlation to the ranks of the values.
If the points are $(0,1)$, $(2,4)$, $(3,3)$, the Spearman correlation is calculated by converting the $x$-values to their ranks and the $y$-values to their ranks: $(1,1)$, $(2,3)$, $(3,2)$. Then run the transformed points through the usual formula for (Pearson) correlation.
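The rank-then-Pearson recipe above can be sketched in numpy (the `ranks` helper is mine, and works only because this example has no ties):

```python
import numpy as np

def ranks(a):
    # Rank positions 1..n; fine here since the example has no ties.
    return np.argsort(np.argsort(a)) + 1

x = np.array([0.0, 2.0, 3.0])
y = np.array([1.0, 4.0, 3.0])

rx, ry = ranks(x), ranks(y)       # (1, 2, 3) and (1, 3, 2)
rho = np.corrcoef(rx, ry)[0, 1]   # Pearson on the ranks = Spearman
print(rho)
```

For data with ties, a library routine such as `scipy.stats.spearmanr` handles the average-rank convention for you.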
To keep “correlation” and “correlated” straight: the former is a noun, the latter an adjective. If there is “correlation” between two variables, they are “correlated”.
“Collinear” seems to come up in the context of regression and refers to predictor variables that are correlated. The related “multicollinear” describes regression predictors that have a linear relationship with one or more of the other predictors, as if you could regress one predictor on some of the others and get decent accuracy. “Multicollinearity” seems to be the more common term when we talk about related predictors: “collinear” variables strike me as perfectly related, with a correlation of $1$ (think of lengths measured both in meters and in kilometers), while multicollinearity, to me, does not imply perfect predictive ability unless “perfect” multicollinearity is specified.
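The “regress one predictor on the others” diagnostic can be sketched in numpy. The setup is synthetic and the variable names are illustrative: `x3` is constructed to be nearly a linear combination of `x1` and `x2`, and the $R^2$ from regressing it on the others comes out close to $1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# x3 is almost a linear combination of x1 and x2: multicollinearity,
# even though no single pair of predictors is perfectly correlated.
x3 = 2.0 * x1 - x2 + rng.normal(scale=0.01, size=n)

# Regress x3 on x1 and x2 (with an intercept) via least squares.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, x3, rcond=None)
resid = x3 - X @ beta
r2 = 1 - resid.var() / x3.var()
print(r2)  # close to 1: x3 is well predicted by the other predictors
```

This is essentially the quantity behind the variance inflation factor, $\mathrm{VIF} = 1/(1 - R^2)$, a standard multicollinearity diagnostic.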
“Collinear” and “multicollinear” are adjectives; “collinearity” and “multicollinearity” are the nouns.