These two methods behave differently. They will only give the same result in a few special cases, for example when you test a single row at a time.
np.mean(y_test==y_pred) first evaluates y_test == y_pred. When both operands are plain Python lists, == compares the lists as a whole and produces a single True or False. Taking the mean of that single value just gives 1.0 or 0.0.
accuracy_score(y_test, y_pred) counts the positions where an element of y_test equals the corresponding element of y_pred, then divides that count by the total number of elements in the list.
import numpy as np
from sklearn.metrics import accuracy_score

y_test = [2, 2, 3]
y_pred = [2, 2, 1]
print(accuracy_score(y_test, y_pred))
This code prints 0.6666666666666666, because two of the three predictions match.
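The difference shows up clearly if you run both approaches on the same data. Here is a minimal sketch (assuming only NumPy) contrasting plain lists with NumPy arrays:

```python
import numpy as np

y_test = [2, 2, 3]
y_pred = [2, 2, 1]

# With plain Python lists, == compares the lists as a whole:
# the result is a single bool, and its mean is just 0.0 or 1.0.
print(np.mean(y_test == y_pred))  # 0.0, because the lists are not identical

# With NumPy arrays, == compares element-wise, so the mean of the
# boolean array is the fraction of matching positions (the accuracy).
print(np.mean(np.array(y_test) == np.array(y_pred)))
```

So if you convert both inputs to NumPy arrays first, np.mean of the element-wise comparison gives the same 2/3 that accuracy_score reports.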
You will get the same result from both methods if you have only one sample/element to test. You can find more details in the documentation for accuracy_score and np.mean.
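For that single-sample case, a quick check (assuming scikit-learn is installed) shows the two agreeing:

```python
import numpy as np
from sklearn.metrics import accuracy_score

y_test = [2]
y_pred = [2]

# One-element lists: list equality is True, so the mean is 1.0,
# which matches an accuracy of 1/1.
print(np.mean(y_test == y_pred))       # 1.0
print(accuracy_score(y_test, y_pred))  # 1.0
</imports>
</imports>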
Also, accuracy_score is only meant for classification data, as stated in the first line of its documentation.
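As a quick illustration of that restriction, passing continuous (regression-style) targets makes scikit-learn raise a ValueError; the exact message may vary between versions:

```python
from sklearn.metrics import accuracy_score

# accuracy_score rejects continuous targets, since accuracy is
# only defined for classification labels.
try:
    accuracy_score([0.5, 1.2, 3.3], [0.4, 1.3, 3.1])
except ValueError as err:
    print(err)
```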