This is perfectly acceptable in a stochastic environment. Generally your loss is to minimize $-\log p(Y|X)$, or equivalently $-\sum_i \log p(y_i|x_i)$; minimizing this sum is the same as minimizing the mean $-\mathbb{E}\,\log p(y_i|x_i)$. In other words, in this case you are minimizing:

$$
\begin{align*}
L &= -\log p(1|x_0) - \log p(0|x_0) \\
&= -\log [p(1|x_0) \cdot p(0|x_0)] \\
&= -\log [p(1|x_0) \cdot (1 - p(1|x_0))] \\
\end{align*}
$$

or, since $\log$ is monotonically increasing, equivalently minimizing

$$ \hat L = -p(1|x_0) \cdot (1 - p(1|x_0)) $$
After some basic calculus (setting $\frac{d\hat L}{dp} = 2\,p(1|x_0) - 1 = 0$), we see that the optimal result we want the system to learn is

$$ p(1|x_0) = 0.5 $$

Note that if you had more evidence, the result would simply be that you want it to learn that the label is $1$ with probability $\mathbb{E}[y \mid x]$, i.e. the empirical fraction of positive labels observed at $x$.
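You can check this numerically. The sketch below (a minimal illustration, not part of the original answer; the parameter `w`, the learning rate, and the iteration count are all illustrative choices) parameterizes $p(1|x_0) = \sigma(w)$ and runs plain gradient descent on the loss $L = -\log p - \log(1-p)$ from the derivation above:

```python
import math

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

# Two conflicting examples at the same input x0: one labeled 1, one labeled 0.
# Model a single probability p(1|x0) = sigmoid(w) and minimize
# L(w) = -log p - log(1 - p) by gradient descent.
w = 2.0    # start well away from the optimum
lr = 0.5
for _ in range(200):
    p = sigmoid(w)
    # Chain rule: dL/dw = (-1/p + 1/(1-p)) * p*(1-p) = 2p - 1
    grad = 2.0 * p - 1.0
    w -= lr * grad

print(round(sigmoid(w), 3))  # converges to 0.5, matching the calculus result
```

The gradient $2p - 1$ vanishes exactly at $p = 0.5$, so the descent settles there regardless of the starting point.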
