A point that needs to be emphasized about statistical machine learning is that there are no guarantees. When you estimate performance using a held-out set, that is just an estimate. Estimates can be wrong.
This takes some getting used to, but it's something you have to get comfortable with. If you ask "What if the performance actually deteriorates?", the answer is: sure, that could happen. The actual performance could be worse than you estimated/predicted, or it could be better. Both are possible, and that's unavoidable: there is some inherent, irreducible uncertainty.
When you evaluate performance using a held-out test set, you are using data from the past to try to predict future performance. As they say, past performance is no guarantee of future results. This is a fact of life that we just have to accept.
You can't let this immobilize you. The fact that it's possible to do worse than you predicted is not a reason to avoid training a model on the data and deploying it to production. After all, it's also possible to do poorly if you don't do that. It's possible that a model trained on all the data (train+validation+test) will be worse than a model trained on just the train+validation portion. It's also possible that it will be better. So, rather than looking for a guarantee, we have to ask ourselves: What gives us the best chance of success? What is most likely to be the most effective?
And in this case, when you want to deploy to production, the best you can do is use all the data available to you. In terms of expected performance, using all of the data is no worse than using some of the data, and potentially better. So, you might as well use all of the data available to you to train the model when you build the production model. Things can still go badly -- it's always possible to get unlucky, whenever you use statistical methods -- but this gives you the best possible chance for things to go well.
In particular, the standard practice is as follows:
1. Set aside some of your data as a held-out test set. There is no hard-and-fast rule about what fraction to use, but, for instance, you might reserve 20% for the test set and keep the remaining 80% for training & validation. Normally, all splits should be random.
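This initial split might look like the following sketch, which assumes scikit-learn; the arrays `X` and `y` are hypothetical stand-ins for your features and labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical data: 1000 examples with 5 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)

# Randomly reserve 20% as the held-out test set; the remaining 80%
# is kept for training & validation.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42)
```

Fixing `random_state` just makes the example reproducible; the split itself is still random.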
2. Next, use the training & validation data to try multiple architectures and hyperparameters, experimenting to find the best model you can. Split the 80% retained for training and validation into a training set and a validation set, train a model on the training set, and measure its accuracy on the validation set. If you are using cross-validation, you will do this split many times and average the results on the validation set; if not, you will do a single split (e.g., a 70%/30% split of the 80%, or something like that) and evaluate performance on the validation set. If you have many hyperparameter settings to try, do this once for each candidate setting. If you have many architectures to try, do this for each candidate architecture. You can iterate on this, using what you've found so far to guide your choice of future architectures.
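As a concrete sketch of this step (using a single 70%/30% split rather than cross-validation): the model class, the candidate values of `C`, and the synthetic `X_trainval`/`y_trainval` arrays below are all illustrative choices, not part of the prescription.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for the 80% retained for training & validation.
rng = np.random.default_rng(0)
X_trainval = rng.normal(size=(800, 5))
y_trainval = (X_trainval[:, 0] > 0).astype(int)

# Single random 70%/30% split of the retained data.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.30, random_state=0)

# Try each candidate hyperparameter setting: train on the training
# set, score on the validation set, keep the best.
best_C, best_acc = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C).fit(X_train, y_train)
    acc = accuracy_score(y_val, model.predict(X_val))
    if acc > best_acc:
        best_C, best_acc = C, acc
```

With cross-validation, the loop body would instead average validation accuracy over several splits (e.g., via scikit-learn's `cross_val_score`), but the selection logic is the same.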
3. Once you're happy, freeze the choice of architecture, hyperparameters, etc. Your experimentation is now done. Once you hit this point, you can never try any other options again (without obtaining a fresh test set) -- so don't hit this point until you're sure you're ready.
4. When you're ready, train a model on the full training + validation set (that 80%) using the architecture and hyperparameters you selected earlier. Then measure its accuracy on the held-out test set. That is your estimate/prediction of how accurate this modelling approach will be. You get a single number here, and that number is what it is: if you're not happy with it, you can't go back to steps 1 and 2 and do more experimentation; that would be invalid.
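Continuing the sketch with the frozen settings (here `best_C` is a placeholder for whatever value the experimentation phase selected; the data is again synthetic):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical data and the same 80/20 split as before.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42)

# Assumed to have been frozen at the end of experimentation.
best_C = 1.0

# Train once on the full 80%, then score once on the held-out 20%.
model = LogisticRegression(C=best_C).fit(X_trainval, y_trainval)
test_acc = accuracy_score(y_test, model.predict(X_test))
# test_acc is the single estimate; no further experimentation now.
```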
5. Finally, for production use, train a model on the entire data set -- training + validation + test -- and put it into production. Note that you never measure the accuracy of this production model: you have no remaining data for doing that, since you've already used all of it. If you want an estimate of how well it will perform, you're entitled to use the estimated accuracy from step 4 as your prediction of its performance in production, as that's the best available prediction of its future performance. As always, there are no guarantees -- that's just the best estimate possible, given the information available to us. It's certainly possible that it will do worse than you predicted, or better -- that's always true.
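The final refit might look like this sketch; `best_C` is again a placeholder for the frozen hyperparameters, and the data is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for ALL available data
# (training + validation + test combined).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)

# Assumed frozen from the experimentation phase.
best_C = 1.0

# Refit with the same frozen settings on everything. Nothing is
# held out, so this model is never scored; report the step-4 test
# accuracy as its expected performance instead.
production_model = LogisticRegression(C=best_C).fit(X, y)
```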