Mission accomplished
The mission was to run adversarial robustness tests on the face-mask model to determine whether hospital visitors and staff could evade mandatory mask compliance. The base model proved highly vulnerable to a wide range of evasion attacks, from the most aggressive to the most subtle.
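To make the evasion idea concrete, here is a minimal sketch of a one-step gradient-sign attack (FGSM) against a toy logistic-regression scorer. The weights, input, and epsilon below are made-up illustrative values, not the book's face-mask model:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step Fast Gradient Sign Method against a logistic-regression
    scorer: nudge x by eps in the direction that increases the loss."""
    z = float(x @ w + b)
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # gradient of log-loss w.r.t. x
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy scorer and input (illustrative values only).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.4])   # scored as class 1: x @ w + b = 0.8 > 0
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.3)
print(float(x_adv @ w + b))  # score drops below 0: the class flips
```

Even this two-feature toy shows the core mechanic: a small, bounded perturbation chosen along the loss gradient is enough to flip the prediction.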
You also examined possible defenses against these attacks, such as spatial smoothing and adversarial retraining, and then explored ways to evaluate and certify the robustness of the proposed defenses. You can now provide an end-to-end framework for defending against this kind of attack. That said, what you did was only a proof of concept (POC).
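Of those defenses, spatial smoothing is the simplest to illustrate: a median filter over each image's spatial dimensions blunts isolated, pixel-level adversarial noise before the model sees the input. A pure-NumPy sketch, assuming image batches shaped `(N, H, W, C)` with values in `[0, 1]` (the function and shapes here are illustrative, not the book's exact implementation):

```python
import numpy as np

def spatial_smoothing(images, window=3):
    """Median-filter each image over its spatial axes (H, W),
    leaving the batch and channel axes untouched."""
    n, h, w, c = images.shape
    pad = window // 2
    padded = np.pad(images, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                    mode="edge")
    out = np.empty_like(images)
    for i in range(h):
        for j in range(w):
            patch = padded[:, i:i + window, j:j + window, :]
            out[:, i, j, :] = np.median(patch, axis=(1, 2))
    return out

# Toy demo: an isolated one-pixel "adversarial" spike is wiped out.
perturbed = np.zeros((1, 8, 8, 1), dtype=np.float32)
perturbed[0, 4, 4, 0] = 1.0  # single-pixel perturbation
smoothed = spatial_smoothing(perturbed)
print(float(smoothed.max()))  # the spike is suppressed to 0.0
```

The defense is attractive because it needs no retraining, but it only degrades low-magnitude, high-frequency perturbations; it does nothing against larger structured attacks, which is why adversarial retraining and certification were explored as well.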
Next, you can propose training a certifiably robust model against the attacks the hospital expects to encounter most often, but first you need the ingredients for a generally robust model. To this end, you will need to take all 210,000 images in the original dataset, generate many variations of mask colors and types from them, and augment them even further...
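The augmentation plan above could be sketched along these lines. The per-channel color scaling and horizontal flip below are illustrative stand-ins for the mask-color and mask-type variations, assuming RGB arrays in `[0, 1]`:

```python
import numpy as np

def augment(image, rng):
    """Return a randomly color-shifted, possibly flipped copy of an
    RGB image in [0, 1] — a stand-in for mask-color/type variations."""
    # Random per-channel scaling loosely simulates different mask colors.
    scales = rng.uniform(0.7, 1.3, size=3)
    out = np.clip(image * scales, 0.0, 1.0)
    # Random horizontal flip adds pose variety.
    if rng.random() < 0.5:
        out = out[:, ::-1, :]
    return out

def augment_dataset(images, copies, rng):
    """Expand a batch (N, H, W, 3) into N * copies augmented images."""
    return np.stack([augment(img, rng)
                     for img in images
                     for _ in range(copies)])

rng = np.random.default_rng(42)
batch = rng.uniform(size=(4, 16, 16, 3)).astype(np.float32)
augmented = augment_dataset(batch, copies=5, rng=rng)
print(augmented.shape)  # (20, 16, 16, 3)
```

Applied at that rate to all 210,000 originals, even a handful of variations per image yields a training set of over a million examples, which is the kind of coverage a generally robust model needs before adversarial retraining and certification are layered on top.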