Prototype-based models are good candidates for explainable artificial intelligence: their prototypes store primary, coarse information about the classes and the classification problem. If the background is defined as everything that does not belong to any of the other classes, then we argue that "prototypizing" this background class is redundant: even when examples of background data are available, the unbounded nature of the background means that a thresholding mechanism with a bias value suffices. To validate the non-prototype background, we apply prototype-based learning to medical image classification. Our model performs well while maintaining interpretability and, thanks to its simple wavelet-based prototypes, low computational complexity. We conclude that eliminating prototypes from the background yields a simpler and more intuitive model.
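The threshold-based background rejection can be illustrated with a minimal sketch. This is not the paper's implementation; the similarity measure (negative squared Euclidean distance), the prototype values, and the `bias` threshold are all illustrative assumptions. An input is assigned to the best-matching foreground prototype unless even that best similarity falls below the bias, in which case it is labeled background without any background prototypes:

```python
import numpy as np

def classify_with_background(x, prototypes, labels, bias):
    # Similarity as negative squared Euclidean distance to each prototype
    # (an assumed choice; any similarity measure works the same way here).
    sims = -np.sum((prototypes - x) ** 2, axis=1)
    best = int(np.argmax(sims))
    # If even the closest prototype is less similar than the bias threshold,
    # the input falls into the background class: no background prototype needed.
    if sims[best] < bias:
        return "background"
    return labels[best]

# Toy 2-D setup: two foreground classes, one prototype each.
prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = ["class_A", "class_B"]
bias = -4.0  # hypothetical threshold; in practice learned or tuned

print(classify_with_background(np.array([0.2, -0.1]), prototypes, labels, bias))
print(classify_with_background(np.array([10.0, -10.0]), prototypes, labels, bias))
```

The first point lies near the `class_A` prototype and is accepted; the second is far from every prototype, so it is rejected to the background purely by the threshold.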