Consumers often rely on recommendation systems to discover relevant content more easily when reading news media or watching video on demand. However, what is shown to consumers is nowadays often determined automatically by opaque AI algorithms. The highlighting or filtering of information that accompanies such recommendations may have undesired effects on consumers or even society, for example, when an algorithm creates filter bubbles that implicitly discriminate against sensitive social attributes such as race, gender, or cultural inclusion, or when it amplifies the spread of misinformation.