Do your People & Travel Risk Management vendors maintain an AI usage risk management system and share it with you?
- Evo

- 4 days ago
- 1 min read

I bring this up mainly because Article 9 of the EU AI Act (read more at https://artificialintelligenceact.eu/article/9/) requires providers of high-risk AI systems to maintain a risk management system to assess, test and mitigate issues arising from their use of AI ... but are any platform or service vendors actually doing this? I suspect most, if not all, people/travel risk software vendors host, operate and/or have customers within the EU, and are now using AI either within their solutions or within their own organisations. I think we need more public proof from each vendor that they are aligned with Article 9 ... and much as GDPR forced more responsible attitudes, that goes for non-EU vendors as well.
It is obviously ironic that Article 9 requires these risk vendor solutions to run a risk management system to prepare for, monitor and mitigate their own usage of AI ... but it is essential, and in reality it should be in place whether a vendor falls under EU jurisdiction or not. The sensitivity of the data used in critical safety and security work means any use of AI should come with responsibility.
