
How machine learning model management plays into AI ethics


In the first episode of the 'Today I Learned About Data' podcast, we explore the topic of ethical AI with insights from Scott Zoldi, the chief analytics officer at FICO.

In this episode of the 'Today I Learned About Data' podcast, we're diving into the topic of ethical AI -- specifically the role that machine learning model management plays. While monitoring AI models wasn't a major concern for many organizations at first, the increased use of the technology during and after the pandemic has forced them to pay more attention to the ethics involved.

We got a chance to speak with Scott Zoldi, the chief analytics officer at FICO, who discussed the importance of machine learning model management in the enterprise and what IT leaders can do to ensure they're using AI ethically.

Transcript

Ed Burns: Hello and welcome to this episode of 'Today I Learned About Data.' Today we're talking with Gabriella Frick, site editor of SearchCIO.com. So, Gabriella, what did you learn about today?

Gabriella Frick: Today I learned about ethical AI.

Burns: Great. That's a really interesting topic and definitely one we're hearing more about these days. I remember when I first started on the analytics and AI beat, it was something that nobody cared at all about. It seemed like people were just of the mindset that if it works, let's do it -- people were really unconcerned about the ethical implications. It sounds like from your reporting people are starting to pay more attention to this.

Frick: Right. Yes, and I think a big factor is that we're in the middle of a global pandemic and seeing such a big increase in AI adoption within the enterprise. As a result, it's creating a bigger conversation around the topic of ethical AI.

Some are concerned that businesses are poorly prepared to implement and use AI in an ethical way. FICO, the credit scoring and analytics vendor, recently conducted a survey of AI users, and it showed that 67% of respondents do not monitor their models for accuracy or drift, which is pretty mind-blowing. The company's chief analytics officer, Scott Zoldi, said this is a problem when it comes to using AI in an ethical manner.

Scott Zoldi: That's an alarming figure, frankly, because responsible AI consists of building a model robustly, having explainable AI so you understand what's driving those models, and testing for ethical AI -- what those drivers do to the ethical use of the model -- whether that be a bias toward a particular group or whether it's simply wrong to use the model because it's too far away from the data it was trained on.

Frick: Zoldi added that a lot of times a data science team may think ethically when building a model and use appropriate data, but if the model isn't monitored over time, there's a risk of it behaving unethically in production due to data drift.
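To make the drift risk Zoldi describes concrete, here is a minimal sketch of one common production check, the population stability index (PSI), which quantifies how far live data has moved from the data a model was trained on. The feature, sample data and 0.25 threshold below are hypothetical illustrations, not FICO's tooling.

    # Minimal PSI drift check (hypothetical example)
    import numpy as np

    def population_stability_index(expected, actual, bins=10):
        """Compare a feature's training-time distribution with its production distribution."""
        # Bin both samples on the training-data quantiles.
        edges = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
        edges[0], edges[-1] = -np.inf, np.inf
        expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        # Avoid log(0) for empty bins.
        expected_pct = np.clip(expected_pct, 1e-6, None)
        actual_pct = np.clip(actual_pct, 1e-6, None)
        return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

    # Hypothetical usage: flag a feature whose live distribution has drifted.
    training_income = np.random.lognormal(10.5, 0.6, 50_000)  # stand-in for training data
    live_income = np.random.lognormal(10.8, 0.7, 5_000)       # stand-in for recent production data
    psi = population_stability_index(training_income, live_income)
    if psi > 0.25:  # a commonly cited rule-of-thumb threshold
        print(f"PSI={psi:.2f}: feature has drifted; review the model before trusting its scores")

A check like this would typically run on a schedule against each important input feature, so the team hears about drift before the model's decisions go wrong.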

Burns: Right. There's such a big difference between developing a model and training it and testing it out during the development phase compared to putting it into production and actually having people interact with it. So, what can an organization do to make sure that they are using AI in more ethical ways?

Frick: Zoldi said many of his survey respondents think this question of how to ensure ethical use is holding back AI adoption. Solving it could take some mix of industry standards, tools that embed ethical use in their functionality and potentially even regulation. In the meantime, he said, each enterprise should develop strong standards for using AI ethically within the organization and enforce those standards on a regular basis. He said this is going to be critical as AI matures.
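As one hypothetical example of a tool that embeds ethical use in its functionality, an enterprise standard might include an automated check that compares approval rates across groups. The sketch below uses the four-fifths rule of thumb; the group labels, data and threshold are illustrative assumptions, not a FICO product.

    # Minimal disparate-impact check (hypothetical example)
    import pandas as pd

    def approval_rate_ratio(df, group_col, decision_col, protected, reference):
        """Ratio of approval rates for a protected group vs. a reference group."""
        rates = df.groupby(group_col)[decision_col].mean()
        return rates[protected] / rates[reference]

    decisions = pd.DataFrame({
        "group": ["A", "A", "B", "B", "B", "A", "B", "A"],
        "approved": [1, 0, 1, 1, 0, 1, 1, 0],
    })
    ratio = approval_rate_ratio(decisions, "group", "approved", protected="A", reference="B")
    if ratio < 0.8:  # the four-fifths rule of thumb used in fair-lending contexts
        print(f"Approval-rate ratio {ratio:.2f} falls below 0.8; investigate for bias")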

Zoldi: For the last two or three years, it's all been about AI hype -- AI this and AI that -- and all the promise. There's been a lot less about all the harm that could be done with AI, and I think this is a reflection of growing up, if you will: yes, there's tremendous opportunity here, particularly during these COVID times and the increase of digital … decisioning needed, but at the same time, we need to put as much focus on making sure we have the protections in place before we use those models.

Burns: It sounds like enterprises have some work to do in this area. Like I said earlier, they've come a long way just in terms of getting this on their radar, whereas before it sounds like it wasn't really a top priority. People are at least starting to pay attention to it and we have a very large, prominent company like FICO putting this on people's radar and one of their top executives flagging this as an issue. I think it's good progress and it's good to see.

Thank you, Gabriella, for sharing some of your reporting with us and talking about some of the new things you've been learning. And to our listeners, if you're interested in this topic and want to learn more, we have lots more coverage on our site, SearchCIO.com. Thank you.
