Podcast Notes: Fighting bias in AI algorithms
Every Thursday, we summarize a recent podcast to save you time and keep you up-to-date. This week features Elham Tabassi, chief of staff of NIST’s Information Technology Laboratory. She speaks with Tom Temin of the Federal Drive podcast about ways to address unwanted bias in AI algorithms as more federal agencies adopt the technology. [Note: Questions and answers were edited for brevity and clarity.]
Tom: Define what AI bias actually means.
There really isn't one good definition for bias yet, Elham explains.
Researchers are working on this and trying to find an answer. Often, we all use the same term to mean different things. Bias is one of those terms, and the International Organization for Standardization (ISO) has a subcommittee working on standardizing bias terminology.
NIST has also been doing a literature survey to figure out how bias has been defined by different experts, with the goal of arriving at a shared understanding.
The current thinking sees bias in terms of disparities in error rates and performance for different populations, devices, or environments. If a system has different error rates for different subpopulations, as in face recognition, that’s an unwanted bias and something that has to be mitigated.
Another example would be car insurance, where younger people pay a higher insurance rate than people in their 40s or 50s. That difference in rates is a bias, but one based on the intended behavior and performance of the system, rather than something that is problematic.
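To make the error-rate idea concrete, here is a minimal sketch of the kind of per-group check Elham describes: compute a model’s error rate separately for each subpopulation and look at the gap between groups. The function name, group labels, and toy data are illustrative assumptions, not anything from NIST or the podcast.

```python
# Minimal sketch: compare a model's error rate across subpopulations.
# All names and the example data below are invented for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    # Per-group error rate: wrong predictions / total predictions.
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical match results, tagged by subpopulation.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = error_rates_by_group(records)
print(rates)  # {'group_a': 0.25, 'group_b': 0.5}
print(max(rates.values()) - min(rates.values()))  # a gap worth investigating
```

A large gap between the best- and worst-served groups is exactly the kind of unintended disparity that, in Elham’s framing, has to be mitigated.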
Let’s not forget about human biases, which are one source of bias in AI systems. These can creep into algorithms in different ways, because AI systems learn to make decisions from training data, which can reflect biased human decisions or historical and societal inequalities.
Other times, bias creeps in because the data is not representative of the whole population (in sampling, one group is overrepresented or underrepresented). Another source is the design and modeling of the algorithm...
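As a rough illustration of the sampling point, the sketch below compares the group makeup of a training sample against reference population shares to flag over- or underrepresentation. The shares and sample are invented for this example.

```python
# Sketch: check whether a training sample mirrors the population it
# is meant to represent. Shares and sample are made up for illustration.
from collections import Counter

population_share = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}

sample = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5
counts = Counter(sample)
n = len(sample)

for group, expected in population_share.items():
    observed = counts[group] / n
    diff = observed - expected
    # A positive diff means the group is overrepresented in the sample.
    print(f"{group}: sample {observed:.2f} vs. population {expected:.2f} (diff {diff:+.2f})")
```

A check like this only surfaces the skew; deciding how to correct it (resampling, reweighting, collecting more data) is a separate design question.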