Bob Gourley, CTO and Co-Founder of OODA, discusses the unique issues that arise in securing AI, and why it’s important to monitor machine learning systems before they start having bad ideas.
The Department of Health and Human Services has completed an artificial intelligence trial. The agency looped AI into its open health data architecture to improve research and help develop new treatments. According to Fedscoop, the results won’t be known until the partial shutdown ends.

However, experts like Bob Gourley, CTO and Co-Founder of OODA, say that the growing use of AI could present security and ethics problems. Gourley told Government Matters that constructing an ethical program is difficult because of the sheer number of unknown variables.

“You can have a very ethical company that fields some machine learning that starts behaving unethically. For example, Amazon fielded something to help go through resumés as part of the hiring process. That became a misogynist program that did not like women,” Gourley said. “Obviously Amazon likes women, but their program did not. They had to shut it down. There have also been programs the judicial system designed to help provide sentencing guidelines in a fair way that began behaving unfairly. These are ethical considerations that all of us will have to deal with.”