Trending in the latest AI news is tech giant Microsoft. According to a director at Microsoft Research Labs, the company has dropped some potential deals with customers over ethical concerns that its AI technology may be misused.

Speaking at the Carnegie Mellon University – K&L Gates Conference on Ethics and AI in Pittsburgh, Eric Horvitz made the revelation. He said the group at Microsoft that reviews possible misuse on a case-by-case basis is the Aether Committee (“Aether” stands for AI and Ethics in Engineering and Research).

“Significant sales have been cut off,” Horvitz said. “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type.’”

Horvitz, of course, did not reveal the specific companies with which Microsoft decided not to strike a deal. However, it is encouraging to see the company put ethics above revenue when it comes to artificial intelligence; any abuse would be widely covered and could hamper the technology’s potential.


Amid the fallout from the Facebook and Cambridge Analytica scandal, in which improperly harvested user data was used to target voters during the 2016 U.S. presidential campaign, people are naturally more wary of anything that involves mass data analysis.

Manipulating votes is one of the key concerns Horvitz raises around the abuse of AI, along with human rights violations, an increased risk of physical harm, and the denial of access to critical services and resources.

Conversely, we have already seen how AI itself can be manipulated, and by Microsoft’s own users, no less. The company’s now infamous ‘Tay’ chatbot was taught by people online to spew racist comments. “It’s a great example of things going awry,” Horvitz acknowledged.

Horvitz wants AI to complement humans rather than replace them, often acting as a backstop for human decisions. That said, it could still be invoked for tasks where a human would not be as effective.

For example, Horvitz highlighted a Microsoft AI program that helps caregivers identify the patients most at risk of being readmitted to hospital within 30 days.

Scholars who assessed the program determined that it could reduce rehospitalisation by 18 percent while cutting a hospital’s costs by nearly 4 percent.
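To make the idea concrete, here is a minimal, purely illustrative sketch of how such a readmission-risk model might look, assuming a simple logistic-regression classifier trained on synthetic patient features. The feature names, data, and risk threshold are hypothetical and are not drawn from Microsoft’s actual system.

```python
# Illustrative sketch only: a toy 30-day readmission risk model using
# logistic regression on synthetic features. This is NOT Microsoft's actual
# system; the features, data, and 0.5 threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic patient records: [age, prior_admissions, length_of_stay_days, num_medications]
X = rng.normal(loc=[65, 2, 5, 8], scale=[12, 1.5, 3, 4], size=(1000, 4))

# Synthetic labels: 1 = readmitted within 30 days (generated, not real data)
logits = 0.03 * X[:, 0] + 0.6 * X[:, 1] + 0.1 * X[:, 2] - 4.0
y = (rng.random(1000) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the classifier and score each held-out patient's readmission probability
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]

# Flag the patients a caregiver might prioritise for follow-up
flagged = np.where(risk > 0.5)[0]
print(f"{len(flagged)} of {len(X_test)} test patients flagged as high readmission risk")
```

In practice a system like this would act as the kind of backstop Horvitz describes: it surfaces a ranked list of at-risk patients, and the decision about follow-up care remains with the caregiver.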


The comments made by Horvitz once again highlight the need for AI companies to ensure their approach is responsible and ethical. The opportunities are endless if AI is developed properly, but it could just as easily lead to disaster if not.