
The future lies in artificial intelligence (AI), though many science fiction writers would have you believe that this future is not a bright one. From famous film franchises such as The Terminator and The Matrix to Isaac Asimov's classic I, Robot to video games such as Portal, artificial intelligence is often portrayed as unreliable at best and downright malicious at worst. While today's AIs are much more likely to recognize faces and images than to subjugate all of humanity, many are calling for ethical standards to guide AI research going forward.

Google is one of the leading contenders in AI, already using machine learning to optimize its search results and to build software capable of recognizing faces and languages. It is in the running with the likes of Amazon, Apple, Facebook, and Microsoft, many of which are also experimenting with the technology. In the interest of exploring how artificial intelligence can improve society, the company recently launched Gradient Ventures, a firm dedicated to investing in promising AI startups.

Anna Patterson, a Google veteran who has spent around a decade with the company, is founder and managing director of Gradient. She is determined to help support companies that are looking to build the future of artificial intelligence.

“If we’re really going to help AI happen faster, we needed to be more involved in the community,” said Patterson.

Patterson has stated that Gradient investments will range from $1 million to $8 million. However, it's not just about the funding. Gradient intends to bring the companies it works with into the fold while still allowing them room to grow, offering advanced AI training and an onsite Google engineer to assist with issues. This includes sharing the wealth of data that the company has already gleaned from its in-house AI initiatives.

This collaborative approach is intended to jumpstart existing artificial intelligence projects, giving them the push they need to make advancements as quickly as possible. Patterson got the idea while attending AI conferences, where she saw the wealth of prospective talent and realized that Google could help these founders flesh out their ideas.

Gradient isn't the only recent AI news out of Google. The company's in-house AI research is getting an update through People + AI Research (PAIR), an initiative focused on ethical interactions with AI systems. It addresses facets of AI development that many have overlooked, such as the way AI systems treat users. While PAIR's goal of ensuring that AI "benefits and empowers everyone" may seem broad, there are several distinct points in the AI supply chain that can be examined to make a positive difference.

For instance, PAIR will be addressing AI research and the machine learning that underpins it. While gathering data may seem like an objective practice, human biases can still make their way into it and thus affect how an AI operates. Studies have already uncovered implicit biases that have found their way into artificial intelligence, with language recognition software assuming that all doctors are male and all nurses are female. While some of the assumptions AI systems make are harmless, such findings illustrate the need for professionals who work with AI to make their systems more neutral and welcoming to every type of user.
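The doctor/nurse example can be made concrete with a toy sketch. The "model" below simply predicts the majority label it saw during training, so skew in the data flows straight through to its predictions. The corpus is invented for illustration and is not real training data.

```python
from collections import Counter

# Hypothetical, deliberately skewed occupation/gender pairs.
training_pairs = [
    ("doctor", "male"), ("doctor", "male"), ("doctor", "male"),
    ("doctor", "female"),
    ("nurse", "female"), ("nurse", "female"), ("nurse", "female"),
    ("nurse", "male"),
]

def most_likely_gender(occupation, pairs):
    """A naive 'model' that echoes the majority label seen in training."""
    counts = Counter(g for occ, g in pairs if occ == occupation)
    return counts.most_common(1)[0][0]

# The skewed data makes the model reproduce the bias in its inputs.
print(most_likely_gender("doctor", training_pairs))  # male
print(most_likely_gender("nurse", training_pairs))   # female
```

Real language models pick up the same kind of skew from much larger corpora, which is why auditing the data itself matters.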

With PAIR comes a pair (pun not intended) of open source tools intended to help programmers examine data and find prejudices coded into AI. Called Facets Overview and Facets Dive, these tools simplify dataset viewing and analysis, helping developers discover and fix patterns that may affect AI performance.
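Facets itself is a visualization tool, but the kind of check it enables can be sketched with a simple cross-tabulation in pandas. The dataset below is invented for illustration and does not use the Facets API; the point is only that surfacing feature distributions before training makes skew visible.

```python
import pandas as pd

# Invented toy dataset; in practice this would be a real training set.
df = pd.DataFrame({
    "occupation": ["doctor", "doctor", "doctor", "nurse", "nurse", "nurse"],
    "gender": ["male", "male", "female", "female", "female", "male"],
})

# Cross-tabulate two features to surface imbalance before training a model.
skew = pd.crosstab(df["occupation"], df["gender"])
print(skew)
```

A quick table like this is the command-line cousin of what Facets Overview renders graphically: the same counts, made easy to scan for lopsided categories.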

As AI moves forward, it is imperative that the technology be developed carefully and thoughtfully. Artificial intelligence may be a new frontier, but companies such as Google are determined to bring a human perspective to a machine-focused science.