IT leaders need to consider how AI can benefit both their organizations and humanity as a whole, says the former chief strategist for responsible technology at Alphabet.
Salima Bhimani has encouraged the responsible and ethical use of AI for several years, serving as Alphabet’s first chief strategist and director for inclusive and responsible technology, business, and leaders from 2017 to 2023.
At Google’s parent company, she worked with moonshot companies such as Waymo, Wing, and X to shape sustainable businesses and global impact. She is now CEO of 10Xresponsibletech, a consulting company focused on helping organizations design, integrate, and adopt business-aligned and responsible AI strategies.
In a recent interview, Bhimani talked about the importance of thinking about ethical uses of AI and how it can benefit both humanity and individual organizations. AI and other advanced technologies have the potential to create huge benefits for all of humanity, she says, including solving tough problems such as health and information inequality, but vendors and users need to think about IT in new ways.
“The opportunity in front of us is not to just ride the wave of AI,” Bhimani says. “We’re going to have to look at things we haven’t looked at, like ethics, and see it as an opportunity for helping us drive this technology.”
In many cases, IT leaders and companies have focused on innovation, including benefits to users and customers, but they should think more broadly about global impacts, she says.
“In the past, the motivations around technology have been innovation, and probably innovation for serving humanity, doing good in the world, and building great products,” she adds. “Now, we have to think about innovation as a way of really reshaping the world so that it works for everybody. That’s not a philanthropic call; it’s actually a call for technology to accelerate human progress in a positive direction.”
Here’s more of that interview with Bhimani, edited for brevity and readability.
Grant Gross: You’ve focused on the ethical use of AI at Alphabet and at your new company. Can you define ‘ethical AI’?
There are three big components for me in this definition. One is to eliminate harm, to ensure that the AI systems we’re building and integrating are not going to inadvertently exacerbate existing challenges that people might have or create new harms.
Another part of it is expanding benefits. We tend to focus a lot on the harm side, but expanding benefits is a big part of the ethical AI piece. What I mean by that is if we’re integrating AI, are we ensuring that it is, in fact, going to be a partner to our employees and extend their footprint, their impact within the company, rather than just eliminating roles? We want it to be a beneficial, expansive opportunity.
The last piece is that we’re building symbiotic AI systems with humans. People talk about AI as this thing that’s building itself, and there’s some truth to that; but in reality, it’s still being built by humans. There’s a symbiotic relationship between systems and what humans need and want, and we need to be pretty intentional about that on an ongoing basis. So even if we have AI systems that can use initially inputted data to create new data sets, we want to make sure there’s governance around that, and people are really involved in that process.
Why should CIOs, CAIOs, and other IT leaders pursue ethical AI for their organizations? What’s the benefit to them and to their organizations?
The CIO role is changing. In the past, the focus was on keeping the lights on, managing infrastructure, ensuring stability of systems, or just ensuring that integration is happening. Now what we’re talking about is becoming strategic visionaries within the organization. Are we building AI strategies that are aligned to business goals? Are we identifying opportunities that AI presents to us?
CIOs’ roles and CAIOs’ roles are about bridging the business with the technology, and the ethical piece is going to be imperative. How do we expand the benefits of this technology to what we’re trying to achieve as a business? Will it drive new business opportunities for us? Will it mitigate risk? Will it drive innovation?
The other piece is, will it attract top talent? Some research is saying that the top AI talent is really interested in working with organizations or companies that are thinking about the ethics side of it.
If we’re developing products or developing AI systems that are creating bias, we may have to roll back because they’re causing brand and reputational issues. The CIO or the CAIO has a very expanded role now where they’re not just thinking about technology-to-business alignment, but they’re also thinking about societal risk implications and societal benefit and opportunity.
Do you worry about recent political pushbacks against diversity, equity, and inclusion (DEI) policies? What are the implications of ignoring the ethical and equality issues involved with AI?
The challenge is we’ve thought of ethics or responsibility or DEI from the perspective of those who have generally been on the margins, but I think that it’s actually not just good for people there, it’s good for all of us. There was a survey done by DataRobot in 2022, and 62% of organizations affected by algorithmic bias reported lost revenue, 61% reported lost customers, and 43% reported lost employees, not to mention the legal fees. There are business implications. People want to know that the things that are being built are being built well.
How can a CIO or IT leader ensure that the AI products they’re building or buying are being used in an ethical manner?
They need to have a definition of ethical AI for the organization. There are general definitions of ethical AI, which we can adopt, but there are particular definitions related to what your business is trying to achieve. Those definitions need to be built in tandem with the leadership of those organizations or those companies. This is where the strategic approach to AI needs to happen at the leadership level, along with a very robust understanding of the tradeoffs we’re willing to make to ensure our products or services are ethical. And we need to create governance models that can be integrated across functions.
I also think literacy around AI is really important for people who are buying AI to integrate within their organizations. Do our employees know what this is going to do for them? We need to ensure that as a company, we have invested in the capability and the capacity to use AI in the best way possible for our employee bases.
The last piece is the accountability and the ongoing evaluation of the system that we have in place. We need to continue to check: Is it achieving the ethical AI goals we want, or is it producing outcomes we didn’t anticipate?
There seem to be a lot of concerns out there about AI, from disinformation to job losses to an AI takeover of the human race. What are your major concerns about AI?
I think about lost markets. What I mean by that is that we’re still in the world of a digital divide. A lot of people around the world still don’t have access to the internet, which is wild but true.
So much of the data that we’re using comes from that limited digital footprint, which means how we’re designing and developing our AI systems is based on limited data, and that is a big concern. If AI systems are supposed to ultimately serve the world, how are they going to serve the world when the data they’re built on basically doesn’t include most of the world?
That is a big problem we need to be solving, especially if we’re serious about this being useful for everybody, and especially since a lot of the solutions are still coming from North America or Europe. There’s an extra burden and responsibility for all of us on this end of the hemisphere to really be thinking about, how do we solve this problem with communities across the world?
And then there is the question of genuine cooperation and translation between the different actors who are concerned about, interested in, and invested in what AI is doing for us now and what it’s going to do for us in the future, whether that is technology companies themselves, governments, builders, or even users and consumers. It’s this question of, are we understanding each other, and are we finding common ground?
The regulatory piece is very, very important. If technology companies are moving at a certain pace, and governments are moving at another pace, that is a concern.