Trend 4

Nonprofits say they're AI transparent; end users disagree

Background

Gracefully navigating the choppy waters of AI and trust requires a skilled digital sailor. The ethical and digital complexities are daunting, even in what might seem like the most straightforward use cases. To do it well, organizations must understand their customers better than ever. Yet even the most digitally mature organizations face barriers to adopting AI in a way that creates transparency and fosters trust with their users. In our research, organizations cited the following top three concerns:

Research highlights the top three AI concerns for nonprofits: data privacy and security, legal and regulatory compliance, and lack of technical expertise. 

These challenges are likely familiar to anyone who has tried to introduce new and innovative tech tools into their organization. It takes time to establish best practices, ensure compliance, and build the right team.

While all three challenges deserve attention, data privacy and security is in a league of its own because of the crucial role that trust plays between organizations and their end users. It's a delicate topic that is also playing out in the B2C world as businesses gather data on customer actions to better understand their preferences, wants, and needs.

Chart: Gen Z digital interactions with companies

Building trust and transparency in AI

If you can provide clear and reliable answers about how your organization is handling end user data, trust is earned. But when customers and end users feel that they’re left in the dark, there’s always a risk that they’ll cut ties with the brand or organization.

Which brings us to one of the largest gaps in perception from our research. While nonprofits give themselves high marks for transparency with end users, the end users themselves have a much harsher assessment.


End users vs Nonprofits: Percentage who believe nonprofits are transparent about AI use of end user data

Gaps in perception aren't unique to the nonprofit world

A similar disparity appears in the B2B and B2C worlds, where 94% of brands say they're transparent with customers about how AI uses their data, while only 37% of customers agree.

Consumers vs B2C Businesses: Percentage who believe businesses are transparent about AI use of consumer data

Actionable insights

In our data-focused world, people are becoming more accustomed to companies and organizations collecting and storing their personal data. For example, an individual who uses Facebook likely understands that the platform couldn’t continue offering free accounts without some sort of benefit derived from its users’ data. Same goes for Gmail, Instagram, and other free services.

End users' desire for transparency isn't just something you have to handle; it actually presents a huge opportunity as AI adoption increases and there are more chances for trust to be threatened. When you can clearly communicate with end users about how and why you're using their data, stronger connections form. And when only 37% of end users think that organizations are transparent about AI, your efforts in this regard can quickly help you rise above the competition.

Explore the data

Sector-specific insights for nonprofits, public sector, and 501(c)(3) healthcare and education

Percentage of this sector's organizations reporting transparency about AI use of end user data

Sector vs. end users: whether they agree that organizations uphold data transparency

This sector's belief that AI will improve student engagement

End users' belief that AI will improve engagement

Resource spotlight

Twilio's AI Nutrition Facts Label boosts transparency in a novel way

Nutrition facts labels are commonly used on food products to provide an easy-to-grasp description of the product's nutritional value. At Twilio, we believe that an AI Nutrition Facts Label can likewise offer consumers and businesses a transparent view into 'what's in the box,' empowering them to make more informed decisions about which AI-powered capabilities they want to adopt.

We've developed a label creator that lets you easily customize labels for the features you're building. Key elements include a product description, whether the base model is trained with customer data, whether the training data is anonymized, whether data is deleted, and whether a human is in the loop.
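To make the idea concrete, here is a minimal sketch of how the key elements above could be modeled as a structured record. This is an illustration only, not Twilio's actual label creator or schema; the class name, field names, and rendering format are all assumptions.

```python
from dataclasses import dataclass, asdict

@dataclass
class AINutritionLabel:
    """Hypothetical schema for an AI Nutrition Facts label.

    Field names are illustrative assumptions, mirroring the key
    elements described above: a product description plus yes/no
    disclosures about data handling and human oversight.
    """
    product_description: str
    trained_with_customer_data: bool
    training_data_anonymized: bool
    data_deletion_supported: bool
    human_in_the_loop: bool

    def render(self) -> str:
        # Render a simple plain-text label, one disclosure per line.
        lines = [f"AI Nutrition Facts: {self.product_description}"]
        for field, value in asdict(self).items():
            if field == "product_description":
                continue
            title = field.replace("_", " ").capitalize()
            lines.append(f"  {title}: {'Yes' if value else 'No'}")
        return "\n".join(lines)

# Example label for a hypothetical AI feature
label = AINutritionLabel(
    product_description="Support-ticket summarization assistant",
    trained_with_customer_data=False,
    training_data_anonymized=True,
    data_deletion_supported=True,
    human_in_the_loop=True,
)
print(label.render())
```

The point of a structure like this is that every disclosure is an explicit, machine-readable field, so a label can be rendered consistently, compared across products, or published alongside documentation.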

By offering clear descriptions like these, we’re all being more transparent, responsible, and accountable.