Trust and Artificial Intelligence

Trust is the core component of the USV investment thesis: “trusted brands that broaden access” is the opening description of our articulation. In a world of applications that are driven by artificial intelligence, we are now thinking about how trust will be defined and established, and in what novel ways. In particular, in considering AI agents, how can different ideas and models of trust emerge?

Jon Stokes describes AI agents as:

Something with the following qualities:

Has at least one goal, but usually more than one.

Can observe the present state of its environment.

Can use its observation of its environment to formulate a plan of action that will transform that environment from the present state to a future one where the agent’s goal is achieved.

Can act on its environment in order to carry out its plan, ideally also adjusting the plan as it goes based on continued observations of the changing environment.

Earlier this week, we discussed examples that some of us here at USV might personally find interesting: responding to messages, doing logistics for social plans, not missing important things (birthdays, emails), travel planning, health, representing you on the internet and creating a curated feed of information.

These all represent types of applications that are completing things on your behalf, as Stokes describes. As a result, a foundational question becomes: how do you trust that the agent is acting in your best interest?

This creates questions that are worth exploring:

1/ AI agents as interfaces to the world: if our mobile devices are remote controls for the world, what are the ways AI agents abstract this concept beyond the device? Specifically, what are the new and novel applications? What are the native ideas here that previously did not exist, or could not be imagined?

2/ What is the business model of an AI agent? To trust one, will it have to be a paid subscription? What other business models can be invented? 

3/ If AI agents “represent” individuals, how do we take our interaction histories with us if we want to change agents? Similarly, can we leave if we want and erase that history? Data portability and agent lock-in may be important concepts. For example, is the right to leave a new form of trust that can and should emerge?

4/ With AI agents representing individuals, does this shift agency and the balance of power in interacting with businesses? One interesting notion here is that this may allow individuals to become platforms, and not just eyeballs, with the ability to express preferences outside of any one silo. Imagine here a personal “terms of service.” 

5/ What about a personal agent’s relationship with a business agent? Brad described it this way on our internal listserv:

The business agent’s goal is to maximize the interests of the business (presumably profits). The human agent’s goal is to efficiently maximize that human’s well-being. Presumably, that means finding the best value in the services they source for the human. 

If both agents have access to the total pool of public knowledge but the human agent has exclusive access to a data set that results from the human’s interaction with all the services out there – directly and through an agent – the human agent will have an advantage relative to the business agent. Preserving human agency in the face of business agents working to maximize that business’s profits will require us to get the data architecture right both from a legal and a technical perspective.

6/ If an AI agent has exclusive access to a person’s data set, is there a more quantifiable way to value that data set? Conversely, we will now need to manage the problem of prompt injections.

7/ Rebecca wonders what the minimum level of accuracy and confidence is for trust to be established. Where do the lines cross where one would be comfortable letting an AI agent “work” for them, which may vary by category of work. How do we ensure that agents are trained on our well-being (this is something that Hume.ai is working on). One idea for proving trust could be around an AI version of “proofs” –  smaller tasks an agent can start with to prove well-being, knowledge and accuracy relevant to your data set before taking on more complex tasks. Matt believes third-party trusted brands will emerge to “validate ” AI agents as trustworthy.

8/ Health and well-being are particularly interesting, as personal AI agents may provide real utility in allowing us to manage our own health data. What’s a new notion of protecting private health data if this happens? Albert, on the other hand, isn’t so sure about this:

I am not sure that people care about privacy in the form of a specific technological implementation (data stays with them versus data goes elsewhere). 

People share health data with their healthcare providers. People share financial information with banks and investment advisors.

What people seem to care about in these situations is that institutions will keep this data confidential and use for and not against people. Also quite a few people are more motivated by price or convenience than by privacy.

These are a limited set of initial ideas; we do want to hear ideas on this or other questions we have not yet considered. Please get in touch.