The Convergence of Regulation and Innovation in AI Creates New Opportunities

Team CRV · Published in CRV · 6 min read · Aug 28, 2023

By James Green

There is no doubt that, given its rapid growth and the transformational impact it will have on all aspects of society, artificial intelligence will be subject to some form of government oversight. While regulation in the U.S. is currently handled through voluntary commitments, Europe’s proposed legislation, the Artificial Intelligence Act (AIA), is much more aggressive, focusing on strengthening rules around data quality, transparency, human oversight and accountability. The proposed laws also address ethics and touch almost every industry.

With many predicting the AIA will have a “Brussels Effect,” paralleling how GDPR influenced the CCPA in California, we may well see increased regulation around data and security in industries and services such as financial institutions, human resources, security and IT, medical equipment, transportation and machinery.

Data from Applied AI’s AI Act Risk Classification Study. Page 13.

While this is not legal advice, nor 100 percent comprehensive, we believe the AIA and its implications can be summarized as follows:

  1. The AIA Will Have Global Implications: It aims to achieve a “Brussels Effect,” extraterritorial impact through large fines. In this sense it mirrors the GDPR, which widely influenced data protection practices globally.
  2. Which AI Applications Are Affected? Regulation focuses on specific risk tiers and technologies:
a. Least Affected: Chatbots and emotion recognition systems will require disclosure but are otherwise minimally affected. Separately, many AI systems fall outside the Act’s scope entirely and remain unregulated.

b. Most Affected: AI systems targeting use cases deemed “High Risk,” as well as foundation models. Both must meet strict requirements before they can launch or go to market commercially.

3. The Scope of the High-Risk Tier: It includes critical industries such as security and IT, HR, customer service, and accounting and financial services. We’re seeing EU AI startups “tinker” with use cases in the high-risk tier, but not deploy AI systems in production in any material way.

a. Global tech platforms and enterprises, including social media, cloud and search giants, will have to carefully assess their exposure to the AIA. They fall within its scope if any of their embedded AI intersects with high-risk uses, such as LinkedIn’s algorithms for job advertising or recommendations.

b. Liability extends even to businesses that deploy high-risk AI: requirements for customers that use high-risk AI could limit the market to sophisticated buyers, further hurting funding and development for these AI systems.

4. Restrictions on Unsupervised Learning and Autonomy: Data labeling requirements in high-risk areas could throttle approaches that rely on unlabeled data; in security, for example, the regulation could complicate the use of unsupervised learning for anomaly detection (see the sketch after this list).

5. The Scope of Foundation Model Requirements: Compliance requirements, including pre-launch design standards for bias mitigation, predictability, accuracy and cybersecurity, apply to all foundation models, regardless of distribution channel or data type. The intensive process of proving compliant development will limit their availability for other AI applications.

6. Mandatory Disclosures of Copyright Protected Training Data: Foundation models and generative AI must publicly disclose a summary of any copyright-protected data used in training and development.

7. Responsibility Along the Value Chain: Most compliance requirements fall on the providers of AI models, rather than shifting to customers. Companies that modify other models for their own AI applications are also considered providers.

8. Impacts to Open Source Viability: The mandate to disclose copyrighted training data could challenge the market viability of many independent open source projects (which support the majority of enterprise AI adoption), since they face more data scarcity than incumbent projects with access to large troves of data.

9. AI Infrastructure Acceleration: Data governance and cybersecurity requirements create tailwinds for infrastructure and security tools, such as federated data governance, that prevent data poisoning, manipulation and hallucinations.

10. Looking Forward to U.S. and Global Regulations: Early AI regulation and self-regulation in the U.S. and Japan suggest that the Brussels Effect from the AIA will be uneven; some points in the AIA will find a much stronger echo outside the EU than others. Its impact on data and security infrastructure in “high-risk” industries is most likely to set global precedents, possibly to the advantage of incumbents who can more easily bear the imposed costs.
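On point 4 above, here is what such a system looks like in practice: a minimal sketch of unsupervised anomaly detection over unlabeled security telemetry, using scikit-learn’s IsolationForest. The feature names are hypothetical; the point is that no labeled data is involved, which is exactly what makes systems like this awkward to document under label-oriented data-quality rules.

```python
# Minimal sketch: unsupervised anomaly detection on unlabeled network
# telemetry with scikit-learn's IsolationForest. The features
# (bytes_sent, requests_per_min) are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" traffic; no labels are ever assigned.
normal_traffic = rng.normal(loc=[500.0, 30.0], scale=[50.0, 5.0], size=(1000, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)  # trained purely on unlabeled observations

new_events = np.array([
    [510.0, 29.0],    # resembles normal traffic
    [9000.0, 400.0],  # extreme outlier, likely anomalous
])
print(model.predict(new_events))  # [ 1 -1 ]  (1 = normal, -1 = anomaly)
```

Under the AIA’s high-risk data governance requirements, even the unlabeled telemetry feeding a model like this would need documented quality controls and human oversight.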

Ultimately, we think the EU AIA will lead to global precedents for AI use. Legacy data protection vendors, security providers and infrastructure software companies are not appropriately architected for modern large language models. But how they are architected is not the only issue.

Increased regulation could open the door for a crop of new startups to provide tools, products and services that assist corporations in meeting these new government requirements and mandates.

Many AI companies are already headed in this direction — the White House recently announced that OpenAI, Google and five other AI companies have committed to developing systems for AI “watermarks.”

At the same time, as mentioned above, several AI applications remain completely unregulated. It remains to be seen where the dust settles on regulation here, for both closed and open source, but there will be many opportunities to build large companies off the back of this.

CRV has long backed open source in the cloud-first world: we invested early on and led rounds in fantastic companies like Kong, Vercel, Chromatic, FleetDM, Tailscale and many others. We believe open source development plays an increasingly core role in the ecosystem and will likely continue to do so in the AI-first world.

Emerging independent research labs are rapidly open sourcing the results of major lab research. Decreased costs and increased access to computational resources have allowed state-of-the-art AI research to emerge from smaller, previously unknown labs, even as hardware remains largely consolidated with NVIDIA. One notable example is Meta AI’s open source foundation model, Llama 2, which is licensed for commercial use in downstream applications. We will likely see companies coming out of these labs in the coming months and years.
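To illustrate how accessible these open models have become, here is a minimal sketch of generating text with Llama 2 through the Hugging Face transformers library. It assumes you have accepted Meta’s license for the gated meta-llama/Llama-2-7b-hf checkpoint and are authenticated with Hugging Face; the prompt is just a placeholder.

```python
# Minimal sketch: text generation with Llama 2 via Hugging Face transformers.
# Assumes Meta's license has been accepted for the gated repo and that you
# are logged in (e.g., via `huggingface-cli login`).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # CPU by default

prompt = "Open source foundation models matter because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```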

The data bears this out: open source is a large part of the AI story. The Data Science Salon’s 2023 report shows that 58 percent of respondents currently build custom AI systems in-house with open source software.

From Data Science Salon’s State of AI in the Enterprise report.

However, the proposed legislation in Europe raises concerns about how open source foundation models will fare under this level of regulation.

AI systems carry unique risks that require additional security measures. Both foundation models and high-risk AI must maintain robust cybersecurity protocols, along with infrastructure that ensures data protection and predictable output.

Advanced techniques are needed to prevent bias within datasets and to protect them from manipulation, poisoning and other attacks that erode output quality. Some players to consider here are companies like Fortify AI, Protect AI, Breeze and others.

Federated data governance may become crucial for maintaining compliance while retaining control of data. Under this model, governance standards are defined centrally, but local domain teams retain the autonomy and resources to implement those standards as needed in their own environments (a minimal sketch follows below). Players in this space include Credo, Credal, or even a private OpenAI instance.
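To make the pattern concrete, here is a purely illustrative Python sketch of federated data governance; the policy names and checks are hypothetical and not drawn from any vendor above. A central registry defines the standards once, and each domain team supplies its own local enforcement logic.

```python
# Illustrative sketch of federated data governance: policies are defined
# centrally, while each domain plugs in its own local enforcement.
# All policy names and checks below are hypothetical examples.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Policy:
    name: str
    description: str

# The central governance team owns the policy definitions.
CENTRAL_POLICIES = [
    Policy("no_pii", "Records must not contain raw personal identifiers."),
    Policy("provenance", "Every record must carry a documented source."),
]

LocalCheck = Callable[[dict], bool]

class DomainGovernance:
    """A domain team's local implementation of the central standards."""

    def __init__(self, domain: str):
        self.domain = domain
        self.checks: dict[str, LocalCheck] = {}

    def implement(self, policy: Policy, check: LocalCheck) -> None:
        self.checks[policy.name] = check

    def validate(self, record: dict) -> list[str]:
        # A policy with no local implementation counts as a violation.
        return [p.name for p in CENTRAL_POLICIES
                if not self.checks.get(p.name, lambda r: False)(record)]

# Example: the HR domain implements the central policies its own way.
hr = DomainGovernance("hr")
hr.implement(CENTRAL_POLICIES[0], lambda r: "ssn" not in r)
hr.implement(CENTRAL_POLICIES[1], lambda r: bool(r.get("source")))

print(hr.validate({"name_hash": "ab12", "source": "ats_export"}))  # []
print(hr.validate({"ssn": "123-45-6789"}))  # ['no_pii', 'provenance']
```

The design choice mirrors the prose: the definitions live in one place, but enforcement is not centralized, so each domain can satisfy the same standard with tooling suited to its own environment.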

Just as the GDPR (the General Data Protection Regulation, the European regulation implemented in 2018 to enhance EU citizens’ control over the personal data that companies can legally hold) accelerated cloud and data infrastructure adoption, the AIA will likely accelerate AI-specific infrastructure, data governance and oversight practices in regulated industries.

As the conversation and subsequent oversight around AI evolves, so too will the landscape of AI research, development and regulation.

While it is hard to predict how the regulation in Europe will impact companies in the U.S. and around the world, the much-criticized regulatory burdens on open source models suggest that revisions are likely ahead.

In the meantime, we at CRV are bullish on the opportunities for smart entrepreneurs to build in this space. The current AI boom feels much like the early days of the internet. AI will become part of most of our software stack — regulated or otherwise — and we are excited to be part of it. CRV backs companies from the idea stage through to IPO.

If you are a founder with ideas of how to navigate this brave new world, our team would love to connect with you.

CRV is a VC firm that invests in early-stage Seed and Series A startups. We’ve invested in over 600 startups including Airtable, DoorDash and Vercel.