Rubrik’s IPO filing reveals an AI governance committee. Get used to it.

Image Credits: William_Potter / Getty Images

Tucked into Rubrik’s IPO filing this week — between the parts about employee count and cost statements — was a nugget that reveals how the data management company is thinking about generative AI and the risks that accompany the new tech: Rubrik has quietly set up a governance committee to oversee how artificial intelligence is implemented in its business.

According to the Form S-1, the new AI governance committee includes managers from Rubrik’s engineering, product, legal and information security teams. Together, they will evaluate the potential legal, security and business risks of using generative AI tools and ponder “steps that can be taken to mitigate any such risks,” the filing reads.

To be clear, Rubrik is not an AI business at its core — its sole AI product, a chatbot called Ruby that it launched in November 2023, is built on Microsoft and OpenAI APIs. But like many others, Rubrik (and its current and future investors) is considering a future in which AI will play a growing role in its business. Here’s why we should expect more moves like this going forward.

Growing regulatory scrutiny

Some companies are adopting AI best practices to take the initiative, but others will be pushed to do so by regulations such as the EU AI Act.

Dubbed “the world’s first comprehensive AI law,” the landmark legislation — expected to become law across the bloc later this year — bans some AI use cases deemed to carry “unacceptable risk” and defines other “high risk” applications. The bill also lays out governance rules aimed at reducing the risk that AI amplifies harms such as bias and discrimination at scale. This risk-rating approach is likely to be broadly adopted by companies looking for a reasoned way to adopt AI.

Privacy and data protection lawyer Eduardo Ustaran, a partner at Hogan Lovells International LLP, expects the EU AI Act and its myriad obligations to amplify the need for AI governance, which will in turn require committees. “Aside from its strategic role to devise and oversee an AI governance program, from an operational perspective, AI governance committees are a key tool in addressing and minimizing risks,” he said. “This is because collectively, a properly established and resourced committee should be able to anticipate all areas of risk and work with the business to deal with them before they materialize. In a sense, an AI governance committee will serve as a basis for all other governance efforts and provide much-needed reassurance to avoid compliance gaps.”

In a recent policy paper on the EU AI Act’s implications for corporate governance, ESG and compliance consultant Katharina Miller concurred, recommending that companies establish AI governance committees as a compliance measure.

Legal scrutiny

Compliance isn’t only meant to please regulators. The EU AI Act has teeth, and “the penalties for non-compliance with the AI Act are significant,” British-American law firm Norton Rose Fulbright noted.

Its scope also goes beyond Europe. “Companies operating outside the EU territory may be subject to the provisions of the AI Act if they carry out AI-related activities involving EU users or data,” the law firm warned. If it is anything like GDPR, the legislation will have an international impact, especially amid increased EU-U.S. cooperation on AI.

AI tools can land a company in trouble beyond AI-specific legislation, too. Rubrik declined to comment to TechCrunch, likely because of its IPO quiet period, but the company’s filing notes that its AI governance committee evaluates a wide range of risks.

The selection criteria and analysis include consideration of how use of generative AI tools could raise issues relating to confidential information, personal data and privacy, customer data and contractual obligations, open source software, copyright and other intellectual property rights, transparency, output accuracy and reliability, and security.

Keep in mind that Rubrik’s desire to cover its legal bases may also serve other purposes, such as showing that it is responsibly anticipating issues. That matters for a company that has previously dealt with not only a data leak and hack, but also intellectual property litigation.

A matter of optics

Companies won’t solely look at AI through the lens of risk prevention. There will be opportunities they and their clients don’t want to miss. That’s one reason generative AI tools are being implemented despite having obvious flaws like “hallucinations” (i.e. a tendency to fabricate information).

It will be a fine balance for companies to strike. On one hand, boasting about their use of AI could boost their valuations, no matter how real said use is or what difference it makes to their bottom line. On the other hand, they will have to put minds at rest about potential risks.

“We’re at this key point of AI evolution where the future of AI highly depends on whether the public will trust AI systems and companies that use them,” Adomas Siudika, privacy counsel at privacy and security software provider OneTrust, wrote in a blog post on the topic.

Establishing AI governance committees will likely be at least one way companies try to build that trust.
