
Inside Goldman Sachs' plans for AI, from helping non-tech workers do more with software to streamlining how code is documented

Dimitris Tsementzis, head of machine learning quants, and Marco Argenti, chief information officer, Goldman Sachs

  • Goldman Sachs' CIO and head of machine learning quants say we are at an inflection point with AI.
  • Large language models, the form of AI behind ChatGPT, could transform how Wall Street does business.
  • Marco Argenti and Dimitris Tsementzis outline three areas where the bank is experimenting with LLMs.

Every once in a while, a monumental technology advancement occurs that upends the way businesses are run. And according to one of the top tech execs on Wall Street, we're on the cusp of another thanks to recent breakthroughs with artificial intelligence. 

"I was born in the sixties. I've seen pre-computers, and I've been a nerd since the age of 10. I've been playing with pretty much everything and this thing, to me at least, instinctively feels like one of those two or three really big things that I've ever seen in my life," Marco Argenti, Goldman Sachs' chief information officer, told Insider of large language models (LLMs), the same form of AI behind ChatGPT. 

Since the launch of ChatGPT — the powerful AI chatbot that can produce human-like answers to just about any question or prompt — people and businesses alike have seen the early innings of what many believe to be a revolutionary technology.

The tech is changing everything from the way kids learn to how software developers code, and even who people date. Wall Street is no exception, with generative AI and large language models (the technology underlying ChatGPT, Google's Bard, and others) standing to shake up the business, from wealth management to investment banking. 

For Goldman Sachs, AI is nothing new. The bank has thrown its weight behind the technology ever since it hired Dimitris Tsementzis in 2018 to build out a team (now about 15 people) to lay the foundation for machine learning and AI at Goldman. With advances in generative AI and large language models, the realm of possibilities has been blown wide open. 

But with that potential come uncertainties around intellectual property, regulation, and privacy. Goldman, along with Citibank and JPMorgan, has blocked employees from accessing ChatGPT, but the bank is still working with the tech.

Argenti noted the bank's blocking of ChatGPT is no different from the standard practice of companies restricting unfettered internet access on work devices.

"There is safety and there is usefulness, and that intersection is where we need to navigate," Argenti said. 

Argenti and Tsementzis outlined three ways Goldman is experimenting with large language models.

Summarizing and extracting data from documents

Goldman's document-management process stands to improve from the use of generative AI, Argenti said. Banks deal with countless legal documents related to things like loans, mortgages, and derivatives. These unstructured documents, often written by lawyers, are extremely complex and aren't in a form that a machine can readily process. 

"This is a problem we've been working on for a long time," Argenti said. 

Because generative AI is really good at taking unstructured information and summarizing it, the bank can use the tech to extract the necessary info and put it in a form that's readable by a machine.

Most banks already use a form of AI, called natural-language processing, to extract data from unstructured documents. However, large language models could be a more efficient way to attack the same problem, potentially orders of magnitude faster than what is currently deployed. 
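
As a rough illustration of the idea, the sketch below asks a general-purpose LLM to pull key terms out of a free-form loan agreement and return them as machine-readable JSON. The OpenAI Python SDK, the model name, and the specific fields are illustrative assumptions, not a description of Goldman's internal tooling.

```python
# A minimal sketch of LLM-based extraction from an unstructured legal document.
# The SDK, model name, and field list are assumptions for illustration only.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_loan_terms(document_text: str) -> dict:
    """Ask the model to pull key terms out of free-form legal text
    and return them as a machine-readable JSON object."""
    prompt = (
        "Extract the borrower, lender, principal amount, interest rate, "
        "and maturity date from the agreement below. "
        "Respond with a single JSON object using those field names.\n\n"
        + document_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # constrain the reply to JSON
    )
    return json.loads(response.choices[0].message.content)

# Example usage (hypothetical file):
# terms = extract_loan_terms(open("loan_agreement.txt").read())
```

Once the output is structured, it can be validated and loaded into downstream systems like any other machine-readable record.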

Helping engineers parse through code documentation

A big time suck for software engineers is figuring out other people's code, Argenti said. When a company has thousands of engineers, making use of what's already available is paramount, but that involves understanding what is already out there. 

Most firms will have engineers keep track of the code, systems, and architecture they work on, but the "documentation is never really that good," Argenti said. 

AI's ability to summarize information could help. Goldman is figuring out how it can enable engineers to ask AI to explain a chunk of code and get a summary in plain English, Argenti said. 

It's a simple task, but at the scale at which Goldman's engineering organization operates (roughly 12,000 engineers), it can make a big difference, Argenti said. 
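
A minimal sketch of that kind of "explain this code" helper, again assuming a general-purpose LLM behind the OpenAI Python SDK rather than anything Goldman has actually built:

```python
# A sketch of asking an LLM to summarize a chunk of code in plain English.
# The SDK, model name, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def summarize_code(snippet: str) -> str:
    """Return a short plain-English summary of a code snippet."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You summarize code for engineers in two or three plain-English sentences."},
            {"role": "user", "content": f"Explain what this code does:\n\n{snippet}"},
        ],
    )
    return response.choices[0].message.content
```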

Getting non-tech workers to do more with software

Wall Street firms have long tried to unlock the benefits of low-code and no-code automation, which enables non-technical employees to automate processes through graphics or symbols on a computer screen instead of traditional programming. It's similar to embedding a tweet by clicking through option buttons on Twitter instead of writing the code yourself. 

But Argenti argues that these kinds of use cases have only been partially realized, and that LLMs could make for an attractive alternative. 

"For ages, we've had this dream of like citizen development, low-code, no-code, etc., right? Where you have people that are smart but they're not coders," Argenti said. "When we talk about automation, there are a lot of failed attempts," he added.

That's because a lot of automation use cases rely on what is called an imperative approach to programming, which means telling a system how something should be done, step by step, Argenti said. Robotic process automation, which is software that automatically repeats certain tasks, is one such example. This approach doesn't scale well, Argenti said, and if the person who created the automated workflow leaves, "you find that nobody else can actually go and figure those things out because those people are not developers at the end of the day."

But LLMs make it possible for non-technical employees to take a declarative approach to programming, which tells the system what the user wants, as opposed to how they want something done. 

Business folks and analysts could write a request in plain English, the same way they'd ask a colleague: Please do this. Be careful of that, and format it this way. 

It also makes it easier for non-technical workers to make changes after the fact, since it'd be presented in a way that they understand, as opposed to lines of code. 
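
The contrast can be sketched in a few lines. The first function spells out every step imperatively; the declarative version just states the desired result in plain English and hands it to a model. The sample data and the ask_llm helper are hypothetical, not part of any real workflow described in the article.

```python
# Imperative: spell out exactly how the report should be built, step by step.
def build_report_imperative(report_rows):
    filtered = [r for r in report_rows if r["region"] == "EMEA"]
    filtered.sort(key=lambda r: r["notional"], reverse=True)
    return [
        {"client": r["client"], "notional": f'{r["notional"]:,.0f}'}
        for r in filtered
    ]

# Declarative: describe what you want and let the model work out the steps.
request = (
    "From the attached rows, keep only EMEA clients, sort them by notional "
    "from largest to smallest, and format notionals with thousands separators."
)
# report = ask_llm(request, data=report_rows)   # ask_llm is a hypothetical helper
```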

Axel Springer, Business Insider's parent company, has a global deal to allow OpenAI to train its models on its media brands' reporting.

