What Generative AI Means for Investors with Adam Butler
Excess Returns · January 04, 2024
246
01:03:43 · 58.34 MB


Generative AI is probably the most rapidly developing technology we have ever seen. In this episode, we dig deep into it and its potential impact on both our lives and the investing world with ReSolve Asset Management CIO Adam Butler. Adam is one of the smartest people we know and has been dedicating a large amount of his free time to utilizing the technology, so we couldn't think of a better person to help us better understand it. We discuss how it works, what it means both for people who build investing strategies and for those who utilize them, and how it might change the world.


SEE LATEST EPISODES: https://www.validea.com/excess-returns-podcast

FIND OUT MORE ABOUT VALIDEA: https://www.validea.com

FIND OUT MORE ABOUT VALIDEA CAPITAL: https://www.valideacapital.com

FOLLOW JACK Twitter: https://twitter.com/practicalquant LinkedIn: https://www.linkedin.com/in/jack-forehand-8015094

FOLLOW JUSTIN Twitter: https://twitter.com/jjcarbonneau LinkedIn: https://www.linkedin.com/in/jcarbonneau


[00:00:00] Welcome to excess returns where we focus on what works over the long term in the markets.

[00:00:04] Join us as we talk about the strategies and tactics that can help you become a better

[00:00:07] long term investor.

[00:00:08] Justin Carbonneau and Jack Forehand are principals at Validea Capital Management.

[00:00:11] The opinions expressed in this podcast do not necessarily reflect the opinions of

[00:00:13] Validea Capital. No information on this podcast should be construed as investment advice.

[00:00:16] Securities discussed in the podcast may be holdings of clients of Validea Capital.

[00:00:19] Hey guys, this is Justin. In this episode, Jack and I sit down with Adam Butler of Resolve Asset

[00:00:23] Management to talk about AI, large language models, and ChatGPT. passionate and excited for this chat. And, you know, that's why we wanted to invite you back on, because, you know, by following you on Twitter and just knowing how much time you've spent looking at ChatGPT, thinking about AI, thinking about how it's going to affect the investment management

[00:01:40] business. And I think a lot, pretty much, to sort of chat with it, you give it a series of words, and it then goes back into the model and finds relationships between the words that you have just sent it

[00:03:00] and all of the other words that have been either given to it or produced by it

[00:04:22] over the course of answering the question, right?
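The loop described here, the model repeatedly conditioning on everything in the context so far (your words plus its own output) to choose the next word, can be pictured with a toy stand-in. This is purely illustrative: real LLMs score whole token vocabularies with attention, not a bigram count table, and the tiny "corpus" below is made up.

```python
# Toy autoregressive generation: each step re-reads the full context so far
# (prompt plus everything already generated) and appends the most likely next
# word. A bigram frequency table stands in for the language model.

from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word tends to follow which in the training text."""
    table = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, prompt, max_new_words=5):
    """Greedy generation: condition on the context, emit one word, repeat."""
    context = prompt.split()
    for _ in range(max_new_words):
        candidates = table.get(context[-1])
        if not candidates:
            break  # no continuation known for the last word
        context.append(candidates.most_common(1)[0][0])
    return " ".join(context)

table = train_bigrams("the model reads the prompt and the model writes the answer")
print(generate(table, "the model"))
```

The point of the sketch is only the shape of the loop: the output is fed back in as input, which is why a chat model "remembers" its own earlier sentences within one answer.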

[00:04:26] And it sounds trivial. machine that humans have created in the past. You know, I've heard lots of pundits refer to this technology as kind of like just a natural evolution in the arc of human development, you know, kind of akin to the printing press or the microchip or what have you.

[00:05:41] But you know, to me, this is a completely different class of technology because now you within ChatGPT, but I feel like it's kind of maybe newer with GPT-4, is that there seems to be like some type of, and I can't think of the exact example, but like it's giving me disclaimers, or it knows, if it gives a response to a prompt, if there's some risk associated with that, and that might not be the right word, but there's something that,

[00:07:04] you know, you should be aware of. It's almost at the model level, where the people that develop the model give it guidance. For example, they constrain it and say, you're not allowed to give explicit medical advice, you're not allowed to give explicit legal advice,

[00:08:21] you're not allowed to give explicit investment advice.

[00:08:23] And so what you'll notice is if you ask it

[00:08:26] a medical question, is injected. So you type in a question in ChatGPT, well, before your question, there's a system prompt that is injected that says stuff like, never give out dangerous advice.

[00:09:46] Never give out medical advice without describing risks.
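The injection Adam describes can be pictured as the provider silently prepending a hidden system message ahead of your question. A minimal sketch of that message shape follows; the guardrail wording and the exact mechanism OpenAI uses are not public, so everything below is illustrative only.

```python
# Sketch of system-prompt injection: the provider's hidden instructions are
# placed first in the message list, before the user's turn ever reaches the
# model. The guardrail text here is invented for illustration.

GUARDRAILS = (
    "Never give out dangerous advice. "
    "Never give out medical advice without describing risks."
)

def build_chat_payload(user_question):
    """Assemble messages in the shape the OpenAI chat API expects:
    the system turn always precedes the user turn."""
    return [
        {"role": "system", "content": GUARDRAILS},
        {"role": "user", "content": user_question},
    ]

payload = build_chat_payload("What dose of ibuprofen is safe?")
# payload[0] is the hidden system prompt; payload[1] is what the user typed
```

Because the model sees the system turn first on every request, the constraints apply no matter what the user types afterward.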

[00:11:02] Never give out. available medical knowledge base, but it's not really been tested as a physician who, you know, is going to give clinical guidance to patients. And so the model developers have put constraints on it so that it's not able to do that. And until we're able to thoroughly test its capacities, then that's probably the responsible thing to do. Yeah, I just saw this the other day, because my dad wants to update his will. And so

[00:11:05] I figured like, I'll come up with this huge prompt about exactly what he wants in his will and all and it's possible to do it, but the model developers over time continue to get better at preventing that. But I did a CFA event here in the summer using GPT-4, the model that was released in June,

[00:12:21] in the middle of June this year, and was able to go through the process of fully developing

[00:12:27] an investment policy statement. focus experts reject AI because they evaluate it against their expertise in their narrow domain and find it wanting. This completely misses the point. For now, AI is not above the 90th percentile, i.e. expert level, in most domains. The magic is that with naive prompting, it takes you to the 80th percentile in almost every domain; with better prompting, it takes you further. The real magic is it makes everyone above average in every domain relative to where they were before AI.

[00:13:40] So can you just talk about that a little bit?

[00:13:42] Sure.

[00:13:43] I mean, there's actually been some really great studies on this. But it wasn't sort of a miraculous improvement now, you know; they evaluated this over a few weeks, and I do think that people get dramatically better at this as they use it, and the productivity gains, you know, do scale with use. The other thing is that if you integrate code

[00:15:02] with the models, your ability to scale just

[00:16:02] they had come from, right? But when they got the evaluations,

[00:16:04] what they found was that those in the bottom quintile

[00:16:10] before using ChatGPT had about a 40% lower quality

[00:16:14] evaluation than the top quintile.

[00:16:18] After using GPT-4, it went up to exactly the same level.

[00:16:23] So the lowest quintile consultants

[00:16:26] were able to deliver exactly the same quality of work complicated across a wide variety of domains. I think about that in terms of investing. Like let's say we've got a hedge fund that's got a bunch of analysts. Like what you said is interesting to me because on one hand like the lower level analysts might get a lot better, but on the other hand the result of this could be, you said that the top people don't see as much of an improvement, but if this makes the elite analysts really, really good,

[00:17:40] like does that mean we just get rid of the lower level

[00:17:42] analysts and you know, we focus on the elite analysts

[00:17:44] and enhancing them with this.

[00:17:45] So how do you think about that? and analysis and inference from those documents as a mid-level analyst can. You know, I think general practitioners or family doctors will be working with these kinds of models very soon.

[00:19:00] LSA working with because so much of what it is to provide or even a fraction of all of the knowledge workers that are currently employed today. And if I could just sort of extend that a little bit, because I was waiting anxiously to see how Microsoft, who partnered with OpenAI, providing them with access to Azure cloud resources

[00:20:22] for a lot of their training and deployments of their models.

[00:21:25] that kind of stuff, right? Well, if you need far fewer employees, obviously that's a massive hit to revenues. So instead, what they did was they just injected enough of the technology

[00:21:31] to empower employees, make their lives a little bit easier without deploying what the true

[00:21:38] potential would be, which is if I've got a history of all of these new machines. Right? And we're only scratching the surface with how we're embedding them in everybody's work day and everyone's life. Yeah, on this issue of knowledge workers, it's interesting to me: like when this first came out, the general consensus was the first place this is gonna hit is something like truck drivers.

[00:23:01] And it turns out it's actually the opposite of that.

[00:23:03] Like the truck drivers are pretty safe

[00:23:05] because figuring that out's been really complicated,

[00:23:06] but the knowledge workers, this is a major issue. writing completely new and novel things, but it's also fantastic at taking the transcript from a meeting and turning it into a set of emails, a to-do list for all of the parties involved and coordinate with other parties

[00:24:21] outside of the organization to accomplish these tasks.

[00:25:24] But my general intuition is, number one, this is where I think I have the highest confidence,

[00:25:26] this tech because of the way it's been deployed

[00:25:30] and because of the legal decisions

[00:25:31] that have come down the pipe so far,

[00:25:36] I think this is the start of a Cambrian explosion

[00:25:40] of new entrepreneurship.

[00:25:43] Because, you know, what this has done, You don't need to know how to code. You don't need to know how to build a user interface or a back-end database. None of that stuff. You can accomplish all of that with plain language. And OpenAI will pay you based on the number of people that use your chatbot, or the amount that your chatbot is utilized.

[00:27:02] Because when your chatbot is utilized,

[00:27:04] it's using OpenAI's language models number of potential use cases. Because what you have is an intelligent agent that's able to make use of any exposed piece of code that does any task that you can possibly imagine, right? It can run back tests. It can get data. open source model, which means they haven't given away how to fully build the model. But you can use any data set papers that, I mean, even papers that I've seen over the last week, that improve the efficiency of GPT-4 level models on the order of 11 times in one paper and 50 times in another. I don't even know if maybe

[00:31:04] the 11 times and the 50 times compound. They maybe don't even need to get that at all. Just GPT-4 alone is completely transformative for all knowledge work. So, I think that part's really exciting. As for how it impacts stock markets, I think it may allow for major efficiencies across the board for most companies,

[00:32:23] but it's gonna take a really long while to scale.

[00:32:27] Like you've gotta train people on how to make use I do wonder about that. I see the potential for just a gargantuan number of smaller companies. I do wonder where the 500 million billion, you know, to prompt it and get it there. So the lift is, you know, not nearly as heavy. It just seems like it's gonna speed up maybe the innovation cycle, and also, to your point, it brings it down to the individual level, whereas before, in the last big, you know, internet, if you want to use that as an example,

[00:35:03] you know what you needed massive teams to do. you how to build the tools that are gonna make best use of the AI, right?

[00:36:22] And there's this recursive innovation cycle

[00:36:26] that I think is just unbelievably powerful. it at the system development level yet. I have used it for prototyping purposes. So for example, it is now trivial to take an just a bit more expensive, and you do need to know a little bit of programming in order to make effective use of it. You know, it can't program Portfolio 123 for you yet, right? But I think if I owned Portfolio 123, what an amazing way to increase the number

[00:39:03] of like the size of your available market. Let's dig in. Can we replicate it on an investment universe that's actually tradable? Or that we can use with our back-end trading infrastructure? Or, you know, they attempted this on equities, does this work in futures, right?

[00:40:20] Whatever, I'm sure you guys have gone through

[00:40:22] this exercise a bunch of times.

[00:40:24] So now, instead of having to do that, you know, I can hire junior people to do that, because the initial stage of this is: can I replicate what's in the tables?

[00:41:40] Right? back-end trading process to things that the sales team and the marketing team can read so they can answer questions, technical questions from potential clients or clients, they can write articles, et cetera, right? So exposing some of this back-end data,

[00:43:01] a lot of it's in like JSON format or CSV files.
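Exposing that kind of back-end JSON or CSV to a chat layer usually means wrapping it in a small "tool" the model can call, so the answer comes from the data rather than from the model's memory. A minimal sketch follows; the file contents and field names are hypothetical, and a real deployment would register the function via the API's tool-calling interface rather than call it directly.

```python
# Sketch of a back-end data lookup tool: the chat layer calls this function
# when a question needs a fact from the firm's data. The JSON blob and its
# field names below are invented for illustration.

import json

BACKEND_JSON = '{"strategy": "managed_futures", "sharpe": 1.1, "max_drawdown": -0.12}'

def lookup(field):
    """Tool the chat layer would expose: parse the back-end JSON and
    return a single named field, or a fallback string if it's missing."""
    data = json.loads(BACKEND_JSON)
    return data.get(field, "unknown field")
```

Because the model only ever sees the tool's return value, a sales or marketing question like "what's the Sharpe of this strategy?" gets grounded in the actual records.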

[00:43:06] Maybe the CSV, I know we had a call, the tool that will bring that JSON file in for me to give you the answer, right? That's a really nice tool to have. it's a big backtesting episode, and that there would be risk of the technology just coming up with some optimal strategy that is more random than

[00:46:46] the framework you set up, you're assuming that the language model doesn't understand good testing practices, right?

[00:46:49] That it doesn't know ideas like holdout, for example, or survivorship bias or look ahead

[00:46:58] bias or the best practices in order to develop it. Now, I would argue lots of experienced quants do a lot of really not so good back testing

[00:48:20] and don't necessarily have a great grasp

[00:48:23] of some of the nuances of market regime and these were your best performers. And, you know, just like you would if you were showing anyone a back test and you had to talk to it, you could prompt it to get that information. So I kind of got to walk that back a little bit, I guess. Yeah.
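The holdout discipline mentioned above boils down to a chronological split: tune strategy parameters only on earlier data, then score once on a later slice the tuning never saw, so no look-ahead leaks in. A toy sketch, with made-up returns and a deliberately simplified "optimization":

```python
# Toy holdout split for a backtest: the holdout period is strictly later in
# time than the training period (no shuffling), which is what protects
# against look-ahead bias. The return series below is invented.

def split_holdout(returns, holdout_frac=0.3):
    """Chronological split: earlier observations train, later ones test."""
    cut = round(len(returns) * (1 - holdout_frac))
    return returns[:cut], returns[cut:]

def best_lookback(train, lookbacks=(2, 3, 4)):
    """Stand-in for a backtest optimization: pick the moving-average
    lookback with the best mean signal on the training slice only."""
    def score(lb):
        sigs = [sum(train[i - lb:i]) / lb for i in range(lb, len(train))]
        return sum(sigs) / len(sigs)
    return max(lookbacks, key=score)

rets = [0.01, -0.02, 0.03, 0.01, -0.01, 0.02, 0.00, 0.01, -0.02, 0.01]
train, holdout = split_holdout(rets)
lb = best_lookback(train)               # chosen without peeking at holdout
oos_mean = sum(holdout) / len(holdout)  # evaluated once, on unseen data
```

Survivorship bias and look-ahead bias live in the data itself (which names are in the universe, when information became known), so a clean split like this is necessary but not sufficient; the point-in-time integrity of the inputs matters too.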

[00:49:40] And you know, it knows the you a surprisingly good answer, and it'll help you execute it. I'm just curious, like, if I was thinking about someone I wanted

[00:51:00] to construct an investment strategy for me,

[00:51:02] like, you were someone I would think of,

[00:51:04] and you probably have decades of experience

[00:51:06] learning all the things you need to know

[00:51:08] to construct an investment strategy, that the machines are going to repeatedly use in order to execute tasks, there always has to be a human in the background to blame for when something goes wrong, right? And it will take a profound shift in policy

[00:52:20] and how we think about that balance

[00:52:24] of accountability and responsibility but maybe going towards sort of medical research, right? I think an AGI, a generalized intelligence that exceeds the intelligence of any human across all domains, may get to the point where it is able to come up with new ideas and new research directions

[00:53:40] and for a time instruct humans

[00:53:44] about how to do the tasks that are required in order to that I think is much shorter than most people realize. On your point about the human to blame, I was thinking about self-driving cars, because our standards for self-driving cars are gonna be very, very different than our standards for human drivers. Even if they're a hundred times safer, that's not good enough, because we're still gonna highlight the errors, and it is probably the same thing in the financial business. Like if you screw up my financial life, even if the computer ends up being a hundred times better than the person,

[00:55:00] I still want the person to blame sitting behind it.

[00:55:03] Exactly, that's algorithm aversion.

[00:55:06] And I mean, it's just, there is a lot, I would argue most of the value of an asset management company is all of the think people, if they haven't used it, they should really engage the technology to see the real power, because that's the only and you

[00:57:42] know, everyone will use it differently. Some people will use it to do creative the range of what could happen over the next 10 years is now vastly wider than it was before we saw the existence proof of this level of AI. network by entrepreneurs building small niche apps. And so I don't know. I don't know how that affects these larger public companies. You know, like why some of these people at places like OpenAI and other places like Microsoft are probably thinking about this more from a philosophical point of view around the impact that this

[01:01:40] is going to have for society and the future a human can add to the bottom line. When machines are able to add more value to the bottom line than any humans, then obviously that becomes a very different kind of equation. And I'm excited for humanity to come to terms with that because again, I feel like that's long overdue.

[01:03:03] Thank you very much, Adam. We really appreciate it. Happy holidays to you and your family.