{"id":151279,"date":"2025-01-04T05:26:18","date_gmt":"2025-01-04T05:26:18","guid":{"rendered":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-does-investment-research-make-sense-in-the-age-of-ai\/"},"modified":"2025-01-04T05:26:18","modified_gmt":"2025-01-04T05:26:18","slug":"rewrite-this-title-in-arabic-does-investment-research-make-sense-in-the-age-of-ai","status":"publish","type":"post","link":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-does-investment-research-make-sense-in-the-age-of-ai\/","title":{"rendered":"rewrite this title in Arabic Does investment research make sense in the age of AI?"},"content":{"rendered":"<p>Summarize this content to 2000 words in 6 paragraphs in Arabic Unlock the Editor\u2019s Digest for freeRoula Khalaf, Editor of the FT, selects her favourite stories in this weekly newsletter.The writer is a former global head of research at Morgan Stanley and former group head of research, data and analytics at UBSThe late Byron Wien, a prominent markets strategist of the 1990s, defined the best research as a non-consensus recommendation that turned out to be right.\u00a0Could AI pass Wien\u2019s test of worthwhile research and make the analyst job redundant? Or at the very least increase the probability of a recommendation to be right more than 50 per cent of the time?Well, it is important to understand that most analyst reports are devoted to the interpretation of financial statements and news.\u00a0This is about facilitating the job of investors. Here, modern large language models simplify or displace this analyst function.Next, a good amount of effort is spent predicting earnings. Given that most of the time profits tend to follow a pattern, as good years follow good years and vice versa, it is logical that a rules-based engine would work. 
And because the models do not need to \u201cbe heard\u201d by standing out from the crowd with outlandish projections, their lower bias and noise can outperform most analysts\u2019 estimates in periods of limited uncertainty. Academics wrote about this decades ago, but the practice did not take off in mainstream research: to scale, it required a good dose of statistics or the building of a neural network, skills rarely found in an analyst\u2019s toolkit.<\/p>\n<p>Change is under way. Academics from the University of Chicago trained large language models to estimate the variance of earnings. These outperformed the median estimates of analysts. The results are fascinating because LLMs generate insights by understanding the narrative of the earnings release; they do not have what we may call numerical reasoning, the edge of a narrowly trained algorithm. And their forecasts improve when instructed to mirror the steps that a senior analyst takes. Like a good junior, if you wish.<\/p>\n<p>But analysts struggle to quantify risk. Part of the issue is that investors are so fixated on getting sure wins that they push analysts to express certainty when there is none. The shortcut is to flex the estimates or multiples a bit up or down. At best, by taking a series of similar situations into consideration, LLMs can help. Playing with the \u201ctemperature\u201d of the model, a proxy for the randomness of its results, we can make a statistical approximation of bands of risk and return. Additionally, we can demand the model give us an estimate of the confidence it has in its projections. Perhaps counter-intuitively, this is the wrong question to ask most humans. We tend to be overconfident in our ability to forecast the future. 
And when our projections start to err, it is not unusual for us to escalate our commitment. In practical terms, when a firm produces a \u201cconviction call list\u201d, it may be better to think twice before blindly following the advice.<\/p>\n<p>But before we throw the proverbial analyst out with the bathwater, we must acknowledge significant limitations to AI. As models try to give the most plausible answer, we should not expect them to discover the next Nvidia or to foresee another global financial crisis; such stocks and events buck any trend. Neither can LLMs suggest something \u201cworth looking into\u201d on an earnings call when the management seems to avoid discussing value-relevant information. Nor can they anticipate the gyrations of the dollar caused by, say, political wrangles. The market is non-stationary and opinions on it are changing all the time. We need intuition and the flexibility to incorporate new information into our views. These are qualities of a top analyst.<\/p>\n<p>Could AI increase our intuition? Perhaps. Adventurous researchers can use the much-maligned hallucinations of LLMs in their favour by dialling up the randomness of the model\u2019s responses. This will spill out a lot of ideas to check. Or they can build geopolitical \u201cwhat if\u201d scenarios, drawing more alternative lessons from history than an army of experts could provide. Early studies suggest potential in both approaches. This is a good thing, as anyone who has sat on an investment committee appreciates how difficult it is to bring alternative perspectives to the table. Beware, though: we are unlikely to see a \u201cspark of genius\u201d, and there will be a lot of nonsense to weed out.<\/p>\n<p>Does it make sense to have a proper research department or to follow a star analyst? It does. But we must assume that a few of the processes can be automated, that some could be enhanced, and that strategic intuition is like a needle in a haystack. 
It is hard to find non-consensus recommendations that turn out to be right. And there is some serendipity in the search.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The writer is a former global head of research at Morgan Stanley and former group head of research, data and analytics at UBS. The late Byron Wien, a prominent markets strategist of the 1990s, defined the best research as a non-consensus recommendation that turned out to be right.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[63],"tags":[],"class_list":{"0":"post-151279","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-tech"},"_links":{"self":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/151279","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/comments?post=151279"}],"version-history":[{"count":0,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/151279\/revisions"}],"wp:attachment":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/media?parent=151279"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/categories?post=151279"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/tags?post=151279"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}