{"id":276426,"date":"2025-04-15T03:34:05","date_gmt":"2025-04-15T03:34:05","guid":{"rendered":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-googles-elizabeth-reid-human-curiosity-is-boundless-and-people-ask-a-lot-of-questions\/"},"modified":"2025-04-15T03:34:05","modified_gmt":"2025-04-15T03:34:05","slug":"rewrite-this-title-in-arabic-googles-elizabeth-reid-human-curiosity-is-boundless-and-people-ask-a-lot-of-questions","status":"publish","type":"post","link":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-googles-elizabeth-reid-human-curiosity-is-boundless-and-people-ask-a-lot-of-questions\/","title":{"rendered":"rewrite this title in Arabic Google\u2019s Elizabeth Reid: \u2018Human curiosity is boundless and people ask a lot of questions\u2019"},"content":{"rendered":"<p>Summarize this content to 2000 words in 6 paragraphs in Arabic Elizabeth Reid has over the past year led Google\u2019s push to reinvent its core product: search. About a year ago her team launched the company\u2019s biggest revamp in years with AI Overviews, in which generative artificial intelligence models summarise search results.The feature began tentatively, with the AI summaries prompting ridicule when they advised users that eating rocks can be healthy and told others to glue cheese to pizza. Since then, Reid says, the company has worked to balance accuracy and usefulness, and is seeing people change the way they seek information online.In this conversation with the Financial Times\u2019 AI correspondent Melissa Heikkil\u00e4, Reid talks about the future of AI-powered search and how it is changing the business model of the internet.Melissa Heikkil\u00e4: You graduated from Dartmouth College, which is where the definition of AI was first conceived in 1956. Tell me about your journey to AI. Has Dartmouth influenced you in any way?Elizabeth Reid: Dartmouth definitely got me into computer science. I did very little of it in high school. 
I went to a small school in Massachusetts whose idea of computer classes at the time was typing and learning to use Microsoft Excel and Word. I did a little programming on my graphing calculator because they told me I couldn\u2019t take this class unless I knew how to do that.<\/p>\n<p>And I went to Dartmouth, thinking I was going to go into physics. I was good at maths. I did an internship in my freshman summer, and it was in materials science and, in theory, was really interesting [but] I wanted something more applied. So, I thought I would go into engineering physics.<\/p>\n<p>I took [a computer science class] at the same time I was taking thermodynamics and physics. And I spent time doing extra credits for computer science, [rather than] focusing as much as I probably should have on my physics. I talked to Professor [Thomas] Cormen, a longtime Dartmouth professor, and he convinced me to switch into computer science.<\/p>\n<p>Then I needed a job. It was 2003. Dartmouth had a good computer science department, but it was not Stanford or MIT or Carnegie Mellon. [Cormen] had a previous student who was at Google, and he helped me. He contacted her, and she helped me get an interview. So, I landed at Google in the New York office. There were about 10 engineers there and maybe 500 or 1,000 total employees [at the corporate headquarters] in Mountain View in California.<\/p>\n<p>I started in search on a project that became local search. At some point, that moved to the geo-map space, and I worked on engineering problems there. We sometimes treat AI as synonymous with generative AI but, really, AI isn\u2019t just about generative AI. And so, across time, in both local search and some of the maps, we were using AI in [many] different areas.<\/p>\n<p>I moved to [Google] Search a few years ago and was talking to the engineers about what they were doing [and] what was possible. 
The technology then had a tipping point, and we were suddenly able to do a lot more with it. It was pretty exciting.<\/p>\n<p>MH: You\u2019re working on one of the most concrete applications of AI. And it\u2019s been just under a year since AI Overviews was launched. Could you tell me a bit about the past year and how it\u2019s gone?<\/p>\n<p>ER: It\u2019s been a great launch. We see some of the strongest growth in [Google] Search and people issuing more queries. It lowers the difficulty of asking a question. It allows you to ask questions you couldn\u2019t ask before because the information wasn\u2019t on a single webpage. It was scattered across the web, and you\u2019d have had to pull it together.<\/p>\n<p>Something we\u2019ve seen over and over again with [Google] Search is that human curiosity is boundless. People have a lot of questions. A three-year-old will go: \u201cWhy, why, why, why, why?\u201d But, as an adult, you don\u2019t assume the person you ask the question knows the answer. You don\u2019t know if you have enough time. You don\u2019t know if it\u2019s worth the effort. And so you don\u2019t ask those questions. But if you lower the [barrier] to asking the question, then people just come. They have a lot more questions and they ask anything these days.<\/p>\n<p>MH: And how else are you seeing AI changing search?<\/p>\n<p>ER: Besides seeing people ask more questions, they ask longer questions. And the way you can think about a longer question is: do you have to take the actual question you have and turn it into the strictest \u201ckeywordese\u201d, or can you ask what\u2019s on your mind? With AI Overviews, people start asking these longer queries that express more of the constraints, more of the angles that they see.<\/p>\n<p>We see it resonate in particular with younger users. They are often the first to push expectations about what should be possible and to adapt to new technology. More and longer questions. 
They start asking more nuanced questions.<\/p>\n<p>AI Overviews is the start of thinking about transforming search. How can you think about transforming the whole page, organising the information in a way that\u2019s easier, even finding the right web links for you to go and pursue? We see a lot of growth in multi-modality: people asking these text-plus-image questions. So, it\u2019s not just, \u201cWhat is this image?\u201d or \u201cHere\u2019s my question\u201d, but combining them.<\/p>\n<p>MH: With ChatGPT, we\u2019ve seen some evidence that people are changing the way they behave. Are you thinking about adapting to more chat-based search functions?<\/p>\n<p>ER: We\u2019re not looking in that direction in the same way: to the extent that somebody will think of a chatbot as talking to something that feels personified, and you can ask it, \u201cHow was your day?\u201d, then expect a response.<\/p>\n<p>We think of search as more of an information-focused question. We are starting to experiment more with the idea that people sometimes have a question that has multiple parts plus a follow-up. And if you have a follow-up question, you don\u2019t want to start over from scratch. But it\u2019s more designed as: how can you further your journey without repeating it, the same way you might with a human \u2014 rather than designing it in the sense of: do you have a friend to chat with and ask them their views? It\u2019s much more about organising information.<\/p>\n<p>MH: There\u2019s been lots of criticism about search being broken: people having to add \u201cReddit\u201d as [a] search keyword or, when they search, getting hallucinations, or incorrect or misleading results, as answers. Or the AI answers are telling them to eat rocks or glue. How are you working to fix that?<\/p>\n<p>ER: I don\u2019t think adding the word \u201cReddit\u201d is a bad thing. Some people want more discussions. 
Others may want it from more mainstream or authoritative sources. So, the ability to express more of what you want can be a win. But what we have seen is that people, especially younger users, want to hear directly from others who have experienced something.<\/p>\n<p>And so, it\u2019s not just, \u201chere\u2019s a site that\u2019s done some research\u201d, but \u201cdid you go there yourself?\u201d Did you use the product yourself, or did you read about it and write some summary on it? We\u2019ve been doing a lot of work to figure out how we bring more human voices on.<\/p>\n<p>It is the case with generative AI that the technology sometimes makes mistakes. We saw, with eating rocks, that it was an extremely small use case. Despite our extensive work and testing, it was not the type of query we had seen previously. People didn\u2019t ask us, \u201cHow many rocks should I eat a day?\u201d People use new technology in ways that you hadn\u2019t imagined. We took it seriously. It didn\u2019t matter that it was a small incident.<\/p>\n<p>We put a lot of effort into making our models pay attention to factuality. That\u2019s a way that we make a different choice on search, compared with a chatbot. You typically have to choose between how factual it is versus how creative or how conversational it is.<\/p>\n<p>If you\u2019re building a product that\u2019s designed to be conversational, you might weigh it one way. But in the case of [Google] Search, we have weighted factuality and put extensive work into that. We have continued to raise the bar on that for the past several months.<\/p>\n<p>MH: Language models do have this technical flaw where it\u2019s easy for outsiders to inject unwanted prompts, and that then influences what the overviews say, or causes hallucinations. Are these models fit for purpose for something like search, which requires accuracy? 
And how do you think about these security weaknesses and how to fix them?<\/p>\n<p>ER: There\u2019s a difference between \u201ccan you hack the prompts\u201d versus \u201care they going to make occasional mistakes\u201d? Those are different things. From a security perspective, on the prompting, everyone is working to figure out how to avoid jailbreaking, or finding loopholes that make AI models bypass their guardrails. We\u2019re doing that. The way search is designed, in terms of how it uses the web, it tends not to have that problem in the same way that a traditional chatbot might.<\/p>\n<p>But in terms of, are they ready to be used, one of the things that we do rely on for search is the use of high-quality information from the internet. It\u2019s a different use, in that it\u2019s not so much the model generating everything and using a little bit of web, but putting the web at the centre of the design. Our models are trained not just to try and be highly accurate, but to try and base their answers on information on the web.<\/p>\n<p>That helps in two ways. One, it increases the accuracy and, two, we can then tell you where to look for further confirmation.<\/p>\n<p>AI Overviews aren\u2019t designed to be a standalone product. They are designed to get you started and then help you dive deeper. And so, when it\u2019s important, the idea is that you get some context on where to check, and then you can choose to double-check more on some of them.<\/p>\n<p>There are lots of questions people ask where, if you are just relying on webpages, it can be difficult. So, tech support is one of the AI Overviews areas that people rely on. The tech documents are not necessarily extensive online. Maybe there\u2019s a forum that talks about your problem, but maybe not. 
Or the forum talks about your problem, but you\u2019ve tried those two or three things.<\/p>\n<p>We don\u2019t show AI Overviews in every query. In order to show AI Overviews, we have to believe the response is high quality [and] a net value over the rest of the search results. If we think the rest of the search results page provides the answer, then we don\u2019t feel an obligation to respond.<\/p>\n<p>MH: What kind of behaviour change are you seeing in people double-checking sources? Are people doing that, or how often do they rely on the AI Overviews?<\/p>\n<p>ER: We do see people dive in, often to continue. That can be because they want to confirm data, but often it\u2019s not just because they want to confirm. They come in with an initial question and then they read something, and it sparks the next question. Or they really want to hear a more in-depth perspective now they have a sense of the topic and what parts they\u2019re interested in, and they can zero in. We see them engage.<\/p>\n<p>We see the clicks are of higher quality, because they\u2019re not clicking on a webpage, realising it wasn\u2019t what they wanted and immediately bailing. So, they spend more time on those sites. We see a greater diversity of websites come up. And that might be surprising. But if your question is long, finding a webpage that covers every part of your question is hard, and sometimes what you get is a very surface-level webpage.<\/p>\n<p>Technically it talks about every one of your words, but you didn\u2019t get much substance. With generative AI, we can go and look for web pages that talk about specific subsets. So, we\u2019ll take that query, and we\u2019ll turn it into multiple queries.<\/p>\n<p>And then we\u2019ll say, a-ha, OK, you\u2019re comparing two items that are not traditionally compared. Let me find a webpage about one item. Let me find a webpage about another. 
And then, you can expose websites that go into more depth on part of a topic, instead of just a webpage that is surface level about the whole topic.<\/p>\n<p>MH: Some people have criticised language models in search, not for the \u201ceat rocks\u201d mistakes but for subtle mistakes that people don\u2019t pick up if they\u2019re not experts in the field. How concerned are you about that?<\/p>\n<p>ER: Besides trying to place a high bar on quality, we take extra effort on things we call \u201cyour money or your life\u201d. So, questions of finance, questions on medical topics \u2014 we try to be thoughtful in our answers about both. Maybe we should not give a response at all; or, where we think we can give you something to get started, we should recommend you talk to a doctor, dig in more and find out details.<\/p>\n<p>And that\u2019s an important thing to do, because in many of those cases, you\u2019d prefer that they seek out a medical professional. But there are many people who don\u2019t necessarily have access to a medical professional. So, if you said: I\u2019m not going to answer anything, even some basics about a rash, and you\u2019re a stressed mother and it\u2019s the middle of the night, and you can\u2019t reach someone in some part of the world, do you not help them?<\/p>\n<p>We try to be clear that the technology is more experimental. [With] a lot of questions people ask, though, the stakes aren\u2019t as high. If you\u2019re trying to get tech support on figuring out how to fix your phone, hopefully we give you the right instructions, but if we don\u2019t give you exactly the right instructions on how to turn something on, you usually figure that out and then you can do more searching. 
But often we can get you there faster.<\/p>\n<p>MH: Going back to what you said about information and different publishers getting access, publishers have criticised AI search for dropping traffic and ad revenue. How are you avoiding this or taking this into account?<\/p>\n<p>ER: We do believe, in [Google] Search, that people continuing to hear from other people is essential and at the heart of our product. That\u2019s important, not just for a healthy ecosystem, but for users. Lots of times you want a quick answer, but often you want to hear from other people.<\/p>\n<p>I often use a fashion example: most of the people I know who want to delegate their choices to a bot for fashion are the set of people who weren\u2019t trying to spend any time on fashion before.<\/p>\n<p>The people who are following influencers and creators and others, they\u2019re not ready to go there. They want to hear from the people they trust. So, we spend a lot of time thinking about, how do we elevate the right content? How do we present it? We run different experiments. We design it to not just show links, but think about where it could add additional links within the response. Not just at the end, but maybe we can say, \u201caccording to the Financial Times\u201d and put a link to the Financial Times.<\/p>\n<p>What you see with something like AI Overviews, when you bring the friction down for users, is people search more, and that opens up new opportunities for websites, for creators, for publishers. And they get higher-quality clicks.<\/p>\n<p>MH: Is there a risk that you end up cannibalising your own product? Generative search is expensive, and this is changing the whole ad revenue model.<\/p>\n<p>ER: There are a lot of opportunities for ads. We show them both above and below in AI Overviews, but also within. 
Ads are relevant whenever users are going to make a choice that has some commercial aspect.<\/p>\n<p>When a query has predominantly commercial intent \u2014 like we think you want to buy something \u2014 then we might often show ads. But sometimes we think you probably don\u2019t want to [see] ads, and so we don\u2019t want to give everyone ads. But some people might want to buy something. If [you search] \u201chow to clean a stain out of the couch\u201d and the first thing we show is a bunch of ads, you\u2019re like, \u201cWhoa, I just wanted some advice.\u201d<\/p>\n<p>But if we\u2019re giving you ideas and then we say, \u201cif you\u2019re having trouble you might want to consider a stain-remover product\u201d, and then we give you some ads for stain-remover products, it feels natural and in context. And so, there are new opportunities.<\/p>\n<p>MH: Are we going to see a paid version of [Google] Search? And what would that include?<\/p>\n<p>ER: Never say never about what the future will hold. Ensuring that search in general, the essence of it, is available for free, to allow access to information, will be important. There may be some aspects for people who have subscriptions in the future. But the core of search we want to have available for everyone for free, yes.<\/p>\n<p>MH: What does the future of search look like? Are you thinking about other modalities or agents?<\/p>\n<p>ER: One thing that\u2019s really at the heart of it is this idea that we want to make search effortless. That assumes multimodality, because humans are wired not just to type or text or use voice. They see things. They use different ways of expressing what they want.<\/p>\n<p>It will get more personalised over time, not just in the results, but in how you learn. Are you somebody who learns well with videos or are you someone who prefers text? 
So, that ability for the technology to meet you where you are \u2014 can we make it as easy as possible for you to learn and explore the world?<\/p>\n<p>This question is about how you make use of tools. People use the word \u201cagents\u201d to mean different things. But the sense of \u201cyou can use tools to ask hard questions\u201d will continue. [Google] Search will remain an information product at heart, but sometimes information is hard and there\u2019s a lot of work.<\/p>\n<p>MH: Have your search habits changed in this AI era?<\/p>\n<p>ER: I personally ask more questions. So, one example: I work with people who are into cricket. They would say something, and it would make no sense. But I didn\u2019t have enough time to go and do an hour-long tutorial on cricket.<\/p>\n<p>I would start asking the question and finally get the answer. So, for instance, there\u2019s this thing in cricket where, if rain cuts the game short, the scoring uses an algorithm to decide how many runs you might have been able to score based on where the game is.<\/p>\n<p>I ask questions about a book my son is reading and is talking about. I haven\u2019t read the book, so I\u2019ll ask a question about it. I\u2019d love to be able to read all of the books at the rate he does. I don\u2019t have the time to do that. So, instead of thinking about the question and having it pop out, I find myself asking the question and learning about new things.<\/p>\n<p>This transcript has been edited for brevity and clarity.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Elizabeth Reid has over the past year led Google\u2019s push to reinvent its core product: search. 
About a year ago her team launched the company\u2019s biggest revamp in years with AI Overviews, in which generative artificial intelligence models summarise search results. The feature began tentatively,<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[63],"tags":[],"class_list":{"0":"post-276426","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-tech"},"_links":{"self":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/276426","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/comments?post=276426"}],"version-history":[{"count":0,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/276426\/revisions"}],"wp:attachment":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/media?parent=276426"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/categories?post=276426"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/tags?post=276426"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}