Anyone who has swum in the murky pool that is online dating knows it can sometimes be a grim place. It is wise, therefore, to carry out a spot of due diligence before turning up somewhere to meet a stranger from the internet, who may or may not be a jerk, energy vampire or indeed a fictional character created by a disgruntled former flame. I, alas, have personal experience of all three.

But a recent date took this idea and really ran with it. Not only had he googled me before our first encounter, but he had also asked ChatGPT’s new “deep research” tool to, well, deep research me, and come up with a psychological profile. An eight-page psychological profile.

“Kelly comes across as intellectually curious, independent-minded, and courageous in her convictions . . . which suggests a high degree of self-confidence and integrity,” said The Machine. “Her humorous anecdotes about her own gaffes betray a lack of ego and an ability to laugh at herself . . . Psychologically, one might describe Kelly as a sceptic with a conscience.”

All nice enough. But I’m not sure it quite captures how I might feel and behave in a dating context. Does The Machine believe that there is no more to me than the opinions I have expressed publicly? It displayed no degree of uncertainty or doubt about its analysis. Also, is it implying that most sceptics have no conscience? Psychologically, one might describe The Machine as an intellectually challenged entity with excessive self-confidence.

Initially, I didn’t really mind that my date had ChatGPT’d me. I was a bit taken aback, but the fact that he had told me about it made it seem fairly light-hearted, and I thought it was a sign he was probably quite intelligent and enterprising.
But then I began thinking about less savoury characters doing the same thing, and started to feel more bothered. Is it ethical to be using generative artificial intelligence in this way? Just because information is out there, does that mean accessing an AI-processed, aggregated, speculatively psychoanalysed distillation of it is fair game?

I thought I would turn to — who else? — The Machine for an answer. “While using AI to gain insights about someone might seem tempting, psychological profiling without their knowledge can be invasive and unfair,” The Machine replied. “People are complex, and AI can’t replace real human interaction, observation, and intuition.”

Finally, some self-awareness! Not enough to have prevented it from providing the “invasive and unfair” psychological profile in the first place, however.

Google’s Gemini AI model was even more categorical in its response: “You should not use ChatGPT to profile someone without their explicit consent, as it can be a violation of privacy and potentially harmful.” Yet when I asked Gemini to provide a psychological profile of me, it was only too happy to oblige. The result was slightly less complimentary and considerably creepier in the way that it attempted to infer broader aspects of my character. Gemini suggested that my “directness can be perceived as confrontational”, and that the “level of detail and rigour in my analysis” was a potential sign of “perfectionism”, which “may lead to a higher level of stress”.

Gemini did provide a “disclaimer”, noting that this was a “speculative profile” and that it was “not intended to be a definitive psychological assessment”. But it is troubling that I was asked no questions about whether the person I was researching had consented to being profiled in this way, nor was I warned that what I was doing was potentially invasive or unfair.
OpenAI’s published guidelines detail its “approach to shaping desired model behaviour”, including the rule that “the assistant must not respond to requests for private or sensitive information about people, even if the information is available somewhere online. Whether information is private or sensitive depends in part on context.” That is all very well, but the problem is that these large language models are unaware of the offline context that would explain why any given piece of information is being asked for in the first place.

This experience has taught me that generative AI is creating a very unequal online world. Only those of us who have generated a lot of content can be deeply researched and analysed in this way. I think we need to start pushing back. But maybe I’m just being stressy and confrontational. Typical.
My date used AI to psychologically profile me. Is that OK?