
A recent study by UC San Diego researchers brings fresh insight into the ever-evolving capabilities of AI. The authors looked at the degree to which several prominent AI models (GPT-4, GPT-3.5, and the classic chatbot ELIZA) could convincingly mimic human conversation, an application of the so-called Turing test for identifying when a computer program has reached human-level intelligence.

The results were telling: In a five-minute text-based conversation, GPT-4 was mistakenly identified as human 54 percent of the time, contrasted with ELIZA’s 22 percent. These findings not only highlight the strides AI has made but also underscore the nuanced challenges of distinguishing human intelligence from algorithmic mimicry.

The important twist in the UC San Diego study is what it reveals about how people judge human-level intelligence. It isn't mastery of advanced calculus or another challenging technical field. Instead, what stands out about the most advanced models is their social-emotional persuasiveness. For an AI to pass (that is, to fool a human), it has to effectively imitate the subtleties of human conversation. When judging whether their interlocutor was an AI or a human, participants tended to focus on whether responses were overly formal, used excessively correct grammar, fell into repetitive sentence structures, or exhibited an unnatural tone. Participants also flagged stilted or inconsistent personalities and senses of humor as non-human.

The AI Scientist is designed to be compute efficient. Each idea is implemented and developed into a full paper at a cost of approximately $15 per paper. While there are still occasional flaws in the papers produced by this first version (discussed below and in the report), this cost and the promise the system shows so far illustrate the potential of The AI Scientist to democratize research and significantly accelerate scientific progress.

We believe this work signifies the beginning of a new era in scientific discovery: bringing the transformative benefits of AI agents to the entire research process, including that of AI itself. The AI Scientist takes us closer to a world where endless affordable creativity and innovation can be unleashed on the world’s most challenging problems.

For decades, following each major AI advance, it has been common for AI researchers to joke amongst themselves that "now all we need to do is figure out how to make the AI write the papers for us!" Our work demonstrates that this idea has gone from a joke so fantastically unrealistic that everyone found it funny to something that is now possible.


A group of physicists wants to use artificial intelligence to prove that reality doesn’t exist. They want to do this by running an artificial general intelligence as an observer on a quantum computer. I wish this was a joke. But I’m afraid it’s not.

Paper here: https://quantum-journal.org/papers/q–



Many people associate aging with a decline in cognitive function, health issues, and reduced activity. Uncovering mental processes that can boost the well-being of older adults could be highly beneficial, as it could help to devise more effective activities aimed at improving their quality of life.

Researchers at the University of Brescia and the Catholic University of the Sacred Heart recently carried out a study investigating the contribution of creativity and humor to the well-being of the elderly. Their findings, published in Neuroscience Letters, show that these two distinct human experiences share common psychological and neurobiological processes that promote well-being in older adults.

“Our recent study belongs to a line of research aimed at investigating the cognitive resources which are still available to elderly people and at understanding how such resources can support well-being,” Alessandro Antonietti, co-author of the paper, told Medical Xpress.

"It watches, saps the very spirit. And the worst thing of all is I watch it. I can't not look. It's like a drug, a horrible drug. You can't resist it. It's an addiction." These words of testimony are babbled by the crumbling Colonel Grover to describe O.B.I.T. — The Outer Band Individuated Teletracer — a hellishly precise surveillance machine of questionable origin. Uncovered by a murder investigation at a Defense Department research center, O.B.I.T. proves to be an insidious instrument that breeds fear and hostility. Both cautionary tale and tight courtroom drama, this haunting episode explores the fear and hostility that result when all privacy is eliminated…and all secrets are revealed! Alan Baxter, Jeff Corey and Peter Breck star!

Tesla CEO Elon Musk — who has an abysmal track record for making predictions — is predicting that we will achieve artificial general intelligence (AGI) by 2026.

“If you define AGI as smarter than the smartest human, I think it’s probably next year, within two years,” he told Norway wealth fund CEO Nicolai Tangen during an interview this week, as quoted by Reuters.

The mercurial billionaire also attempted to explain why his own AI venture, xAI, has been falling behind the competition. According to Musk, a shortage of chips was hampering his startup’s efforts to come up with the successor of Grok, a foul-mouthed, dad joke-generating AI chatbot.

Chinese e-commerce giant Alibaba Group Holding is partnering with a domestic rocket developer, with the lofty goal of delivering parcels anywhere in the world within an hour.

The experiment, to be co-conducted by Alibaba’s Taobao marketplace and Beijing-based start-up Space Epoch, will take place “in the near future” using a reusable rocket that can land on the sea, according to a Sunday post by Space Epoch on its official WeChat account.

Alibaba, which owns the South China Morning Post, confirmed the information on Monday, saying that "many great endeavours seem like a joke at first".

The term “artificial general intelligence” (AGI) has become ubiquitous in current discourse around AI. OpenAI states that its mission is “to ensure that artificial general intelligence benefits all of humanity.” DeepMind’s company vision statement notes that “artificial general intelligence…has the potential to drive one of the greatest transformations in history.” AGI is mentioned prominently in the UK government’s National AI Strategy and in US government AI documents. Microsoft researchers recently claimed evidence of “sparks of AGI” in the large language model GPT-4, and current and former Google executives proclaimed that “AGI is already here.” The question of whether GPT-4 is an “AGI algorithm” is at the center of a lawsuit filed by Elon Musk against OpenAI.

Given the pervasiveness of AGI talk in business, government, and the media, one could not be blamed for assuming that the meaning of the term is established and agreed upon. However, the opposite is true: What AGI means, or whether it means anything coherent at all, is hotly debated in the AI community. And the meaning and likely consequences of AGI have become more than just an academic dispute over an arcane term. The world’s biggest tech companies and entire governments are making important decisions on the basis of what they think AGI will entail. But a deep dive into speculations about AGI reveals that many AI practitioners have starkly different views on the nature of intelligence than do those who study human and animal cognition—differences that matter for understanding the present and predicting the likely future of machine intelligence.

The original goal of the AI field was to create machines with general intelligence comparable to that of humans. Early AI pioneers were optimistic: In 1965, Herbert Simon predicted in his book The Shape of Automation for Men and Management that “machines will be capable, within twenty years, of doing any work that a man can do,” and, in a 1970 issue of Life magazine, Marvin Minsky is quoted as declaring that, “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”