AI thinks like us—flaws and all: Study finds ChatGPT mirrors human decision biases in half the tests

Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations, showing biases like overconfidence or the hot-hand (gambler’s) fallacy, yet behaves unlike humans in others (e.g., not suffering from base-rate neglect or the sunk-cost fallacy).

Published in the Manufacturing & Service Operations Management journal, the study reveals that ChatGPT doesn’t just crunch numbers—it “thinks” in ways eerily similar to humans, including mental shortcuts and blind spots. These remain rather stable across different business situations but may change as AI evolves from one version to the next.
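Base-rate neglect, one of the biases the study probes, is the tendency to ignore how rare a condition is when judging a positive test result. A minimal sketch of the underlying arithmetic via Bayes’ rule (the numbers here are illustrative, not the study’s):

```python
# Base-rate neglect illustrated with Bayes' rule: even a fairly accurate
# test for a rare condition yields a surprisingly low posterior probability,
# which human intuition often overlooks.
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Hypothetical: 1% base rate, 95% sensitivity, 5% false-positive rate.
p = posterior(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)
print(round(p, 3))  # 0.161: far below the ~95% many people guess
```

A model that reports something close to 16% here is applying the base rate correctly; a model (or person) that answers near 95% is neglecting it.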

Lilly’s lepodisiran reduced levels of genetically inherited heart disease risk factor, lipoprotein(a), by nearly 94% from baseline at the highest tested dose in adults with elevated levels

Creative Rights In AI Coalition launches to protect copyright in government policy on generative AI

A new coalition of rights-holders has called on the government to support growth in the creative and tech sectors by protecting copyright ahead of an imminent AI consultation.

The BPI, PRS For Music, PPL, MPA and UK Music are among the group of publishers, authors, artists, music businesses, specialist interest publications, unions and photographers.

Launching today, the Creative Rights In AI Coalition has published three key principles for copyright and generative AI policy and a statement supported by all member organisations. The coalition is calling on government to adopt the principles as a framework for developing AI policy.

Here’s my take: I was in the music industry for many years, so I know how it operates. People pay royalties every time an artist’s music is used. My friend Ayub Ogada made an ungodly amount of money from just one album, which supported him for the rest of his life and beyond; his music still generates royalties. Much of that was due to the smarts of Rob Bozas, who ran royalties for Peter Gabriel’s Real World Records. AI companies will likewise have to start paying royalties to the creatives whose intellectual property they use to train their AI, just as royalties are paid in the music industry. Many AI companies may not be as profitable as they appear, given the liabilities arising from the intellectual property used to train their models; without that content, the AI could not be trained. Expect many lawsuits in the foreseeable future.

Who’s to blame when AI makes a medical error?

Assistive artificial intelligence technologies hold significant promise for transforming health care by aiding physicians in diagnosing, managing, and treating patients. However, the current trend of assistive AI implementation could actually worsen challenges related to error prevention and physician burnout, according to a new brief published in JAMA Health Forum.

The brief, written by researchers from the Johns Hopkins Carey Business School, Johns Hopkins Medicine, and the University of Texas at Austin McCombs School of Business, explains that there is an increasing expectation of physicians to rely on AI to minimize medical errors. However, proper laws and regulations are not yet in place to support physicians as they make AI-guided decisions, despite the fierce adoption of these technologies among health care organizations.

The researchers predict that how this plays out will depend on whom society considers at fault when the AI fails or makes a mistake, subjecting physicians to an unrealistic expectation of knowing when to override or trust AI. The authors warn that such an expectation could increase the risk of burnout and even errors among physicians.

INTERPOL Arrests 306 Suspects, Seizes 1,842 Devices in Cross-Border Cybercrime Bust

Law enforcement authorities in seven African countries have arrested 306 suspects and confiscated 1,842 devices as part of an international operation codenamed Red Card that took place between November 2024 and February 2025.

The coordinated effort “aims to disrupt and dismantle cross-border criminal networks which cause significant harm to individuals and businesses,” INTERPOL said, adding that it targeted mobile banking, investment, and messaging app scams.

The cyber-enabled scams involved more than 5,000 victims. The countries that participated in the operation include Benin, Côte d’Ivoire, Nigeria, Rwanda, South Africa, Togo, and Zambia.

Buried Timebombs: Saitama Sinkhole Draws Attention to Japan’s Aging Wastewater Infrastructure

The impact of the collapse extended far beyond the immediate disruption to traffic and businesses in the area, affecting wastewater services for 1.2 million residents in the prefecture. Authorities called on inhabitants of surrounding cities to reduce their water usage, pressing them to curtail activities like bathing and washing clothing. In addition, wastewater was collected to reduce the flow to the damaged pipe, with the effluent then chlorinated and released into a nearby river, potentially damaging the environment.

The wide scale of service disruption is linked to Japan’s approach to wastewater management. Wastewater operations are overseen by public sewage systems operated by a single municipality or by regional sewage systems operated jointly by multiple municipalities. The pipeline in Yashio was in the latter category and carried wastewater collected from 12 municipalities. While such regional management systems provide significant advantages to cities in terms of efficiency and cost savings, the failed pipe and resulting sinkhole illustrate the risk of widespread disruption of services when anything goes wrong.

Moreover, the incident laid bare the fragile state of Japan’s sewage infrastructure. Morita Hiroaki, who heads the prefectural panel studying the gargantuan task of repairing the damaged pipe, highlighted the direness of the situation when he warned that reconstruction could take at least two or three years to complete.

Keeping Japanese Children Safe in Cyberspace: Weighing the Roles of Government, Business, and Family

The use of smartphones in Japan is extending to younger and younger children, raising serious concerns about the dangers of social media. An online safety expert provides a snapshot of Japanese teens’ use of current platforms and considers the options for protecting children from cyberbullying, exploitation, and toxic content.

Neura Robotics bets on 4NE-1 humanoid robot to compete with China

Neura Robotics has built a diverse portfolio of robots, including MAiRA, the world’s first cognitive cobot. MAiRA uses artificial intelligence for autonomous operation and safe human interaction. The company also offers the MAV, a mobile robot for heavy load transport, and MiPA, a humanoid robot designed for tasks like serving trays in hospitals.

Through its cloud-based Neuraverse platform, Neura also creates cutting-edge software, in contrast to many robotics companies that only concentrate on hardware. Known as an “ecosystem for cognitive robotics,” the Neuraverse is a marketplace for robotic abilities and an operating system designed to spur innovation.

Many businesses displayed humanoid robots at CES 2025, demonstrating the momentum of the robotics sector. The humanoid robot “Melody,” created by Realbotix, is simple to assemble and disassemble. Meanwhile, China’s Lingbao CASBOT introduced its full-size bipedal humanoid robot, the “CASBOT 01.”