Summary: A new study suggests that ChatGPT’s healthcare-related responses are hard to distinguish from those provided by human healthcare providers.
The study presented 392 participants with a mix of responses from ChatGPT and from human providers, and found that participants identified chatbot and provider responses with similar accuracy.
However, trust varied with the complexity of the health-related task: participants trusted responses about administrative tasks and preventive care more than diagnostic and treatment advice.