
To explain or not? Online dating experiment shows need for AI transparency depends on user expectation

Artificial intelligence (AI) is said to be a “black box,” with its logic obscured from human understanding—but how much does the average user actually care to know how AI works?

It depends on the extent to which a system meets users’ expectations, according to a new study by a team that includes Penn State researchers. Using a fabricated algorithm-driven dating website, the team found that whether the system met, exceeded or fell short of user expectations directly corresponded to how much the user trusted the AI and wanted to know about how it worked.

The findings are published in the journal Computers in Human Behavior.

Chainlit AI Framework Flaws Enable Data Theft via File Read and SSRF Bugs

Security vulnerabilities were uncovered in the popular open-source artificial intelligence (AI) framework Chainlit that could allow attackers to steal sensitive data and potentially enable lateral movement within a susceptible organization.

Zafran Security said the high-severity flaws, collectively dubbed ChainLeak, could be abused to leak cloud environment API keys and steal sensitive files, or perform server-side request forgery (SSRF) attacks against servers hosting AI applications.

Chainlit is a framework for creating conversational chatbots. According to statistics shared by the Python Software Foundation, the package has been downloaded over 220,000 times over the past week. It has attracted a total of 7.3 million downloads to date.
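Zafran has not published the vulnerable code paths here, but arbitrary-file-read bugs in web frameworks commonly reduce to an unsanitized path parameter that lets "../" sequences escape the intended directory. A generic sketch of the defensive pattern, where the directory name and function are illustrative and not Chainlit's actual API:

```python
from pathlib import Path

# Hypothetical directory an app is allowed to serve files from.
ALLOWED_ROOT = Path("/srv/app/uploads").resolve()

def safe_read(user_supplied: str) -> bytes:
    """Resolve the requested path and reject traversal attempts
    like '../../etc/passwd' before touching the filesystem."""
    candidate = (ALLOWED_ROOT / user_supplied).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):  # Python 3.9+
        raise PermissionError(f"path escapes upload root: {user_supplied}")
    return candidate.read_bytes()
```

The key detail is resolving the combined path *before* the containment check, so symlinks and ".." components cannot smuggle the read outside the allowed root.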

New Android malware uses AI to click on hidden browser ads

A new family of Android click-fraud trojans leverages TensorFlow machine learning models to automatically detect and interact with specific advertisement elements.

The mechanism relies on machine-learning-based visual analysis rather than predefined JavaScript click routines, and does not use the script-based DOM-level interaction of classic click-fraud trojans.

The threat actor is using TensorFlow.js, an open-source library developed by Google for training and deploying machine learning models in JavaScript. It allows AI models to run in browsers or on servers using Node.js.

Chinese military says it is developing over 10 quantum warfare weapons

China’s military says it is using quantum technology to gather high-value military intelligence from public cyberspace.

The People’s Liberation Army said more than 10 experimental quantum cyber warfare tools were “under development”, many of which were being “tested in front-line missions”, according to the official newspaper Science and Technology Daily.

The project is being led by a supercomputing laboratory at the National University of Defence Technology, according to the report, with a focus on cloud computing, artificial intelligence and quantum technology.

‘Largest Infrastructure Buildout in Human History’: Jensen Huang on AI’s ‘Five-Layer Cake’ at Davos

From skilled trades to startups, AI’s rapid expansion is the beginning of the next massive computing platform shift, and for the world’s workforce, a move from tasks to purpose.

At a packed mainstage session at the annual meeting of the World Economic Forum in Davos, Switzerland, NVIDIA founder and CEO Jensen Huang described artificial intelligence as the foundation of what he called “the largest infrastructure buildout in human history,” driving job creation across the global economy.

Speaking with BlackRock CEO Larry Fink, Huang framed AI not as a single technology but as “a five-layer cake,” spanning energy, chips and computing infrastructure, cloud data centers, AI models and, ultimately, the application layer.

Meet the new biologists treating LLMs like aliens

How large is a large language model? Think about it this way.

In the center of San Francisco there’s a hill called Twin Peaks from which you can view nearly the entire city. Picture all of it—every block and intersection, every neighborhood and park, as far as you can see—covered in sheets of paper. Now picture that paper filled with numbers.

That’s one way to visualize a large language model, or at least a medium-size one: Printed out in 14-point type, a 200-billion-parameter model, such as GPT-4o (released by OpenAI in 2024), could fill 46 square miles of paper—roughly enough to cover San Francisco. The largest models would cover the city of Los Angeles.
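The 46-square-mile figure is easy to sanity-check. Assuming roughly 100 full-precision numbers fit on a US-letter sheet at 14-point type (both assumptions are mine, not the article's), the arithmetic works out:

```python
# Back-of-envelope check of the "46 square miles" claim.
PARAMS = 200e9                    # 200-billion-parameter model
NUMBERS_PER_SHEET = 100           # assumed: long decimals with spacing, 14-pt type
SHEET_AREA_M2 = 0.216 * 0.279     # US-letter sheet, in metres
M2_PER_SQ_MILE = 1609.344 ** 2    # square metres in a square mile

sheets = PARAMS / NUMBERS_PER_SHEET
area_sq_miles = sheets * SHEET_AREA_M2 / M2_PER_SQ_MILE
print(round(area_sq_miles, 1))    # prints 46.5
```

Different assumptions about type size and number formatting shift the result by a factor of a few, but the order of magnitude, tens of square miles of paper, holds.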

We now coexist with machines so vast and so complicated that nobody quite understands what they are, how they work, or what they can really do—not even the people who help build them. “You can never really fully grasp it in a human brain,” says Dan Mossing, a research scientist at OpenAI.

That’s a problem. Even though nobody fully understands how it works—and thus exactly what its limitations might be—hundreds of millions of people now use this technology every day. If nobody knows how or why models spit out what they do, it’s hard to get a grip on their hallucinations or set up effective guardrails to keep them in check. It’s hard to know when (and when not) to trust them.

Whether you think the risks are existential—as many of the researchers driven to understand this technology do—or more mundane, such as the immediate danger that these models might push misinformation or seduce vulnerable people into harmful relationships, understanding how large language models work is more essential than ever.


Using AI to understand how emotions are formed

Emotions are a fundamental part of human psychology—a complex process that has long distinguished us from machines. Even advanced artificial intelligence (AI) lacks the capacity to feel. However, researchers are now exploring whether the formation of emotions can be computationally modeled, providing machines with a deeper, more human-like understanding of emotional states.

In this vein, Assistant Professor Chie Hieida from the Nara Institute of Science and Technology (NAIST), Japan, in collaboration with Assistant Professor Kazuki Miyazawa and then-master’s student Kazuki Tsurumaki from Osaka University, Japan, explores computational approaches to model the formation of emotions.

The team built a computational model that aims to explain how humans may form the concept of emotion. The study was published in the journal IEEE Transactions on Affective Computing.

Smart Golden Cities of the Future: 1 Hour Exploring Nature & Sci-Fi Innovation in 2050

Step into the future with “Smart Golden Cities of the Future”, a 1-hour journey exploring how technology and nature will merge to create sustainable, intelligent cities by 2050. In this immersive video, we’ll dive deep into a world where urban spaces are powered by sci-fi innovation, green infrastructure, and advanced technologies. From eco-friendly architecture to autonomous transportation systems, discover how the cities of tomorrow will function in harmony with the environment. Imagine a future with clean energy, smart public services, and a thriving connection to nature—where sustainability and futuristic technology drive every aspect of life. Join us for an hour-long exploration of the Smart Cities of 2050, as we uncover the incredible possibilities and challenges of creating urban spaces that work for both people and the planet.

✨ This video was created with passion and love for sharing creative production using AI tools such as:
• 🧠 Research: ChatGPT
• 🖼️ Image Creation: Leonardo, Midjourney, ImageFX
• 🎬 Video Production: Veo 3.1, Runway ML
• 🎵 Music Generation: Suno AI
• ✂️ Video Editing: CapCut Pro

💡 Note: All of the above AI tools are subscription-based. This project combines imagination and creativity from my perspective as a mechanical engineer who loves exploring the future.

🙏🏻 Please Support:
• ✅ Subscribe
• 👍 Like
• 💬 Comment

Thank you so much for watching! I hope you enjoy this journey and gain inspiration from this creative experience ❤️

#SmartCities #Sustainability #FutureOfLiving #SciFiInnovation #EcoFriendlyCities #midjourney #veo3 #sunoai
