
AI model powers skin cancer detection across diverse populations

Researchers at the University of California San Diego School of Medicine have developed a new machine-learning approach for identifying individuals at risk of skin cancer that combines genetic ancestry, lifestyle factors and social determinants of health. Their model, more accurate than existing approaches, also helped the researchers better characterize disparities in skin cancer risk and outcomes.

The research is published in the journal Nature Communications.

Skin cancer is among the most common cancers in the United States, with more than 9,500 new cases diagnosed every day and approximately two deaths from skin cancer occurring every hour. One important component of reducing the burden of skin cancer is risk prediction, which utilizes technology and patient information to help doctors decide which individuals should be prioritized for cancer screening.
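Risk prediction of this kind typically reduces to scoring patients on a combination of weighted features. As a rough sketch only, the toy logistic model below combines a few hypothetical, invented features and weights into a 0-to-1 risk score used to triage screening; it is not the UCSD team's model.

```rust
// Toy logistic risk score: hypothetical features and weights,
// invented purely for illustration.
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

/// Combine feature values with weights (plus a bias term) into a
/// probability-like score between 0 and 1.
fn risk_score(features: &[f64], weights: &[f64], bias: f64) -> f64 {
    let z: f64 = features.iter().zip(weights).map(|(f, w)| f * w).sum::<f64>() + bias;
    sigmoid(z)
}

fn main() {
    // Hypothetical inputs, e.g. scaled age, UV exposure, family-history flag.
    let patient = [0.7, 0.9, 1.0];
    let weights = [0.8, 1.2, 1.5];
    let score = risk_score(&patient, &weights, -2.0);
    println!("risk score: {score:.3}");
    // A clinic could prioritize screening above a chosen threshold.
    if score > 0.5 {
        println!("prioritize for screening");
    }
}
```

In practice a model like the one in the study would be trained on many more features, including the ancestry and social-determinant variables the article describes.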

Automatic C to Rust translation technology provides accuracy beyond AI

Because the C language, which underpins critical global software such as operating systems, faces inherent security limitations, a KAIST research team is pioneering core technology for accurately and automatically converting C code to Rust as a replacement. By mathematically proving the correctness of the conversion, a limitation of existing artificial intelligence (LLM) methods, and resolving C's security issues through automatic translation to Rust, the team presented a new direction and vision for future software security research.

The paper by Professor Sukyoung Ryu’s research team from the School of Computing was published in the November issue of Communications of the ACM and was selected as the cover story.

The C language has been widely used in industry since the 1970s, but its structural limitations have continuously caused severe bugs and security vulnerabilities. Rust, on the other hand, is a memory-safe programming language, stable since 2015 and used in operating system development, whose defining characteristic is that it can detect and prevent bugs before a program even runs.
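The safety gap between the two languages can be made concrete with a small sketch (illustrative only, not drawn from the KAIST translator): an out-of-bounds read that C permits silently becomes either a checked `None` or a compile-time rejection in Rust.

```rust
// A bug class C allows -- reading past the end of a buffer -- that
// Rust surfaces before, or deterministically at, execution.
fn main() {
    let buf = [10, 20, 30];

    // Checked access: an out-of-range index yields None instead of
    // reading arbitrary memory, as the equivalent C code would.
    match buf.get(5) {
        Some(v) => println!("value: {v}"),
        None => println!("index 5 is out of bounds"),
    }

    // Ownership rules also stop use-after-free at compile time:
    // let v = vec![1, 2, 3];
    // drop(v);
    // println!("{:?}", v); // rejected by the borrow checker
}
```

Direct indexing (`buf[5]`) would still be caught, but at runtime as a deterministic panic rather than undefined behavior, which is the distinction the article's "detect and prevent bugs before program execution" claim rests on.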

Robotics Company Builds Straight-Up Terminator

“I am kind of blown away that they can get motors to work in such an elegant way. I assumed it was soft body mechanics,” wrote another. “Wow.”

Iron made its debut on Wednesday, when XPeng CEO He Xiaopeng introduced the unit as the “most human-like” bot on the market to date. Per Humanoids Daily, the robot features “dexterous hands” with 22 degrees of freedom, a “human-like spine,” gender options, and a digital face.

According to He, the bot also contains the “first all-solid-state battery in the industry,” as opposed to the liquid electrolyte typically found in lithium-ion batteries. Solid-state batteries are considered the “holy grail” for electric vehicle development, a design choice He says will make the robots safer for home use.

AI evaluates texts without bias—until the source is revealed

Large language models (LLMs) are increasingly used not only to generate content but also to evaluate it. They are asked to grade essays, moderate social media content, summarize reports, screen job applications and much more.

However, there are heated discussions—in the media as well as in academia—about whether such evaluations are consistent and unbiased. Some LLMs are under suspicion of promoting certain political agendas. For example, DeepSeek is often characterized as having a pro-Chinese perspective and OpenAI as being “woke.”

Although these beliefs are widely discussed, they have so far been unsubstantiated. UZH researchers Federico Germani and Giovanni Spitale have now investigated whether LLMs really exhibit systematic biases when evaluating texts. Their results, published in Science Advances, show that LLMs indeed deliver biased judgments—but only when information about the source or author of the evaluated message is revealed.

Microsoft finds security flaw in AI chatbots that could expose conversation topics

Your conversations with AI assistants such as ChatGPT and Google Gemini may not be as private as you think they are. Microsoft has revealed a serious flaw in the large language models (LLMs) that power these AI services, potentially exposing the topic of your conversations with them. Researchers dubbed the vulnerability “Whisper Leak” and found it affects nearly all the models they tested.

When you chat with AI assistants built into major search engines or apps, the information is protected by TLS (Transport Layer Security), the same encryption used for online banking. These secure connections stop would-be eavesdroppers from reading the words you type. However, Microsoft discovered that the metadata—patterns in how your messages travel across the internet, such as packet sizes and timing—remains visible. Whisper Leak doesn’t break encryption, but it takes advantage of what encryption cannot hide.
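The core observation can be illustrated with a toy stream cipher (a stand-in for TLS, not real cryptography): the content of each message is hidden, but its length is not, and message sizes and timing are exactly the side channel a Whisper Leak-style classifier feeds on.

```rust
// Toy illustration of the metadata channel: a XOR "stream cipher"
// hides content, but the ciphertext length mirrors the plaintext
// exactly. This is a teaching sketch, not real TLS.
fn xor_encrypt(plaintext: &[u8], keystream: impl Iterator<Item = u8>) -> Vec<u8> {
    plaintext.iter().zip(keystream).map(|(p, k)| p ^ k).collect()
}

fn main() {
    // A fixed, obviously insecure keystream for demonstration.
    let ks = || (0u8..).map(|i| i.wrapping_mul(31).wrapping_add(7));

    let short = xor_encrypt(b"yes", ks());
    let long = xor_encrypt(b"tell me about a sensitive topic", ks());

    // The bytes themselves are scrambled...
    assert_ne!(short, b"yes".to_vec());

    // ...but an eavesdropper still sees how big each message is.
    println!("short ciphertext: {} bytes", short.len());
    println!("long  ciphertext: {} bytes", long.len());
}
```

In the attack Microsoft describes, streamed LLM responses produce characteristic sequences of such sizes and inter-packet delays, which is what lets an observer infer the conversation topic without decrypting anything.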

Quantum Route Redirect PhaaS targets Microsoft 365 users worldwide

A new phishing automation platform named Quantum Route Redirect is using around 1,000 domains to steal Microsoft 365 users’ credentials.

The kit comes pre-configured with phishing domains to allow less skilled threat actors to achieve maximum results with the least effort.

Since August, analysts at security awareness company KnowBe4 have observed Quantum Route Redirect (QRR) attacks in the wild across a wide geography, although nearly three-quarters of the targeted victims are located in the U.S.

Research drives commercialization of energy-efficient solar cell technology toward 40% efficiency milestone

Third-generation solar cell technology is advancing rapidly. An engineering research team at The Hong Kong Polytechnic University (PolyU) has achieved a breakthrough in the field of perovskite/silicon tandem solar cells (TSCs), focusing on addressing challenges that include improving efficiency, stability and scalability.

The team has conducted a comprehensive analysis of TSC performance and provided strategic recommendations, which aim to raise the energy conversion efficiency of this new type of solar cell from the current maximum of approximately 34% to about 40%.

The team hopes to accelerate the commercialization of perovskite/silicon TSCs through industry-academia-research collaboration, while aligning with the nation’s strategic plan for carbon peaking and carbon neutrality and promoting the development of innovative technologies such as artificial intelligence.
