Soft tissue deformation during body movement has long posed a challenge to achieving optimal garment fit and comfort, particularly in sportswear and functional medical wear.
Researchers at The Hong Kong Polytechnic University (PolyU) have developed a novel anthropometric method that delivers highly accurate measurements to enhance the performance and design of compression-based apparel.
Prof. Joanne YIP, Associate Dean and Professor of the School of Fashion and Textiles at PolyU, and her research team pioneered this anthropometric method, which uses image recognition algorithms to systematically assess tissue deformation while minimizing motion-related errors.
Faiss is an open-source library, developed by Meta FAIR, for efficient vector search and clustering of dense vectors. Faiss pioneered vector search on GPUs, as well as the ability to seamlessly switch between GPUs and CPUs. It has made a lasting impact in both research and industry, being used as an integrated library in several databases (e.g., Milvus and OpenSearch), machine learning libraries, data processing libraries, and AI workflows. Faiss is also used heavily by researchers and data scientists as a standalone library, often paired with PyTorch.
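As a brief illustration of how Faiss is typically used as a standalone library, here is a minimal sketch using the public Python API; the dimensionality and the random data are invented for the example.

```python
import numpy as np
import faiss  # pip install faiss-cpu (or a GPU-enabled build)

d = 64                                                  # vector dimensionality (illustrative)
xb = np.random.random((10_000, d)).astype("float32")    # database vectors
xq = np.random.random((5, d)).astype("float32")         # query vectors

index = faiss.IndexFlatL2(d)      # exact L2 search over dense vectors
index.add(xb)                     # add the database vectors to the index
distances, ids = index.search(xq, 4)  # 4 nearest neighbours per query
print(ids)
```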
Collaboration with NVIDIA
Three years ago, Meta and NVIDIA worked together to enhance the capabilities of vector search technology and to accelerate vector search on GPUs. Previously, in 2016, Meta had incorporated high-performing vector search algorithms made for NVIDIA GPUs: GpuIndexFlat, GpuIndexIVFFlat, and GpuIndexIVFPQ. After the partnership, NVIDIA rapidly contributed GpuIndexCagra, a state-of-the-art graph-based index designed specifically for GPUs. In its latest release, Faiss 1.10.0 officially includes these algorithms from the NVIDIA cuVS library.
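A hedged sketch of the CPU/GPU interplay described above, assuming a GPU-enabled Faiss build; the index type and sizes are illustrative, and GpuIndexCagra has its own configuration objects that are not shown here.

```python
import numpy as np
import faiss  # requires a GPU-enabled Faiss build

d = 128
xb = np.random.random((100_000, d)).astype("float32")

cpu_index = faiss.IndexFlatL2(d)          # build the index on the CPU
cpu_index.add(xb)

res = faiss.StandardGpuResources()        # GPU scratch memory and streams
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)  # move the index to GPU 0

xq = np.random.random((10, d)).astype("float32")
distances, ids = gpu_index.search(xq, 5)  # search now runs on the GPU

back_on_cpu = faiss.index_gpu_to_cpu(gpu_index)  # and back again
```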
Srinivasa Ramanujan Aiyangar[a] FRS (22 December 1887 – 26 April 1920) was an Indian mathematician. He is widely regarded as one of the greatest mathematicians of all time, despite having almost no formal training in pure mathematics. He made substantial contributions to mathematical analysis, number theory, infinite series, and continued fractions, including solutions to mathematical problems then considered unsolvable.
Ramanujan initially developed his own mathematical research in isolation. According to Hans Eysenck, “he tried to interest the leading professional mathematicians in his work, but failed for the most part. What he had to show them was too novel, too unfamiliar, and additionally presented in unusual ways; they could not be bothered”.[4] Seeking mathematicians who could better understand his work, in 1913 he began a mail correspondence with the English mathematician G. H. Hardy at the University of Cambridge, England. Recognising Ramanujan’s work as extraordinary, Hardy arranged for him to travel to Cambridge. In his notes, Hardy commented that Ramanujan had produced groundbreaking new theorems, including some that “defeated me completely; I had never seen anything in the least like them before”,[5] and some recently proven but highly advanced results.
During his short life, Ramanujan independently compiled nearly 3,900 results (mostly identities and equations).[6] Many were completely novel; his original and highly unconventional results, such as the Ramanujan prime, the Ramanujan theta function, partition formulae and mock theta functions, have opened entire new areas of work and inspired further research.[7] Of his thousands of results, most have been proven correct.[8] The Ramanujan Journal, a scientific journal, was established to publish work in all areas of mathematics influenced by Ramanujan,[9] and his notebooks—containing summaries of his published and unpublished results—have been analysed and studied for decades since his death as a source of new mathematical ideas.
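For readers unfamiliar with the objects named above, two representative examples (standard statements, not drawn from this article) are the Ramanujan theta function and the Hardy–Ramanujan asymptotic formula for the number of partitions of an integer:

```latex
% Ramanujan theta function (general form)
f(a,b) \;=\; \sum_{n=-\infty}^{\infty} a^{\,n(n+1)/2}\, b^{\,n(n-1)/2},
\qquad |ab| < 1.

% Hardy--Ramanujan asymptotic for the partition function p(n)
p(n) \;\sim\; \frac{1}{4n\sqrt{3}}\,
\exp\!\left(\pi \sqrt{\tfrac{2n}{3}}\right)
\quad \text{as } n \to \infty.
```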
The rapid advancement of artificial intelligence (AI) and machine learning systems has increased the demand for new hardware components that could speed up data analysis while consuming less power. As machine learning algorithms draw inspiration from biological neural networks, some engineers have been working on hardware that also mimics the architecture and functioning of the human brain.
Quantum computers, systems that perform computations by leveraging quantum mechanical effects, could outperform classical computers in some optimization and information processing tasks. Because these systems are highly susceptible to noise, however, they need to integrate strategies that minimize the errors they produce.
One proposed solution for enabling fault-tolerant quantum computing across a wide range of operations is known as magic state distillation. This approach consists of preparing special quantum states (i.e., magic states) that can then be used to perform a universal set of operations. This allows the construction of a universal quantum computer—a device that can reliably perform all operations necessary for implementing any quantum algorithm.
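As a concrete illustration (a standard textbook example, not specific to the work described here), the canonical “T-type” magic state is shown below; consuming one such state via gate teleportation, using only Clifford operations and measurement, implements a T gate, which together with the Clifford group yields a universal gate set.

```latex
% T-type magic state (standard definition)
|A\rangle \;=\; \frac{1}{\sqrt{2}}
\left( |0\rangle + e^{i\pi/4}\,|1\rangle \right)
```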
Yet while magic state distillation techniques can achieve good results, they typically consume large numbers of error-protected qubits and need to perform many rounds of error correction. This has so far limited their potential for real-world applications.
Quantum researchers have deployed a new algorithm to manage noise in qubits in real time. The method can be applied to a wide range of different qubits, even in large numbers.
Noise is the “ghost in the machine” in the effort to make quantum devices work. Quantum devices rely on qubits—the central component of any quantum processor—which are extremely sensitive to even small disturbances in their environment.
A collaboration between researchers from the Niels Bohr Institute, MIT, NTNU, and Leiden University has now resulted in a method to effectively manage the noise. The result has been published in PRX Quantum.
Over the past decades, computer scientists have developed increasingly sophisticated sensors and machine learning algorithms that allow computer systems to process and interpret images and videos. This tech-powered capability, also referred to as machine vision, is proving to be highly advantageous for the manufacturing and production of food products, drinks, electronics, and various other goods.
Machine vision could enable the automation of various tedious steps in industry and manufacturing, such as the detection of defects, the inspection of electronics, automotive parts or other items, the verification of labels or expiration dates and the sorting of products into different categories.
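As a toy illustration of one such step, here is a hedged sketch of defect detection using OpenCV; the threshold values and the notion of a “defect” are invented for the example and are not taken from any specific system described above.

```python
import cv2  # pip install opencv-python (OpenCV 4)

def has_surface_defect(image_path: str, min_defect_area: float = 50.0) -> bool:
    """Flag dark blemishes on a bright product surface (illustrative only)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)

    # Dark regions against a bright background become white in the mask.
    _, mask = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)

    # Any sufficiently large connected blob is treated as a defect.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) >= min_defect_area for c in contours)
```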
While the sensors underpinning many existing machine vision systems are highly sophisticated, they typically do not process visual information with as much detail as the human retina (i.e., the light-sensitive tissue in the eye that processes visual signals).
As summer winds down, many of us in continental Europe are heading back north. The long return journeys from the beaches of southern France, Spain, and Italy once again clog alpine tunnels and Mediterranean coastal routes during the infamous Black Saturday bottlenecks. This annual migration, like many systems in our world, forms a network—not just of connections, but of communities shaped by shared patterns of origin and destination.
This is where network science—and, in particular, community detection—comes in. For decades, researchers have developed powerful tools to uncover communities in networks: clusters of tightly interconnected nodes. But these tools work best for undirected networks, where connections are mutual. Graphically, the node maps may look familiar.
These clusters can mean that a group of people are all friends on Facebook, all follow the same sports accounts on X, or all live in the same city. Using a standard modularity algorithm, we can then find connections between different communities and begin to draw useful conclusions. Perhaps users in the fly-fishing community also show up as followers of nonalcoholic beer enthusiasts in Geneva. This type of information extraction, impossible without community analysis, is a layer of meaning that can be leveraged to sell beer or even nefariously influence elections.
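A minimal sketch of modularity-based community detection in Python using networkx; the toy graph and the names are invented for illustration.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy undirected "social" network: two dense groups joined by one weak tie.
G = nx.Graph()
G.add_edges_from([
    ("ana", "ben"), ("ben", "caro"), ("caro", "ana"),   # group 1
    ("dan", "eva"), ("eva", "finn"), ("finn", "dan"),   # group 2
    ("caro", "dan"),                                    # bridge between the groups
])

# Greedy modularity maximisation returns a partition into communities.
communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```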
Researchers have developed a novel attack that steals user data by injecting malicious prompts into images that AI systems process before delivering them to a large language model.
The method relies on full-resolution images carrying instructions that are invisible to the human eye but become apparent when the image quality is lowered through resampling algorithms.
Developed by Trail of Bits researchers Kikimora Morozova and Suha Sabi Hussain, the attack builds upon a theory presented in a 2020 USENIX paper by a German university (TU Braunschweig) exploring the possibility of an image-scaling attack in machine learning.
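A minimal sketch of the underlying principle, assuming a naive resampler that keeps every k-th pixel; real attacks of this kind target the interpolation kernels of specific libraries, as in the USENIX work cited above, and the arrays here are invented for illustration.

```python
import numpy as np

def embed_payload(cover: np.ndarray, payload: np.ndarray, k: int) -> np.ndarray:
    """Hide a small payload image so it survives a naive k:1 downscale.

    Assumes the downscaler simply keeps every k-th pixel (cover[::k, ::k]).
    The payload pixels are written exactly at those sampled positions, so the
    full-resolution image looks like the cover while the downscaled copy
    becomes the payload.
    """
    crafted = cover.copy()
    h, w = payload.shape
    crafted[:h * k:k, :w * k:k] = payload
    return crafted

# Toy grayscale arrays: a bright "cover" and a dark "payload" pattern.
k = 8
cover = np.full((256, 256), 200, dtype=np.uint8)
payload = np.random.randint(0, 50, size=(32, 32), dtype=np.uint8)

crafted = embed_payload(cover, payload, k)
downscaled = crafted[::k, ::k]               # what the naive resampler produces
assert np.array_equal(downscaled, payload)   # the hidden content reappears
```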