
Summary: Researchers developed an experimental computing system, resembling a biological brain, that successfully identified handwritten numbers with a 93.4% accuracy rate.

This breakthrough was achieved using a novel training algorithm that provides continuous real-time feedback, outperforming traditional batch data processing methods, which yielded 91.4% accuracy.

The system’s design features a self-organizing network of nanowires on electrodes, with memory and processing capabilities interwoven, unlike conventional computers with separate modules.


Fine-tuning large language models (LLMs) has become an important tool for businesses seeking to tailor AI capabilities to niche tasks and personalized user experiences. But fine-tuning usually comes with steep computational and financial overhead, putting it out of reach for enterprises with limited resources.

To solve these challenges, researchers have created algorithms and techniques that cut the cost of fine-tuning LLMs and running fine-tuned models. The latest of these techniques is S-LoRA, a collaborative effort between researchers at Stanford University and the University of California, Berkeley (UC Berkeley).
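S-LoRA builds on LoRA-style fine-tuning, in which the base model's weights stay frozen and only a small pair of low-rank matrices is trained per layer. The sketch below is not S-LoRA itself, only a minimal illustration of that underlying idea in PyTorch; the rank, scaling factor, and layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        # Low-rank factors (illustrative shapes): A projects down, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init so training starts from the base model
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the low-rank correction; only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing projection layer and train only the adapter parameters.
layer = LoRALinear(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable adapter parameters")
```

Because only A and B are updated, the trainable parameter count is a small fraction of the full model, which is what makes fine-tuning, and cheaply running many fine-tuned variants, the kind of problem S-LoRA targets.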

This is why I laughed at all the uncanny valley talk in the early 2010s. Notice the term is almost never used anymore. And as for making robots more attractive than most people: that will be done by the mid-2030s.


Does ChatGPT ever give you the eerie sense you’re interacting with another human being?

Artificial intelligence (AI) has reached an astounding level of realism, to the point that some tools can even fool people into thinking they are interacting with another human.

The eeriness doesn’t stop there. In a study published today in Psychological Science, we’ve discovered images of white faces generated by the popular StyleGAN2 algorithm look more “human” than actual people’s faces.

The world’s most valuable chip maker has announced a next-generation processor for AI and high-performance computing workloads, due for launch in mid-2024. A new exascale supercomputer, designed specifically for large AI models, is also planned.

H200 Tensor Core GPU. Credit: NVIDIA

In recent years, California-based NVIDIA Corporation has played a major role in the progress of artificial intelligence (AI), as well as high-performance computing (HPC) more generally, with its hardware being central to astonishing leaps in algorithmic capability.

An experimental computing system physically modeled after the biological brain has “learned” to identify handwritten numbers with an overall accuracy of 93.4%. The key innovation in the experiment was a new training algorithm that gave the system continuous information about its success at the task in real time while it learned. The study was published in Nature Communications.

The algorithm outperformed a conventional machine-learning approach in which training was performed after a batch of data had been processed, producing 91.4% accuracy. The researchers also showed that memory of past inputs stored in the system itself enhanced learning. In contrast, other computing approaches store memory within software or hardware separate from a device’s processor.
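The nanowire hardware and its physics can't be reproduced in a few lines of software, but the training distinction the study draws, feedback after every example versus feedback only after a whole batch has been processed, can be sketched in the abstract. The toy classifier below is purely illustrative and is not the researchers' algorithm; the data, learning rate, and update counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def online_step(w, x, y, lr=0.1):
    """Update the weights immediately after each example (continuous real-time feedback)."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return w + lr * (y - p) * x

def batch_step(w, X, Y, lr=0.1):
    """Update the weights once, only after an entire batch has been processed."""
    P = 1.0 / (1.0 + np.exp(-X @ w))
    return w + lr * X.T @ (Y - P) / len(Y)

# Toy two-class data standing in for a handwritten-digit task.
X = rng.normal(size=(200, 16))
Y = (X[:, 0] + X[:, 1] > 0).astype(float)

w_online = np.zeros(16)
for x, y in zip(X, Y):          # per-example feedback while learning
    w_online = online_step(w_online, x, y)

w_batch = np.zeros(16)
for _ in range(len(X)):         # same number of updates, but each uses the whole batch
    w_batch = batch_step(w_batch, X, Y)
```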

For 15 years, researchers at the California NanoSystems Institute at UCLA, or CNSI, have been developing a new platform technology for computation. The technology is a brain-inspired system composed of a tangled-up network of wires containing silver, laid on a bed of electrodes. The system receives input and produces output via pulses of electricity. The individual wires are so small that their diameter is measured on the nanoscale, in billionths of a meter.

The devices are controlled via voice commands or a smartphone app.


Active noise control technology is used by noise-canceling headphones to minimize or completely block out outside noise. These headphones are popular because they offer a quieter, more immersive listening experience—especially in noisy areas. However, despite the many advancements in the technology, people still don’t have much control over which sounds their headphones block out and which they let pass.

Semantic hearing

Now, researchers at the University of Washington have developed deep learning algorithms that let users choose, in real time, which sounds their headphones block and which they let through. Its creators have named the system “semantic hearing.”
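The team's code isn't reproduced here, but the general shape of class-conditioned sound extraction is easy to sketch: a network receives the noisy mixture plus an embedding of the sound class the wearer wants to keep, and predicts a mask that passes that class and suppresses the rest. In the minimal PyTorch sketch below, the class list, layer sizes, and spectrogram dimensions are all illustrative assumptions, not details of the UW system.

```python
import torch
import torch.nn as nn

CLASSES = ["siren", "baby_crying", "birdsong", "speech"]  # assumed example classes

class ClassConditionedMasker(nn.Module):
    """Predict a time-frequency mask for the sound class the user wants to let through."""
    def __init__(self, n_freq=257, emb_dim=32, hidden=256):
        super().__init__()
        self.class_emb = nn.Embedding(len(CLASSES), emb_dim)
        self.net = nn.Sequential(
            nn.Linear(n_freq + emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_freq),
            nn.Sigmoid(),          # mask values in [0, 1]
        )

    def forward(self, spec, class_id):
        # spec: (batch, time, n_freq) magnitude spectrogram of the noisy mixture
        emb = self.class_emb(class_id)                       # (batch, emb_dim)
        emb = emb.unsqueeze(1).expand(-1, spec.size(1), -1)  # broadcast over time frames
        mask = self.net(torch.cat([spec, emb], dim=-1))
        return mask * spec                                   # keep only the selected class

# Usage: pass a mixture spectrogram and the index of the class to keep.
model = ClassConditionedMasker()
mixture = torch.rand(1, 100, 257)
kept = model(mixture, torch.tensor([CLASSES.index("siren")]))
```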

Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature, the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms, and robotic applications.

The basic idea is simple: whereas transistors on previous chips only carried out calculations, here they also store data. That saves time and energy.

“As a result, the performance of the chips is also boosted,” says Amrouch, a professor of AI processor design at TUM.
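The FeFET circuits themselves are analog hardware, but the operation they accelerate is the multiply-accumulate at the core of deep learning. The toy simulation below only illustrates the in-memory idea, namely that the array holding the weights also produces the dot products, so no separate fetch from memory to a processor is modeled; the class and its values are invented for illustration.

```python
import numpy as np

class InMemoryCrossbar:
    """Toy model of an in-memory compute array: the stored values double as the multipliers."""
    def __init__(self, weights):
        # In hardware the weights would be programmed into the cells once,
        # then reused for every input without moving data to a separate ALU.
        self.conductance = np.asarray(weights, dtype=float)

    def multiply_accumulate(self, voltages):
        # Each output "column current" is the sum of input * stored value,
        # i.e. a matrix-vector product computed where the data lives.
        return self.conductance.T @ np.asarray(voltages, dtype=float)

# Usage: a 4-input, 3-output layer evaluated in place.
array = InMemoryCrossbar(np.random.rand(4, 3))
print(array.multiply_accumulate([1.0, 0.0, 0.5, 0.2]))
```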

From vehicle collision avoidance to airline scheduling systems to power supply grids, many of the services we rely on are managed by computers. As these autonomous systems grow in complexity and ubiquity, so too could the ways in which they fail.

Now, MIT engineers have developed an approach that can be paired with any autonomous system to quickly identify a range of potential failures in that system before it is deployed in the real world. What’s more, the approach can find fixes for those failures and suggest repairs to avoid system breakdowns.

The team has shown that the approach can root out failures in a variety of simulated autonomous systems, including a small and large network, an aircraft collision avoidance system, a team of rescue drones, and a robotic manipulator. In each of the systems, the new approach, in the form of an automated sampling algorithm, quickly identifies a range of likely failures as well as repairs to avoid those failures.
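The details of MIT's sampling algorithm aren't given here, so the sketch below only shows the broad pattern of sampling-based failure search: draw candidate scenarios, run each through a simulator, and keep the ones that break the system (the repair-finding side of the approach isn't sketched). The toy braking "simulator," its parameters, and the failure threshold are all made up for illustration.

```python
import random

def simulate(speed, reaction_delay):
    """Toy stand-in for an autonomous-system simulator: returns True if a failure occurs."""
    stopping_distance = speed * reaction_delay + speed ** 2 / 20.0
    return stopping_distance > 50.0          # assumed failure threshold in meters

def sample_failures(n_samples=10_000, seed=0):
    """Randomly sample scenarios and collect those that end in failure."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_samples):
        speed = rng.uniform(5.0, 40.0)       # m/s
        delay = rng.uniform(0.1, 2.0)        # s
        if simulate(speed, delay):
            failures.append((speed, delay))
    return failures

failures = sample_failures()
print(f"{len(failures)} failing scenarios out of 10000 sampled")
```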

Einstein’s fascination with light, considered quirky at the time, would lead him down the path to a brand new theory of physics.

Living half a century before Einstein, a Scotsman, James Clerk Maxwell, revealed a powerful unification and universalization of nature, taking the disparate sciences of electricity and magnetism and merging them into one communion. It was a titanic tour-de-force that compressed decades of tangled experimental results and hazy theoretical insights into a tidy set of four equations that govern a wealth of phenomena. And through Maxwell’s efforts was born a second great force of nature, electromagnetism, which describes, again in a mere four equations, everything from static shocks and the invisible power of magnets to the flow of electricity and even radiation – that is, light – itself.
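For reference, the four equations in question, Maxwell's equations in their modern differential (SI) form, are:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{aligned}
```

The first pair describes how fields arise from charges (and, in the magnetic case, from no isolated sources at all); the second pair describes how changing fields generate one another, which is what allows self-propagating light waves.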

At the time, Einstein’s fascination with electromagnetism was considered unfashionable. While electromagnetism is now a cornerstone of every young physicist’s education, in the early 20th century it was seen as nothing more than an interesting bit of theoretical physics, really something that those more inclined toward engineering should study deeply. Though Einstein was no engineer, as a youth his mind burned with a simple thought experiment: what would happen if you could ride a bicycle so quickly that you raced alongside a beam of light? What would the light look like from that privileged perspective?