
In 2010, Mike Williams traveled from London to Amsterdam for a physics workshop. Everyone there was abuzz with the possibilities—and possible drawbacks—of machine learning, which Williams had recently proposed incorporating into the LHCb experiment. Williams, now a professor of physics and leader of an experimental group at the Massachusetts Institute of Technology, left the workshop motivated to make it work.

LHCb is one of the four main experiments at the Large Hadron Collider at CERN. Every second, inside the detectors for each of those experiments, proton beams cross 40 million times, generating hundreds of millions of proton collisions, each of which produces an array of particles flying off in different directions. Williams wanted to use machine learning to improve LHCb’s trigger system, a set of decision-making algorithms programmed to recognize and save only collisions that display interesting signals—and discard the rest.

Of the 40 million crossings, or events, that happen each second in the ATLAS and CMS detectors—the two largest particle detectors at the LHC—data from only a few thousand are saved, says Tae Min Hong, an associate professor of physics and astronomy at the University of Pittsburgh and a member of the ATLAS collaboration. “Our job in the trigger system is to never throw away anything that could be important,” he says.
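To get a feel for the scale of that decision, the sketch below is a deliberately simplified, hypothetical trigger: it keeps an event only if it shows a high-momentum track or a displaced decay vertex, standing in for the far more sophisticated (and increasingly machine-learned) selections that real trigger systems apply millions of times per second. The feature names and thresholds here are invented for illustration and are not taken from LHCb or ATLAS.

```python
# Illustrative sketch only -- not LHCb or ATLAS trigger code.
# A trigger reduces ~40 million beam crossings per second to a few
# thousand saved events by applying fast selection criteria.
from dataclasses import dataclass

@dataclass
class Event:
    max_track_pt: float     # highest transverse momentum in the event (GeV); hypothetical feature
    displaced_vertex: bool  # whether a decay vertex is displaced from the collision point

def trigger_decision(event: Event, pt_threshold: float = 20.0) -> bool:
    """Keep the event only if it shows a potentially interesting signature."""
    return event.max_track_pt > pt_threshold or event.displaced_vertex

events = [Event(5.2, False), Event(31.0, False), Event(3.1, True)]
kept = [e for e in events if trigger_decision(e)]
print(f"kept {len(kept)} of {len(events)} events")  # kept 2 of 3
```

In a real experiment the "keep" rate must be pushed down by a factor of roughly ten thousand, which is why researchers like Williams and Hong look to machine learning to make those split-second decisions sharper.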

A hypothesized term, added to fix a small mathematical inconsistency, predicted electromagnetic waves and showed that they had all the properties of light observed before and after Maxwell in the nineteenth century. Unwittingly, he also pointed science inexorably in the direction of the special theory of relativity.

My last two articles, two slightly different takes on “recipes” for understanding Electromagnetism, show how Maxwell’s equations can be understood as arising from the highly special relationship between the electric and magnetic components within the Faraday tensor, a relationship “enforced” by the requirement that the Gauss flux laws, equivalent to Coulomb’s inverse square force law, be Lorentz covariant (consistent with Special Relativity).

From the standpoint of Special Relativity, there is obviously something very special going on behind these laws, which are clearly not Lorentz covariant from the outset. What I mean is that, as vector laws in three-dimensional space, there is no way you can take a general vector field that fulfills them and deduce that it is Lorentz covariant — it simply won’t be so in general. There has to be something else further specializing that field’s relationship with the world to ensure that such an in-general-decidedly-NOT-Lorentz-covariant equation is, indeed, covariant.
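For readers who want the compact statement of that “something else”: in standard textbook notation (which may differ from the conventions used in my earlier articles), the electric and magnetic fields are packaged into the antisymmetric Faraday tensor, and Maxwell’s equations take a manifestly Lorentz covariant form, with Gauss’s flux law appearing as the time component of the inhomogeneous equation.

```latex
% Standard covariant form (textbook conventions; signature +---, SI units).
F^{\mu\nu} =
\begin{pmatrix}
 0      & -E_x/c & -E_y/c & -E_z/c \\
 E_x/c  &  0     & -B_z   &  B_y   \\
 E_y/c  &  B_z   &  0     & -B_x   \\
 E_z/c  & -B_y   &  B_x   &  0
\end{pmatrix},
\qquad
\partial_\mu F^{\mu\nu} = \mu_0 J^{\nu},
\qquad
\partial_{[\alpha} F_{\beta\gamma]} = 0.
% The \nu = 0 component of the inhomogeneous equation reproduces
% Gauss's flux law, \nabla \cdot \mathbf{E} = \rho / \epsilon_0.
```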

Get a blood test, check your blood pressure, and swab for ailments — all without a doctor or nurse.

Adrian Aoun, CEO and co-founder of Forward Health, aims to scale healthcare. It started in 2017 with the launch of tech-forward doctor’s offices that eschewed traditional medical staffing for technology solutions like body scanners, smart sensors, and algorithms that can diagnose ailments. Now, in 2023, he’s still on the same mission and has rolled up all the learnings and technology from those offices into a self-contained, standalone medical station called the CarePod.

The CarePod pitch is easy to understand. Why spend hours in a doctor’s office to get your throat swabbed for strep throat? Walk into the CarePod, soon to be located in malls and office buildings, and answer some questions to determine the appropriate test. CarePod users can get their blood drawn, throat swabbed, and blood pressure read – most of the frontline clinical work performed in primary care offices, all without a doctor or nurse. Custom AI powers the diagnosis, and behind the scenes, doctors write the appropriate prescription, which is available nearly immediately.

The cost? It’s $99 a month, which gives users access to all of the CarePod’s tests and features. As Aoun told me, this solution enables healthcare to scale like never before.

Summary: Researchers developed an experimental computing system, resembling a biological brain, that successfully identified handwritten numbers with a 93.4% accuracy rate.

This breakthrough was achieved using a novel training algorithm that provided continuous real-time feedback, outperforming traditional batch data processing methods, which yielded 91.4% accuracy.

The system’s design features a self-organizing network of nanowires on electrodes, with memory and processing capabilities interwoven, unlike conventional computers with separate modules.


Fine-tuning large language models (LLMs) has become an important tool for businesses seeking to tailor AI capabilities to niche tasks and personalized user experiences. But fine-tuning usually comes with steep computational and financial overhead, putting it out of reach for enterprises with limited resources.

To solve these challenges, researchers have created algorithms and techniques that cut the cost of fine-tuning LLMs and running fine-tuned models. The latest of these techniques is S-LoRA, a collaborative effort between researchers at Stanford University and University of California-Berkeley (UC Berkeley).
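S-LoRA builds on LoRA (low-rank adaptation), in which the pretrained weights stay frozen and each fine-tuned task is captured by a small pair of low-rank matrices. The sketch below is a minimal, illustrative PyTorch version of that core idea, not S-LoRA’s actual code; the class name, rank, and scaling factor are placeholder choices.

```python
# Minimal sketch of the LoRA idea that S-LoRA builds on (illustrative,
# not the S-LoRA implementation): the pretrained weight W is frozen and
# a low-rank update B @ A is learned instead, so each fine-tuned
# "adapter" is only a small pair of matrices.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # frozen pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + scale * B (A x); only A and B are trained and stored per task
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(1024, 1024), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # ~16k trainable parameters versus ~1M frozen ones
```

Because the large base weights are shared, many such adapters can in principle be kept on hand and swapped cheaply, which is roughly the serving problem S-LoRA targets.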

This is why I laughed at all that uncanny valley crap talk in the early 2010s. Notice the term is almost never used anymore. And as for making robots more attractive than most people: done by the mid-2030s.


Does ChatGPT ever give you the eerie sense you’re interacting with another human being?

Artificial intelligence (AI) has reached an astounding level of realism, to the point that some tools can even fool people into thinking they are interacting with another human.

The eeriness doesn’t stop there. In a study published today in Psychological Science, we’ve discovered that images of white faces generated by the popular StyleGAN2 algorithm look more “human” than actual people’s faces.

The world’s most valuable chip maker has announced a next-generation processor for AI and high-performance computing workloads, due for launch in mid-2024. A new exascale supercomputer, designed specifically for large AI models, is also planned.

H200 Tensor Core GPU. Credit: NVIDIA

In recent years, California-based NVIDIA Corporation has played a major role in the progress of artificial intelligence (AI), as well as high-performance computing (HPC) more generally, with its hardware being central to astonishing leaps in algorithmic capability.

An experimental computing system physically modeled after the biological brain has “learned” to identify handwritten numbers with an overall accuracy of 93.4%. The key innovation in the experiment was a new training algorithm that gave the system continuous information about its success at the task in real time while it learned. The study was published in Nature Communications.

The algorithm outperformed a conventional machine-learning approach in which training was performed after a batch of data had been processed, producing 91.4% accuracy. The researchers also showed that memory of past inputs stored in the system itself enhanced learning. In contrast, other computing approaches store memory within software or hardware separate from a device’s processor.
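The contrast between the two training regimes can be illustrated with an ordinary software toy model, shown below: an “online” learner that adjusts itself after every example versus a “batch” learner that updates only after processing a block of data. The linear model, learning rates, and data here are invented for illustration; the actual system is a physical nanowire network, not code.

```python
# Toy illustration of the two training regimes compared in the study
# (illustrative only -- the real system is a physical nanowire network,
# not this linear model). "Online" updates after every example; "batch"
# updates once after processing the whole block of data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=200)

def online_sgd(X, y, lr=0.05):
    w = np.zeros(X.shape[1])
    for xi, yi in zip(X, y):      # feedback after each individual example
        w += lr * (yi - xi @ w) * xi
    return w

def batch_gd(X, y, lr=0.05, epochs=1):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):       # one update per pass over the batch
        w += lr * X.T @ (y - X @ w) / len(y)
    return w

print(np.linalg.norm(online_sgd(X, y) - true_w))  # error after per-example feedback
print(np.linalg.norm(batch_gd(X, y) - true_w))    # error after one batch update
```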

For 15 years, researchers at the California NanoSystems Institute at UCLA, or CNSI, have been developing a new platform technology for computation. The technology is a brain-inspired system composed of a tangled-up network of wires containing silver, laid on a bed of electrodes. The system receives input and produces output via pulses of electricity. The individual wires are so small that their diameter is measured on the nanoscale, in billionths of a meter.

The devices are controlled via voice commands or a smartphone app.


Noise-canceling headphones use active noise control technology to minimize or completely block out outside noise. These headphones are popular because they offer a quieter, more immersive listening experience, especially in noisy areas. However, despite the many advancements in the technology, people still don’t have much control over which sounds their headphones block out and which they let pass.

Semantic hearing

Now, researchers at the University of Washington have developed deep learning algorithms that let users select which sounds to filter through their headphones in real time. Its creators have named the system “semantic hearing.”
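Conceptually, a system like this pairs an audio model with the user’s choice of sound categories to keep. The sketch below is a hypothetical, heavily simplified version of that idea in PyTorch; the class list, network, and tensor shapes are invented for illustration and are not the University of Washington system.

```python
# Hypothetical sketch of the "select which sounds pass" idea (not the
# University of Washington system): a tiny network predicts a gain mask
# for one audio frame, conditioned on a one-hot vector of the sound
# classes the user wants to keep.
import torch
import torch.nn as nn

CLASSES = ["siren", "birds", "speech", "vacuum"]  # illustrative class list

class SelectiveFilter(nn.Module):
    def __init__(self, n_freq: int = 257, n_classes: int = len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq + n_classes, 256), nn.ReLU(),
            nn.Linear(256, n_freq), nn.Sigmoid(),  # per-frequency gain in [0, 1]
        )

    def forward(self, spectrum: torch.Tensor, keep: torch.Tensor) -> torch.Tensor:
        mask = self.net(torch.cat([spectrum, keep], dim=-1))
        return spectrum * mask  # pass the chosen sounds, attenuate the rest

model = SelectiveFilter()
frame = torch.rand(1, 257)                   # magnitude spectrum of one audio frame
keep = torch.tensor([[0.0, 1.0, 0.0, 0.0]])  # user chose to keep "birds"
filtered = model(frame, keep)
print(filtered.shape)  # torch.Size([1, 257])
```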

Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature, the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms, and robotic applications.

The basic idea is simple: unlike in previous chips, where transistors only carried out calculations, here the transistors also store the data. That saves time and energy.

“As a result, the performance of the chips is also boosted,” says Amrouch.
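The contrast with a conventional design can be illustrated with a small software simulation, shown below: the weight values “live” inside the array that also performs the multiply-accumulate, so no separate memory-to-processor transfer is modeled. This is a conceptual sketch only, not the FeFET hardware or its programming interface.

```python
# Conceptual illustration of in-memory computing (software simulation
# only, not the FeFET hardware): the weight matrix is stored in the
# crossbar array itself, and a multiply-accumulate is performed in place
# by driving input values, instead of moving weights to a separate ALU.
import numpy as np

class Crossbar:
    def __init__(self, weights: np.ndarray):
        self.conductances = weights  # values stored in the array cells themselves

    def mac(self, inputs: np.ndarray) -> np.ndarray:
        # Output "currents" = conductances @ input "voltages" (in-place dot products)
        return self.conductances @ inputs

xbar = Crossbar(np.array([[0.2, 0.5], [0.8, 0.1]]))
print(xbar.mac(np.array([1.0, 0.5])))  # [0.45 0.85]
```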