
Elon Musk acquires Twitter for roughly $44 billion

The company’s board and the Tesla CEO hammered out the final details of his $54.20 a share bid.

The agreement marks the close of a dramatic courtship and a sharp change of heart at the social-media network.

Elon Musk acquired Twitter for $44 billion on Monday, the company announced, giving the world’s richest person command of one of its most influential social media sites — which serves as a platform for political leaders, a sounding board for experts across industries and an information hub for millions of everyday users.

The acquisition followed weeks of evangelizing on the necessity of “free speech,” as the Tesla CEO seized on Twitter’s role as the “de facto town square” and took umbrage at content moderation efforts he sees as an escalation toward censorship. He said he sees Twitter as essential to the functioning of democracy and said the economics are not a concern.

Ownership of Twitter gives Musk power over hugely consequential societal and political issues, perhaps most significantly the ban on former president Donald Trump that the website enacted in response to the Jan. 6 riots.

Under the terms of the deal, Twitter will become a private company and shareholders will receive $54.20 per share, the company said in a news release. The deal is expected to close this year.

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” Musk said in the release. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential — I look forward to working with the company and the community of users to unlock it.”

Quantifying Human Consciousness With the Help of AI

A new deep learning algorithm is able to quantify arousal and awareness in humans at the same time.




Source: CORDIS

New research supported by the EU-funded HBP SGA3 and DoCMA projects is giving scientists new insight into human consciousness.

Led by Korea University and projects’ partner University of Liège (Belgium), the research team has developed an explainable consciousness indicator (ECI) to explore different components of consciousness.

Growing Anomalies at the Large Hadron Collider Raise Hopes

Amid the chaotic chains of events that ensue when protons smash together at the Large Hadron Collider in Europe, one particle has popped up that appears to go to pieces in a peculiar way.

All eyes are on the B meson, a yoked pair of quark particles. Having caught whiffs of unexpected B meson behavior before, researchers with the Large Hadron Collider beauty experiment (LHCb) have spent years documenting rare collision events featuring the particles, in hopes of conclusively proving that some novel fundamental particle or effect is meddling with them.

In their latest analysis, first presented at a seminar in March, the LHCb physicists found that several measurements involving the decay of B mesons conflict slightly with the predictions of the Standard Model of particle physics — the reigning set of equations describing the subatomic world. Taken alone, each oddity looks like a statistical fluctuation, and they may all evaporate with additional data, as has happened before. But their collective drift suggests that the aberrations may be breadcrumbs leading beyond the Standard Model to a more complete theory.
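To see why several individually modest deviations can matter collectively, here is a toy illustration. This is not LHCb's actual statistical procedure, and the z-scores below are invented; it simply combines independent, same-direction deviations with Stouffer's method, showing how three ~2-sigma oddities can add up to a far more significant collective drift.

```python
import math

def p_value(z):
    """One-sided tail probability of a standard normal at z sigma."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def stouffer_combine(z_scores):
    """Combine independent, same-direction deviations (Stouffer's method)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# Three hypothetical, independent deviations of roughly 2 sigma each
zs = [2.1, 1.9, 2.3]
print("individual p-values:", [round(p_value(z), 4) for z in zs])
print(f"combined significance: {stouffer_combine(zs):.2f} sigma")
```

Each deviation alone is unremarkable (a 2-sigma fluctuation occurs by chance a few percent of the time), but pointing in the same direction they combine to well over 3 sigma, which is the intuition behind watching the measurements' "collective drift."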

Quasiparticles used to generate millions of truly random numbers a second

This could lead to truly random number generators that make cryptographic systems much more secure.


Random numbers are crucial for computing, but our current algorithms aren’t truly random. Researchers at Brown University have now found a way to tap into the fluctuations of quasiparticles to generate millions of truly random numbers per second.

Random number generators are key parts of computer software, but technically they don’t quite live up to their name. The algorithms that generate these numbers are still deterministic, meaning that anyone with enough information about how they work could potentially find patterns and predict the numbers produced. These pseudo-random numbers suffice for low-stakes uses like gaming, but for scientific simulations or cybersecurity, truly random numbers are important.
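The determinism described above is easy to demonstrate: a pseudo-random generator seeded with the same value always emits exactly the same "random" sequence. A minimal Python sketch:

```python
import random

# Two generators seeded identically produce identical sequences:
# the output is fully determined by the seed, not by chance.
gen_a = random.Random(42)
gen_b = random.Random(42)

seq_a = [gen_a.randint(0, 99) for _ in range(5)]
seq_b = [gen_b.randint(0, 99) for _ in range(5)]

print(seq_a == seq_b)  # True: pseudo-random, not truly random
```

An attacker who recovers the seed (or enough internal state) can replay the whole stream, which is why hardware entropy sources, whether OS-level pools or physical processes like the quasiparticle fluctuations studied here, matter for security.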

In recent years scientists have turned to the strange world of quantum physics for true randomization, using photons to generate strings of random ones and zeroes or tapping into the quantum vibrations of diamond. And for the new study, the Brown scientists tried something similar.

Scientists create algorithm to assign a label to every pixel in the world, without human supervision

Labeling data can be a chore. It’s the main source of sustenance for computer-vision models; without it, they’d have a lot of difficulty identifying objects, people, and other important image characteristics. Yet producing just an hour of tagged and labeled data can take a whopping 800 hours of human time. Machines develop a higher-fidelity understanding of the world as they better perceive and interact with our surroundings. But they need more help.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Microsoft, and Cornell University have attempted to solve this problem plaguing vision models by creating “STEGO,” an algorithm that can jointly discover and segment objects without any human labels at all, down to the pixel.

STEGO learns something called “semantic segmentation”—fancy speak for the process of assigning a label to every pixel in an image. Semantic segmentation is an important skill for today’s computer-vision systems because images can be cluttered with objects. Even more challenging is that these objects don’t always fit into literal boxes; algorithms tend to work better for discrete “things” like people and cars as opposed to “stuff” like vegetation, sky, and mashed potatoes. A previous system might simply perceive a nuanced scene of a dog playing in the park as just a dog, but by assigning every pixel of the image a label, STEGO can break the image into its main ingredients: a dog, sky, grass, and its owner.
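As a rough illustration of what a per-pixel label map looks like (the tiny scene and class names below are invented for the example, not STEGO's actual output), semantic segmentation turns an image into a grid where every pixel carries a class id:

```python
from collections import Counter

# Hypothetical class ids for a toy 4x4 scene
SKY, GRASS, DOG, PERSON = 0, 1, 2, 3
CLASS_NAMES = {SKY: "sky", GRASS: "grass", DOG: "dog", PERSON: "person"}

# A semantic segmentation assigns one class id to *every* pixel,
# covering "stuff" (sky, grass) as well as "things" (dog, person).
label_map = [
    [SKY,   SKY, SKY,    SKY],
    [SKY,   SKY, PERSON, SKY],
    [GRASS, DOG, PERSON, GRASS],
    [GRASS, DOG, GRASS,  GRASS],
]

counts = Counter(cls for row in label_map for cls in row)
for cls, n in sorted(counts.items()):
    print(f"{CLASS_NAMES[cls]}: {n} pixels")
```

In a real system the grid matches the image resolution and the class ids come from the model; the point is simply that the output is a dense map, not a handful of bounding boxes.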

Toward Self-Improving Neural Networks: Schmidhuber Team’s Scalable Self-Referential Weight Matrix Learns to Modify Itself

Back in 1993, AI pioneer Jürgen Schmidhuber published the paper A Self-Referential Weight Matrix, which he described as a “thought experiment… intended to make a step towards self-referential machine learning by showing the theoretical possibility of self-referential neural networks whose weight matrices (WMs) can learn to implement and improve their own weight change algorithm.” However, a lack of subsequent practical studies in this area left this potentially impactful meta-learning ability unrealized — until now.

In the new paper A Modern Self-Referential Weight Matrix That Learns to Modify Itself, a research team from The Swiss AI Lab, IDSIA, University of Lugano (USI) & SUPSI, and King Abdullah University of Science and Technology (KAUST) presents a scalable self-referential WM (SRWM) that leverages outer products and the delta update rule to update and improve itself, achieving both practical applicability and impressive performance in game environments.

The proposed model is built upon fast weight programmers (FWPs), a scalable and effective method dating back to the ‘90s that can learn to memorize past data and compute fast weight changes via programming instructions that are additive outer products of self-invented activation patterns, aka keys and values for self-attention. In light of their connection to linear variants of today’s popular transformer architectures, FWPs are now witnessing a revival. Recent studies have advanced conventional FWPs with improved elementary programming instructions or update rules invoked by their slow neural net to reprogram the fast neural net, an approach that has been dubbed the “delta update rule.”
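A minimal sketch of the delta update rule for a fast weight matrix, in the simplified form W ← W + β(v − Wk)kᵀ. This is an illustration of the general mechanism only, not the paper's full self-referential model, and all names here are made up for the example:

```python
def matvec(W, x):
    """Multiply matrix W by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def delta_update(W, k, v, beta):
    """Delta rule for a fast weight matrix: W <- W + beta * (v - W k) k^T.
    The correction term (v - W k) writes the new value v for key k while
    cancelling whatever W currently retrieves for that key, instead of
    blindly adding the outer product v k^T on top of old associations."""
    retrieved = matvec(W, k)
    return [
        [W[i][j] + beta * (v[i] - retrieved[i]) * k[j] for j in range(len(k))]
        for i in range(len(W))
    ]

# Toy demo: store a value under a unit-norm key, then retrieve it
W = [[0.0, 0.0], [0.0, 0.0]]
k = [1.0, 0.0]       # unit-norm key
v = [0.5, -0.25]     # value to associate with the key
W = delta_update(W, k, v, beta=1.0)
print(matvec(W, k))  # retrieves the stored value [0.5, -0.25]
```

In a fast weight programmer, a slow network emits the keys, values, and learning rates at every step, so updates like this one become the "programming instructions" that rewrite the fast network on the fly.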
