
If the W boson’s excess heft relative to the standard theoretical prediction can be independently confirmed, the finding would imply the existence of undiscovered particles or forces and would bring about the first major rewriting of the laws of quantum physics in half a century.

“This would be a complete change in how we see the world,” potentially even rivaling the 2012 discovery of the Higgs boson in significance, said Sven Heinemeyer, a physicist at the Institute for Theoretical Physics in Madrid who is not part of CDF. “The Higgs fit well into the previously known picture. This one would be a completely new area to be entered.”

The finding comes at a time when the physics community hungers for flaws in the Standard Model of particle physics, the long-reigning set of equations capturing all known particles and forces. The Standard Model is known to be incomplete, leaving various grand mysteries unsolved, such as the nature of dark matter. The CDF collaboration’s strong track record makes their new result a credible threat to the Standard Model.

Santiago Ramón y Cajal, a Spanish physician working at the turn of the 20th century, is considered by most to be the father of modern neuroscience. He stared down a microscope day and night for years, fascinated by the chemically stained neurons he found in slices of human brain tissue. By hand, he painstakingly drew virtually every new type of neuron he came across, using nothing more than pen and paper. A Charles Darwin of the brain, he mapped every detail of the forest of neurons that make up the brain, calling them the “butterflies of the brain”. Today, more than a century later, Blue Brain has found a way to dispense with the human eye, pen and paper, and use only mathematics to automatically draw neurons in 3D as digital twins. Math can now be used to capture all the “butterflies of the brain”, which allows us to use computers to build any and all of the billions of neurons that make up the brain. And that means we are getting closer to being able to build digital twins of brains.

These billions of neurons form trillions of synapses, where neurons communicate with each other. Replicating the healthy and diseased states of the brain requires comprehensive neuron models and accurately reconstructed, detailed brain networks. Efforts to build such models and networks have historically been hampered by the lack of available experimental data. But now, scientists at the EPFL Blue Brain Project, using algebraic topology, a branch of mathematics, have created an algorithm that requires only a few examples to generate large numbers of unique cells. Using this algorithm, called Topological Neuronal Synthesis (TNS), they can efficiently synthesize millions of unique neuronal morphologies.
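To make the intuition concrete, here is a minimal, hypothetical Python sketch of the barcode-resampling idea behind topological synthesis: each branch of an example neuron is summarized as a (birth, death) pair of distances from the soma, new barcodes are sampled by perturbing the example, and a schematic tree is grown to match. This is not the Blue Brain implementation; the parameter values and helper names are illustrative only.

```python
import random

# Toy illustration of barcode-resampling for neuron synthesis (NOT the Blue
# Brain code): each branch of an example neuron is summarised by a
# (birth, death) pair of radial distances from the soma, and new cells are
# "grown" from jittered copies of those pairs.

# Illustrative example barcode, in microns (assumed values).
EXAMPLE_BARCODE = [(0.0, 120.0), (15.0, 80.0), (15.0, 60.0), (40.0, 55.0)]

def sample_barcode(barcode, jitter=5.0):
    """Draw a new persistence barcode by perturbing an example barcode."""
    sampled = []
    for birth, death in barcode:
        b = max(0.0, birth + random.uniform(-jitter, jitter))
        d = max(b + 1.0, death + random.uniform(-jitter, jitter))
        sampled.append((b, d))
    return sorted(sampled)

def grow_tree(barcode):
    """Build a schematic branching structure consistent with a barcode:
    each branch appears at its 'birth' distance and terminates at 'death'."""
    return [{"branch": i, "starts_um": b, "ends_um": d}
            for i, (b, d) in enumerate(barcode)]

# Synthesize a handful of unique morphological skeletons from one example.
for i in range(3):
    print(f"cell {i}:", grow_tree(sample_barcode(EXAMPLE_BARCODE)))
```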

In 1832, Charles Darwin witnessed hundreds of ballooning spiders landing on the HMS Beagle while some 60 miles offshore. Ballooning is a phenomenon that’s been known since at least the days of Aristotle—and immortalized in E.B. White’s children’s classic Charlotte’s Web—but scientists have only recently made progress in gaining a better understanding of its underlying physics.

Now, physicists have developed a new mathematical model incorporating all the various forces at play as well as the effects of multiple threads, according to a recent paper published in the journal Physical Review E. Authors M. Khalid Jawed (UCLA) and Charbel Habchi (Notre Dame University-Louaize) based their new model on a computer graphics algorithm used to model fur and hair in such blockbuster films as The Hobbit and Planet of the Apes. The work could one day contribute to the design of new types of ballooning sensors for explorations of the atmosphere.

There are competing hypotheses for how ballooning spiders are able to float off into the air. For instance, one proposal posits that, as the air warms with the rising sun, the silk threads the spiders emit to spin their “parachutes” catch the rising convection currents (the updraft) that are caused by thermal gradients. A second hypothesis holds that the threads have a static electric charge that interacts with the weak vertical electric field in the atmosphere.
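As a rough illustration of the forces involved, the sketch below compares the weight of a small spider against a slender-body drag estimate for a single silk thread in an updraft (hypothesis one) and the Coulomb force on a charged thread in the fair-weather atmospheric field (hypothesis two). This is not the model from the Physical Review E paper, which resolves full thread dynamics and multiple threads; every parameter value here is an openly assumed, order-of-magnitude placeholder.

```python
from math import pi, log

# Back-of-the-envelope force balance for a ballooning spiderling and one silk
# thread. All values are illustrative assumptions, not measured data.
g             = 9.81      # m/s^2
spider_mass   = 5e-7      # kg (~0.5 mg spiderling, assumed)
thread_len    = 3.0       # m of silk (assumed)
thread_radius = 2e-7      # m, silk radius (assumed)
air_viscosity = 1.8e-5    # Pa*s, dynamic viscosity of air
updraft       = 0.2       # m/s, convective updraft (hypothesis 1, assumed)
thread_charge = 1e-9      # C, net charge on the silk (hypothesis 2, assumed)
e_field       = 120.0     # V/m, typical fair-weather atmospheric field

weight = spider_mass * g

# Hypothesis 1: viscous drag on a long slender fibre in the updraft,
# using the slender-body scaling F ~ 4*pi*mu*L*v / ln(L/r).
drag_lift = 4 * pi * air_viscosity * thread_len * updraft / log(thread_len / thread_radius)

# Hypothesis 2: Coulomb force on the charged thread in the vertical field.
electric_lift = thread_charge * e_field

print(f"weight        : {weight:.2e} N")
print(f"drag lift     : {drag_lift:.2e} N")
print(f"electric lift : {electric_lift:.2e} N")
print(f"net upward force (both effects): {drag_lift + electric_lift - weight:+.2e} N")
```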

The study also developed an automated diagnostic pipeline to streamline the genomic data, including the millions of variants present in each genome, for clinical interpretation. Variants unlikely to contribute to the presenting disease are removed, potentially causative variants are identified, and the most likely candidates are prioritized. For this pipeline, the researchers and clinicians used Exomiser, a software tool that Robinson co-developed in 2014. To assist with the diagnostic process, Exomiser uses a phenotype-matching algorithm to identify and prioritize gene variants revealed through sequencing. It thus automates the process of finding rare, segregating, and predicted pathogenic variants in genes in which the patient’s phenotypes match previously referenced knowledge from human disease or model organism databases. The paper notes that the use of Exomiser greatly increased the number of successful diagnoses made.
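The filter-then-rank logic such a pipeline automates can be sketched in a few lines. The following is not Exomiser’s actual API or scoring model; the variant fields, thresholds, and phenotype-match scores are simplified, made-up stand-ins meant only to show the shape of the computation.

```python
# Toy sketch of variant filtering and phenotype-based prioritisation
# (NOT Exomiser; all fields and scores below are illustrative assumptions).

CANDIDATE_VARIANTS = [
    {"gene": "GENE_A", "pop_freq": 0.2000,  "pathogenicity": 0.10, "segregates": False},
    {"gene": "GENE_B", "pop_freq": 0.0001,  "pathogenicity": 0.95, "segregates": True},
    {"gene": "GENE_C", "pop_freq": 0.00005, "pathogenicity": 0.70, "segregates": True},
]

# HPO-style similarity between the patient's phenotypes and the phenotypes
# already linked to each gene in disease / model-organism databases (made up).
PHENOTYPE_MATCH = {"GENE_A": 0.2, "GENE_B": 0.9, "GENE_C": 0.4}

def filter_variants(variants, max_freq=0.001):
    """Step 1: drop common or non-segregating variants unlikely to cause a rare disease."""
    return [v for v in variants if v["pop_freq"] <= max_freq and v["segregates"]]

def prioritise(variants):
    """Step 2: rank the survivors by predicted pathogenicity x phenotype match."""
    scored = [(v["pathogenicity"] * PHENOTYPE_MATCH[v["gene"]], v["gene"]) for v in variants]
    return sorted(scored, reverse=True)

for score, gene in prioritise(filter_variants(CANDIDATE_VARIANTS)):
    print(f"{gene}: combined score {score:.2f}")
```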

The genomic future.

Not surprisingly, the paper concludes that the findings from the pilot study support the case for using whole genome sequencing for diagnosing rare disease patients. Indeed, in patients with specific disorders such as intellectual disability, genome sequencing is now the first-line test within the NHS. The paper also emphasizes the importance of using the HPO to establish a standardized, computable clinical vocabulary, which provides a solid foundation for all genomics-based diagnoses, not just those for rare disease. As the 100,000 Genomes Project continues its work, the HPO will continue to be an essential part of improving patient prognoses through genomics.

The battle between artificial intelligence and human intelligence has been going on for a while now, and AI is clearly coming very close to beating humans in many areas, partly due to improvements in neural network hardware and partly due to improvements in machine learning algorithms. This video goes over whether and how humans could soon be surpassed by artificial general intelligence.

TIMESTAMPS:
00:00 Is AGI actually possible?
01:11 What is Artificial General Intelligence?
03:34 What are the problems with AGI?
05:43 The Ethics behind Artificial Intelligence.
08:03 Last Words.

#ai #agi #robots

We study the question of how to decompose Hilbert space into a preferred tensor-product factorization without any pre-existing structure other than a Hamiltonian operator, in particular the case of a bipartite decomposition into “system” and “environment.” Such a decomposition can be defined by looking for subsystems that exhibit quasi-classical behavior. The correct decomposition is one in which pointer states of the system are relatively robust against environmental monitoring (their entanglement with the environment does not continually and dramatically increase) and remain localized around approximately-classical trajectories. We present an in-principle algorithm for finding such a decomposition by minimizing a combination of entanglement growth and internal spreading of the system. Both of these properties are related to locality in different ways.
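A toy numerical sketch can show the flavor of this selection principle. The code below scores only entanglement growth (not the internal-spreading term), represents candidate factorizations by random basis changes of a tiny 4-dimensional Hilbert space rather than performing a principled minimization, and keeps whichever candidate entangles an initial product state most slowly. It is a drastic simplification of the in-principle algorithm described above, with all names and choices my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(d):
    """A generic Hermitian 'Hamiltonian' on a d-dimensional Hilbert space."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def random_unitary(d):
    """A random basis change, standing in for a candidate factorization."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def entanglement_entropy(psi):
    """Von Neumann entropy of the reduced 2x2 'system' state for the fixed 2x2 split."""
    m = psi.reshape(2, 2)
    rho_sys = m @ m.conj().T
    evals = np.clip(np.linalg.eigvalsh(rho_sys), 1e-12, 1.0)
    return float(-(evals * np.log(evals)).sum())

H = random_hermitian(4)
energies, V = np.linalg.eigh(H)
t = 0.1
U_t = V @ np.diag(np.exp(-1j * energies * t)) @ V.conj().T   # exp(-i H t)

psi0 = np.array([1, 0, 0, 0], dtype=complex)  # product state in the fixed 2 (x) 2 split

# Each candidate factorization is a basis change W; score it by how much
# entanglement W exp(-iHt) W^dagger |psi0> develops across the fixed split.
scores = []
for i in range(200):
    W = random_unitary(4)
    psi_t = W @ U_t @ W.conj().T @ psi0
    scores.append((entanglement_entropy(psi_t), i))

best_S, best_i = min(scores)
print(f"slowest entanglement growth after t = {t}: S = {best_S:.4f} (candidate #{best_i})")
```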