BLOG

Archive for the ‘information science’ category: Page 82

Jun 6, 2023

If light has no mass, why is it affected by gravity? General Relativity Theory

Posted by in categories: information science, mathematics, space

General relativity is part of the wide-ranging physical theory of relativity formulated by the German-born physicist Albert Einstein, who conceived it in 1915. It explains gravity in terms of the way space can ‘curve’ or, to put it more accurately, it associates the force of gravity with the changing geometry of space-time.

The mathematical equations of Einstein’s general theory of relativity, tested time and time again, are currently the most accurate way to predict gravitational interactions, replacing those developed by Isaac Newton several centuries prior.
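For reference, the core of that mathematics is the set of Einstein field equations, which tie the curvature of space-time on the left-hand side to the energy and momentum of matter and radiation on the right-hand side (the standard textbook form, included here for context):

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^{4}} T_{\mu\nu}
```

Here G_{\mu\nu} is the Einstein curvature tensor, g_{\mu\nu} the space-time metric, \Lambda the cosmological constant, T_{\mu\nu} the stress-energy tensor, G Newton's gravitational constant, and c the speed of light. Massless light simply follows the geodesics of this curved geometry, which is why it bends near massive bodies.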

Continue reading “If light has no mass, why is it affected by gravity? General Relativity Theory” »

Jun 5, 2023

Quantum computers are better at guessing, new study demonstrates

Posted by in categories: computing, entertainment, information science, quantum physics

Daniel Lidar, the Viterbi Professor of Engineering at USC and Director of the USC Center for Quantum Information Science & Technology, and Dr. Bibek Pokharel, a Research Scientist at IBM Quantum, have achieved a quantum speedup advantage in the context of a “bitstring guessing game.” They managed strings up to 26 bits long, significantly longer than previously possible, by effectively suppressing the errors typically seen at this scale. (A bit is a binary digit that is either zero or one.) Their paper is published in the journal Physical Review Letters.

Quantum computers promise to solve certain problems with an advantage that increases as the problems increase in complexity. However, they are also highly prone to errors, or noise. The challenge, says Lidar, is “to obtain an advantage in the real world where today’s quantum computers are still ‘noisy.’” This noise-prone condition of current quantum computers is termed the “NISQ” (Noisy Intermediate-Scale Quantum) era, a term adapted from the RISC architecture used to describe classical computing devices. Thus, any present demonstration of quantum speed advantage necessitates noise reduction.

The more unknown variables a problem has, the harder it usually is for a computer to solve. Scholars can evaluate a computer’s performance by playing a type of guessing game with it to see how quickly an algorithm can guess hidden information. For instance, imagine a version of the TV game show Jeopardy!, where contestants take turns guessing a secret word of known length, one whole word at a time. The host reveals only one correct letter for each guessed word before changing the secret word randomly.
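The task behind such demonstrations is commonly formulated as a hidden-bitstring (Bernstein-Vazirani-style) oracle problem. The sketch below is a minimal classical version of that formulation, written as an illustration under that assumption rather than as the paper's exact protocol; a classical player needs one oracle query per bit, while the quantum algorithm needs only one query in total.

```python
import random

def make_oracle(secret):
    # Hidden-bitstring oracle: for a query q it returns the parity
    # (dot product mod 2) of q with the secret string.
    def oracle(query):
        return sum(s & q for s, q in zip(secret, query)) % 2
    return oracle

def classical_guess(oracle, n):
    # Classically, recovering an n-bit secret takes n queries:
    # probe one bit position at a time with a unit vector.
    bits = []
    for i in range(n):
        probe = [0] * n
        probe[i] = 1
        bits.append(oracle(probe))
    return bits

n = 26  # string length reported in the USC/IBM experiment
secret = [random.randint(0, 1) for _ in range(n)]
oracle = make_oracle(secret)
recovered = classical_guess(oracle, n)
print(recovered == secret)  # True, after n = 26 oracle queries
```

On a noiseless quantum computer the same secret can be extracted with a single oracle call, which is the speedup the experiment set out to preserve in the presence of noise.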

Jun 4, 2023

Perovskite Sensor Array Emulates Human Retina For Panchromatic Imaging

Posted by in categories: biological, information science, life extension, robotics/AI, solar power, sustainability

The mammalian retina is a complex system consisting of cones (for color) and rods (for peripheral monochrome vision) that provide the raw image data, which is then processed by successive layers of neurons before this preprocessed data is sent via the optic nerve to the brain’s visual cortex. In order to emulate this system as closely as possible, researchers at Penn State University have created a system that uses perovskite (methylammonium lead halide, MAPbX3) RGB photodetectors and a neuromorphic processing algorithm that performs processing similar to that of the biological retina.

Panchromatic imaging is defined as being ‘sensitive to light of all colors in the visible spectrum’, which in imaging means enhancing the monochromatic (e.g. RGB) channels using panchromatic (intensity, not frequency) data. For the retina this means that the incoming light is used to determine not merely the separate colors, but also the intensity, which is what underlies the wide dynamic range of the Mark I eyeball. In this experiment, layers of these MAPbX3 (X being Cl, Br, I, or a combination thereof) perovskites formed stacked RGB sensors.

The output of these sensor layers was then processed by a pretrained convolutional neural network to generate the final panchromatic image, which could then be used for a wide range of purposes. Some applications noted by the researchers include new types of digital cameras, as well as artificial retinas, limited mostly by how well the perovskite layers scale in resolution, and by their longevity, which is a long-standing issue with perovskites. Another possibility raised is that of powering at least part of the system using the energy collected by the perovskite layers, akin to proposed perovskite-based solar panels.
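As a rough illustration of what enhancing RGB channels with panchromatic data can look like in practice, here is a minimal Brovey-style pan-sharpening sketch in Python. It is a stand-in for the researchers' pretrained convolutional network, not their actual algorithm; the array shapes and the fusion rule are assumptions chosen for clarity.

```python
import numpy as np

# Fuse per-channel (RGB) readouts with a separately measured panchromatic
# (broadband intensity) channel: rescale each color channel so its local
# brightness follows the pan channel, preserving hue while borrowing the
# pan channel's dynamic range.
rgb = np.random.rand(64, 64, 3)   # stacked R, G, B sensor readouts
pan = np.random.rand(64, 64, 1)   # broadband intensity readout

eps = 1e-8
intensity = rgb.mean(axis=2, keepdims=True)
fused = np.clip(rgb * (pan / (intensity + eps)), 0.0, 1.0)

print(fused.shape)  # (64, 64, 3) panchromatically enhanced image
```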

Jun 3, 2023

Joscha Bach: Time, Simulation Hypothesis, & Existence

Posted by in categories: cosmology, economics, education, government, information science, mathematics, quantum physics, robotics/AI

Joscha Bach is a cognitive scientist focusing on cognitive architectures, consciousness, models of mental representation, emotion, motivation and sociality.

Patreon: https://patreon.com/curtjaimungal
Crypto: https://tinyurl.com/cryptoTOE
PayPal: https://tinyurl.com/paypalTOE
Twitter: https://twitter.com/TOEwithCurt
Discord Invite: https://discord.com/invite/kBcnfNVwqs
iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-wit…1521758802
Pandora: https://pdora.co/33b9lfP
Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e
Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything
Merch: https://tinyurl.com/TOEmerch

Continue reading “Joscha Bach: Time, Simulation Hypothesis, & Existence” »

Jun 3, 2023

AI Creates Killer Drug

Posted by in categories: biotech/medical, chemistry, information science, robotics/AI

Researchers in Canada and the United States have used deep learning to discover an antibiotic that can attack a resistant microbe, Acinetobacter baumannii, which can infect wounds and cause pneumonia. According to the BBC, a paper in Nature Chemical Biology describes how the researchers used training data that measured known drugs’ action on the tough bacteria. The learning algorithm then predicted the effectiveness of 6,680 compounds for which no such data against the germ existed.

In an hour and a half, the program reduced the list to 240 promising candidates. Testing in the lab found that nine of these were effective and that one, now called abaucin, was extremely potent. While doing lab tests on 240 compounds sounds like a lot of work, it is better than testing nearly 6,700.
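The workflow just described, train on compounds with measured activity, score the unscreened library, and keep only the top-ranked candidates for the lab, can be sketched as follows. The feature vectors, model choice, and thresholds are illustrative assumptions, not the authors' actual deep-learning pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_train, n_library, n_features = 2000, 6680, 128   # library size from the article

# Compounds with measured activity against the bacterium (placeholder features).
X_train = rng.random((n_train, n_features))        # e.g. molecular fingerprints
y_train = rng.integers(0, 2, n_train)              # 1 = inhibited growth, 0 = did not

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Compounds with no effectiveness data: rank them by predicted activity.
X_library = rng.random((n_library, n_features))
scores = model.predict_proba(X_library)[:, 1]

top_candidates = np.argsort(scores)[::-1][:240]    # 240 candidates, as reported
print(len(top_candidates), "compounds short-listed for wet-lab testing")
```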

Interestingly, the new antibiotic seems only to be effective against the target microbe, which is a plus. It isn’t available for people yet and may not be for some time — drug testing being what it is. However, this is still a great example of how machine learning can augment human brainpower, letting scientists and others focus on what’s really important.

Jun 3, 2023

I don’t believe in free will. This is why

Posted by in categories: cosmology, information science, neuroscience, physics

If I were a brilliant physicist, I would have written this.


Learn more about differential equations (and many other topics in maths and science) on Brilliant using the link https://brilliant.org/sabine. You can get started for free, and the first 200 people to sign up will get 20% off the annual premium subscription.

Continue reading “I don’t believe in free will. This is why” »

Jun 3, 2023

Ultra-Processed Foods: AI’s New Contribution to Nutrition Science

Posted by in categories: food, health, information science, robotics/AI, science

Summary: Researchers developed a machine learning algorithm, FoodProX, capable of predicting the degree of processing in food products.

The tool scores foods on a scale from zero (minimally or unprocessed) to 100 (highly ultra-processed). FoodProX bridges gaps in existing nutrient databases, providing higher resolution analysis of processed foods.

This development is a significant advancement for researchers examining the health impacts of processed foods.
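As a sketch of the idea, a FoodProX-style model can be thought of as a supervised regressor that maps a food's nutrient panel to a 0-100 processing score. Everything below (features, labels, model choice) is an illustrative assumption, not the published FoodProX implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_foods, n_nutrients = 5000, 50             # e.g. protein, sugars, sodium, ...

X = rng.random((n_foods, n_nutrients))      # nutrient concentrations per 100 g
y = rng.uniform(0, 100, n_foods)            # 0 = unprocessed, 100 = ultra-processed

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

new_food = rng.random((1, n_nutrients))
print(f"Predicted processing score: {model.predict(new_food)[0]:.1f} / 100")
```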

Jun 3, 2023

AI-Descartes: A Scientific Renaissance in the World of Artificial Intelligence

Posted by in categories: information science, robotics/AI

The system demonstrated its chops on Kepler’s third law of planetary motion, Einstein’s relativistic time-dilation law, and Langmuir’s equation of gas adsorption.

AI-Descartes, a new AI scientist, has successfully reproduced Nobel Prize-winning work using logical reasoning and symbolic regression to find accurate equations. The system is effective with real-world data and small datasets, with future goals including automating the construction of background theories.
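To give a flavor of the symbolic-regression half of that approach (the logical-reasoning half is not shown), here is a toy search over a few hand-picked candidate functional forms, fitted to well-known planetary data for Kepler's third law. The candidate list and fitting procedure are illustrative assumptions, far simpler than what AI-Descartes actually does.

```python
import numpy as np

# Orbital data: semi-major axis (AU) and period (years) for five planets.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862])

# Candidate functional forms T = c * f(a); fit c by least squares for each.
candidates = {
    "T = c*a":      lambda x: x,
    "T = c*a**2":   lambda x: x**2,
    "T = c*a**1.5": lambda x: x**1.5,
}

for name, f in candidates.items():
    g = f(a)
    c = np.sum(g * T) / np.sum(g**2)        # closed-form least-squares fit of c
    mse = np.mean((c * g - T) ** 2)
    print(f"{name:14s} c = {c:.3f}   mse = {mse:.5f}")

# The a**1.5 form wins with c close to 1, recovering Kepler's third law
# (T^2 proportional to a^3) directly from the data.
```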

In 1918, the American chemist Irving Langmuir published a paper examining the behavior of gas molecules sticking to a solid surface. Guided by the results of careful experiments, as well as his theory that solids offer discrete sites for the gas molecules to fill, he worked out a series of equations that describe how much gas will stick, given the pressure.
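The relation Langmuir arrived at is usually written, in its standard single-site form, as the Langmuir adsorption isotherm (the textbook form, shown here for context rather than quoted from the AI-Descartes paper):

```latex
\theta = \frac{K p}{1 + K p}
```

where \theta is the fraction of occupied surface sites, p is the gas pressure, and K is an equilibrium constant. At low pressure, coverage grows roughly linearly with p; at high pressure it saturates toward 1, reflecting the finite number of discrete adsorption sites in Langmuir's theory.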

Jun 2, 2023

No one has done AR or VR well. Can Apple?

Posted by in categories: augmented reality, biotech/medical, information science, virtual reality

On Monday, Apple is more than likely going to reveal its long-awaited augmented or mixed reality Reality Pro headset during the keynote of its annual WWDC developer conference in California. It’s an announcement that has been tipped or teased for years now, and reporting on the topic has suggested that at various times, the project has been subject to delays, internal skepticism and debate, technical challenges and more. Leaving anything within Apple’s sphere of influence aside, the world’s overall attitude toward AR and VR has shifted considerably — from optimism, to skepticism.

Part of that trajectory is just the natural progression of any major tech hype cycle, and you could easily argue that the time to make the most significant impact in any such cycle is after the spike of undue optimism and energy has subsided. But in the case of AR and VR, we’ve actually already seen some of the tech giants with the deepest pockets take their best shots and come up wanting — not for lack of trying, but because of limitations in terms of what’s possible even at the bleeding edge of available tech. Some of those limits might actually be endemic to AR and VR, too, because of variances in the human side of the equation required to make mixed reality magic happen.

The virtual elephant in the room is, of course, Meta. The name itself pretty much sums up the situation: Facebook founder Mark Zuckerberg read a bad book and decided that VR was the inevitable end state of human endeavor — the mobile moment he essentially missed out on, but even bigger and better. Zuckerberg grew enamored of his delusion, first acquiring crowdfunded VR darling Oculus, then eventually commandeering the sobriquet for a shared virtual universe from the dystopian predictions of a better book and renaming all of Facebook after it.

Jun 1, 2023

Research team designs brain-inspired device for optoelectronic computing

Posted by in categories: information science, mathematics, robotics/AI

Perfect recall, computational wizardry and rapier wit: That’s the brain we all want, but how does one design such a brain? The real thing comprises roughly 80 billion neurons that coordinate with one another through tens of thousands of connections in the form of synapses. The human brain has no centralized processor, the way a standard laptop does.

Instead, many calculations are run in parallel, and the outcomes are compared. While the operating principles of the human brain are not fully understood, existing mathematical algorithms can be used to rework deep learning principles into systems that operate more like a human brain does. This brain-inspired computing paradigm, the spiking neural network (SNN), provides a computing architecture well-aligned with the potential advantages of systems that use both optical and electronic signals.

In SNNs, information is processed in the form of spikes, or action potentials, which are the electrical impulses that occur in real neurons when they fire. One of their key features is that they use asynchronous processing, meaning that spikes are processed as they occur in time, rather than being processed in a batch as in traditional neural networks. This allows SNNs to react quickly to changes in their inputs, and to perform certain types of computations more efficiently than traditional neural networks.
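A minimal leaky integrate-and-fire (LIF) neuron makes this event-driven behavior concrete: the neuron integrates incoming spikes, its potential leaks between events, and it emits a spike only when a threshold is crossed. All constants below are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def lif(input_spikes, dt=1.0, tau=20.0, threshold=1.0, weight=0.3):
    # Leaky integrate-and-fire neuron: leak, integrate each input event,
    # and fire (then reset) whenever the membrane potential crosses threshold.
    v = 0.0
    output = []
    for s in input_spikes:
        v += dt * (-v / tau) + weight * s
        if v >= threshold:
            output.append(1)   # spike out
            v = 0.0            # reset membrane potential
        else:
            output.append(0)
    return output

rng = np.random.default_rng(0)
spikes_in = (rng.random(50) < 0.4).astype(int)   # sparse, asynchronous input
print(lif(spikes_in))
```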

Page 82 of 329