BLOG

Archive for the ‘information science’ category: Page 69

Jan 14, 2023

Quantum computers: How scientists can shield against cyber attacks

Posted in categories: cybercrime/malcode, encryption, information science, quantum physics

Making predictions is never easy, but there is broad agreement that the advent of quantum computers will reshape cryptography.

Thirteen, 53, and 433. Those are the sizes, in quantum bits (qubits), of recent quantum computers.



Jan 14, 2023

Quantum machine learning (QML) poised to make a leap in 2023

Posted in categories: information science, quantum physics, robotics/AI, security


Classical machine learning (ML) algorithms have proven to be powerful tools for a wide range of tasks, including image and speech recognition, natural language processing (NLP) and predictive modeling. However, classical algorithms are limited by the constraints of classical computing and can struggle to process large and complex datasets or to achieve high levels of accuracy and precision.

Enter quantum machine learning (QML).
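
Most near-term QML methods share a common variational pattern: encode classical features into qubit rotations, apply trainable gates, measure an expectation value, and let a classical optimizer tune the gate parameters. Below is a minimal sketch of that pattern using the PennyLane library; the two-qubit circuit, the encoding, and the cost function are illustrative assumptions, not a published model.

```python
# Minimal variational-circuit sketch (illustrative, not a published model).
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights, x):
    qml.RY(x[0], wires=0)             # encode one classical feature per qubit
    qml.RY(x[1], wires=1)
    qml.CNOT(wires=[0, 1])            # entangle the two qubits
    qml.RY(weights[0], wires=0)       # trainable rotations
    qml.RY(weights[1], wires=1)
    return qml.expval(qml.PauliZ(0))  # readout in [-1, 1]

weights = np.array([0.1, 0.2], requires_grad=True)
x = np.array([0.5, -0.3])

# One gradient step nudging the output toward +1 for this input.
opt = qml.GradientDescentOptimizer(stepsize=0.1)
weights = opt.step(lambda w: (circuit(w, x) - 1.0) ** 2, weights)
```

The classical optimizer never sees the quantum state; it only queries the circuit for expectation values, which is what makes the scheme plausible on small, noisy hardware.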

Jan 13, 2023

Metformin For Anti-Aging: Disproved & Dangerous!

Posted in categories: biotech/medical, information science, life extension

This might be important. It might not be over for metformin just yet, though: a mouse study showed that rapamycin and metformin, taken in combination, removed each other's side effects.


If you are a non-diabetic who takes metformin for longevity, I highly recommend you stop immediately. Hear me out, and at the end of the video I’ll share what to do instead.


Jan 13, 2023

Artificial Pancreas Developed That Can Help Maintain Healthy Glucose Levels in Type 2 Diabetes Patients

Posted in categories: biotech/medical, health, information science

Scientists at the University of Cambridge have successfully trialed an artificial pancreas for use by patients living with type 2 diabetes. The device – powered by an algorithm developed at the University of Cambridge – doubled the amount of time patients were in the target range for glucose compared to standard treatment and halved the time spent experiencing high glucose levels.
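
The article does not spell out the Cambridge algorithm, but the closed-loop structure it automates is simple to state: read the glucose sensor, compute an insulin dose, command the pump, repeat. Here is a toy sketch of that loop; the proportional rule and all constants are hypothetical, and the real controller is far more sophisticated.

```python
# Toy closed-loop sketch (hypothetical rule and constants; not the
# Cambridge algorithm, which is far more sophisticated).
TARGET_MMOL_L = 6.0      # assumed glucose target
GAIN_U_PER_MMOL = 0.1    # assumed dose-per-unit-error gain

def insulin_dose(glucose_mmol_l: float) -> float:
    """Insulin units to deliver this cycle; never negative."""
    error = glucose_mmol_l - TARGET_MMOL_L
    return max(0.0, GAIN_U_PER_MMOL * error)

# One control cycle: sensor reading in, pump command out.
reading = 9.5                                     # mmol/L from the sensor
print(f"deliver {insulin_dose(reading):.2f} U")   # -> deliver 0.35 U
```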

Around 415 million people worldwide are estimated to be living with type 2 diabetes, which costs around $760 billion in annual global health expenditure. According to Diabetes UK, more than 4.9 million people have diabetes in the UK alone, of whom 90% have type 2 diabetes, and this is estimated to cost the NHS £10 billion per year.

“Many people with type 2 diabetes struggle to manage their blood sugar levels using the currently available treatments, such as insulin injections. The artificial pancreas can provide a safe and effective approach to help them, and the technology is simple to use and can be implemented safely at home.” —

Jan 13, 2023

Generative AI: From Data Generation to Creative Intelligence

Posted in categories: business, information science, robotics/AI

The common idea that our creativity is what makes us uniquely human has shaped society, but strides made in generative artificial intelligence call this very notion into question. Generative AI is an emerging field in which machine learning algorithms are used to create original content or data.

As we think about a future where humans and AI partner in iterative creative cycles, we consider how generative AI could impact current businesses and possibly create new ones. Until recently, machines were relegated to analysis and cognitive roles, but today algorithms are getting better at generating original content. These technologies are iterative by design: each model is built on top of the last, and each new iteration enhances the algorithm and expands the potential for discovery.

The technology presents itself as a more refined and mature breed of AI, one that has sent investors into a frenzy, and amid all this a clear market leader has emerged: OpenAI. Its flagship products, ChatGPT and DALL-E, proved to be industry disruptors and brought generative AI tools to the masses. DALL-E allows people to generate and edit photorealistic images simply by describing what they want to see, while ChatGPT does the same in text.

Jan 13, 2023

Visualizing a complex electron wavefunction using high-resolution attosecond technology

Posted in categories: information science, particle physics, quantum physics

The early 20th century saw the advent of quantum mechanics to describe the properties of small particles, such as electrons or atoms. Schrödinger’s equation in quantum mechanics can successfully predict the electronic structure of atoms or molecules. However, the “duality” of matter, referring to the dual “particle” and “wave” nature of electrons, remained a controversial issue. Physicists use a complex wavefunction to represent the wave nature of an electron.

“Complex” numbers are those that have both “real” and “imaginary” parts, the ratio of which defines the “phase.” However, all directly measurable quantities must be “real.” This leads to the following challenge: when an electron hits a detector, the “complex” phase information of the wavefunction disappears, leaving only the square of the amplitude of the wavefunction (a “real” value) to be recorded. This means that electrons are detected only as particles, which makes it difficult to explain their dual properties in atoms.
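
In symbols, writing the wavefunction in polar form shows why the phase survives in the mathematics but not in a detection event:

```latex
\psi(x) = |\psi(x)|\, e^{i\varphi(x)}, \qquad
P(x) = \psi^{*}(x)\,\psi(x) = |\psi(x)|^{2}
```

The detection probability P(x) carries no trace of the phase φ(x): a single measurement records the electron as a particle and discards the wave information.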

The ensuing century witnessed a new, evolving era of physics, namely, attosecond physics. The attosecond is a very short time scale, a billionth of a billionth (10⁻¹⁸) of a second. “Attosecond physics opens a way to measure the phase of electrons. Achieving attosecond time resolution, electron dynamics can be observed while freezing the much slower nuclear motion,” explains Professor Hiromichi Niikura from the Department of Applied Physics, Waseda University, Japan, who, along with Professor D. M. Villeneuve, a principal research scientist at the Joint Attosecond Science Laboratory, National Research Council, and adjunct professor at the University of Ottawa, pioneered the field of attosecond physics.

Jan 12, 2023

China’s new quantum code-breaking algorithm raises concerns in the US

Posted in categories: computing, encryption, information science, quantum physics

The new algorithm could render mainstream encryption powerless within years.

Chinese researchers claim to have introduced a new code-breaking algorithm that, if successful, could render mainstream encryption powerless within years rather than decades.

The team, led by Professor Long Guilu of Tsinghua University, claimed that a modest quantum computer constructed with currently available technology could run their algorithm, the South China Morning Post (SCMP) reported on Wednesday.
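
The preprint itself is not reproduced here, but the stakes are easy to state: RSA, the workhorse of mainstream encryption, is secure only as long as factoring its public modulus is infeasible. Here is a toy Python illustration, using tiny textbook numbers rather than anything from the Tsinghua algorithm, of why recovering the prime factors hands over the private key.

```python
# Toy RSA (textbook parameters): factoring n immediately yields the
# private key. Real deployments use 2048-bit moduli, which no known
# classical method can factor in practice.
p, q = 61, 53                    # the secret primes an attacker must find
n = p * q                        # public modulus: 3233
e = 17                           # public exponent
phi = (p - 1) * (q - 1)          # computable only from the factors of n
d = pow(e, -1, phi)              # private exponent, derived from p and q

msg = 42
cipher = pow(msg, e, n)            # anyone can encrypt with (n, e)
assert pow(cipher, d, n) == msg    # knowing p and q decrypts everything
```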

Jan 11, 2023

Open-Sourcing And Accelerating Precision Health Of The Future: Progress, Potential and Possibilities Podcast episode

Posted in categories: biotech/medical, food, health, information science, robotics/AI

Simon Waslander is the Director of Collaboration at the CureDAO Alliance for the Acceleration of Clinical Research (https://www.curedao.org/), a community-owned platform for the precision health of the future.

CureDAO is creating an open-source platform, organized as a decentralized autonomous organization (DAO), to discover how millions of factors, such as foods, drugs, and supplements, affect human health, making suffering optional through the creation of a “WordPress of health data”.
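
As a flavor of the analyses such a platform could automate, a toy sketch follows; the factor, the outcome, and the data are entirely hypothetical, and a real study would have to control for confounders.

```python
# Hypothetical factor-vs-outcome screen on synthetic daily data.
import numpy as np

rng = np.random.default_rng(0)
caffeine_mg = rng.integers(0, 300, size=365).astype(float)   # daily intake
sleep_score = 80.0 - 0.05 * caffeine_mg + rng.normal(0.0, 5.0, size=365)

r = np.corrcoef(caffeine_mg, sleep_score)[0, 1]
print(f"correlation(caffeine, sleep): {r:.2f}")   # negative by construction
```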


Jan 11, 2023

Neural network expert explains NEURALINK (in simple language)

Posted in categories: information science, internet, robotics/AI

00:00 Trailer.
05:54 Tertiary brain layer.
19:49 Curing paralysis.
23:09 How Neuralink works.
33:34 Showing probes.
44:15 Neuralink will be wayyy better than prior devices.
1:01:20 Communication is lossy.
1:14:27 Hearing Bluetooth, WiFi, Starlink.
1:22:50 Animal testing & brain proxies.
1:29:57 Controlling muscle units w/ Neuralink.

I had the privilege of speaking with James Douma, a self-described deep-learning dork. Experience and technical understanding like his are not easily found. I think you'll find his words intriguing and insightful. This is one of several conversations James and I plan to have.


Jan 11, 2023

AI creates high-resolution brain images from low-field strength MR scans

Posted in categories: biotech/medical, information science, robotics/AI

Portable, low-field strength MRI systems have the potential to transform neuroimaging, provided that their low spatial resolution and low signal-to-noise ratio (SNR) can be overcome. Researchers at Harvard Medical School are harnessing artificial intelligence (AI) to achieve this goal. They have developed a machine learning super-resolution algorithm that generates synthetic images with high spatial resolution from lower-resolution brain MRI scans.

The convolutional neural network (CNN) algorithm, known as LF-SynthSR, converts low-field strength (0.064 T) T1- and T2-weighted brain MRI sequences into isotropic images with 1 mm spatial resolution and the appearance of a T1-weighted magnetization-prepared rapid gradient-echo (MP-RAGE) acquisition. Describing their proof-of-concept study in Radiology, the researchers report that the synthetic images exhibited high correlation with images acquired by 1.5 T and 3.0 T MRI scanners.
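
The published network is not reproduced here, but the input/output contract the study describes is easy to sketch: two low-field contrasts in, one synthetic 1 mm MP-RAGE-like volume out. Below is a minimal PyTorch stand-in; this tiny three-layer CNN is an assumed placeholder, not the actual LF-SynthSR architecture.

```python
# Assumed placeholder for the low-field-to-synthetic-MP-RAGE mapping
# (not the published LF-SynthSR network).
import torch
import torch.nn as nn

class SynthSRSketch(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, width, 3, padding=1),   # inputs: T1w + T2w volumes
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, 1, 3, padding=1),   # output: synthetic MP-RAGE
        )

    def forward(self, x):
        return self.net(x)

# Paired low-field volumes, already resampled to a 1 mm isotropic grid.
vols = torch.randn(1, 2, 64, 64, 64)        # (batch, contrast, D, H, W)
synthetic = SynthSRSketch()(vols)           # -> shape (1, 1, 64, 64, 64)
```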

Morphometry, the quantitative size and shape analysis of structures in an image, is central to many neuroimaging studies. Unfortunately, most MRI analysis tools are designed for near-isotropic, high-resolution acquisitions and typically require T1-weighted images such as MP-RAGE. Their performance often drops rapidly as voxel size and anisotropy increase. As the vast majority of existing clinical MRI scans are highly anisotropic, they cannot be reliably analysed with existing tools.
