BLOG

Archive for the ‘supercomputing’ category: Page 52

Apr 21, 2021

Cerebras launches new AI supercomputing processor with 2.6 trillion transistors

Posted in categories: robotics/AI, supercomputing

Cerebras Systems has unveiled its new Wafer Scale Engine 2 processor with a record-setting 2.6 trillion transistors and 850,000 AI-optimized cores. It’s built for supercomputing tasks, and it’s the second time since 2019 that Los Altos, California-based Cerebras has unveiled a chip that is basically an entire wafer.

Chipmakers normally slice a wafer from a 12-inch-diameter ingot of silicon to process in a chip factory. Once processed, the wafer is sliced into hundreds of separate chips that can be used in electronic hardware.

But Cerebras, started by SeaMicro founder Andrew Feldman, takes that wafer and makes a single, massive chip out of it. Each piece of the chip, dubbed a core, is interconnected in a sophisticated way to other cores. The interconnections are designed to keep all the cores functioning at high speeds so the transistors can work together as one.
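The exact fabric Cerebras uses is proprietary, but the basic idea is easy to picture: a grid of cores, each exchanging data only with its neighbours. The Python sketch below is purely illustrative; the grid dimensions, the four-neighbour mesh, and the per-core transistor arithmetic are assumptions, not the real WSE-2 layout.

```python
# Minimal sketch of a wafer-scale 2D core mesh (illustrative only; the real
# WSE-2 fabric and core layout are not public in this detail).
from dataclasses import dataclass

GRID_W, GRID_H = 1000, 850            # hypothetical layout: 850,000 cores total
TRANSISTORS = 2.6e12
CORES = GRID_W * GRID_H
print(f"~{TRANSISTORS / CORES:,.0f} transistors per core")  # roughly 3 million

@dataclass(frozen=True)
class Core:
    x: int
    y: int

def neighbours(c: Core):
    """Cores reachable in one hop on a plain 2D mesh (no wrap at wafer edges)."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = c.x + dx, c.y + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
            yield Core(nx, ny)

def hop_distance(a: Core, b: Core) -> int:
    """Manhattan distance: minimum hops a message travels between two cores."""
    return abs(a.x - b.x) + abs(a.y - b.y)

# A corner core only has two neighbours; a corner-to-corner message crosses the wafer.
print(len(list(neighbours(Core(0, 0)))))                       # 2
print(hop_distance(Core(0, 0), Core(GRID_W - 1, GRID_H - 1)))  # 1848 hops
```

Even in this toy model a message from one corner of the wafer to the other crosses nearly two thousand cores, which is why the interconnect, rather than any single core, dominates the design.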

Apr 16, 2021

Simulations reveal how dominant SARS-CoV-2 strain binds to host, succumbs to antibodies

Posted in categories: biotech/medical, supercomputing

Large-scale supercomputer simulations at the atomic level show that the dominant G form variant of the COVID-19-causing virus is more infectious partly because of its greater ability to readily bind to its target host receptor in the body, compared to other variants. These research results from a Los Alamos National Laboratory-led team illuminate the mechanism of both infection by the G form and antibody resistance against it, which could help in future vaccine development.

“We found that the interactions among the basic building blocks of the Spike protein become more symmetrical in the G form, and that gives it more opportunities to bind to the receptor in the host—in us,” said Gnana Gnanakaran, corresponding author of the paper published today in Science Advances. “But at the same time, that means antibodies can more easily neutralize it. In essence, the variant puts its head up to bind to the receptor, which gives antibodies the chance to attack it.”

Researchers knew that the variant, also known as D614G, was more infectious and could be neutralized by antibodies, but they didn’t know how. Simulating more than a million atoms and requiring about 24 million CPU hours of supercomputer time, the new work provides molecular-level detail about the behavior of this variant’s Spike protein.
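For a rough sense of what 24 million CPU hours means in practice, here is a back-of-the-envelope conversion to wall-clock time; the node size and allocation below are invented purely for illustration and say nothing about the machines actually used.

```python
# Back-of-the-envelope: what 24 million CPU hours means in wall-clock time.
# The core count per node and node count are hypothetical assumptions.
cpu_hours = 24_000_000
cores_per_node = 64          # assumed node size
nodes = 500                  # assumed allocation

total_cores = cores_per_node * nodes          # 32,000 cores
wall_clock_hours = cpu_hours / total_cores    # 750 hours
print(f"{wall_clock_hours:.0f} h ≈ {wall_clock_hours / 24:.0f} days on {total_cores:,} cores")
# -> 750 h ≈ 31 days of continuous running at that (assumed) scale
```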

Apr 16, 2021

New processor will enable 10 times faster training of AI

Posted in categories: robotics/AI, supercomputing

NVIDIA has unveiled ‘Grace’ – its first data centre CPU, which will deliver a 10x performance leap for systems training AI models, using energy-efficient ARM cores. The company also revealed plans for a 20 exaflop supercomputer.

Apr 15, 2021

Photonic Supercomputer For AI: 10X Faster, 90% Less Energy, Plus Runway For 100X Speed Boost

Posted in categories: robotics/AI, supercomputing

The Lightmatter photonic computer is 10 times faster than the fastest NVIDIA artificial intelligence GPU while using far less energy. And it has a runway for boosting that massive advantage by a factor of 100, according to CEO Nicholas Harris.

In the process, it may just restart a moribund Moore’s Law.

Or completely blow it up.

Apr 12, 2021

Xenobots: Living, Biological Robots that Work in Swarms

Posted in categories: biological, robotics/AI, supercomputing

Scientists at Tufts University have created a strange new hybrid biological/mechanical organism that’s made of living cells, but operates like a robot.

As the Tufts scientists were creating the physical xenobot organisms, researchers working in parallel at the University of Vermont used a supercomputer to run simulations to try and find ways of assembling these living robots in order to perform useful tasks.

Mar 23, 2021

‘Doodles of light’ in real time mark leap for holograms at home

Posted in categories: holograms, information science, supercomputing

Researchers from Tokyo Metropolitan University have devised and implemented a simplified algorithm for turning freely drawn lines into holograms on a standard desktop CPU, dramatically cutting the computational cost and power consumption of algorithms that otherwise require dedicated hardware. It is fast enough to convert handwriting into holographic lines in real time, and it produces crisp, clear images that meet industry standards. Potential applications include hand-written remote instructions superimposed on landscapes and workbenches.

The potential applications of holography include important enhancements to vital, practical tasks, such as remote instructions for surgical procedures, electronic assembly on circuit boards, or directions projected onto landscapes for navigation. Making holograms available in a wide range of settings is key to bringing this technology out of the lab and into daily life.

One of the major drawbacks of this state-of-the-art technology is the computational load of generation. The kind of quality we’ve come to expect in our 2D displays is prohibitive in 3D, requiring supercomputing levels of number crunching to achieve. There is also the issue of power consumption. More widely available hardware like GPUs in gaming rigs might be able to overcome some of these issues with raw power, but the amount of electricity they use is a major impediment to mobile applications. Despite improvements to available hardware, the solution can’t be achieved by brute force.
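To see where that computational load comes from, consider the textbook brute-force approach: every hologram pixel sums a wave contribution from every object point, so the cost grows as points × pixels. The NumPy sketch below shows that generic method with made-up optical parameters; it is not the simplified line algorithm from the Tokyo Metropolitan University paper, whose whole point is to avoid this cost.

```python
# Generic point-source CGH (computer-generated hologram) sketch in NumPy.
# This is the standard brute-force method whose cost the article describes,
# NOT the Tokyo Metropolitan University simplified line algorithm; wavelength,
# pixel pitch, and geometry below are illustrative assumptions.
import numpy as np

WAVELENGTH = 532e-9          # green laser, metres (assumed)
PITCH = 8e-6                 # hologram pixel pitch, metres (assumed)
RES = 512                    # hologram is RES x RES pixels

def hologram_from_points(points_xyz: np.ndarray) -> np.ndarray:
    """Sum spherical-wave phases from each 3D object point at every pixel."""
    k = 2 * np.pi / WAVELENGTH
    ys, xs = np.mgrid[0:RES, 0:RES] * PITCH
    field = np.zeros((RES, RES), dtype=np.complex128)
    for px, py, pz in points_xyz:                     # O(points * pixels)
        r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r
    return np.angle(field)                            # phase-only hologram

# A hand-drawn stroke becomes a hologram by sampling it into points.
t = np.linspace(0, 1, 200)
stroke = np.stack([t * RES * PITCH,                   # a straight line, 0.1 m away
                   0.5 * RES * PITCH * np.ones_like(t),
                   np.full_like(t, 0.1)], axis=1)
phase = hologram_from_points(stroke)
```

Even this small 512 × 512 example with 200 sampled points needs more than 50 million complex evaluations; scaled to display-quality resolutions and dense 3D scenes, the brute-force method quickly reaches the supercomputing territory described above.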

Mar 17, 2021

Fujitsu Leverages World’s Fastest Supercomputer and AI to Predict Tsunami Flooding

Posted in categories: robotics/AI, supercomputing

A new AI model that harnesses the power of the world’s fastest supercomputer, Fugaku, can rapidly predict tsunami flooding in coastal areas before the tsunami reaches land.

The development of the new technology was announced as part of a joint project between the International Research Institute of Disaster Science (IREDeS) at Tohoku University, the Earthquake Research Institute at the University of Tokyo, and Fujitsu Laboratories.

The 2011 Great East Japan Earthquake and subsequent tsunami highlighted the shortcomings in disaster mitigation and the need to utilize information for efficient and safe evacuations.

Mar 11, 2021

Using artificial intelligence to generate 3D holograms in real-time

Posted in categories: holograms, physics, robotics/AI, supercomputing

https://youtube.com/watch?v=NOujMHH3LAU

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.
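The paper’s network details go beyond a news blurb, but the general shape of such a learned approach, an RGB-plus-depth image in and a hologram out, can be sketched in a few convolution layers. The PyTorch toy below is an assumption-laden stand-in, not the MIT model: the layer count, width, and two-channel amplitude/phase output are invented for illustration.

```python
# Toy stand-in for a learned RGB-D -> hologram mapping (NOT the MIT network;
# layer sizes and the two-channel amplitude/phase output are assumptions).
import torch
import torch.nn as nn

class ToyHologramNet(nn.Module):
    def __init__(self, width: int = 32, depth: int = 6):
        super().__init__()
        layers = [nn.Conv2d(4, width, 3, padding=1), nn.ReLU()]   # RGB + depth in
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 2, 3, padding=1)]             # amplitude, phase out
        self.net = nn.Sequential(*layers)

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        return self.net(rgbd)

# One forward pass on a 192x192 RGB-D frame: light enough for laptop-class hardware.
model = ToyHologramNet()
hologram = model(torch.rand(1, 4, 192, 192))   # -> shape (1, 2, 192, 192)
```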

Mar 10, 2021

The 10 most innovative companies in artificial intelligence

Posted in categories: robotics/AI, supercomputing

From Cerebras Systems’ AI supercomputer to OpenAI’s natural language processor GPT-3, these are the companies pushing machine learning to the edge.

Mar 9, 2021

The Mechanical Basis of Memory – the MeshCODE Theory

Posted in categories: biological, neuroscience, supercomputing

One of the major unsolved mysteries of biological science concerns the question of where and in what form information is stored in the brain. I propose that memory is stored in the brain in a mechanically encoded binary format written into the conformations of proteins found in the cell-extracellular matrix (ECM) adhesions that organise each and every synapse. The MeshCODE framework outlined here represents a unifying theory of data storage in animals, providing read-write storage of both dynamic and persistent information in a binary format. Mechanosensitive proteins that contain force-dependent switches can store information persistently, which can be written or updated using small changes in mechanical force. These mechanosensitive proteins, such as talin, scaffold each synapse, creating a meshwork of switches that together form a code, the so-called MeshCODE. Large signalling complexes assemble on these scaffolds as a function of the switch patterns and these complexes would both stabilise the patterns and coordinate synaptic regulators to dynamically tune synaptic activity. Synaptic transmission and action potential spike trains would operate the cytoskeletal machinery to write and update the synaptic MeshCODEs, thereby propagating this coding throughout the organism. Based on established biophysical principles, such a mechanical basis for memory would provide a physical location for data storage in the brain, with the binary patterns, encoded in the information-storing mechanosensitive molecules in the synaptic scaffolds, and the complexes that form on them, representing the physical location of engrams. Furthermore, the conversion and storage of sensory and temporal inputs into a binary format would constitute an addressable read-write memory system, supporting the view of the mind as an organic supercomputer.

I would like to propose here a unifying theory of rewritable data storage in animals. This theory is based around the realisation that mechanosensitive proteins, which contain force-dependent binary switches, can store information persistently in a binary format, with the information stored in each molecule able to be written and/or updated via small changes in mechanical force. The protein talin contains 13 of these switches (Yao et al., 2016; Goult et al., 2018; Wang et al., 2019), and, as I argue here, it is my assertion that talin is the memory molecule of animals. These mechanosensitive proteins scaffold each and every synapse (Kilinc, 2018; Lilja and Ivaska, 2018; Dourlen et al., 2019) and have been considered mainly structural. However, these synaptic scaffolds also represent a meshwork of binary switches that I propose form a code, the so-called MeshCODE.
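To make the read-write idea concrete, here is a deliberately crude toy model in Python: a talin-like molecule treated as a row of 13 force-dependent switches whose stored pattern is rewritten by changing tension. The unfolding thresholds are invented for illustration, and real switch behaviour is hysteretic and far richer than a simple threshold test.

```python
# Conceptual toy model of the MeshCODE idea: a talin-like molecule as a row of
# force-dependent binary switches. Threshold values are purely illustrative;
# this is a sketch of the concept, not a biophysical simulation.
from dataclasses import dataclass, field
from typing import List

N_SWITCHES = 13   # talin's 13 rod-domain switches (Yao et al., 2016)

@dataclass
class TalinSwitchRow:
    # Each domain unfolds (state 1) when tension exceeds its threshold (pN).
    thresholds_pn: List[float] = field(
        default_factory=lambda: [5 + i for i in range(N_SWITCHES)])  # assumed values
    states: List[int] = field(default_factory=lambda: [0] * N_SWITCHES)

    def write(self, tension_pn: float) -> None:
        """A small change in mechanical force rewrites the stored pattern."""
        self.states = [1 if tension_pn > t else 0 for t in self.thresholds_pn]

    def read(self) -> str:
        return "".join(map(str, self.states))

row = TalinSwitchRow()
row.write(9.5)          # modest tension flips only the low-threshold switches
print(row.read())       # '1111100000000' under these assumed thresholds
```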

Page 52 of 95