BLOG

Archive for the ‘computing’ category: Page 217

Dec 13, 2022

Brain Implants are Here: Blackrock’s Neuroport & Synchron’s Stentrode

Posted by in categories: biotech/medical, computing, mobile phones, neuroscience

Neurotechnology and Brain-Computer Interfaces are advancing at a rapid pace and may soon be a life-changing technology for those with limited mobility and/or paralysis. There are already two brain implants, Blackrock Neurotech’s NeuroPort and Synchron’s Stentrode, that have been approved to start clinical trials under an Investigational Device Exemption. In this video, we compare these devices on the merits of safety, device specifications, and capability.

Thanks to Blackrock Neurotech for sponsoring this video. The opinions expressed in this video are those of The BCI Guys and should be taken as such.


Dec 13, 2022

Particles of light may create fluid flow, data-theory comparison suggests

Posted by in categories: computing, particle physics

A new computational analysis by theorists at the U.S. Department of Energy’s Brookhaven National Laboratory and Wayne State University supports the idea that photons (a.k.a. particles of light) colliding with heavy ions can create a fluid of “strongly interacting” particles. In a paper just published in Physical Review Letters, they show that calculations describing such a system match up with data collected by the ATLAS detector at Europe’s Large Hadron Collider (LHC).

As the paper explains, the calculations are based on the hydrodynamic particle flow seen in head-on collisions of various types of ions at both the LHC and the Relativistic Heavy Ion Collider (RHIC), a DOE Office of Science user facility for nuclear physics research at Brookhaven Lab. With only modest changes, these calculations also describe the flow patterns seen in near-miss collisions, where photons that form a cloud around the speeding ions collide with the ions in the opposite beam.

“The upshot is that using the same framework we use to describe lead-lead and proton-lead collisions, we can describe the data of these ultra-peripheral collisions where we have a photon colliding with a lead nucleus,” said Brookhaven Lab theorist Bjoern Schenke, a co-author of the paper. “That tells you there’s a possibility that in these photon-ion collisions, we create a small dense strongly interacting medium that is well described by hydrodynamics—just like in the larger systems.”

Dec 13, 2022

Samsung puts processing-in-memory chip onto AMD MI100 GPU

Posted by in category: computing

Korean tech giant claims big performance, energy efficiency gains with memory tech.

Dec 13, 2022

Animal brains connected up to make mind-melded computer

Posted by in categories: computing, neuroscience

Year 2015:

The power of rats’ and monkeys’ brains has been pooled by wiring them up. If we could do the same with humans, it could allow non-verbal collaboration.

Dec 13, 2022

Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

Posted by in categories: biotech/medical, computing, neuroscience

Year 2018:

State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

Modern neuroscience has established numerical simulation as a third pillar supporting the investigation of the dynamics and function of neuronal networks, next to experimental and theoretical approaches. Simulation software reflects the diversity of modern neuroscientific research with tools ranging from the molecular scale to investigate processes at individual synapses (Wils and De Schutter, 2009) to whole-brain simulations at the population level that can be directly related to clinical measures (Sanz Leon et al., 2013). Most neuronal network simulation software, however, is based on the hypothesis that the main processes of brain function can be captured at the level of individual nerve cells and their interactions through electrical pulses. Since these pulses show little variation in shape, it is generally believed that they convey information only through their timing or rate of occurrence.
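The sparsity-aware scheme described in the abstract can be illustrated with a small sketch. This is not the NEST implementation itself, just a hedged toy model of the idea: a two-tier connection table in which each spiking neuron is routed only to the compute nodes that actually host its targets, so communication cost scales with connectivity rather than with the total number of nodes. All class and method names here (`SparseSpikeRouter`, `connect`, `route`) are illustrative inventions.

```python
# Toy sketch of sparsity-aware spike routing (assumed names, not NEST code).
from collections import defaultdict


class SparseSpikeRouter:
    """Two-tier connection infrastructure:
    tier 1 maps each sender neuron to the set of compute nodes hosting
    any of its targets; tier 2 maps (node, sender) to the local target
    neurons on that node."""

    def __init__(self):
        self.nodes_by_sender = defaultdict(set)   # tier 1
        self.targets = defaultdict(list)          # tier 2

    def connect(self, sender, node, target):
        """Register a connection from `sender` to `target` on `node`."""
        self.nodes_by_sender[sender].add(node)
        self.targets[(node, sender)].append(target)

    def route(self, spiking_senders):
        """Deliver spikes only to nodes that host targets of the spiking
        neurons (directed communication), instead of broadcasting every
        spike to every node."""
        deliveries = defaultdict(list)
        for sender in spiking_senders:
            for node in self.nodes_by_sender[sender]:
                deliveries[node].extend(self.targets[(node, sender)])
        return dict(deliveries)
```

In a brain-scale network each neuron connects to only a tiny fraction of nodes, so `route` touches far fewer nodes than an all-to-all exchange would; that is the sparsity the abstract says the simulation kernel must exploit.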

Dec 13, 2022

Japan’s New Semiconductor Foundry Rapidus Taps IBM For 2nm Process

Posted by in categories: business, computing, engineering, finance, government

Japan wants to get back into the leading-edge semiconductor business, and very recently a new company was formed to reboot its semiconductor industry. The company is named Rapidus, referring to rapid production of new chips, a clear reference to how the company plans to differentiate its business from other foundries such as TSMC, Samsung, and Intel. The company has announced a partnership with IBM Research to develop IBM’s 2nm technology in fabs that Rapidus plans to build in Japan during the second half of this decade. Previously, Rapidus announced a collaboration with the Belgium-based microelectronics research hub imec on advanced semiconductor technologies. Imec is a collaborative semiconductor research organization working with the world’s major foundries, IDMs, fabless and fab-lite companies, materials and tool suppliers, EDA companies, and application developers.

The IBM process uses gate-all-around transistors — IBM refers to them as nanosheet FETs — which are the next generation of transistor design that enables device scaling beyond today’s FinFETs. The 2nm structures will require Rapidus to use ASML’s EUV manufacturing equipment. Business details with IBM were not disclosed, but there are likely two parts to the deal: a cross-licensing agreement for the intellectual property necessary to build the product and a joint development agreement. While the announcement is nominally for IBM’s 2nm process, it likely includes a long-term commitment to build advanced semiconductor chips going beyond the 2nm process node.

Rapidus was formed by semiconductor veterans such as Rapidus President Atsuyoshi Koike, with backing by leading Japanese technology and financial firms, including Denso, Kioxia, Mitsubishi UFJ Bank, NEC, NTT, Softbank, Sony, and Toyota Motor. The Japanese government is also subsidizing Rapidus. The big change for Japan compared to prior national efforts is the collaboration with international organizations. It’s a recognition that Japan cannot go it alone, and it appears to be a fundamental change in Japanese attitudes. Building a fab in Japan will be helped by Japan’s strong manufacturing ecosystem of materials, equipment, and engineering talent.

Dec 13, 2022

Japan and the Netherlands just picked sides in the U.S.-China cold war over chips. Here’s what they chose

Posted by in category: computing

The two American allies are signing on to the U.S. offensive against China’s flagging chip industry.

Dec 12, 2022

Christopher Nolan Recreated a Nuclear Weapon Explosion Without CGI, Developed New IMAX Film for ‘Oppenheimer’: ‘A Huge Challenge’

Posted by in categories: computing, entertainment, military, quantum physics

Christopher Nolan revealed to Total Film magazine that he recreated the first nuclear weapon detonation without CGI effects as part of the production for his new movie “Oppenheimer.” The film stars longtime Nolan collaborator Cillian Murphy as J. Robert Oppenheimer, a leading figure of the Manhattan Project and the creation of the atomic bomb during World War II. Nolan has always favored practical effects over VFX (he even blew up a real Boeing 747 for “Tenet”), so it’s no surprise he went the practical route when it came time to film a nuclear weapon explosion.

“I think recreating the Trinity test [the first nuclear weapon detonation, in New Mexico] without the use of computer graphics was a huge challenge to take on,” Nolan said. “Andrew Jackson — my visual effects supervisor, I got him on board early on — was looking at how we could do a lot of the visual elements of the film practically, from representing quantum dynamics and quantum physics to the Trinity test itself, to recreating, with my team, Los Alamos up on a mesa in New Mexico in extraordinary weather, a lot of which was needed for the film, in terms of the very harsh conditions out there — there were huge practical challenges.”

Dec 12, 2022

Scientists Have Blown Away the Internet Speed Record With an Optical Chip

Posted by in categories: computing, internet

Using a chip-based optical frequency comb, researchers transmitted almost double the global internet traffic in a single second.

Dec 12, 2022

The World-Changing Race to Develop the Quantum Computer

Posted by in categories: climatology, computing, internet, quantum physics, sustainability

Such a device could help address climate change and food scarcity, or break the Internet. Will the U.S. or China get there first?