Once the grid goes down, an old programming language called Forth—and a new operating system called Collapse OS—may be our only salvation.
By way of an answer, I’ll offer one of the physicist Richard Feynman’s most famous dictums: What I cannot create, I do not understand. For much of its history, biology has been a reductionist science, driven by the principle that the best way to understand the mind-boggling complexity of living things is to dissect them into their constituent parts—organs, cells, proteins, molecules. But life isn’t a clockwork; it’s a dynamic system, and unexpected things emerge from the interactions between all those little parts. To truly understand life, you can’t just break it down. You have to be able to put it back together, too.
The C. elegans nematode is a tiny worm, about a millimeter long, with fewer than a thousand cells in its body. Of those, only 302 are neurons—about as small as a brain can get. “I remember, when my first child was born, how proud I was when they reached the age they could count to 302,” said Netta Cohen, a computational neuroscientist who runs a worm lab at the University of Leeds. But there’s no shame in smallness, Cohen emphasized: C. elegans does a lot with a little. Unlike its more unpleasant cousins, it’s not a parasite, outsourcing its survival needs to bigger organisms. Instead, it’s what biologists call a “free-living” animal. “It can reproduce, it can eat, it can forage, it can escape,” Cohen said. “It’s born and it develops, and it ages and it dies—all in a millimeter.”
Worm people like Cohen are quick to tell you that no fewer than four Nobel Prizes have been awarded for work on C. elegans, which was the first animal to have both its genome sequenced and its neurons mapped. But there’s a difference between schematics and an operating manual. “We know the wiring; we don’t know the dynamics,” Cohen said. “You would think that’s an ideal problem for a physicist or a computer scientist or a mathematician to solve.”
Newly achieved precise control over light emitted from incredibly tiny sources, a few nanometers in size, embedded in two-dimensional (2D) materials could lead to remarkably high-resolution monitors and advances in ultra-fast quantum computing, according to an international team led by researchers at Penn State and Université Paris-Saclay.
In a recent study, published in ACS Photonics, scientists worked together to show how the light emitted from 2D materials can be modulated by embedding a second 2D material inside them — like a tiny island of a few nanometers in size — called a nanodot. The team described how they achieved the confinement of nanodots in two dimensions and demonstrated that, by controlling the nanodot size, they could change the color and frequency of the emitted light.
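The size dependence the team describes is the hallmark of quantum confinement: shrinking the region that traps a carrier raises its ground-state energy, shifting the emission toward shorter wavelengths. The sketch below illustrates that trend with a textbook particle-in-a-box estimate; the band gap and effective mass are placeholder values, not figures from the ACS Photonics study.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron rest mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def emission_energy_ev(dot_size_m, gap_ev=1.5, eff_mass=0.5):
    """Toy estimate of emission energy for a carrier confined in two
    dimensions inside a square nanodot of side dot_size_m.
    gap_ev and eff_mass are illustrative placeholders."""
    m = eff_mass * M_E
    # Ground-state confinement energy for a 2D square box:
    # one term of hbar^2 pi^2 / (2 m L^2) per confined axis.
    confinement = 2 * (HBAR**2 * math.pi**2) / (2 * m * dot_size_m**2)
    return gap_ev + confinement / EV

def emission_wavelength_nm(dot_size_m):
    # lambda(nm) = 1239.84 / E(eV)
    return 1239.84 / emission_energy_ev(dot_size_m)
```

In this toy model a 3 nm dot emits at a noticeably higher energy (shorter wavelength) than a 10 nm dot, which is the qualitative knob the researchers report: tune the dot size, tune the color.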
“If you have the opportunity to have localized light emission from these materials that are relevant in quantum technologies and electronics, it’s very exciting,” said Nasim Alem, Penn State associate professor of materials science and engineering and co-corresponding author on the study. “Envision getting light from a zero-dimensional point in your field, like a dot in space, and not only that, but you can also control it. You can control the frequency. You can also control the wavelength where it comes from.”
Researchers say they are finally unraveling the effects of ultrafast lasers that can change material states in attoseconds (one-billionth of one-billionth of a second), timescales even briefer than a single optical cycle of the light wave itself.
The new Israeli research opens up new avenues for scientists to observe light closely in laboratory settings. For a sense of scale: in a single attosecond, light travels roughly the width of a hydrogen atom, whereas it takes light more than a full second to travel from Earth to the Moon.
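The scale comparison above is easy to verify with back-of-envelope arithmetic: multiply the speed of light by one attosecond, and divide the mean Earth–Moon distance by the same speed.

```python
C = 299_792_458          # speed of light, m/s
ATTOSECOND = 1e-18       # seconds

# Distance light covers in one attosecond: about 0.3 nm,
# roughly the diameter of a small atom.
distance_in_one_as = C * ATTOSECOND

# Time for light to cross the mean Earth-Moon distance: about 1.3 s.
EARTH_MOON_M = 384_400_000
seconds_to_moon = EARTH_MOON_M / C

print(distance_in_one_as, seconds_to_moon)
```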
Beyond its immediate use, the development may drive future speed advancements in communications and computing by increasing researchers’ understanding of high-speed quantum light and matter interactions.
Interferometers, devices that split light and recombine it to control its properties, play the important role of modulating and switching light signals in fiber-optic communications networks and are frequently used for gas sensing and optical computing.
Now, applied physicists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have invented a new type of interferometer that allows precise control of light’s frequency, intensity and mode in one compact package.
Called a cascaded-mode interferometer, it is a single waveguide on a silicon-on-insulator platform that can create multiple signal paths to control the amplitude and phase of light simultaneously, a process known as optical spectral shaping. By combining mechanisms to manipulate different aspects of light into a single waveguide, the device could be used in advanced nanophotonic sensors or on-chip quantum computing.
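The physics underneath any interferometer, cascaded-mode or otherwise, is that the output intensity depends on both the amplitudes of the recombining paths and their relative phase. The snippet below is a minimal two-path sketch of that relationship, not a model of the SEAS device; spectral shaping emerges when the phase difference is made to vary with frequency.

```python
import cmath

def interference_intensity(a1, a2, phase_diff):
    """Output intensity when two fields of real amplitudes a1 and a2
    recombine with relative phase phase_diff (radians):
    I = |a1 + a2 * e^{i * phase_diff}|^2."""
    field = a1 + a2 * cmath.exp(1j * phase_diff)
    return abs(field) ** 2

# Sweeping phase_diff from 0 to pi moves the output between the
# constructive limit (a1 + a2)^2 and the destructive limit (a1 - a2)^2.
# An interferometer shapes a spectrum by making phase_diff a function
# of the light's frequency.
```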
When the plasma inside a fusion system starts to misbehave, it needs to be quickly cooled to prevent damage to the device. Researchers at Commonwealth Fusion Systems believe the best bet is a massive gas injection: essentially, a well-timed, rapid blast of cooling gas inside their fusion system, which is known as SPARC.
But how many gas valves does it take to quickly tame a plasma that is hotter than the sun? The team has to strike a careful balance: with too few valves, some parts of SPARC might overheat; with too many, valuable space inside the vessel would be wasted.
To answer this question, researchers turned to a computer code known as M3D-C1, which is developed and maintained by scientists at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL). The code was used to model different valve configurations, and the results show that spacing six gas valves around the fusion vessel, with three on the top and three on the bottom, provides optimal protection.
Korean researchers have developed a digital holography processor that converts two-dimensional (2D) videos into three-dimensional (3D) holograms in real time. Because it can transform ordinary 2D video into 3D holograms instantaneously, the technology is expected to play a key role in the future of holography.
The Electronics and Telecommunications Research Institute (ETRI) has announced the development of a programmable semiconductor-based digital holographic media processor (RHP) using Field Programmable Gate Array (FPGA) technology. This processor can convert 2D video into 3D holograms in real-time.
The real-time holography processor is the world’s first to utilize high-bandwidth memory (HBM) to generate real-time, full-color 3D holograms from 2D video. Notably, all the hardware required for hologram generation is integrated into a single system-on-chip (SoC).
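What does "generating a hologram" actually compute? One classic approach, the point-source method, sums a spherical wavefront from each object point at every hologram pixel and keeps the resulting phase. The sketch below shows that computation in miniature; it is a generic textbook illustration, not ETRI's pipeline, and the wavelength and pixel pitch are illustrative values. The per-pixel, per-point structure also hints at why dedicated SoC hardware with high-bandwidth memory matters: the workload is enormous but highly parallel.

```python
import cmath
import math

WAVELENGTH = 633e-9    # red laser, metres (illustrative)
PIXEL_PITCH = 8e-6     # hologram pixel spacing, metres (illustrative)
K = 2 * math.pi / WAVELENGTH

def point_cloud_hologram(points, width, height):
    """Phase-only hologram for object points given as (x, y, z, amplitude),
    with z the distance from the hologram plane, via the textbook
    point-source method."""
    holo = []
    for row in range(height):
        line = []
        for col in range(width):
            px = (col - width / 2) * PIXEL_PITCH
            py = (row - height / 2) * PIXEL_PITCH
            field = 0 + 0j
            for (x, y, z, amp) in points:
                # spherical wave from the object point to this pixel
                r = math.sqrt((px - x) ** 2 + (py - y) ** 2 + z ** 2)
                field += amp * cmath.exp(1j * K * r) / r
            line.append(cmath.phase(field))  # keep only the phase
        holo.append(line)
    return holo

# A single object point 10 cm behind a tiny 16x16 hologram.
hologram = point_cloud_hologram([(0.0, 0.0, 0.1, 1.0)], 16, 16)
```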
WASHINGTON — Maxar Intelligence has developed a vision-based navigation technology that enables aerial drones to operate without relying on GPS, the company announced March 25.
The software, called Raptor, provides a terrain-based positioning system for drones in GPS-denied environments by leveraging detailed 3D models created from Maxar’s satellite imagery. Instead of using satellite signals, a drone equipped with Raptor compares its real-time camera feed with a pre-existing 3D terrain model to determine its position and orientation.
Peter Wilczynski, chief product officer at Maxar Intelligence, explained that the Raptor software has three main components. One is installed directly on the drone, enabling real-time position determination. Another application georegisters the drone’s video feed with Maxar’s 3D terrain data. A separate laptop-based application works alongside drone controllers, allowing operators to extract precise ground coordinates from aerial video feeds.
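The core idea of positioning by comparing what a sensor sees against a stored reference map long predates Raptor. As a minimal stand-in for that matching step (Raptor itself registers full camera imagery against 3D models, which is far more involved), the sketch below slides a small observed height patch over a reference terrain map and returns the offset with the lowest sum of squared differences; all names and data here are illustrative.

```python
def locate(reference, observed):
    """Find the (row, col) offset in `reference` (a 2D height map)
    where the smaller 2D patch `observed` fits best, by minimising
    the sum of squared differences. A toy map-matching routine."""
    rows = len(reference) - len(observed) + 1
    cols = len(reference[0]) - len(observed[0]) + 1
    best_score, best_pos = float("inf"), None
    for r in range(rows):
        for c in range(cols):
            score = sum(
                (reference[r + i][c + j] - observed[i][j]) ** 2
                for i in range(len(observed))
                for j in range(len(observed[0]))
            )
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Toy reference map with a distinctive terrain feature at (2, 3).
ref = [[0] * 6 for _ in range(6)]
ref[2][3] = 5
ref[2][4] = 3
patch = [[5, 3], [0, 0]]  # heights the "drone" observes beneath it
pos = locate(ref, patch)
```

The matched offset recovers the drone's position over the map without any external signal, which is the essence of navigating in a GPS-denied environment.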
Magnetoresistive random-access memory, or MRAM, promises to make computers more efficient and powerful, but a few hurdles still need to be cleared.