THE FINANCE industry has had a long and profitable relationship with computing. It was an early adopter of everything from mainframe computers to artificial intelligence. For most of the past decade more trades have been done at high frequency by complex algorithms than by humans. Now big banks have their eyes on quantum computing, another cutting-edge technology.
A fundamentally new kind of computing will shake up finance—the question is when.
Popular media and policy-oriented discussions on the incorporation of artificial intelligence (AI) into nuclear weapons systems frequently focus on matters of launch authority—that is, whether AI, especially machine learning (ML) capabilities, should be integrated into the decision to use nuclear weapons and thereby reduce the role of human control in the decisionmaking process. This is a future we should avoid. Yet while the extreme case of automating nuclear weapons use is high-stakes and existential to get right, there are many other areas of potential AI adoption into the nuclear enterprise that require assessment. Moreover, as the conventional military moves rapidly to adopt AI tools in a host of mission areas, the overlapping consequences for the nuclear mission space, including in nuclear command, control, and communications (NC3), may be underappreciated.
AI may be used in ways that do not directly involve senior decisionmakers or are not immediately visible to them. These areas of AI application sit far to the left of an operational decision or a decision to launch and include four priority sectors: security and defense; intelligence activities and indications and warning; modeling and simulation, optimization, and data analytics; and logistics and maintenance. Given the rapid pace of development, even if algorithms are not used to launch nuclear weapons, ML could shape the design of the next-generation ballistic missile or be embedded in the underlying logistics infrastructure. ML vision models may undergird the intelligence process that detects the movement of adversary mobile missile launchers and optimize the tipping and cueing of overhead surveillance assets, even as a human decisionmaker remains firmly in the loop in any ultimate decision about nuclear use. Understanding and navigating these developments in the context of nuclear deterrence and escalation risks will require the analytical attention of the nuclear community and will likely demand risk management approaches, especially where excluding AI is not reasonable or feasible.
A new study illuminates surprising choreography among spinning atoms. In a paper appearing in the journal Nature, researchers from MIT and Harvard University reveal how magnetic forces at the quantum, atomic scale affect how atoms orient their spins.
In experiments with ultracold lithium atoms, the researchers observed different ways in which the spins of the atoms evolve. Like tippy ballerinas pirouetting back to upright positions, the spinning atoms return to an equilibrium orientation in a way that depends on the magnetic forces between individual atoms. For example, the atoms can spin into equilibrium in an extremely fast, “ballistic” fashion or in a slower, more diffuse pattern.
The researchers found that these behaviors, which had not been observed until now, could be described mathematically by the Heisenberg model, a set of equations commonly used to predict magnetic behavior. Their results address the fundamental nature of magnetism, revealing a diversity of behavior in one of the simplest magnetic materials.
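For reference, the Heisenberg model the researchers invoke is usually written as a sum of couplings between neighboring spins; a standard textbook form in one common sign convention (the article does not give the specific variant or parameters used in the study) is:

```latex
H = -J \sum_{\langle i, j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j
```

where S_i and S_j are the spin operators on neighboring lattice sites and the exchange constant J sets the strength and sign of the magnetic coupling between them.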
A team of researchers from MIT and Intel has created an algorithm that can create algorithms. In the long term, that could radically change the role of software developers.
Recently, Google introduced Portrait Light, a feature on its Pixel phones that enhances portraits by adding an external light source that was not present when the photo was taken. In a new blog post, Google explains how it made this possible.
In their post, engineers at Google Research note that professional photographers discovered long ago that the best way to make people look their best in portraits is to use secondary flash devices that are not attached to the camera. A photographer positions such a flash before shooting, taking into account the direction the subject's face is pointing, the available ambient light, skin tone and other factors. Google has attempted to capture those factors with its new portrait-enhancing software. The system does not require the phone's user to set up another light source. Instead, the software simulates a light source that was never there, and then allows the user to determine the most flattering configuration for the subject.
The engineers explain they achieved this feat using two algorithms. The first, which they call automatic directional light placement, places synthetic light into the scene as a professional photographer would. The second algorithm is called synthetic post-capture relighting. It allows for repositioning the light after the fact in a realistic and natural-looking way.
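Google's production feature relies on machine-learned models trained on studio light-stage captures, so the snippet below is only a toy, non-ML illustration of the post-capture relighting idea: given a per-pixel surface-normal map and a chosen light direction, it adds a simple Lambertian lighting term to the image. All function and variable names are invented for this sketch and are not Google's API.

```python
# Toy illustration of post-capture relighting (NOT Google's method):
# given an image and a per-pixel surface-normal map, add a synthetic
# directional light using a simple Lambertian shading term.
import numpy as np

def relight(image, normals, light_dir, strength=0.4):
    """image: HxWx3 floats in [0, 1]; normals: HxWx3 unit vectors;
    light_dir: 3-vector pointing toward the synthetic light."""
    light_dir = np.asarray(light_dir, dtype=np.float64)
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Lambertian term: how directly each pixel's surface faces the light.
    lambert = np.clip(normals @ light_dir, 0.0, 1.0)          # shape HxW
    # Use the original image as a rough stand-in for albedo, add the
    # synthetic light on top, then clamp back to the displayable range.
    relit = image + strength * lambert[..., None] * image
    return np.clip(relit, 0.0, 1.0)

if __name__ == "__main__":
    h, w = 4, 4
    img = np.full((h, w, 3), 0.5)                  # flat gray test image
    nrm = np.zeros((h, w, 3)); nrm[..., 2] = 1.0   # all normals face the camera
    out = relight(img, nrm, light_dir=[1.0, -1.0, 1.0])
    print(out[0, 0])  # pixels brighten where the surface faces the light
```

Repositioning the light after capture then amounts to calling the same routine with a different light direction, which is the intuition behind the second algorithm described above.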
Put a robot in a tightly controlled environment and it can quickly surpass human performance at complex tasks, from building cars to playing table tennis. But throw these machines a curveball and they’re in trouble—just check out this compilation of some of the world’s most advanced robots coming unstuck in the face of notoriously challenging obstacles like sand, steps, and doorways.
The reason robots tend to be so fragile is that the algorithms that control them are often manually designed. If they encounter a situation the designer didn’t think of, which is almost inevitable in the chaotic real world, then they simply don’t have the tools to react.
Rapid advances in AI have provided a potential workaround by letting robots learn how to carry out tasks instead of relying on hand-coded instructions. A particularly promising approach is deep reinforcement learning, where the robot interacts with its environment through a process of trial-and-error and is rewarded for carrying out the correct actions. Over many repetitions it can use this feedback to learn how to accomplish the task at hand.
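The deep reinforcement learning systems described here pair that reward loop with large neural networks, but the loop itself can be shown with a much simpler relative, tabular Q-learning. The sketch below is a minimal, self-contained toy (a one-dimensional corridor task with all names invented for this example), not a robotics controller:

```python
# Minimal tabular Q-learning sketch of the trial-and-error reward loop:
# an agent in a 6-cell corridor learns to walk right toward a reward.
import random

N_STATES = 6           # corridor cells 0..5; the reward sits at cell 5
ACTIONS = [-1, +1]     # move left or move right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Temporal-difference update toward reward plus discounted future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned preference in each cell:")
for s in range(N_STATES - 1):
    print(s, "->", "right" if Q[s][1] >= Q[s][0] else "left")
```

Deep reinforcement learning replaces the table with a neural network so the same feedback loop can cope with the high-dimensional observations a real robot faces.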
Microsoft’s new write-once storage medium is constructed from quartz glass, stores data using lasers, and uses machine learning algorithms for decoding.
While it is not known exactly who was behind this attack, a big concern is the sharing and use of these stolen red team tools by both sophisticated and non-sophisticated actors, similar to what we saw in 2017 when the Shadow Brokers group leaked tools belonging to the NSA’s Equation Group.
Dr. Carolina Reis Oliveira is the CEO and Co-Founder of OneSkin Technologies, a biotechnology platform dedicated to exploring longevity science.
Carolina holds a Ph.D. in Immunology from the Federal University of Minas Gerais, earned in collaboration with Rutgers University, where she researched pluripotent stem cells as a source of retinal pigmented epithelium (RPE) cells and the potential of stem cell-derived RPE cells as toxicological models for screening new drugs with intraocular applications.
She founded a company in Brazil called CELLSEQ solutions, which develops tools to revolutionize the safety and toxicology assays performed by the pharmaceutical, cosmetic, agrochemical and food industries, using technology based on stem cells and big data analysis.
She is an alumna of IndieBio, the world’s leading biotechnology accelerator.
In 2016, Carolina relocated to Silicon Valley from Latin America to co-found OneSkin, and to lead the development of the company’s technologies.