BLOG

Archive for the ‘robotics/AI’ category: Page 618

Jun 20, 2020

Engineers Design Ion-Based Device That Operates Like an Energy-Efficient Brain Synapse

Posted in category: robotics/AI

Ion-based technology may enable energy-efficient simulations of the brain’s learning process, for neural network AI systems.

Teams around the world are building ever more sophisticated artificial intelligence systems of a type called neural networks, designed in some ways to mimic the wiring of the brain, for carrying out tasks such as computer vision and natural language processing.

Using state-of-the-art semiconductor circuits to simulate neural networks requires large amounts of memory and high power consumption. Now, an MIT team has made strides toward an alternative system, which uses physical, analog devices that can much more efficiently mimic brain processes.

Continue reading “Engineers Design Ion-Based Device That Operates Like an Energy-Efficient Brain Synapse” »

Jun 19, 2020

Teaching physics to neural networks removes ‘chaos blindness’

Posted in categories: biotech/medical, drones, robotics/AI

Researchers from North Carolina State University have discovered that teaching physics to neural networks enables those networks to better adapt to chaos within their environment. The work has implications for improved artificial intelligence (AI) applications ranging from medical diagnostics to automated drone piloting.

Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks mimic this behavior by adjusting numerical weights and biases during training sessions to minimize the difference between their actual and desired outputs. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether each photo shows a dog, seeing how far off it is, and then adjusting its weights and biases until they are closer to reality.
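The guess-and-adjust loop described above can be sketched with a single artificial neuron. This is a minimal illustration, not any real classifier's architecture; the two input "features" and their values are invented for the example.

```python
import math

# One-neuron sketch of the training loop: guess, measure the error,
# nudge weights and bias toward the desired output.
w = [0.0, 0.0]
b = 0.0
lr = 0.5

# (feature vector, label): 1 = dog, 0 = not a dog (made-up data).
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))            # sigmoid squashes to (0, 1)

def loss():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

loss_before = loss()
for _ in range(200):
    for x, y in data:
        p = predict(x)
        g = 2 * (p - y) * p * (1 - p)        # gradient of squared error w.r.t. z
        w[0] -= lr * g * x[0]                # nudge weights toward the answer
        w[1] -= lr * g * x[1]
        b -= lr * g
loss_after = loss()                          # much smaller than loss_before
```

After training, dog-like inputs score above 0.5 and the rest below; real networks do the same thing with millions of weights.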

The drawback to this is something called “chaos blindness”—an inability to predict or respond to chaos in a system. Conventional AI is chaos blind. But researchers from NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) have found that incorporating a Hamiltonian function into neural networks better enables them to “see” chaos within a system and adapt accordingly.
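A minimal sketch of the Hamiltonian idea: instead of predicting the next state directly, a model learns a scalar energy H(q, p), and the dynamics follow Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq. Here a known harmonic-oscillator energy stands in for the learned network; this illustrates the principle only, not NAIL's actual code.

```python
import math

# Known Hamiltonian standing in for a trained network's output.
def H(q, p):
    return 0.5 * p * p + 0.5 * q * q        # kinetic + potential energy

def grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)  # numerical derivative

q, p, dt = 1.0, 0.0, 0.01
e0 = H(q, p)
for _ in range(10_000):
    # Semi-implicit Euler step: update momentum from -dH/dq, then
    # position from dH/dp. Respecting this structure keeps the
    # predicted trajectory on a physically plausible energy surface.
    p -= dt * grad(lambda qq: H(qq, p), q)
    q += dt * grad(lambda pp: H(q, pp), p)
drift = abs(H(q, p) - e0)                    # stays tiny over 10,000 steps
```

A model that ignores this structure can wander off the energy surface entirely, which is one face of the chaos blindness described above.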

Jun 19, 2020

Innovative dataset to accelerate autonomous driving research

Posted in categories: robotics/AI, transportation

How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations?

These are some of the questions researchers from the AgeLab at the MIT Center for Transportation and Logistics and the Toyota Collaborative Safety Research Center (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg.

Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like humans, perceive the driving environment as a continuous flow of visual information.

Jun 19, 2020

Deep learning-based surrogate models outperform simulators and could hasten scientific discoveries

Posted in categories: physics, robotics/AI

Surrogate models supported by neural networks can perform as well, and in some ways better, than computationally expensive simulators and could lead to new insights in complicated physics problems such as inertial confinement fusion (ICF), Lawrence Livermore National Laboratory (LLNL) scientists reported.

In a paper published by the Proceedings of the National Academy of Sciences (PNAS), LLNL researchers describe the development of a deep learning-driven Manifold & Cyclically Consistent (MaCC) surrogate model incorporating a multi-modal neural network capable of quickly and accurately emulating complex scientific processes, including the high-energy density physics involved in ICF.

The research team applied the model to ICF implosions performed at the National Ignition Facility (NIF), in which a computationally expensive numerical simulator is used to predict the energy yield of a target imploded by shock waves produced by the facility’s high-energy laser. Comparing the results of the neural network-backed surrogate to the existing simulator, the researchers found the surrogate could adequately replicate the simulator, and significantly outperformed the current state-of-the-art in surrogate models across a wide range of metrics.
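The surrogate idea can be sketched with a deliberately simple stand-in: a small polynomial model fitted to a handful of runs of a toy "simulator" function. The MaCC surrogate is a multi-modal neural network and the real ICF codes are vastly more complex; the yield curve below is invented purely for illustration.

```python
import math

# Toy stand-in for an expensive physics simulator: maps one design
# parameter to a predicted yield (peaked curve, invented for the example).
def simulator(x):
    return math.exp(-4 * (x - 0.5) ** 2)

# A small budget of simulator runs becomes the training set.
xs = [i / 20 for i in range(21)]
ys = [simulator(x) for x in xs]

# Fit a degree-4 polynomial surrogate by least squares
# (normal equations solved with Gaussian elimination).
def fit_poly(xs, ys, deg=4):
    n = deg + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                     # forward elimination
        for row in range(col + 1, n):
            f = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= f * a[col][k]
            b[row] -= f * b[col]
    coeffs = [0.0] * n
    for row in range(n - 1, -1, -1):         # back substitution
        s = b[row] - sum(a[row][k] * coeffs[k] for k in range(row + 1, n))
        coeffs[row] = s / a[row][row]
    return coeffs

coeffs = fit_poly(xs, ys)

def surrogate(x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

# The surrogate now answers new queries without re-running the simulator.
err = max(abs(surrogate(x) - simulator(x)) for x in xs)
```

The payoff is the same as in the LLNL work, at toy scale: once trained, every surrogate evaluation costs a polynomial evaluation rather than a full simulation.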

Continue reading “Deep learning-based surrogate models outperform simulators and could hasten scientific discoveries” »

Jun 19, 2020

Amazon says it mitigated the largest DDoS attack ever recorded

Posted in categories: cybercrime/malcode, robotics/AI

Amazon Web Services recently had to defend against a DDoS attack with a peak traffic volume of 2.3 Tbps, the largest ever recorded, ZDNet reports. Detailing the attack in its Q1 2020 threat report, Amazon said that the attack occurred back in February, and was mitigated by AWS Shield, a service designed to protect customers of Amazon’s on-demand cloud computing platform from DDoS attacks, as well as from bad bots and application vulnerabilities. The company did not disclose the target or the origin of the attack.

To put that number into perspective, ZDNet notes that the largest DDoS attack previously recorded was a 1.7 Tbps attack mitigated by NetScout Arbor in March 2018. The month before that, GitHub disclosed that it had been hit by an attack that peaked at 1.35 Tbps.

Jun 18, 2020

The startup making deep learning possible without specialized hardware

Posted in category: robotics/AI

The discovery that led Nir Shavit to start a company came about the way most discoveries do: by accident. The MIT professor was working on a project to reconstruct a map of a mouse’s brain and needed some help from deep learning. Not knowing how to program graphics cards, or GPUs, the most common hardware choice for deep-learning models, he opted instead for a central processing unit, or CPU, the most generic computer chip found in any average laptop.

“Lo and behold,” Shavit recalls, “I realized that a CPU can do what a GPU does—if programmed in the right way.”

This insight is now the basis for his startup, Neural Magic, which launched its first suite of products today. The idea is to allow any company to deploy a deep-learning model without the need for specialized hardware. It would not only lower the costs of deep learning but also make AI more widely accessible.

Continue reading “The startup making deep learning possible without specialized hardware” »

Jun 18, 2020

Baidu Breaks Off an AI Alliance Amid Strained US-China Ties

Posted in category: robotics/AI

The search giant was the only Chinese member of the Partnership on Artificial Intelligence, a US-led effort to foster collaboration on ethical issues.

Jun 18, 2020

Qualcomm Brings 5G And AI To Next Gen Robotics And Drones

Posted in categories: drones, internet, robotics/AI, security

Qualcomm today announced its RB5 reference design platform for the robotics and intelligent drone ecosystem. As the field of robotics continues to evolve toward more advanced capabilities, Qualcomm’s latest platform should help drive the next step in robotics evolution with intelligence and connectivity. The company has combined its 5G connectivity and AI-focused processing with a flexible peripherals architecture based on what it calls “mezzanine” modules. The new Qualcomm RB5 platform promises to accelerate the robotics design and development process with a full suite of hardware, software and development tools. The company is making big promises for the RB5 platform, and if current levels of ecosystem engagement are any indicator, the platform will have ample opportunities to prove itself.

At the heart of the platform, which targets robot and drone designs for enterprise, industrial and professional service applications, is Qualcomm’s QRB5165 system-on-chip (SoC) processor. The QRB5165 is derived from the Snapdragon 865 processor used in mobile devices, but customized for robotic applications with increased camera and image signal processor (ISP) capabilities for additional camera sensors, higher industrial-grade temperature and security ratings, and a non-Package-on-Package (PoP) configuration option.

To help bring highly capable artificial intelligence and machine learning to bear in these applications, the chip is rated for 15 tera operations per second (TOPS) of AI performance. Additionally, because it is critical that robots and drones can “see” their surroundings, the architecture includes support for up to seven concurrent cameras and a dedicated computer vision engine meant to provide enhanced video analytics. Given the sheer amount of information the platform can generate, process and analyze, it also supports a communications module offering 4G and 5G connectivity. In particular, the addition of 5G will give robots and drones high-speed, low-latency data connectivity.

Continue reading “Qualcomm Brings 5G And AI To Next Gen Robotics And Drones” »

Jun 18, 2020

The Future Of Conversational AI

Posted in categories: futurism, robotics/AI

With conversational AI, organizations can dramatically improve their customer experience. Here’s a look at the technology and where it’s headed.

Jun 18, 2020

OpenAI’s New Text Generator Writes Even More Like a Human

Posted in categories: information science, robotics/AI

The data came from Common Crawl, a non-profit that scans the open web every month, downloads content from billions of HTML pages, and makes the content available in a special format for large-scale data mining. In 2017 the average monthly “crawl” yielded over three billion web pages. Common Crawl has been doing this since 2011 and has petabytes of data in over 40 different languages. The OpenAI team applied some filtering techniques to improve the overall quality of the data, and also added curated datasets like Wikipedia.

GPT stands for Generative Pretrained Transformer. The “transformer” part refers to a neural network architecture introduced by Google in 2017. Rather than looking at words in sequential order and making decisions based on a word’s positioning within a sentence, text or speech generators with this design model the relationships between all the words in a sentence at once. Each word gets an “attention score,” which is used as its weight and fed into the larger network. Essentially, this is a complex way of saying the model is weighing how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on the other words in the sentence.
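The attention scoring described above can be sketched in a few lines. This is a bare-bones illustration: real transformers add learned query, key and value projections and many attention heads, while here each word's raw vector plays all three roles, and the embeddings are made up.

```python
import math

def softmax(scores):
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(vectors):
    d = len(vectors[0])
    out = []
    for q in vectors:                        # every word attends to all words at once
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]          # scaled dot-product "attention scores"
        weights = softmax(scores)            # scores become mixing weights
        out.append([sum(w * v[i] for w, v in zip(weights, vectors))
                    for i in range(d)])
    return out

# Made-up 2-d embeddings for a four-word sentence: the first two words
# resemble each other, as do the last two.
sentence = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
mixed = attention(sentence)
```

Each output vector is a weighted blend of the whole sentence, with the heaviest weights on the words most similar to it, which is exactly the "how much does this word's meaning depend on that one" relationship the paragraph describes.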

Through finding the relationships and patterns between words in a giant dataset, the algorithm ultimately ends up learning from its own inferences, in what’s called unsupervised machine learning. And it doesn’t end with words—GPT-3 can also figure out how concepts relate to each other, and discern context.

Continue reading “OpenAI’s New Text Generator Writes Even More Like a Human” »

Page 618 of 1,407