БЛОГ

Archive for the ‘robotics/AI’ category: Page 24

Dec 12, 2023

AI brains in lab: Scientists create a computer with human brain tissue

Posted by in category: robotics/AI

Brain-inspired computing hardware could be used to address limitations in AI.


Scientists have created Brainoware, a computer software that makes computing more brain-like by integrating real, actual, human brain tissues.

Dec 12, 2023

AI-Enhanced Employee Onboarding: A New Era In HR Practices

Posted by in category: robotics/AI

Onboarding new employees has always been a pivotal part of HR’s responsibilities.


Onboarding has undergone a radical transformation. Explore how AI and data are reshaping how we welcome new hires and helping us deliver engaging onboarding experiences.

Dec 12, 2023

Hudson Labs Solves Finance AI Data Overload Puzzle

Posted by in categories: finance, robotics/AI

The promise and peril of AI have dominated conversations this year.


Innovator leverages large-language models for financial services tech tools that better analyst work and performance.

Dec 12, 2023

Phi-2: The surprising power of small language models

Posted by in categories: innovation, robotics/AI

Microsoft research releases Phi-2 and promptbase.

Phi-2 outperforms other existing small language models, yet it’s small enough to run on a laptop or mobile device.


Over the past few months, our Machine Learning Foundations team at Microsoft Research has released a suite of small language models (SLMs) called “Phi” that achieve remarkable performance on a variety of benchmarks. Our first model, the 1.3 billion parameter Phi-1 (opens in new tab), achieved state-of-the-art performance on Python coding among existing SLMs (specifically on the HumanEval and MBPP benchmarks). We then extended our focus to common sense reasoning and language understanding and created a new 1.3 billion parameter model named Phi-1.5 (opens in new tab), with performance comparable to models 5x larger.

Continue reading “Phi-2: The surprising power of small language models” »

Dec 12, 2023

Cyborg computer with living brain organoid aces machine learning tests

Posted by in categories: biotech/medical, cyborgs, mathematics, robotics/AI

Scientists have grown a tiny brain-like organoid out of human stem cells, hooked it up to a computer, and demonstrated its potential as a kind of organic machine learning chip, showing it can quickly pick up speech recognition and math predictions.

As incredible as recent advances have been in machine learning, artificial intelligence still lags way behind the human brain in some important ways. For example, the brain happily learns and adapts all day long on an energy budget of about 20 watts, where a comparably powerful artificial neural network needs about 8 million watts to achieve anything remotely comparable.

What’s more, the human brain’s neural plasticity, its ability to grow new nervous tissue and expand existing connective channels, has granted it an ability to learn from noisy, low-quality data streams, with minimal training and energy expenditure. What AI systems accomplish with brute force and massive energy, the brain achieves with an effortless elegance. It’s a credit to the billions of years of high-stakes trial and error that delivered the human brain to the state it’s in today, in which it’s chiefly used to watch vast numbers of other people dancing while we’re on the toilet.

Dec 12, 2023

Tesla’s Dojo 2 Supercomputer: Leading the AI Revolution

Posted by in categories: robotics/AI, supercomputing

Tesla is pushing the boundaries of AI and supercomputing with the development of Dojo 2, aiming to build the world’s biggest supercomputer by the end of next year, and setting high goals for performance and cost efficiency.

Questions to inspire discussion.

Continue reading “Tesla’s Dojo 2 Supercomputer: Leading the AI Revolution” »

Dec 12, 2023

Tesla’s Giga Texas: $25K Car, Bots, Model Y, Cybertruck Expansion

Posted by in categories: robotics/AI, transportation

Tesla’s Giga Texas factory is not only expanding production capacity for the Cybertruck, but also hinting at the development of a $25K compact car and showcasing innovative and advanced manufacturing processes.

Questions to inspire discussion.

Continue reading “Tesla’s Giga Texas: $25K Car, Bots, Model Y, Cybertruck Expansion” »

Dec 12, 2023

Study: Customized GPT has security vulnerability

Posted by in categories: internet, robotics/AI, security

One month after OpenAI unveiled a program that allows users to easily create their own customized ChatGPT programs, a research team at Northwestern University is warning of a “significant security vulnerability” that could lead to leaked data.

In November, OpenAI announced ChatGPT subscribers could create custom GPTs as easily “as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data.” They boasted of its simplicity and emphasized that no coding skills are required.

“This democratization of AI technology has fostered a community of builders, ranging from educators to enthusiasts, who contribute to the growing repository of specialized GPTs,” said Jiahao Yu, a second-year doctoral student at Northwestern specializing in secure machine learning. But, he cautioned, “the high utility of these custom GPTs, the instruction-following nature of these models presents new challenges in .”

Dec 12, 2023

A new model that allows robots to re-identify and follow human users

Posted by in categories: information science, internet, robotics/AI

In recent years, roboticists and computer scientists have introduced various new computational tools that could improve interactions between robots and humans in real-world settings. The overreaching goal of these tools is to make robots more responsive and attuned to the users they are assisting, which could in turn facilitate their widespread adoption.

Researchers at Leonardo Labs and the Italian Institute of Technology (IIT) in Italy recently introduced a new computational framework that allows robots to recognize specific users and follow them around within a given environment. This framework, introduced in a paper published as part of the 2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), allows robots re-identify users in their surroundings, while also performing specific actions in response to performed by the users.

“We aimed to create a ground-breaking demonstration to attract stakeholders to our laboratories,” Federico Rollo, one of the researchers who carried out the study, told Tech Xplore. “The Person-Following robot is a prevalent application found in many commercial mobile robots, especially in industrial environments or for assisting individuals. Typically, such algorithms use external Bluetooth or Wi-Fi emitters, which can interfere with other sensors and the user is required to carry.”

Dec 12, 2023

Researchers from Johns Hopkins and UC Santa Cruz Unveil D-iGPT: A Groundbreaking Advance in Image-Based AI Learning

Posted by in category: robotics/AI

Natural language processing (NLP) has entered a transformational period with the introduction of Large Language Models (LLMs), like the GPT series, setting new performance standards for various linguistic tasks. Autoregressive pretraining, which teaches models to forecast the most likely tokens in a sequence, is one of the main factors causing this amazing achievement. Because of this fundamental technique, the models can absorb a complex interaction between syntax and semantics, contributing to their exceptional ability to understand language like a person. Autoregressive pretraining has substantially contributed to computer vision in addition to NLP.

In computer vision, autoregressive pretraining was initially successful, but subsequent developments have shown a sharp paradigm change in favor of BERT-style pretraining. This shift is noteworthy, especially in light of the first results from iGPT, which showed that autoregressive and BERT-style pretraining performed similarly across various tasks. However, because of its greater effectiveness in visual representation learning, subsequent research has come to prefer BERT-style pretraining. For instance, MAE shows that a scalable approach to visual representation learning may be as simple as predicting the values of randomly masked pixels.

In this work, the Johns Hopkins University and UC Santa Cruz research team reexamined iGPT and questioned whether autoregressive pretraining can produce highly proficient vision learners, particularly when applied widely. Two important changes are incorporated into their process. First, the research team “tokenizes” photos into semantic tokens using BEiT, considering images are naturally noisy and redundant. This modification shifts the focus of the autoregressive prediction from pixels to semantic tokens, allowing for a more sophisticated comprehension of the interactions between various picture areas. Secondly, the research team adds a discriminative decoder to the generative decoder, which autoregressively predicts the subsequent semantic token.

Page 24 of 2,017First2122232425262728Last