
How Warp is introducing robots to automate its network of warehouses

Daniel Sokolovsky, the co-founder and CEO of Warp, told TechCrunch that Warp is always looking for ways to make shipping more efficient for its customers, which include enterprises like Walmart, Gopuff, and HelloFresh. With recent advances in AI, the company saw an opportunity to automate more of that work.

Warp can’t automate the long-haul trucking or short-range delivery aspects of the supply chain, Sokolovsky said, so it’s working on what it can potentially change: the workflows inside its warehouses.

Warp started by installing cameras in its test warehouse in Los Angeles and used computer vision to turn that footage into a virtual warehouse where it could start experimenting.
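In broad strokes, a "virtual warehouse" of this kind is a live digital model kept in sync with what the cameras see. The sketch below is purely illustrative and not Warp's system: the Detection fields, zone names, and coordinates are all hypothetical, and a real pipeline would feed detections in from a computer-vision model rather than hard-coded values.

```python
# Minimal sketch (hypothetical, not Warp's system): keeping a simple "virtual
# warehouse" in sync with object detections produced by ceiling cameras.
from dataclasses import dataclass

@dataclass
class Detection:
    object_id: str   # e.g., a pallet or parcel tracked across frames
    x: float         # floor position in meters, as estimated by the vision system
    y: float

class VirtualWarehouse:
    """A live map of where each tracked object currently sits."""

    def __init__(self, zones):
        # zones: name -> (x_min, y_min, x_max, y_max) floor rectangles
        self.zones = zones
        self.positions = {}

    def update(self, detections):
        # Overwrite each object's last known position with the newest detection
        for d in detections:
            self.positions[d.object_id] = (d.x, d.y)

    def zone_of(self, object_id):
        # Report which named zone an object currently occupies
        x, y = self.positions[object_id]
        for name, (x0, y0, x1, y1) in self.zones.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return "unassigned"

warehouse = VirtualWarehouse({"inbound": (0, 0, 10, 5), "staging": (10, 0, 20, 5)})
warehouse.update([Detection("pallet-42", 12.3, 2.1)])
print(warehouse.zone_of("pallet-42"))  # "staging"
```

A structure like this makes it possible to replay movements and trial workflow changes against the virtual model before trying them on the physical floor.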

UP Researchers Predict Antimicrobial Resistance Using AI Models

Escherichia coli (E. coli) is a common bacterium that lives in the intestines of animals and humans, and it is often used to identify fecal contamination within the environment. E. coli can also easily develop resistance to antibiotics, making it an ideal organism for testing antimicrobial resistance—especially in certain agricultural environments where fecal material is used as manure or wastewater is reused.

Nonlinear neural network model reveals how fly brains reduce odor complexity

Two RIKEN researchers have used a scheme for simplifying data to mimic how the brain of a fruit fly reduces the complexity of information about smells it perceives. This could also help enhance our understanding of how the human brain processes sensory data.

The work is published in the journal Science Advances.

Sensors related to our five senses are constantly providing huge amounts of information to the brain. It would quickly become overloaded if it tried to process that sensory information without first simplifying it by reducing its number of dimensions.
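As a rough illustration of what such simplification means in practice, the sketch below compresses high-dimensional "receptor response" vectors into a handful of latent dimensions with a tiny nonlinear autoencoder. This is not the RIKEN model; the data are synthetic and all sizes are made up.

```python
# Minimal sketch (not the authors' model): a tiny nonlinear autoencoder that
# compresses 50-dimensional "odor" vectors into 3 latent dimensions,
# illustrating dimensionality reduction of sensory input.
import numpy as np

rng = np.random.default_rng(0)

n_odors, n_receptors, n_latent = 200, 50, 3     # hypothetical sizes
X = rng.random((n_odors, n_receptors))          # synthetic "receptor responses"

W_enc = rng.normal(scale=0.1, size=(n_receptors, n_latent))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_latent, n_receptors))  # decoder weights

lr = 0.05
for epoch in range(500):
    H = np.tanh(X @ W_enc)          # nonlinear low-dimensional code
    X_hat = H @ W_dec               # reconstruction of the full input
    err = X_hat - X                 # reconstruction error

    # Gradient descent on the mean squared reconstruction error
    grad_dec = H.T @ err / n_odors
    grad_enc = X.T @ ((err @ W_dec.T) * (1 - H**2)) / n_odors
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

codes = np.tanh(X @ W_enc)          # 3-D summary of each 50-D odor vector
print(codes.shape)                  # (200, 3)
```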

Debut of LLM-enabled humanoid robot at event met with mixed reviews by human attendees

A team of roboticists at the University of Canberra’s Collaborative Robotics Lab, working with a sociologist colleague from The Australian National University, has found that humans interacting with an LLM-enabled humanoid robot had mixed reactions. In their paper published in the journal Scientific Reports, the group describes what they observed as people interacted with an LLM-enabled humanoid robot stationed at an innovation festival, along with the feedback those participants gave.

Over the past couple of years, LLMs such as ChatGPT have taken the world by storm, with some going so far as to suggest that the new technology will soon make many human workers obsolete. Despite such fears, scientists continue to improve the technology, sometimes deploying it in new places, such as inside an existing robot. That is what the team in Australia did: they added ChatGPT to the interaction facilities of a robot named Pepper and then stationed the robot at an innovation festival in Canberra, where attendees were encouraged to interact with it.
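The sketch below shows one generic way a chat LLM could be wired into a robot's conversation loop; it is not the researchers' implementation. The listen() and speak() helpers are hypothetical placeholders for the robot's own speech recognition and text-to-speech, and the OpenAI client call and model name are one possible backend, assumed here for illustration.

```python
# Minimal sketch (assumptions throughout): routing a robot's spoken
# conversation through a chat LLM. listen() and speak() are hypothetical
# stand-ins for the robot's speech-to-text and text-to-speech systems.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def listen() -> str:
    """Placeholder: return the attendee's last utterance as text."""
    return input("Attendee: ")

def speak(text: str) -> None:
    """Placeholder: send text to the robot's text-to-speech engine."""
    print(f"Robot: {text}")

history = [{"role": "system",
            "content": "You are a friendly humanoid robot at an innovation festival."}]

while True:
    utterance = listen()
    if not utterance:
        break
    history.append({"role": "user", "content": utterance})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",          # placeholder model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    speak(reply)
```

Keeping the full history in the message list is what lets the robot hold a multi-turn conversation rather than answering each utterance in isolation.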

Before it was given an LLM, Pepper was already capable of moving around autonomously and interacting with people on a relatively simple level. One of its hallmarks is its ability to maintain eye contact. Such abilities, the team suggested, made it a good platform for testing an LLM-enabled humanoid robot “in the wild.”

Training robots without robots: Smart glasses capture first-person task demos

Over the past few decades, robots have gradually started making their way into various real-world settings, including some malls, airports and hospitals, as well as a few offices and households.

For robots to be deployed on a larger scale, serving as reliable everyday assistants, they should be able to complete a wide range of common manual tasks and chores, such as cleaning, washing the dishes, cooking and doing the laundry.

Training machine learning algorithms that allow robots to successfully complete these tasks can be challenging, as it often requires extensive annotated data and/or demonstration videos showing humans completing the tasks. Devising more effective methods of collecting training data could thus be highly advantageous, as it could help to further broaden the capabilities of robots.
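One common way to use such demonstration data is behavior cloning: treating recorded (observation, action) pairs as a supervised learning problem. The sketch below is a generic illustration under that framing, not the method from the article; the data are synthetic and the dimensions arbitrary, standing in for visual features from smart-glasses footage paired with the wearer's hand motion.

```python
# Minimal sketch (assumptions throughout): behavior cloning, i.e., fitting a
# policy to demonstration data with plain supervised learning.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_demos, obs_dim, act_dim = 1000, 32, 7            # hypothetical sizes
observations = rng.random((n_demos, obs_dim))       # e.g., visual features per frame
actions = observations @ rng.random((obs_dim, act_dim))  # stand-in "expert" actions

# Fit a small neural-network policy mapping observation -> action
policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
policy.fit(observations, actions)

# At deployment, the robot would run its own observations through the policy
new_obs = rng.random((1, obs_dim))
predicted_action = policy.predict(new_obs)
print(predicted_action.shape)  # (1, 7)
```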