
In the not-too-distant future, many of us may routinely use 3D headsets to interact in the metaverse with virtual iterations of companies, friends, and life-like company assistants. These may include Lily from AT&T, Flo from Progressive, Jake from State Farm, and the Swami from CarShield. We’ll also be interacting with new friends like Nestlé’s Cookie Coach, Ruth, the World Health Organization’s Digital Health worker Florence, and many others.

Creating digital characters for virtual reality apps and in ecommerce is a fast-rising new segment of IT. San Francisco-based Soul Machines, a company that is rooted in both the animation and artificial intelligence (AI) sectors, is jumping at the opportunity to create animated digital avatars to bolster interactions in the metaverse. Customers are much more likely to buy something when a familiar face — digital or human — is involved.

Investors, understandably, are hot on the idea. This week, the 6-year-old company revealed an infusion of series B financing ($70 million) led by new investor SoftBank Vision Fund 2, bringing the company’s total funding to $135 million to date.

According to The Guardian, a team of researchers in northern Greece has spent the last few years experimenting with ways to harvest metal through agriculture:

In a remote, beautiful field, high in the Pindus mountains in Epirus, they are experimenting with a trio of shrubs known to scientists as “hyperaccumulators”: plants which have evolved the capacity to thrive in naturally metal-rich soils that are toxic to most other kinds of life. They do this by drawing the metal out of the ground and storing it in their leaves and stems, where it can be harvested like any other crop. As well as providing a source for rare metals – in this case nickel, although hyperaccumulators have been found for zinc, aluminium, cadmium and many other metals, including gold – these plants actively benefit the earth by remediating the soil, making it suitable for growing other crops, and by sequestering carbon in their roots. One day, they might supplant more destructive and polluting forms of mining.

Imagine finding a way to pull minerals out of the Earth … without violent colonization and destructive mining practices. Maybe we lowly humans could learn a thing or two from the flowers!

Scientists have grown plants in soil from the Moon, a first in human history and a milestone in lunar and space exploration.

In a new paper published in the journal Communications Biology, University of Florida researchers showed that plants can successfully sprout and grow in lunar soil. Their study also investigated how plants respond biologically to the Moon’s soil, also known as lunar regolith, which is radically different from soil found on Earth.

This work is a first step toward one day growing plants for food and oxygen on the Moon or during space missions. More immediately, this research comes as the Artemis Program plans to return humans to the Moon.

The latest “machine scientist” algorithms can take in data on dark matter, dividing cells, turbulence, and other situations too complicated for humans to understand and provide an equation capturing the essence of what’s going on.


Despite rediscovering Kepler’s third law and other textbook classics, BACON remained something of a curiosity in an era of limited computing power. Researchers still had to analyze most data sets by hand, or eventually with Excel-like software that found the best fit for a simple data set when given a specific class of equation. The notion that an algorithm could find the correct model for describing any data set lay dormant until 2009, when Lipson and Michael Schmidt, roboticists then at Cornell University, developed an algorithm called Eureqa.

Their main goal had been to build a machine that could boil down expansive data sets with column after column of variables to an equation involving the few variables that actually matter. “The equation might end up having four variables, but you don’t know in advance which ones,” Lipson said. “You throw at it everything and the kitchen sink. Maybe the weather is important. Maybe the number of dentists per square mile is important.”

One persistent hurdle to wrangling numerous variables has been finding an efficient way to guess new equations over and over. Researchers say you also need the flexibility to try out (and recover from) potential dead ends. When the algorithm can jump from a line to a parabola, or add a sinusoidal ripple, its ability to hit as many data points as possible might get worse before it gets better. To overcome this and other challenges, in 1992 the computer scientist John Koza proposed “genetic algorithms,” which introduce random “mutations” into equations and test the mutant equations against the data. Over many trials, initially useless features either evolve potent functionality or wither away.
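The genetic-algorithm idea described above can be sketched in a few dozen lines: candidate equations are represented as small expression trees, random "mutations" swap out subtrees, and each generation keeps the candidates that best fit the data. This is a minimal illustration of the concept, not Koza's actual genetic-programming system; the target equation, operator set, and population sizes are invented for the example.

```python
# Minimal sketch of a genetic algorithm for equation discovery: mutate
# candidate expression trees at random and keep the best fits to the data.
import random

random.seed(0)

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_expr(depth=0):
    """Build a random expression tree over x and small integer constants."""
    if depth > 2 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-3, 3)
    op = random.choice(list(OPS))
    return (op, random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    return OPS[op](evaluate(a, x), evaluate(b, x))

def mutate(expr):
    """Replace a randomly chosen subtree with a fresh random one."""
    if random.random() < 0.3 or not isinstance(expr, tuple):
        return random_expr()
    op, a, b = expr
    if random.random() < 0.5:
        return (op, mutate(a), b)
    return (op, a, mutate(b))

# Invented "measurements": the hidden law is y = x^2 + 2x + 1.
data = [(x, x * x + 2 * x + 1) for x in range(-5, 6)]

def fitness(expr):
    """Sum of squared errors against the data (lower is better)."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data)

population = [random_expr() for _ in range(200)]
initial_error = min(fitness(e) for e in population)
for gen in range(60):
    population.sort(key=fitness)
    survivors = population[:50]  # elitist selection keeps the best candidates
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print(best, fitness(best))
```

Because the best candidates survive each generation unchanged, the error of the best equation can only fall (or stay flat) over time, which mirrors the article's point that mutant equations "either evolve potent functionality or wither away."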

Machines are learning fast and replacing humans at a greater rate than ever before. The latest development in this direction is a robot that can taste food, and not only taste it, but do so while preparing the dish. The robot can even recognise how the taste of the food changes across the various stages of chewing, as it would for a human eating the dish.

The robot chef was made by Mark Oleynik, a Russian mathematician and computer scientist. Researchers at Cambridge University trained the robot to ‘taste’ the food as it cooks it.

The robot had already been trained to cook egg omelets. The researchers at Cambridge University added a sensor to the robot which can recognise different levels of saltiness.

Jack in the Box has become the latest American food chain to experiment with automation, as it seeks to handle staffing challenges and improve the efficiency of its service.

Jack in the Box is one of the largest quick service restaurant chains in America, with more than 2,200 branches. With continued staffing challenges impacting its operating hours and costs, Jack in the Box saw a need to revamp its technology and establish new systems – particularly in the back-of-house – that improve restaurant-level economics and alleviate the pain points of working in a high-volume commercial kitchen.

Reimagining A Healthier Future for All — Dr. Pat Verduin PhD, Chief Technology Officer, Colgate, discussing the microbiome, skin and oral care, and healthy aging from a CPG perspective.


Dr. Patricia Verduin (https://www.colgatepalmolive.com/en-us/snippet/2021/circle-c…ia-verduin) is Chief Technology Officer for the Colgate-Palmolive Company, where she provides leadership for product innovation, clinical science, and long-term research and development across the company’s Global Technology Centers’ R&D pipeline.

Dr. Verduin joined Colgate-Palmolive in 2007 as Vice President, Global R&D. Previously she served as Vice President, Scientific Affairs, for the Grocery Manufacturers Association, and from 2000 to 2006 she held the position of Vice President, Research & Development, at ConAgra Foods.

From search engines to voice assistants, computers are getting better at understanding what we mean. That’s thanks to language-processing programs that make sense of a staggering number of words, without ever being told explicitly what those words mean. Such programs infer meaning instead through statistics—and a new study reveals that this computational approach can assign many kinds of information to a single word, just like the human brain.

The study, published April 14 in the journal Nature Human Behaviour, was co-led by Gabriel Grand, a graduate student in electrical engineering and computer science who is affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory, and Idan Blank PhD ’16, an assistant professor at the University of California at Los Angeles. The work was supervised by McGovern Institute for Brain Research investigator Ev Fedorenko, a cognitive neuroscientist who studies how the brain uses and understands language, and Francisco Pereira at the National Institute of Mental Health. Fedorenko says the rich knowledge her team was able to find within computational language models demonstrates just how much can be learned about the world through language alone.

The research team began its analysis of statistics-based language processing models in 2015, when the approach was new. Such models derive meaning by analyzing how often pairs of words co-occur in texts and using those relationships to assess the similarities of words’ meanings. For example, such a program might conclude that “bread” and “apple” are more similar to one another than they are to “notebook,” because “bread” and “apple” are often found in proximity to words like “eat” or “snack,” whereas “notebook” is not.
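The bread/apple/notebook example can be made concrete with a toy co-occurrence model: each word is represented by counts of the words appearing near it, and similarity is the cosine between those count vectors. The corpus, window size, and word choices here are invented for illustration and are far simpler than the models the researchers analyzed.

```python
# Toy co-occurrence model: words that share neighbors get similar vectors.
from collections import Counter
from math import sqrt

corpus = (
    "we eat bread for a snack . we eat an apple for a snack . "
    "she writes in a notebook . he writes notes in a notebook ."
).split()

WINDOW = 2  # words within +/-2 positions count as co-occurring

def vector(word):
    """Count how often each other word appears within the window of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w != word:
            continue
        lo, hi = max(0, i - WINDOW), i + WINDOW + 1
        for neighbor in corpus[lo:i] + corpus[i + 1:hi]:
            counts[neighbor] += 1
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda c: sqrt(sum(n * n for n in c.values()))
    return dot / (norm(u) * norm(v))

bread, apple, notebook = vector("bread"), vector("apple"), vector("notebook")
print(cosine(bread, apple), cosine(bread, notebook))
```

Because “bread” and “apple” both sit near “eat” and “snack” while “notebook” sits near “writes,” the bread–apple similarity comes out higher than bread–notebook, exactly the inference described in the paragraph above.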