


GenAug, developed by Meta AI and the University of Washington, uses pre-trained text-to-image generative AI models to enable imitation-based learning in real-world robots. Stanford AI researchers have proposed a method called ATCON that drastically improves the quality of attention maps and classification performance on unseen data. Google’s new SingSong AI can generate instrumental music that complements your singing.
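As a rough illustration of the GenAug idea, a sketch along these lines could use an off-the-shelf text-to-image inpainting model to repaint the scenery around a robot demonstration. The pipeline, model choice, file names, and prompts below are illustrative assumptions, not the authors’ actual code:

```python
# Illustrative sketch of GenAug-style augmentation: repaint the background of
# a robot demonstration frame with a pre-trained text-to-image inpainting
# model so imitation learning sees more visual variety. Model, file names,
# and prompts are assumptions for illustration, not the authors' pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed model checkpoint
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("demo_frame.png").convert("RGB")      # hypothetical demo image
mask = Image.open("background_mask.png").convert("RGB")  # white = region to repaint

prompts = [
    "a cluttered kitchen countertop",
    "a wooden workshop table",
    "an office desk under bright light",
]

# Generate augmented copies with new surroundings; the robot, gripper, and
# task-relevant object are preserved by the mask.
for i, prompt in enumerate(prompts):
    out = pipe(prompt=prompt, image=frame, mask_image=mask).images[0]
    out.save(f"demo_frame_aug_{i}.png")
```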


My 11th ambient music video release for YouTube: an unofficial soundtrack to the sci-fi movie ‘2010: The Year We Make Contact’ (starring Roy Scheider and Helen Mirren). The movie was based on the Arthur C. Clarke novel, which was the sequel to 2001: A Space Odyssey. I went a lot more in depth with the visuals on this one, recreating shots from the original movie, but with an extra dash of VFX that weren’t easy to pull off on a PC in 1986.

In upcoming video releases I will be doing a deep dive into the ambient multiverse, exploring styles ranging from space ambient to dark ambient, cyberpunk, sleep music, and white noise. My focus on this channel is to create relaxing cinematic ambient background music for chilling, focus, work, and meditation, with the occasional eerie dark ambient track. The theme for my video backdrops is a rich fusion of derelict imagery, planets, and moons.

Music & Animation by Duncan Brown.
Planet Maps by Robert Stein III (Pinterest)

I made this for you. Enjoy. Like. Share. Subscribe.

Hey folks, I’m excited to share a new essay with y’all on my proposed route towards nanoscale human brain connectomics. I suggest that synchrotron ‘expansion x-ray microscopy’ has the potential to enable anatomical imaging of the entire human brain with sub-100 nm voxel size and high contrast in around 1 year for a price of roughly $10M. I plan to continue improving this essay over time as I acquire more detailed information and perform more calculations.
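As a sanity check on those headline numbers, here is a quick back-of-envelope calculation of the sustained voxel throughput they imply. The brain-volume figure of roughly 1.2 million mm³ is my assumption for illustration, not a number taken from the essay:

```python
# Back-of-envelope check of the throughput implied by whole-brain imaging at
# sub-100 nm voxels in about one year. The brain volume is an assumed figure.
BRAIN_VOLUME_MM3 = 1.2e6    # adult human brain, approximate (assumption)
VOXEL_EDGE_NM = 100         # "sub-100 nm voxel size" target
SECONDS_PER_YEAR = 3.15e7

voxel_edge_mm = VOXEL_EDGE_NM * 1e-6        # 100 nm = 1e-4 mm
voxel_volume_mm3 = voxel_edge_mm ** 3       # 1e-12 mm^3 per voxel
total_voxels = BRAIN_VOLUME_MM3 / voxel_volume_mm3

print(f"total voxels: {total_voxels:.2e}")                       # ~1.2e18
print(f"required rate: {total_voxels / SECONDS_PER_YEAR:.2e} voxels/s")
# ~3.8e10 voxels/s sustained: the scale a synchrotron beamline (or several
# running in parallel) would need to deliver.
```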

For a brief history of this concept: I started exploring this idea during undergrad (working with a laboratory-scale x-ray microscope), but the work was cut short by the pandemic. Now, I’m working on a PhD in biomedical engineering centered on gene therapy and synthetic biology, but I have retained a strong interest in connectomics. I recently began communicating with some excellent collaborators who might be able to help move this technology forward. Hoping for some exciting progress!


By Logan Thrasher Collins.

PDF version

The German company will launch its operating system by the middle of this decade.

German luxury and commercial vehicle brand Mercedes-Benz has announced a software partnership with Google to offer “super-computer-like” navigation and other services in every car, Reuters reports.


Mercedes’ plans for the future

Mercedes’ partnership with Google follows the route that conventional carmakers such as Ford, Renault, Nissan, and General Motors have taken to add Google’s suite of services to their cars. The partnership lets drivers tap into Google Maps, Google Assistant, and other services, and use traffic information to determine the best routes to their destinations.

What if, instead of using X-rays or ultrasound, we could use touch to image the insides of human bodies and electronic devices? In a study publishing in the journal Cell Reports Physical Science (“A smart bionic finger for subsurface tactile-tomography”), researchers present a bionic finger that can create 3D maps of the internal shapes and textures of complex objects by touching their exterior surface.

“We were inspired by human fingers, which have the most sensitive tactile perception that we know of,” says senior author Jianyi Luo, a professor at Wuyi University. “For example, when we touch our own bodies with our fingers, we can sense not only the texture of our skin, but also the outline of the bone beneath it.”

“Our bionic finger goes beyond previous artificial sensors that were only capable of recognizing and discriminating between external shapes, surface textures, and hardness,” says co-author Zhiming Chen, a lecturer at Wuyi University.
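The paper’s reconstruction method isn’t reproduced here, but the general idea can be sketched as a raster scan that records a stiffness reading at each touch point and then segments the hard subsurface structure. The sensor model, grid size, and threshold below are invented for illustration:

```python
# Minimal sketch of the tactile-tomography idea: raster-scan a tactile probe
# over a surface, record a stiffness reading at each point, and segment the
# hard structure hidden under the soft covering. The sensor model and numbers
# are invented; the real device and reconstruction are described in the paper.
import numpy as np

def probe_stiffness(x: float, y: float) -> float:
    """Stand-in for a real sensor read: a stiff 'bone' ridge under soft tissue."""
    bone = np.exp(-((x - 0.5) ** 2) / 0.005)           # hard ridge along y
    return 0.2 + 0.8 * bone + np.random.normal(0, 0.02)

# Raster-scan a 64x64 grid of touch points over the surface.
n = 64
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
stiffness_map = np.vectorize(probe_stiffness)(xs, ys)

# Threshold the readings to separate the hard subsurface structure from the
# soft material covering it.
subsurface = stiffness_map > 0.6
print(f"hard-structure coverage: {subsurface.mean():.1%}")
```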

Google is launching new updates for Maps that are part of its plan to make the navigation app more immersive and intuitive for users, the company announced today at its event in Paris.

Most notably, the company announced that Immersive View is rolling out starting today in London, Los Angeles, New York, San Francisco and Tokyo. Immersive View, which Google first announced at I/O in May 2022, is designed to help you plan ahead and get a deeper understanding of a city before you visit it. The company plans to launch Immersive View in more cities, including Amsterdam, Dublin, Florence and Venice in the coming months.

The feature fuses billions of Street View and aerial images to create a digital model of the world. It also layers information on top of the digital model, such as details about the weather, traffic and how busy a location may be. For instance, say you’re planning to visit the Rijksmuseum in Amsterdam and want to get an idea of it before you go. You can use Immersive View to virtually soar over the building to get a better idea of what it looks like and where the entrances are located. You can also see what the area looks like at different times of the day and what the weather will be like. Immersive View can also show you nearby restaurants and lets you look inside them to see if they would be an ideal spot for you.

“To create these true-to-life scenes, we use neural radiance fields (NeRF), an advanced AI technique that transforms ordinary pictures into 3D representations,” Google explained in a blog post. “With NeRF, we can accurately recreate the full context of a place including its lighting, the texture of materials and what’s in the background. All of this allows you to see if a bar’s moody lighting is the right vibe for a date night or if the views at a cafe make it the ideal spot for lunch with friends.”
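Google doesn’t publish its implementation, but the core NeRF machinery the quote refers to can be sketched in a few lines: a small MLP maps a 3D point and view direction to color and density, and volume rendering composites samples along each camera ray into a pixel. The toy network below is untrained and purely illustrative:

```python
# Toy sketch of the NeRF idea: an MLP maps (position, view direction) to
# (color, density), and volume rendering integrates samples along each camera
# ray into a pixel. Untrained and illustrative; the production system is far
# larger and trained on Street View and aerial imagery.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # outputs (r, g, b, sigma)
        )

    def forward(self, xyz, viewdir):
        out = self.mlp(torch.cat([xyz, viewdir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])      # color in [0, 1]
        sigma = torch.relu(out[..., 3])        # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, n_samples=64, near=0.1, far=4.0):
    """Classic volume-rendering quadrature along a single ray."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction              # sample points on the ray
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(pts, dirs)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)            # per-segment opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                                  # accumulated transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)         # composited pixel color

model = TinyNeRF()
pixel = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
print(pixel)  # one RGB value rendered from the (untrained) radiance field
```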

The company also announced that a new feature called “glanceable directions” is rolling out globally on Android and iOS in the coming months. The feature lets you track your journey right from your route overview or lock screen: you’ll see updated ETAs and where to make your next turn. If you decide to take another path, the app will update your trip automatically. Google notes that previously, this information was only visible by unlocking your phone, opening the app and using comprehensive navigation mode. Glanceable directions can be used whenever you’re using the app, whether you’re walking, biking or taking public transit.

A team of researchers has come up with a machine learning-assisted way to detect the position of shapes, including the poses of humans, to an astonishing degree using only WiFi signals.

In a yet-to-be-peer-reviewed paper, first spotted by Vice, researchers at Carnegie Mellon University describe a deep learning method for mapping the poses of multiple human subjects by analyzing the phase and amplitude of WiFi signals and processing them with computer vision algorithms.
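The paper’s actual architecture isn’t reproduced here, but the signal-to-pose framing can be sketched as follows: split the complex channel state information (CSI) between antenna pairs into amplitude and phase channels, then map them with a convolutional network to dense, image-like body-part predictions. The tensor shapes and the network below are invented for illustration:

```python
# Minimal sketch of the WiFi-to-pose framing: treat the complex channel state
# information (CSI) as amplitude and phase tensors and map them to dense,
# image-like body-part predictions. Shapes and architecture are invented for
# illustration; the real model builds on DensePose.
import torch
import torch.nn as nn

# Fake CSI: 3x3 antenna pairs x 30 subcarriers x 20 time samples (assumed dims).
csi = torch.randn(1, 9, 30, 20, dtype=torch.complex64)
features = torch.cat([csi.abs(), csi.angle()], dim=1)  # amplitude + phase channels

class WiFiPoseNet(nn.Module):
    def __init__(self, n_parts: int = 24):             # DensePose-style body parts
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(18, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # Lift signal-domain features into an image-domain prediction map.
        self.decoder = nn.Sequential(
            nn.Upsample(size=(64, 64), mode="bilinear", align_corners=False),
            nn.Conv2d(128, n_parts, 1),                # per-pixel part logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

logits = WiFiPoseNet()(features)
print(logits.shape)  # torch.Size([1, 24, 64, 64]): dense body-part predictions
```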

“The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input,” the team concluded in their paper.