
Boston University researchers Xin Zhang, a professor at the College of Engineering, and Reza Ghaffarivardavagh, a Ph.D. student in the Department of Mechanical Engineering, have published a paper in Physical Review B demonstrating that it is possible to silence noise with an open, ring-like structure, created to mathematically perfect specifications, that cuts out sound while maintaining airflow.

“Today’s barriers are literally thick heavy walls,” says Ghaffarivardavagh. Although noise-mitigating barricades, called sound baffles, can help drown out the whoosh of rush hour traffic or contain the symphony of music within concert hall walls, they are a clunky approach not well suited to situations where airflow is also critical. Imagine barricading a jet engine’s exhaust vent—the plane would never leave the ground. Instead, workers on the tarmac wear earplugs to protect their hearing from the deafening roar.

Ghaffarivardavagh and Zhang let mathematics—a shared passion that has buoyed both of their engineering careers and made them well-suited research partners—guide them toward a workable design for the acoustic metamaterial.


Circa 2015


Death is the one thing that’s guaranteed in today’s uncertain world, but now a new start-up called Humai thinks it might be able to get rid of that inconvenient problem for us too, by promising to transfer people’s consciousness into a new, artificial body.

If it sounds like science fiction, that’s because it still is: none of the technology required for Humai’s business plan is currently up and running. But that’s not deterring the company’s CEO, Josh Bocanegra, who says his team will resurrect their first human within 30 years.

So how do you go about transferring someone’s consciousness to a robot body? Humai explains the process on their website (which comes complete with new-age backing music).

This animated video provides a good summary of the challenges that need to be solved in order to establish an outpost on Mars.



A new study has found that dopamine — a neurotransmitter that plays an important role in our cognitive, emotional, and behavioral functioning — plays a direct role in the reward experience induced by music. The new findings have been published in the Proceedings of the National Academy of Sciences.

“In everyday life, humans regularly seek participation in highly complex and pleasurable experiences such as music listening, singing, or playing, that do not seem to have any specific survival advantage. Understanding how the brain translates a structured sequence of sounds, such as music, into a pleasant and rewarding experience is thus a challenging and fascinating question,” said study author Laura Ferreri, an associate professor in cognitive psychology at Lyon University.

“In the scientific literature, there was a lack of direct evidence showing that dopamine function is causally related to music-evoked pleasure. Therefore in this study, through a pharmacological approach, we wanted to investigate whether dopamine, which plays a major role in regulating pleasure experiences and motivation to engage in certain behaviors, plays a direct role in the experience of pleasure induced by music.”


Now researchers at Ulsan National Institute of Science and Technology (UNIST) in South Korea have made a nanomembrane out of silver nanowires to serve as flexible loudspeakers or microphones. The researchers even went so far as to demonstrate their nanomembrane by making it into a loudspeaker that could be attached to skin and used to play the final movement of a violin concerto—namely, La Campanella by Niccolò Paganini.


Researchers in South Korea made a tiny loudspeaker, and then used it to play a violin concerto.


Researchers at the University of Waterloo, Canada, have recently developed a system for generating song lyrics that match the style of particular music artists. Their approach, outlined in a paper pre-published on arXiv, uses a variational autoencoder (VAE) with artist embeddings and a CNN classifier trained to predict artists from mel spectrograms of their song clips.
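
To make the classifier half of this pipeline concrete, here is a minimal sketch of a small PyTorch CNN that predicts an artist from a mel spectrogram. Everything in it—the layer sizes, the 80 mel bands, the clip length, and the names ArtistCNN and n_artists—is an illustrative assumption, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class ArtistCNN(nn.Module):
    """Toy CNN that maps a mel spectrogram to per-artist scores."""
    def __init__(self, n_artists: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size output regardless of clip length
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_artists)

    def forward(self, mel):  # mel: (batch, 1, n_mels, time_frames)
        h = self.features(mel)
        return self.classifier(h.flatten(1))  # (batch, n_artists) logits

# Usage: a stand-in for one ~10-second clip with 80 mel bands and 431 frames.
model = ArtistCNN(n_artists=7)
mel = torch.randn(1, 1, 80, 431)  # a real pipeline would compute this from audio
logits = model(mel)               # one score per candidate artist
```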

“The motivation for this project came from my personal interest,” Olga Vechtomova, one of the researchers who carried out the study, told TechXplore. “Music is a passion of mine, and I was curious about whether a machine can generate lines that sound like the lyrics of my favourite music artists. While working on text generative models, my research group found that they can generate some impressive lines of text. The natural next step for us was to explore whether a machine could learn the ‘essence’ of a specific music artist’s lyrical style, including choice of words, themes and sentence structure, to generate novel lyric lines that sound like the artist in question.”

The system developed by Vechtomova and her colleagues is based on a neural network model called a variational autoencoder (VAE), which learns by reconstructing original lines of text. In their study, the researchers trained their model to generate any number of new, diverse and coherent lyric lines.
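
The description suggests a conditional text VAE: an encoder compresses a lyric line into a latent code, and a decoder reconstructs the line with an artist embedding appended so generation can be steered toward a given artist’s style. Below is a minimal PyTorch sketch under those assumptions; the layer sizes, the conditioning scheme, and the name LyricVAE are hypothetical, not details from the paper:

```python
import torch
import torch.nn as nn

class LyricVAE(nn.Module):
    def __init__(self, vocab_size, n_artists,
                 emb=128, hid=256, latent=64, artist_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.to_mu = nn.Linear(hid, latent)
        self.to_logvar = nn.Linear(hid, latent)
        self.artist = nn.Embedding(n_artists, artist_dim)
        # Decoder sees the token embedding plus the latent code and artist embedding.
        self.decoder = nn.LSTM(emb + latent + artist_dim, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, tokens, artist_id):  # tokens: (B, T) ints; artist_id: (B,)
        x = self.embed(tokens)
        _, (h, _) = self.encoder(x)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        cond = torch.cat([z, self.artist(artist_id)], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, x.size(1), -1)    # repeat at each step
        h_dec, _ = self.decoder(torch.cat([x, cond], dim=-1))
        return self.out(h_dec), mu, logvar  # logits for reconstruction + KL terms
```

In a setup like this, training would minimize token-level cross-entropy plus the KL divergence between the encoder’s posterior and a standard normal prior; at generation time, one would sample a latent code from the prior and pick an artist id to decode new lines in that artist’s style.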
