An international team of scientists has defied nature to make diamonds in minutes in a laboratory at room temperature—a process that normally requires billions of years, huge amounts of pressure and super-hot temperatures.

The team, led by The Australian National University (ANU) and RMIT University, made two types of diamonds: the kind found on an engagement ring, and another called lonsdaleite, which occurs in nature at meteorite impact sites such as Canyon Diablo in the US.

One of the lead researchers, ANU Professor Jodie Bradby, said their breakthrough shows that Superman may have had a similar trick up his sleeve when he crushed coal into diamond, without using his heat ray.

The gang behind the Ragnar Locker ransomware posted an ad on Facebook in an attempt to publicly shame a victim into paying a ransom. Security experts say the innovative tactic is indicative of things to come.


Earlier this week, the cyber gang hacked into a random company’s Facebook advertising account and used it to buy an ad containing a press release stating that Ragnar Locker had breached the Italian liquor company Campari, demanding that the company pay the ransom or see its data released. The security firm Emsisoft provided an image of the ad to Information Security Media Group.

You’ve probably heard us say this countless times: GPT-3, the gargantuan AI that spews uncannily human-like language, is a marvel. It’s also largely a mirage. You can tell with a simple trick: Ask it the color of sheep, and it will suggest “black” as often as “white”—reflecting the phrase “black sheep” in our vernacular.

That’s the problem with language models: because they’re trained only on text, they lack common sense. Now researchers from the University of North Carolina at Chapel Hill have designed a new technique to change that. They call it “vokenization,” and it gives language models like GPT-3 the ability to “see.”

It’s not the first time people have sought to combine language models with computer vision. This is actually a rapidly growing area of AI research. The idea is that both types of AI have different strengths. Language models like GPT-3 are trained through unsupervised learning, which requires no manual data labeling, making them easy to scale. Image models like object recognition systems, by contrast, learn more directly from reality. In other words, their understanding doesn’t rely on the kind of abstraction of the world that text provides. They can “see” from pictures of sheep that they are in fact white.
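To make the idea concrete, here is a minimal, hypothetical sketch of the retrieval step behind vokenization: pairing each text token with its most relevant image (a “voken”) by nearest-neighbor search in a shared embedding space. The vectors and the `vokenize` helper are made up for illustration; the actual research trains a matching model to produce these embeddings.

```python
# Toy sketch (hypothetical): map each token to its closest image
# ("voken") by cosine similarity in a shared embedding space.
import numpy as np

def vokenize(token_embeddings, image_embeddings):
    """Return, for each token vector, the index of the most similar
    image vector. Only the retrieval step is shown; the embeddings
    here are made-up stand-ins for learned representations."""
    t = token_embeddings / np.linalg.norm(token_embeddings, axis=1, keepdims=True)
    i = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    sims = t @ i.T                # (num_tokens, num_images) cosine similarities
    return sims.argmax(axis=1)    # one image index ("voken") per token

# Tiny made-up example: three tokens, two candidate images.
tokens = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
images = np.array([[1.0, 0.0], [0.0, 1.0]])
print(vokenize(tokens, images))   # each token paired with its nearest image
```

The point of the sketch is the supervision trick: once every token has a visual counterpart, the language model can be trained against images as well as text, grounding words like “sheep” in what sheep actually look like.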