
A team of researchers from South Korea has developed a new LLM called “DarkBERT,” which has been trained exclusively on the “Dark Web.”

A team of South Korean researchers has taken the unprecedented step of developing and training artificial intelligence (AI) on the so-called “Dark Web.” The Dark Web-trained AI, called DarkBERT, was unleashed to trawl and index what it could find to help shed light on ways to combat cybercrime.

The “Dark Web” is a section of the internet that remains hidden and cannot be accessed through standard web browsers.


This article gives a clear and chilling assessment of the impact of the Ukraine conflict on the future of collaborative space exploration; in doing so it highlights how humankind’s habitual tendency towards war severely slows, if not completely halts, our urgent reach for the stars. As that old warrior Churchill once said, “Jaw, jaw is always better than war, war!”


Russia’s invasion of Ukraine in February 2022 has resulted in hundreds of thousands of deaths, millions of people left homeless or displaced, and billions of dollars in damage to infrastructure. The conflict has also had less immediate but significant impacts on other areas, not only on the space industries of Ukraine and Russia but also globally, in terms of the launch market, spaceflight activity and international cooperation.

In the wake of the start of the conflict on Feb. 24, 2022, and the resulting international backlash against Russia, the then-head of the Russian space agency, Dmitry Rogozin, threatened to end Russia’s cooperation with the West on the International Space Station (ISS) program over the sanctions imposed on Russia. He also issued a threat to SpaceX founder and CEO Elon Musk over the company’s role in providing connectivity through its Starlink satellites.

After igniting a global obsession over generative art, ten-month-old Midjourney appears to be entering the Middle Kingdom, the world’s largest internet market.

In an article posted on the Tencent-owned social platform WeChat late on Monday, a corporate account named “Midjourney China” said it has started accepting applications for beta test users. But the account soon deleted its first and only article on Tuesday.

It’s unclear why the post disappeared after receiving an overwhelming reception in China. Applications would only be open for a few hours every Monday and Friday, the original post said, and users quickly filled up the first quota on launch day. TechCrunch hasn’t been able to test the product.

Changing this dynamic requires not just new technology, but a different way of thinking about automation. Real worksites have clutter, traffic, patchy wifi, and a host of other routine inconveniences that serve as barriers to automation. And real people have real jobs to do—requiring training, institutional knowledge, prioritization, collaboration, and other skills that are impossible to automate.

Automation tools need to be dynamic to add value and act as an extension of these teams, fitting into their current workplace, amplifying their expertise, going places they can’t, and completing tasks they don’t have time for. In short, making their jobs easier, not more complicated.

After more than a decade at the helm of internet juggernaut Google, former CEO Eric Schmidt has switched gears, and one of his latest activities is advocating for the fast-growing US bioeconomy, which is valued at over $1T. During his keynote at the 2022 SynBioBeta conference in Oakland, CA last year, he passed along advice for the next generation of biotechnologists, detailing what is needed from different stakeholders to fulfill the potential of the global bioeconomy to solve the world’s biggest problems. I caught up with Schmidt again recently, a year since his talk, to see what progress has been made. You can see the full Q&A here.

I think it’s time.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) warned today of a critical remote code execution (RCE) flaw in the Ruckus Wireless Admin panel actively exploited by a recently discovered DDoS botnet.

While this security bug (CVE-2023-25717) was addressed in early February, many owners are likely yet to patch their Wi-Fi access points. Furthermore, no patch is available for those who own end-of-life models affected by this issue.

Attackers are abusing the bug to infect vulnerable Wi-Fi APs with AndoryuBot malware (first spotted in February 2023) via unauthenticated HTTP GET requests.

At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further, however, and showed that GPT can execute code, too. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. “It’s multistep reasoning of a very high degree,” he says. And the bot nailed it. Yet when Millière asked directly for the 83rd Fibonacci number, GPT got it wrong, which suggests the system wasn’t simply parroting an answer it had seen on the Internet when it ran the program; rather, it was performing its own calculations to reach the correct answer.
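Millière’s exact program is not reproduced in the article, so the following is only a rough illustration: a minimal Python sketch of the sort of program one could paste into a chat window, assuming the common convention in which the 83rd Fibonacci number is F(83).

```python
# Hypothetical stand-in for the kind of program Millière might have typed
# (his exact code is not given in the article): iteratively compute the
# 83rd Fibonacci number, using the convention F(0) = 0, F(1) = 1.
def fib(n: int) -> int:
    a, b = 0, 1              # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b      # advance the pair one step
    return a

print(fib(83))  # 99194853094755497
```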

Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in—a tool ChatGPT can use when answering a query—that allows it to do so. But that plug-in was not used in Millière’s demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context—a situation similar to how nature repurposes existing capacities for new functions.

This impromptu ability demonstrates that LLMs develop an internal complexity that goes well beyond a shallow statistical analysis. Researchers are finding that these systems seem to achieve genuine understanding of what they have learned. In one study presented last week at the International Conference on Learning Representations (ICLR), doctoral student Kenneth Li of Harvard University and his AI researcher colleagues—Aspen K. Hopkins of the Massachusetts Institute of Technology, David Bau of Northeastern University, and Fernanda Viégas, Hanspeter Pfister and Martin Wattenberg, all at Harvard—spun up their own smaller copy of the GPT neural network so they could study its inner workings. They trained it on millions of matches of the board game Othello by feeding in long sequences of moves in text form. Their model became a nearly perfect player.
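The preprocessing idea behind that training setup is simple: each playable board square becomes a vocabulary token, and each game becomes a sequence of those tokens for the model to predict. As a rough, hypothetical sketch in Python (the helper names are illustrative and not taken from the study’s code):

```python
# Hypothetical sketch: turning Othello games into token sequences for a small
# GPT-style model, in the spirit of the Othello-GPT study (names are illustrative).
from typing import List

# 60 playable squares on an 8x8 Othello board (the four centre squares are
# pre-filled at the start of a game, so they never appear as moves).
SQUARES = [f"{col}{row}" for row in range(1, 9) for col in "abcdefgh"
           if f"{col}{row}" not in {"d4", "e4", "d5", "e5"}]
TOKEN_ID = {sq: i for i, sq in enumerate(SQUARES)}   # move -> integer token

def encode_game(moves: List[str]) -> List[int]:
    """Map a game transcript like ['d3', 'c5', ...] to a list of token ids."""
    return [TOKEN_ID[m] for m in moves]

# Example: the opening moves of one game become an input sequence on which the
# model is trained to predict the next move.
print(encode_game(["d3", "c5", "f6", "f5"]))
```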