
The European Commission adopted on Thursday the initial implementing rules on cybersecurity of critical entities and networks under the Directive on measures for a high common level of cybersecurity across the Union. The NIS2 Directive addresses cybersecurity risk management measures and cases in which an incident should be considered significant and companies providing digital infrastructures and services should report it to national authorities. The move is seen as another major step in boosting the cyber resilience of Europe’s critical digital infrastructure.

The implementing regulation will apply to specific categories of companies providing digital services, such as cloud computing service providers, data center service providers, online marketplaces, online search engines, and social networking platforms, to name a few. For each category of service providers, the implementing act also specifies when an incident is considered significant.

Adopting the implementing regulation coincides with the deadline for Member States to transpose the NIS2 Directive into national law. As of Oct. 18, 2024, all Member States must apply the measures necessary to comply with the NIS2 cybersecurity rules, including supervisory and enforcement measures. The implementing regulation will be published in the Official Journal in due course and enter into force 20 days thereafter.

At Impactsure Technologies, we’ve helped banks’ clients generate guarantees and contracts through preapproved clauses in a matter of seconds, without the need to go through a complex vetting process that would otherwise have taken many days. This not only enhances the customer experience but also makes the processes easier to manage efficiently. Clients are able to manage their contracts well, manage the content, ensure appropriate vetting and scrutiny are carried out effectively, manage timelines, and incorporate electronic signing options in a seamless way.

As contract management complexities continue to increase in the banking and enterprise sectors, the adoption of GenAI emerges as strategically crucial for organizations seeking to enhance operational efficiency, mitigate risks and maintain regulatory compliance. By harnessing the power of AI-driven automation, banks and enterprises can streamline contract processes, optimize resource utilization and confidently navigate the complicated legal landscape.

A combination of GenAI, NLP and ML represents a paradigm shift in contract management, empowering banks and enterprises to easily manage the complexities of the modern business environment with agility and resilience. By embracing AI-driven solutions, organizations can unlock new opportunities for growth, innovation and sustainable success in an increasingly competitive and rapidly evolving environment.

The US Department of Justice (DoJ) has submitted a new “Proposed Remedy Framework” to correct Google’s violation of antitrust laws in the country (h/t Mishaal Rahman). This framework seeks to remedy the harm caused by Google’s search distribution and revenue sharing, generation and display of search results, advertising scale and monetization, and accumulation and use of data.

The most drastic of the proposed solutions includes preventing Google from using its products, such as Chrome, Play, and Android, to advantage Google Search and related products. Other solutions include allowing websites to opt out of training or appearing in Google-owned AI products, such as AI Overviews in Google Search.

Google responded to this by asserting that “DOJ’s radical and sweeping proposals risk hurting consumers, businesses, and developers.” While the company intends to respond in detail to DoJ’s final proposals, it says that the DoJ is “already signaling requests that go far beyond the specific legal issues in this case.”

Two of San Francisco’s leading players in artificial intelligence have challenged the public to come up with questions capable of testing the capabilities of large language models (LLMs) like Google Gemini and OpenAI’s o1. Scale AI, which specializes in preparing the vast tracts of data on which the LLMs are trained, teamed up with the Center for AI Safety (CAIS) to launch the initiative, Humanity’s Last Exam.

Featuring prizes of US$5,000 (£3,800) for those who come up with the top 50 questions selected for the test, Scale and CAIS say the goal is to test how close we are to achieving “expert-level AI systems” using the “largest, broadest coalition of experts in history.”

Why do this? The leading LLMs are already acing many established tests in intelligence, mathematics and law, but it’s hard to be sure how meaningful this is. In many cases, they may have pre-learned the answers due to the gargantuan quantities of data on which they are trained, including a significant percentage of everything on the internet.

The precise geometry of the protected area encompassing an iconic New Zealand volcano, Mount Taranaki, is unmistakable from space, highlighting its status as New Zealand’s second national park.

This conical, often snow-capped volcano not only captivates with its natural beauty but also serves as a critical area for scientific research due to its unstable geological history and ongoing volcanic threats. In 2017, Mount Taranaki was granted the same legal rights as a person, emphasizing its profound cultural significance to the Indigenous Māori people.

Mount Taranaki

The large language models that have increasingly taken over the tech world are not “cheap” in many ways. The most prominent LLMs, such as GPT-4, cost some $100 million to build, counting the legal costs of accessing training data, the computational costs of what could be billions or trillions of parameters, the energy and water needed to fuel computation, and the many coders developing the training algorithms that must run cycle after cycle so the machine will “learn.”

But, if a researcher needs to do a specialized task that a machine could do more efficiently and they don’t have access to a large institution that offers access to generative AI tools, what other options are available? Say, a parent wants to prep their child for a difficult test and needs to show many examples of how to solve complicated math problems.

Building their own LLM is an onerous prospect given the costs mentioned above, and making direct use of big models like GPT-4 and Llama 3.1 might not immediately suit the complexity in logic and math their task requires.

The Pennsylvania State University in May blocked a prominent professor at the school from doing research and making presentations on its behalf, Retraction Watch has learned.

The professor, Deborah Kelly, has faced mounting scrutiny over her work since a researcher in the United Kingdom noticed apparent data manipulation in a now-retracted article she published in 2017. Kelly earned her third retraction last week following a university probe that found “serious data integrity concerns” in another paper, as we reported at the time.

In comments she made via her legal counsel for that story, Kelly, a biomedical engineer and an expert in electron microscopy, told us:

In science fiction movies like Frankenstein and Re-Animator, human bodies are revived, existing in a strange state between life and death. While this may seem like pure fantasy, a recent study suggests that a “third state” of existence might actually exist in modern biology.

According to the researchers, this third state occurs when the cells of a dead organism continue to function after its death, sometimes gaining new capabilities they never had while the organism was alive.

Amazingly, if further experiments on cells from dead animals — including humans — confirm this ability, it could even challenge the definition of legal death.