
Moreover, the concept of limitation, which dictates that the means and methods of warfare are not unlimited, can help prevent the escalation of conflicts in space by imposing restrictions on the use of certain weapons or tactics that could cause indiscriminate harm or result in long-term consequences for space exploration and utilization. Given the growing number of distinct weapons systems in orbit – from missile defense systems with kinetic anti-satellite capabilities, electronic warfare counter-space capabilities, and directed energy weapons to GPS jammers and space situational awareness, surveillance, and intelligence-gathering capabilities – legal clarity rather than strategic ambiguity is crucial for ensuring the responsible and peaceful use of outer space.

Additionally, the principle of humanity underscores the importance of treating all individuals with dignity and respect, including astronauts, cosmonauts, and civilians who may be affected by conflicts in space. By upholding this principle, outer space law can ensure that human rights are protected and preserved, particularly in the profoundly challenging environment of outer space. Moreover, with civilians on the ground increasingly tethered to space technologies for communication, navigation, banking, leisure, and other essential services, the protection of their rights becomes a fundamental imperative.

The modern laws of armed conflict (LOAC) offer a valuable blueprint for developing a robust legal framework for governing activities in outer space. By integrating complementary principles of LOAC or international humanitarian law with the UN Charter into outer space law, policymakers can promote the peaceful and responsible use of outer space while mitigating the risks associated with potential conflicts in this increasingly contested domain.

As artificial intelligence (AI) becomes increasingly ubiquitous in business and governance, its substantial environmental impact — from significant increases in energy and water usage to heightened carbon emissions — cannot be ignored. By 2030, AI’s power demand is expected to rise by 160%. However, adopting more sustainable practices, such as utilizing foundation models, optimizing data processing locations, investing in energy-efficient processors, and leveraging open-source collaborations, can help mitigate these effects. These strategies not only reduce AI’s environmental footprint but also enhance operational efficiency and cost-effectiveness, balancing innovation with sustainability.


Practical steps for reducing AI’s surging demand for water and energy.

Applied in this way, it’s not just generative AI—this is transformational AI. It goes beyond accelerating productivity; it accelerates innovation by sparking new business strategies and revamping existing operations, paving the way for a new era of autonomous enterprise.

Keep in mind that not all large language models (LLMs) can be tailored for genuine business innovation. Most are generalists, trained on public information from the internet, and are not experts in your particular way of doing business. However, techniques like Retrieval Augmented Generation (RAG) allow general LLMs to be augmented with industry-specific and company-specific data, enabling them to adapt to an organization's requirements without extensive and expensive retraining.
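To make the pattern concrete, here is a minimal, illustrative sketch of RAG: company documents are ranked by a simple relevance score and prepended to the user's question before it is sent to a general-purpose model. The document store, the scoring function, and the `call_llm` stub are hypothetical placeholders for illustration only, not any particular vendor's API.

```python
# Minimal illustration of Retrieval Augmented Generation (RAG).
# The document store, scoring, and LLM call are simplified stand-ins,
# not a production pipeline or any specific vendor's API.

# A small in-memory "knowledge base" of company-specific documents (hypothetical examples).
DOCUMENTS = [
    "Returns policy: enterprise customers may return hardware within 45 days.",
    "Our Q3 priority is migrating all analytics workloads to the new data platform.",
    "Support tiers: Standard (email), Premier (24/7 phone), Signature (dedicated engineer).",
]

def score(query: str, document: str) -> int:
    """Naive relevance score: count of query words that also appear in the document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    ranked = sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved company context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the company context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a general-purpose LLM via a vendor SDK."""
    return f"[LLM response to prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    question = "What is the returns policy for enterprise customers?"
    print(call_llm(build_prompt(question)))
```

In practice the keyword score would be replaced by vector embeddings and a vector database, but the shape of the technique stays the same: retrieve relevant company data, then augment the prompt before calling a general model.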

We are still in the nascent stages of advanced AI adoption. Most companies are grappling with the basics—such as implementation, security and governance. However, forward-thinking organizations are already looking ahead. By reimagining the application of generative AI, they are laying the groundwork for businesses to reinvent themselves, ushering in an era where innovation knows no bounds.

How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.


These initial steps ignited AI policy conversations amid the acceleration of innovation and technological change. Just as personal computing democratized internet access and coding accessibility, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: we must prioritize policies that allow us to harness AI's power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Salesforce has been actively researching and developing AI since 2014, introduced our first AI functionalities into our products in 2016, and established our office of ethical and human use of technology in 2018. Trust is our top value, which is why our AI offerings are founded on trust, security, and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) via consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.

Preparation requires technical research and development, as well as adaptive, proactive governance.

Yoshua Bengio, Geoffrey Hinton, […], Andrew Yao, Dawn Song, […], Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, […], Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann

Science.

Two OpenAI employees who worked on safety and governance recently resigned from the company behind ChatGPT.

Daniel Kokotajlo left last month and William Saunders departed OpenAI in February. The timing of their departures was confirmed by two people familiar with the situation. The people asked to remain anonymous in order to discuss the departures, but their identities are known to Business Insider.

Chinese ambassador Chen Xu called for the high-quality development of artificial intelligence (AI), assistance in promoting children’s mental health, and protection of children’s rights while delivering a joint statement on behalf of 80 countries at the 55th session of the United Nations Human Rights Council (UNHRC) on Thursday.

Chen, China’s permanent representative to the UN Office in Geneva and other international organizations in Switzerland, said that artificial intelligence is a new field of human development and that countries should adhere to the principles of consultation, joint contribution, and shared benefits while working together to promote the governance of artificial intelligence.

Children of the new generation have become one of the main groups using and benefiting from AI technology, and the joint statement emphasized the importance of addressing children’s mental health.