BLOG

Archive for the ‘law’ category: Page 3

Aug 9, 2024

The Six Singularities (There’s Not Just One)

Posted in categories: law, robotics/AI, singularity

More than one singularity.


The singularity could soon be upon us. The PESTLE framework, developed by this episode’s guest Daniel Hulme, describes not one but six types of singularity that could occur: political, environmental, social, technological, legal, and economic. @JonKrohnLearns and Daniel Hulme discuss how each of these singularities could bring good to the world, aligning with human interests and pushing progress forward. They also talk about neuromorphic computing, machine consciousness, and applying AI at work.

Continue reading “The Six Singularities (There’s Not Just One)” »

Aug 7, 2024

Dr. Ashwin Vasan — Commissioner — NYC Dept. of Health & Mental Hygiene — Strengthening Public Health

Posted in categories: biotech/medical, government, health, law, neuroscience, policy

Strengthening Public Health Systems For Healthier And Longer Lives — Dr. Ashwin Vasan, Commissioner, NYC Department of Health and Mental Hygiene.


Dr. Ashwin Vasan, MD, PhD, is the Commissioner of the New York City Department of Health and Mental Hygiene (https://www.nyc.gov/site/doh/about/ab…).

Continue reading “Dr. Ashwin Vasan — Commissioner — NYC Dept. of Health & Mental Hygiene — Strengthening Public Health” »

Jul 30, 2024

Most cyber ransoms are paid in secret but a new law could change that

Posted in categories: business, cybercrime/malcode, government, law, mapping

Australian businesses are paying untold amounts of ransom to hackers, but the government is hoping to claw back some visibility with a landmark cybersecurity law.

While major ransomware attacks on companies such as MediSecure, Optus and Latitude have grabbed headlines for breaching the privacy of millions, the practice of quietly paying off cybercriminals has flourished in the dark.

The situation has deteriorated to the point that the government’s original ambition for an outright ban on ransom payments has been nixed, for now, and the focus has shifted to mapping the scale of the problem.

Jul 23, 2024

Human Brain Organoid Research and Applications: Where and How to Meet Legal Challenges?

Posted in categories: biotech/medical, ethics, law, neuroscience

One of the most debated ethical concerns regarding brain organoids is the possibility that they will become conscious (de Jongh et al. 2022). Currently, many researchers believe that human brain organoids will not become conscious in the near future (International Society for Stem Cell Research 2021). However, several theories of consciousness suggest that even existing human brain organoids could be conscious (Niikawa et al. 2022). Moreover, the feasibility depends on how “consciousness” is defined. For the sake of argument, we assume that human brain organoids can in principle be conscious, and we examine the legal implications of three types of “consciousness” in order of how readily each could be realized. The first is non-valenced experience: mere sensory experience without positive or negative evaluations. The second is valenced experience, or sentience: experience with evaluations such as pain and pleasure. The third is a more developed cognitive capacity. We assume that if any of these forms of consciousness makes an entity a subject of (more complex) welfare, that entity may need (further) legal protection.

As a primitive form of consciousness, non-valenced experience would, if possible at all, be realized by human brain organoids earlier than the other forms. Its legal implications, however, remain unclear. Suppose welfare consists solely of good or bad experiences. In that case, human brain organoids with only non-valenced experience have nothing to protect, because they cannot have good or bad experiences. Some argue, however, that non-valenced experience holds moral significance even without contributing to welfare. In addition, welfare may not be limited to experience, a broader view recently adopted in animal ethics (Beauchamp and DeGrazia 2020). On this perspective, even if human brain organoids possess only non-valenced experiences, or lack consciousness altogether, their basic sensory or motor capacities (Kataoka and Sawai 2023), or the possession of living or non-living bodies with which to exercise those capacities (Shepherd 2023), may warrant protection.

Jul 20, 2024

Eye reflections: The key to detecting deepfakes

Posted in categories: law, robotics/AI

Governments and organizations worldwide are beginning to recognize the potential dangers. Efforts are being made to develop more sophisticated deepfake detection tools and to establish legal frameworks to address the misuse of this technology.

However, the battle against these convincing fakes is ongoing, and as detection methods improve, so too do the techniques used to create them.

The combination of astronomical techniques and AI exemplifies a multidisciplinary approach to the problem and underscores the need for innovative, collaborative solutions.
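The astronomical angle here is a statistic that astronomers use to characterize how light is distributed across galaxy images, the Gini coefficient, applied instead to the corneal reflections of the two eyes: in a genuine photograph both eyes reflect the same scene and should score similarly, while generated faces often disagree. The sketch below is a minimal illustration of that idea, not code from the study; the function names, the random stand-in eye crops, and any decision threshold are assumptions.

import numpy as np

def gini(values: np.ndarray) -> float:
    # Gini coefficient of non-negative pixel intensities:
    # 0 = light spread evenly, 1 = all light in a single pixel.
    v = np.sort(values.astype(np.float64).ravel())
    n = v.size
    total = v.sum()
    if n == 0 or total == 0:
        return 0.0
    index = np.arange(1, n + 1)
    return float(((2 * index - n - 1) * v).sum() / (n * total))

def reflection_mismatch(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    # Absolute gap between the two corneal-reflection scores; a large gap
    # suggests the eyes do not reflect the same scene (a deepfake cue).
    return abs(gini(left_eye) - gini(right_eye))

# Hypothetical usage; real eye crops would come from a face-landmark detector.
rng = np.random.default_rng(0)
left = rng.random((16, 16))        # stand-in for a left-eye highlight crop
right = rng.random((16, 16)) ** 4  # more concentrated highlight
print(f"mismatch: {reflection_mismatch(left, right):.3f}")

In practice a single scalar like this would be one cue among many in a larger detector, not a standalone test.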

Jul 15, 2024

The Legal War Against Deepfake Revenge Porn

Posted in category: law

The legal system is struggling to keep up with the criminalization of deepfake revenge porn, raising concerns about privacy, consent, and the need for more resources to detect and prove the authenticity of digital evidence.

Questions to inspire discussion.

Continue reading “The Legal War Against Deepfake Revenge Porn” »

Jul 14, 2024

Space Exploration: A Thriving Industry With Tangible Earthly Rewards

Posted in categories: economics, education, health, law, space travel

Furthermore, the synergy between educational programs, cultural influences and the tangible benefits derived from space exploration not only enriches our present-day society but also ensures a legacy of continuous innovation and exploration. This ongoing engagement with space inspires future generations to look beyond our planetary boundaries and consider what might be possible in the broader cosmos.

Space exploration presents significant challenges, including costs, astronaut health risks, and the technological hurdles of interstellar travel. Ethical and legal questions around space colonization, resource utilization, and impacts on celestial environments demand careful deliberation and international cooperation.

While Silicon Valley visionaries envision a future among the stars, other voices remind us of our responsibilities to Earth. These are not mutually exclusive goals. By leveraging advancements and opportunities from space exploration, we can better protect and enhance life on Earth. Through economic benefits, scientific advancement and social inspiration, space exploration remains a crucial endeavor for humanity, not as an escape from our problems, but as a way to expand our horizons and solve them on our home planet.

Jul 12, 2024

Is OI the New AI? Questions Surrounding “Brainoware”

Posted in categories: law, robotics/AI

Hybridizing OI and AI, adding what seems like a “human” component to our current advances, probably raises more questions than it answers. Here are some of those questions for the law, and how we might begin to think about them.

The Best — and Worst — of Brains

Envisioning how brain organoids might entangle themselves with the law doesn’t take a wild imaginative leap; many of the questions we might have about brain organoid models are similar to the ones we’re currently grappling with regarding artificial intelligence. Would OI warrant recognition for the work it produces? And is that output protectible? Under current (and quickly evolving) copyright developments, AI on its own doesn’t meet the “human” requirement for authorship. But both AI and OI require human input to work, and there may be some wiggle room on protecting AI work, whether by citing AI as a joint author alongside human operators or by treating a certain threshold of human control over the AI-generated work as sufficient for copyright protection.

Jul 11, 2024

Could AIs become conscious? Right now, we have no way to tell

Posted in categories: biological, ethics, law, robotics/AI

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that machines can replicate. Should artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence, arrive in full force, the boundary between human and computer capabilities could vanish entirely.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

Jul 10, 2024

The Promise and Peril of AI

Posted in categories: biotech/medical, drones, ethics, existential risks, law, military, robotics/AI

In early 2023, following an international conference that included dialogue with China, the United States released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of “human control” itself is hazier than it might seem. If humans authorized a future AI system to “stop an incoming nuclear attack,” how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes.

We need to recognize that AI technologies are inherently dual-use. This is true even of systems already deployed. For instance, the same drone that delivers medication to a hospital unreachable by road during the rainy season could later carry an explosive to that same hospital. Keep in mind that for more than a decade, military operations have used drones so precise that they can send a missile through a particular window on literally the other side of the Earth from their operators.

We also have to think through whether we would really want our side to observe a ban on lethal autonomous weapons (LAWs) if hostile military forces were not doing the same. What if an enemy nation sent an AI-controlled contingent of advanced war machines to threaten your security? Wouldn’t you want your side to have an even more intelligent capability to defeat them and keep you safe? This is the primary reason the “Campaign to Stop Killer Robots” has failed to gain major traction. As of 2024, all major military powers have declined to endorse the campaign, with the notable exception of China, which did so in 2018 but later clarified that it supported a ban only on use, not on development; even this is likely more for strategic and political reasons than moral ones, since autonomous weapons used by the United States and its allies could disadvantage Beijing militarily.
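The logic of that impasse can be made concrete with a toy game-theoretic sketch (an illustration, not from the article): two rival states each choose to honor a ban or to develop LAWs, with invented payoffs in which unilateral restraint is the worst outcome. Under those assumed payoffs, developing dominates no matter what the rival does, which is why such a ban is unstable without verification and enforcement.

# Hypothetical payoffs: (row player's payoff, column player's payoff).
PAYOFFS = {
    ("ban", "ban"): (3, 3),          # mutual restraint: best joint outcome
    ("ban", "develop"): (0, 4),      # unilateral restraint: worst for the banner
    ("develop", "ban"): (4, 0),
    ("develop", "develop"): (1, 1),  # arms race: worse than mutual restraint
}

def best_response(opponent_choice: str) -> str:
    # The row player's payoff-maximizing reply to a fixed opponent move.
    return max(("ban", "develop"),
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

for theirs in ("ban", "develop"):
    print(f"if the rival chooses {theirs!r}, best response is {best_response(theirs)!r}")
# Both lines print 'develop': under these assumed payoffs the situation has
# the structure of a prisoner's dilemma, so restraint requires enforcement.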

Page 3 of 91