
The future of warfare starts in your mind. Understand how neuroscience, technology/AI, and the OODA loop affect your flow. The world is changing, and cognitive warfare is at the forefront. In our latest podcast episode, we sit down with James Giordano, PhD, a Navy veteran and an expert in neurocognitive science, to delve into the world of cognitive warfare.
Stay in the Loop: https://www.aglx.com/newsletter-signup-north-america.
From the impact of emotions on decision-making to the integration of artificial intelligence and human cognition, this episode challenges your perspective on the battlefield. Join us as we explore the ethical implications of genetic modifications, the transformative effects of psychedelics, and the complexities of data usage in the digital age. Get ready to reimagine the relationship between technology, culture, and language. Don’t miss out on this opportunity to gain valuable insights from our thought-provoking conversation with Dr. Giordano. Tune in now to stay ahead of the curve on the evolving landscape of warfare!

00:00 — Understanding the OODA loop: A Neuroscience Perspective.
09:11 — Exploring Fifth Generation Warfare and Liminal Warfare.
16:06 — The Long Game: China’s Strategic Plan.
22:19 — Understanding Cognitive Warfare and Human-Machine Teaming.
25:52 — The Evolution of Human-Machine Teaming.
29:11 — Human Involvement in AI Decision Making.
36:01 — The Ethics of Paternalistic AI Systems.
40:43 — Technology’s Impact on Cognitive Engagement.
45:13 — Exploring Technologies for Human Performance Enhancement.
55:59 — Diving Into Attacking Mode and Ethics.
56:24 — Hacking the Human Genome.
59:37 — Epigenetic Modification and Phenotypic Shift.
1:04:54 — The Psychedelic Revolution.
1:11:18 — Revisiting Alcohol and Caffeine: Benefits and Burdens.
1:19:18 — Impact of Technology on Cognitive Capacity.
1:23:33 — Information Overload and Burdens.
1:27:02 — Ownership and Security of Personal Data.
1:31:56 — Identifying Predispositional Traits.
1:33:49 — Data Manipulation and Biometrics.
1:40:13 — Cultural Impact of Technology.
1:48:55 — The Role of Education in Integrating Science, Technology, Ethics, and Policy.
1:54:30 — Major Threats and Concerns in Today’s World.

Researchers discovered 49,000 misconfigured and exposed Access Management Systems (AMS) across multiple industries and countries, which could compromise privacy and physical security in critical sectors.

Access Management Systems are security systems that control employee access to buildings, facilities, and restricted areas via biometrics, ID cards, or license plates.

Security researchers at Modat conducted a comprehensive investigation in early 2025 and discovered tens of thousands of internet-exposed AMS that were not correctly configured for secure authentication, allowing anyone to access them.
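A minimal sketch of how scan results like these might be triaged once collected. The endpoint names, status codes, and fields below are invented for illustration; this is not Modat's actual methodology or data.

```python
# Hypothetical triage of internet-scan results for Access Management Systems.
# Hosts, status codes, and the "auth_required" field are illustrative
# assumptions, not data from the Modat study.

scan_results = [
    {"host": "ams.example-factory.com", "status": 200, "auth_required": False},
    {"host": "doors.example-hospital.org", "status": 401, "auth_required": True},
    {"host": "gate.example-campus.edu", "status": 200, "auth_required": False},
]

def exposed_systems(results):
    """Flag AMS instances that respond successfully without any authentication."""
    return [r["host"] for r in results
            if r["status"] == 200 and not r["auth_required"]]

print(exposed_systems(scan_results))
# ['ams.example-factory.com', 'gate.example-campus.edu']
```

The key signal is the same one the researchers describe: a management interface that answers requests without demanding credentials.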

If you’ve recently scrolled through Instagram, you’ve probably noticed it: users posting AI-generated images of their lives or chuckling over a brutal feed roast by ChatGPT. What started as an innocent prompt – “Ask ChatGPT to draw what your life looks like based on what it knows about you” – has gone viral, inviting friends, followers, and even ChatGPT itself to get a peek into our most personal details. It’s fun, often eerily accurate, and, yes, a little unnerving.

The trend that started it all

A while ago, Instagram’s “Add Yours” sticker spurred the popular trend “Ask ChatGPT to roast your feed in one paragraph.” What followed were thousands of users clamouring to see the AI’s take on their profiles. ChatGPT didn’t disappoint – delivering razor-sharp observations on everything from overused vacation spots to the endless brunch photos and quirky captions, blending humour with a dash of truth. The playful roasting felt oddly familiar, almost like a best friend’s inside joke.


NAVWAR awarded the order on behalf of the Navy’s Program Executive Office for Command, Control, Communication, Computers, and Intelligence (PEO C4I) in San Diego.

The AN/USC-61(C) is a maritime software-defined radio (SDR) that has become standard for the U.S. military. The compact, multi-channel Digital Modular Radio (DMR) provides several different waveforms and multi-level information security for voice and data communications.

Back in June, YouTube quietly made a subtle but significant policy change that, surprisingly, benefits users by allowing them to remove AI-made videos that simulate their appearance or voice from the platform under YouTube’s privacy request process.

First spotted by TechCrunch, the revised policy encourages affected parties to directly request the removal of AI-generated content on the grounds of privacy concerns and not for being, for example, misleading or fake. YouTube specifies that claims must be made by the affected individual or authorized representatives. Exceptions include parents or legal guardians acting on behalf of minors, legal representatives, and close family members filing on behalf of deceased individuals.

According to the new policy, if a privacy complaint is filed, YouTube will notify the uploader about the potential violation and provide an opportunity to remove or edit the private information within their video. YouTube may, at its own discretion, grant the uploader 48 hours to use the Trim or Blur tools available in YouTube Studio to remove the offending footage. If the uploader chooses to remove the video altogether, the complaint will be closed, but if the potential privacy violation remains after those 48 hours, the YouTube Team will review the complaint.
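The complaint flow described above can be sketched as a small decision procedure. The state names, the `Complaint` type, and the way the 48-hour window is checked are illustrative assumptions, not YouTube's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical model of the privacy-complaint workflow described above.
# All names and states are invented for illustration.

GRACE_PERIOD_HOURS = 48

@dataclass
class Complaint:
    video_id: str
    hours_elapsed: int = 0
    uploader_action: str = "none"  # "none", "edited", or "removed"

def resolve(complaint: Complaint) -> str:
    """Return the complaint's status under the policy sketched above."""
    if complaint.uploader_action == "removed":
        # Uploader took the video down entirely: complaint is closed.
        return "closed"
    if complaint.hours_elapsed < GRACE_PERIOD_HOURS:
        # Still inside the discretionary window to Trim/Blur or remove.
        return "awaiting_uploader"
    # Window expired with the video still up (edited or not):
    # the complaint goes to YouTube's team for review.
    return "under_review"

print(resolve(Complaint("abc123", hours_elapsed=12)))           # awaiting_uploader
print(resolve(Complaint("abc123", uploader_action="removed")))  # closed
print(resolve(Complaint("abc123", hours_elapsed=50)))           # under_review
```

Modeling the policy this way makes the one asymmetry explicit: removal closes the complaint immediately, while editing merely defers it to human review once the window lapses.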