
Is Neuromorphic Computing The Answer For Autonomous Driving And Personal Robotics?

If you follow the latest trends in the tech industry, you probably know that there’s been a fair amount of debate about what the next big thing is going to be. The odds-on favorite for many has been augmented reality (AR) glasses, while others point to fully autonomous cars, and a few are clinging to the potential of 5G. With the surprise debut of Amazon’s Astro a few weeks back, personal robotic devices and digital companions have also thrown their hats into the ring.

However, while there has been little agreement on exactly what the next thing is, there seems to be little disagreement that whatever it turns out to be, it will be somehow powered, enabled, or enhanced by artificial intelligence (AI). Indeed, the fact that AI and machine learning (ML) are our future seems to be a foregone conclusion.

Yet, if we do an honest assessment of where some of these technologies actually stand on a functionality basis versus initial expectations, it’s fair to argue that the results have been disappointing on many levels. In fact, if we extend that thought process out to what AI/ML were supposed to do for us overall, then we start to come to a similarly disappointing conclusion.

The Metaverse is Taking Over the Physical World

Imagine a place where you could stay young forever, name a city after yourself, or even become president. Sounds like a dream? Well, if not in the real world, such dreams can certainly be fulfilled in the virtual world of a metaverse. Some believe the metaverse is the future of the internet: a place where, beyond simply surfing, people will be able to step inside the digital world in the form of their avatars.

The advent of AR, blockchain, and VR devices in the last few years has sparked the development of the metaverse. Moreover, the rapid growth of highly advanced technologies in the gaming industry, which offer immersive gameplay experiences, not only gives us a glimpse of what the metaverse might look like but also suggests that we are closer than ever to experiencing a virtual world of our own.

Facebook’s smart Ray-Ban glasses are disappointingly familiar

But first, Facebook is going to have to navigate the territory of privacy: not just for those who might have photos taken of them, but for the wearers of these microphone- and camera-equipped glasses. VR headsets are one thing (and they come off your face after a session). Glasses you wear around every day are the start of Facebook’s much larger ambition to be an always-connected maker of wearables, and that’s a lot harder for most people to get comfortable with.

Walking down my quiet suburban street, I’m looking up at the sky. Recording the sky. Around my ears, I hear ABBA’s new song, I Still Have Faith In You. It’s a melancholic end to the summer. I’m taking my new Ray-Ban smart glasses for a walk.

The Ray-Ban Stories feel like a conservative start. They lack some features that have already appeared in similar products. The glasses, which act as earbud-free headphones, don’t have the 3D spatial audio that the Bose Frames and Apple’s AirPods Pro do. The stereo cameras, on either side of the lenses, don’t work with AR effects, either. Facebook has a few sort-of-AR tricks in View, a brand-new companion app on your phone that pairs with the glasses, but they’re mostly ways of using depth data for a few quick social effects.

Building a template for the future 6G network

Traditional networks are unable to keep up with the demands of modern computing, such as computation-heavy and bandwidth-hungry services like video analytics and cybersecurity. In recent years, network research has shifted its focus toward software-defined networking (SDN) and network function virtualization (NFV), two concepts that could overcome the limitations of traditional networking. SDN is an approach to network architecture in which the network is controlled by software applications, whereas NFV moves functions like firewalls and encryption from dedicated hardware to virtual servers. SDN and NFV can help enterprises operate more efficiently and reduce costs, and a combination of the two would be far more powerful than either one alone.
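To make the SDN idea concrete, here is a minimal, illustrative sketch of a software controller programming a switch’s flow table. The class names, rule format, and port-based policy are assumptions made for this example, not any specific controller’s API; real controllers speak protocols such as OpenFlow and expose far richer interfaces.

    # Illustrative sketch only: a software "controller" installs match/action
    # rules into a switch's flow table, so forwarding behavior is programmed
    # in software rather than fixed in hardware.
    from dataclasses import dataclass

    @dataclass
    class FlowRule:
        match_dst_port: int   # match packets by destination port
        action: str           # e.g., "forward" or "drop"

    class Switch:
        def __init__(self):
            self.flow_table = []

        def handle_packet(self, dst_port):
            # First matching rule wins; unmatched packets go to the controller.
            for rule in self.flow_table:
                if rule.match_dst_port == dst_port:
                    return rule.action
            return "send_to_controller"

    class Controller:
        """Central software application that programs the switches."""
        def install_policy(self, switch):
            switch.flow_table.append(FlowRule(match_dst_port=443, action="forward"))
            switch.flow_table.append(FlowRule(match_dst_port=23, action="drop"))

    switch = Switch()
    Controller().install_policy(switch)
    print(switch.handle_packet(443))   # forward
    print(switch.handle_packet(23))    # drop
    print(switch.handle_packet(8080))  # send_to_controller

The point of the sketch is the separation of concerns: the switch only executes rules, while the policy lives in software that can be updated centrally.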

In a recent study published in IEEE Transactions on Cloud Computing, researchers from Korea now propose such a combined SDN/NFV network architecture that seeks to introduce additional computational functions to existing network functions. “We expect our SDN/NFV-based infrastructure to be considered for the future 6G network. Once 6G is commercialized, the resource management technique of the network and computing core can be applied to AR/VR or holographic services,” says Prof. Jeongho Kwak of Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, who was an integral part of the study.

The new network architecture aims to create a holistic framework that can fine-tune processing resources that use different (heterogeneous) processors for different tasks and optimize networking. The unified framework will support dynamic service chaining, which allows a single network connection to be used for many connected services like firewalls and intrusion protection; and code offloading, which involves shifting intensive computational tasks to a resource-rich remote server.
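As a rough illustration of the service-chaining idea, the sketch below passes a packet through an ordered list of virtualized network functions. The function names and the simple dictionary standing in for a packet are assumptions made for this example, not the study’s implementation.

    # Illustrative sketch of dynamic service chaining: one connection's traffic
    # traverses an ordered list of virtualized network functions (VNFs), and
    # the chain itself can be reconfigured at runtime.
    def firewall(packet):
        if packet["dst_port"] in {23, 445}:        # block risky ports
            packet["verdict"] = "drop"
        return packet

    def intrusion_detection(packet):
        if "attack-signature" in packet["payload"]:
            packet["verdict"] = "drop"
        return packet

    def apply_service_chain(packet, chain):
        """Apply each VNF in order; stop early if one drops the packet."""
        for vnf in chain:
            packet = vnf(packet)
            if packet.get("verdict") == "drop":
                break
        return packet

    chain = [firewall, intrusion_detection]        # reorderable at runtime
    pkt = {"dst_port": 443, "payload": "hello", "verdict": "allow"}
    print(apply_service_chain(pkt, chain))         # passes both functions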

How computer vision works

It’s no secret that AI is everywhere, yet it’s not always clear when we’re interacting with it, let alone which specific techniques are at play. But one subset is easy to recognize: If the experience is intelligent and involves photos or videos, or is visual in any way, computer vision is likely working behind the scenes.

Computer vision is a subfield of AI, specifically of machine learning. If AI allows machines to “think,” then computer vision is what allows them to “see.” More technically, it enables machines to recognize, make sense of, and respond to visual inputs like photos and videos.
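As a minimal sketch of what this looks like in practice, the example below classifies a photo with a pretrained convolutional network via PyTorch’s torchvision. It assumes torch, torchvision 0.13 or later, and Pillow are installed, and photo.jpg is a placeholder path; this is one common approach, not the only way computer vision is done.

    # Sketch: image classification with a pretrained network (torchvision).
    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Standard ImageNet preprocessing: resize, crop, tensorize, normalize.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    image = Image.open("photo.jpg").convert("RGB")   # placeholder path
    batch = preprocess(image).unsqueeze(0)           # add a batch dimension

    with torch.no_grad():
        logits = model(batch)
    probs = torch.nn.functional.softmax(logits[0], dim=0)
    top_prob, top_class = probs.max(dim=0)
    print(f"predicted class index {top_class.item()} ({top_prob.item():.1%})")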

Over the last few years, computer vision has become a major driver of AI. The technique is used widely in industries like manufacturing, ecommerce, agriculture, automotive, and medicine, to name a few. It powers everything from interactive Snapchat lenses to sports broadcasts, AR-powered shopping, medical analysis, and autonomous driving capabilities. And by 2022 the global market for the subfield is projected to reach $48.6 billion annually, up from just $6.6 billion in 2015.
