Toggle light / dark theme

Industrial Internet of Things (IIoTs) refers to a technology that combines wireless sensors, controllers, and mobile communication technologies to make every aspect of industrial production processes intelligent and efficient. Since IIoTs can involve several small battery-driven devices and sensors, there is a growing need to develop a robust network for data transmission and power transfer to monitor the IIoT environment.

In this regard, is a promising technology. It utilizes to power small devices that consume minimal power. Recently, simultaneous wireless information and power transfer (SWIPT), which utilizes a single radio frequency signal to simultaneously perform and information decoding, has attracted significant interest for IIoTs.

Additionally, with smart devices rapidly growing in number, SWIPT has been combined with nonorthogonal multiple access (NOMA) system, which is a promising candidate for IIoTs due to their ability to extend the battery life of sensors and other devices. However, the energy efficiency of this system falls significantly with transmission distance from the central controller.

New details of Musk’s involvement in the Ukraine-Russia war revealed in his biography.

Elon Musk holds many titles. He is the CEO of Tesla SpaceX and owns the social media company X, which was recently rebranded from Twitter. Going by an excerpt of his biography, published in the Washington Post.

According to the excerpt from Walter Isaacson’s book, Musk disabled his company Starlink’s satellite communication networks, which were being used by the Ukrainian military to attack the Russian naval fleet in Sevastopol, Crimea, sneakily. The Ukrainian army was using Starlink as a guide to target Russian ships and attack them with six small… More.


Musk’s biographer alleges he prevented nuclear war between Ukraine and Russia by turning off Starlink satellite network near Crimea, but Musk says, ‘SpaceX did not deactivate anything’.

It’s easy to trick the large language models powering chatbots like OpenAI’s ChatGPT and Google’s Bard. In one experiment in February, security researchers forced Microsoft’s Bing chatbot to behave like a scammer. Hidden instructions on a web page the researchers created told the chatbot to ask the person using it to hand over their bank account details. This kind of attack, where concealed information can make the AI system behave in unintended ways, is just the beginning.

Hundreds of examples of “indirect prompt injection” attacks have been created since then. This type of attack is now considered one of the most concerning ways that language models could be abused by hackers. As generative AI systems are put to work by big corporations and smaller startups, the cybersecurity industry is scrambling to raise awareness of the potential dangers. In doing so, they hope to keep data—both personal and corporate—safe from attack. Right now there isn’t one magic fix, but common security practices can reduce the risks.

“Indirect prompt injection is definitely a concern for us,” says Vijay Bolina, the chief information security officer at Google’s DeepMind artificial intelligence unit, who says Google has multiple projects ongoing to understand how AI can be attacked. In the past, Bolina says, prompt injection was considered “problematic,” but things have accelerated since people started connecting large language models (LLMs) to the internet and plug-ins, which can add new data to the systems. As more companies use LLMs, potentially feeding them more personal and corporate data, things are going to get messy. “We definitely think this is a risk, and it actually limits the potential uses of LLMs for us as an industry,” Bolina says.

The Mozilla Foundation examined 25 car brands and discovered that all of them violate user privacy.

In-car internet is great. A car occupant can play a song, chat with a voice assistant or find directions to their destination with one click of a button.

But after reading the latest report by Mozilla Foundation on user data privacy in cars, one might rethink before switching on their in-built navigation system. We’re not being alarmists, but the report is sounding alarms left, right, and center.

Every few years comes a disruptive technology that catalyzes the development stages of not just companies but also society as a whole. Generative AI may not be as big as the invention of the internet but it is a foundational block to create a new digital transformation aided by AI.

The reason why Generative AI is one of the most exciting chapters in this journey of transformation is because the technology comes very close to imitating human quality of output. It sparks a very controversial debate about its advantages and disadvantages, especially in a country like ours with a large disposition to lose in terms of jobs that can be replaced by AI. But let’s look at our own journey of digital transformation closely. India has always charted a strong developmental course in terms of the tech industry with robust manpower, unmatched pricing, and a very dynamic workforce that has placed our country sixth in terms of AI investments between 2013–2022. Here, we do have to be mindful that our journey as a country for any disruptive technology may look completely different than others.

Will we ever decipher the language of molecular biology? Here, I argue that we are just a few years away from having accurate in silico models of the primary biomolecular information highway — from DNA to gene expression to proteins — that rival experimental accuracy and can be used in medicine and pharmaceutical discovery.

Since I started my PhD in 1996, the computational biology community had embraced the mantra, “biology is becoming a computational science.” Our ultimate ambition has been to predict the activity of biomolecules within cells, and cells within our bodies, with precision and reproducibility akin to engineering disciplines. We have aimed to create computational models of biological systems, enabling accurate biomolecular experimentation in silico. The recent strides made in deep learning and particularly large language models (LLMs), in conjunction with affordable and large-scale data generation, are propelling this aspiration closer to reality.

LLMs, already proven masters at modeling human language, have demonstrated extraordinary feats like passing the bar exam, writing code, crafting poetry in diverse styles, and arguably rendering the Turing test obsolete. However, their potential for modeling biomolecular systems may even surpass their proficiency in modeling human language. Human language mirrors human thought providing us with an inherent advantage, while molecular biology is intricate, messy, and counterintuitive. Biomolecular systems, despite their messy constitution, are robust and reproducible, comprising millions of components interacting in ways that have evolved over billions of years. The resulting systems are marvelously complex, beyond human comprehension. Biologists often resort to simplistic rules that work only 60% or 80% of the time, resulting in digestible but incomplete narratives. Our capacity to generate colossal biomolecular data currently outstrips our ability to understand the underlying systems.

ChatGPT is not officially available in China.

China can’t buy US chips required for advanced artificial intelligence models, but that’s not stopping the country from churning out AI models. After receiving regulatory approval from the country’s internet watchdog, several Chinese tech companies launched their respective AI chatbots last week. This comes in response to OpenAI’s ChatGPT, which, since its launch, has prompted rival tech companies around the globe to launch their own chatbots.

According to a Reuters report, Baidu CEO Robin Li has claimed that over 70 large language models (LLMs) with over 1 billion parameters have been released in China.


US sanctions on semiconductors are choking China’s ability to advance in generative AI. But as per latest reports, China has given nod to over 70 LLMs, showcasing its growth in the AI space.

Can we learn robot manipulation for everyday tasks, only by watching videos of humans doing arbitrary tasks in different unstructured settings? Unlike widely adopted strategies of learning task-specific behaviors or direct imitation of a human video, we develop a a framework for extracting agent-agnostic action representations from human videos, and then map it to the agent’s embodiment during deployment. Our framework is based on predicting plausible human hand trajectories given an initial image of a scene. After training this prediction model on a diverse set of human videos from the internet, we deploy the trained model zero-shot for physical robot manipulation tasks, after appropriate transformations to the robot’s embodiment. This simple strategy lets us solve coarse manipulation tasks like opening and closing drawers, pushing, and tool use, without access to any in-domain robot manipulation trajectories. Our real-world deployment results establish a strong baseline for action prediction information that can be acquired from diverse arbitrary videos of human activities, and be useful for zero-shot robotic manipulation in unseen scenes.