
Is Open-Source AI Threatening The Tech Titans?

Open-source AI refers to artificial intelligence projects that software engineers develop collaboratively and that are open for anyone to build on. The goal is to better integrate computing with humanity. In early March, the open-source community got its hands on Meta’s LLaMA, which was leaked to the public. In barely a month, developers produced highly innovative open-source model variants featuring instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, and more.

Open-source models are faster, more customizable, more private, and increasingly capable. They are doing things with $100 and 13B parameters that even market leaders are struggling with. One open-source solution, Vicuna, is an…


This article explores AI in the context of open-source alternatives and highlights the market dynamics at play.
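Quantization is one of the techniques behind those cheap yet capable models: it shrinks a model’s memory footprint by storing weights as low-precision integers. Below is a minimal NumPy sketch of symmetric int8 weight quantization; the layer shape and scaling scheme are illustrative assumptions, not code from any particular project.

```python
import numpy as np

# Toy float32 weight matrix standing in for one layer of a large model.
rng = np.random.default_rng(0)
w_fp32 = rng.normal(0.0, 0.02, size=(4096, 4096)).astype(np.float32)

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.round(w_fp32 / scale).astype(np.int8)

# Dequantize on the fly when the layer is actually used.
w_deq = w_int8.astype(np.float32) * scale

print(f"fp32 size: {w_fp32.nbytes / 1e6:.1f} MB")  # ~67 MB
print(f"int8 size: {w_int8.nbytes / 1e6:.1f} MB")  # ~17 MB, 4x smaller
print(f"max abs error: {np.abs(w_fp32 - w_deq).max():.6f}")
```

Cutting each weight from four bytes to one is a large part of how 13B-parameter models end up fitting on consumer hardware at all.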

A Chip Off the Old Eye: Device Mimics Human Vision and Memory

The team’s research demonstrates a working device that captures, processes, and stores visual information. With precise engineering of the doped indium oxide, the device mimics a human eye’s ability to capture light, pre-packages and transmits information like an optic nerve, and stores and classifies it in a memory system much as our brains do.


Summary: Researchers developed a single-chip device that mimics the human eye’s capacity to capture, process, and store visual data.

This groundbreaking innovation, fueled by a thin layer of doped indium oxide, could be a significant leap towards applications like self-driving cars that require quick, complex decision-making abilities. Unlike traditional systems that need external, energy-intensive computation, this device encapsulates sensing, information processing, and memory retention in one compact unit.

As a result, it enables real-time decision-making without being hampered by processing extraneous data or being delayed by transferring information to separate processors.
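As a loose software analogy of that all-in-one design (purely conceptual; the actual device relies on light-tunable conductance in doped indium oxide, and the class and parameters below are invented for illustration), each pixel element can sense, process, and remember locally, so no raw frames are shipped to an external processor:

```python
# Conceptual sketch only: each "pixel" captures light, processes it locally
# (simple thresholding), and keeps its own decaying memory trace, combining
# sensing, computation, and storage in one unit.

class NeuromorphicPixel:
    def __init__(self, threshold: float = 0.5, decay: float = 0.9):
        self.threshold = threshold   # light level that counts as an "event"
        self.decay = decay           # how quickly the stored trace fades
        self.memory = 0.0            # local analog memory of past events

    def sense(self, light: float) -> bool:
        """Capture light, detect an event, and update local memory."""
        event = light > self.threshold                    # in-pixel processing
        self.memory = self.memory * self.decay + (1.0 if event else 0.0)
        return event

pixel = NeuromorphicPixel()
for level in [0.1, 0.8, 0.9, 0.2, 0.7]:
    pixel.sense(level)
print(f"stored trace after the light sequence: {pixel.memory:.3f}")
```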

Mint DIS 2023

Generative artificial intelligence (AI) has put AI in the hands of people, and those who don’t use it could struggle to keep their jobs in future, Jaspreet Bindra, Founder and MD, Tech Whisperer Ltd. UK, surmised at the Mint Digital Innovation Summit on June 9.

“We never think about electricity until it’s not there. That’s how AI used to be. It was always in the background and we never thought about it. With generative AI it has come into our hands, and 200–300 million of us are like, wow!” said Bindra.

He noted that while AI won’t replace humans at their jobs, someone using AI very well could. He urged working professionals to “recalibrate” and embrace generative AI as a “powerful tool” created by humans, instead of looking at it as a threat.

Tiny device mimics human vision and memory abilities

Researchers have created a small device that “sees” and creates memories in a similar way to humans, in a promising step towards one day having applications that can make rapid, complex decisions such as in self-driving cars.

The neuromorphic invention is enabled by a sensing element, doped indium oxide, that’s thousands of times thinner than a human hair and requires no external parts to operate.

RMIT University engineers in Australia led the work, with contributions from researchers at Deakin University and the University of Melbourne.

Zuckerberg Announces Bold Plan to Jam AI Into “Every Single One of Our Products”

Meta-formerly-Facebook CEO Mark Zuckerberg has a genius new plot to add some interest to Meta-owned products: just jam in some generative AI, absolutely everywhere.

Axios reports that in an all-hands meeting on Thursday, Zuckerberg unveiled a barrage of generative AI tools and integrations, which are to be baked into both Meta’s internal and consumer-facing products, Facebook and Instagram included.

“In the last year, we’ve seen some really incredible breakthroughs — qualitative breakthroughs — on generative AI,” Zuckerberg told Axios in a statement, “and that gives us the opportunity to now go take that technology, push it forward, and build it into every single one of our products.”

Microsoft AI Introduces Orca: A 13-Billion Parameter Model that Learns to Imitate the Reasoning Process of LFMs (Large Foundation Models)

The remarkable zero-shot learning capabilities demonstrated by large foundation models (LFMs) like ChatGPT and GPT-4 have sparked a question: Can these models autonomously supervise their behavior or other models with minimal human intervention? To explore this, a team of Microsoft researchers introduces Orca, a 13-billion parameter model that learns complex explanation traces and step-by-step thought processes from GPT-4. This innovative approach significantly improves the performance of existing state-of-the-art instruction-tuned models, addressing challenges related to task diversity, query complexity, and data scaling.

The researchers acknowledge that the query and response pairs from GPT-4 can provide valuable guidance for student models. Therefore, they enhance these pairs by adding detailed responses that offer a better understanding of the reasoning process employed by the teachers when generating their responses. By incorporating these explanation traces, Orca equips student models with improved reasoning and comprehension skills, effectively bridging the gap between teachers and students.
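The flavor of that augmentation can be sketched as follows: a plain query-response pair becomes a training example that also carries a system instruction eliciting step-by-step reasoning from the teacher. The field names and prompt wording below are assumptions for illustration, not Orca’s actual schema.

```python
# Hypothetical sketch of turning plain (query, answer) pairs into
# explanation-augmented training examples; the exact system prompts
# and record layout are assumptions, not the paper's format.

from dataclasses import dataclass

@dataclass
class TrainingExample:
    system_instruction: str   # tells the teacher how to explain itself
    query: str                # the task posed to the teacher (e.g. GPT-4)
    response: str             # teacher output, including its reasoning trace

# A system instruction that elicits a step-by-step explanation trace,
# so the student learns the reasoning process, not just the final answer.
EXPLAIN = "You are a helpful assistant. Think step by step and justify your answer."

def augment(query: str, teacher_response: str) -> TrainingExample:
    return TrainingExample(system_instruction=EXPLAIN,
                           query=query,
                           response=teacher_response)

ex = augment("If 3 pencils cost 45 cents, how much do 7 cost?",
             "Each pencil costs 45 / 3 = 15 cents, so 7 cost 7 * 15 = 105 cents.")
print(ex.system_instruction)
print(ex.query)
```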

The research team utilizes the Flan 2022 Collection to further enhance Orca’s learning process. The team samples tasks from this extensive collection to ensure a diverse mix of challenges. These tasks are then sub-sampled to generate complex prompts, which serve as queries for LFMs. This approach creates a diverse and rich training set that facilitates robust learning for Orca, enabling it to tackle a wide range of tasks effectively. A sketch of this sampling scheme appears below.
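Here is an illustrative sketch of that two-stage sampling: draw a diverse mix of tasks from a large instruction collection, then sub-sample instances from each task to form queries for the teacher LFM. The collection contents and sample sizes are made up for illustration.

```python
# Illustrative two-stage sampling over an instruction collection:
# first pick a diverse set of tasks, then sub-sample instances within each.

import random

flan_collection = {
    "natural_language_inference": [f"nli_prompt_{i}" for i in range(1000)],
    "arithmetic_reasoning":       [f"math_prompt_{i}" for i in range(1000)],
    "summarization":              [f"sum_prompt_{i}" for i in range(1000)],
}

def build_training_queries(collection, tasks_per_batch=2,
                           instances_per_task=3, seed=0):
    rng = random.Random(seed)
    tasks = rng.sample(sorted(collection), tasks_per_batch)   # diverse task mix
    queries = []
    for task in tasks:
        # Sub-sample instances within each chosen task.
        queries.extend(rng.sample(collection[task], instances_per_task))
    return queries

print(build_training_queries(flan_collection))
```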
