
Tesla’s innovative design and manufacturing processes, as well as its focus on safety features, set it apart from traditional automakers and position it as a formidable competitor in the industry.

Questions to inspire discussion.

How do Tesla’s design and manufacturing processes set it apart from traditional automakers?

A team of researchers led by Professor Young S. Park at UNIST’s Department of Chemistry has achieved a significant breakthrough in the field of organic semiconductors. Their successful synthesis and characterization of a novel molecule called “BNBN anthracene” has opened up new possibilities for the development of advanced electronic devices.

The paper is published in the journal Angewandte Chemie International Edition.

Organic semiconductors play a crucial role in the charge-transport and light-emitting properties of carbon-based organic electronic devices. The team’s research focused on enhancing the chemical diversity of these semiconductors by replacing carbon-carbon (C−C) bonds with isoelectronic boron-nitrogen (B−N) bonds. This substitution allows for precise modulation of the electronic properties without significant structural changes.

RNA polymerase

The key lies in mimicking nature’s machinery. The researchers identified RNA polymerase, a key enzyme that converts DNA into RNA, which is then used to make proteins. They designed two artificial nucleotides that flawlessly mimic the geometry of natural nucleotides. RNA polymerase readily accepted these novel additions when tested, seamlessly incorporating them into transcription.

A protein in the immune system programmed to protect the body from fungal infections is also responsible for exacerbating the severity of certain autoimmune diseases such as inflammatory bowel disease (IBD), type 1 diabetes, eczema and other chronic disorders, new research from The Australian National University (ANU) has found.

The discovery could pave the way for new and more effective drugs, without the nasty side effects of existing treatments. In addition to helping to manage severe autoimmune conditions, the breakthrough could also help treat all types of cancer. The work has been published in Science Advances.

The scientists have discovered a previously unknown function of the protein, known as DECTIN-1, which in its mutated state limits the production of T regulatory cells or so-called ‘guardian’ cells in the immune system.

SpaceX’s announcement of the new V2 Starship marks a significant commitment to innovation in the evolution of Starship, with potential game-changing advancements in technology and capabilities.

Questions to inspire discussion.

Why is SpaceX shifting to V2 Starship?
—SpaceX is shifting to V2 Starship for its potential game-changing advancements in technology and capabilities, including increased propellant capacity and reduced dry mass, enabling quicker booster return and access to higher orbits.

Artificial intelligence researchers claim to have made the world’s first scientific discovery using a large language model, a breakthrough that suggests the technology behind ChatGPT and similar programs can generate information that goes beyond human knowledge.

The finding emerged from Google DeepMind, where scientists are investigating whether large language models, which underpin modern chatbots such as OpenAI’s ChatGPT and Google’s Bard, can do more than repackage information learned in training and come up with new insights.

“When we started the project there was no indication that it would produce something that’s genuinely new,” said Pushmeet Kohli, the head of AI for science at DeepMind. “As far as we know, this is the first time that a genuine, new scientific discovery has been made by a large language model.”

Microsoft Research releases Phi-2 and promptbase.

Phi-2 outperforms other existing small language models, yet it’s small enough to run on a laptop or mobile device.


Over the past few months, our Machine Learning Foundations team at Microsoft Research has released a suite of small language models (SLMs) called “Phi” that achieve remarkable performance on a variety of benchmarks. Our first model, the 1.3 billion parameter Phi-1, achieved state-of-the-art performance on Python coding among existing SLMs (specifically on the HumanEval and MBPP benchmarks). We then extended our focus to common sense reasoning and language understanding and created a new 1.3 billion parameter model named Phi-1.5, with performance comparable to models 5x larger.

We are now releasing Phi-2, a 2.7 billion parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.