
China’s Tianwen-1 Mars probe just completed its number one goal

Tianwen-1 is a historic victory for both the CNSA and space exploration.


Upon successful orbital insertion and landing, Tianwen-1 became a historic victory for the CNSA and space exploration. Before Tianwen-1, the only two successful missions to send an orbiter and lander to Mars were NASA’s Viking 1 and 2 missions, launched in 1975. Prior to that, the Soviet Union had attempted this feat with its Mars 2 and 3 missions in 1971 and Mars 6 in 1973.

Mars 2 was an outright failure: the lander was destroyed and the orbiter sent back no data. On Mars 3, the orbiter obtained approximately eight months of data, and while the lander touched down safely, it returned only 20 seconds of data. On Mars 6, the orbiter produced data from an occultation experiment, but the lander failed during descent.

COLMENA, a new concept in lunar exploration

Live now, on the Space Renaissance YouTube channel.


We are standing at the gates of a new era in space exploration, one that will finally incorporate the inner solar system into society’s daily life and economy. The first step is the Moon, and the asteroids will probably follow. The surfaces of those bodies present special challenges for human and technological activities as well as for resource exploitation. These challenges, which include regolith, extreme thermal amplitude, high-energy radiation, and surface mineral mixing, among others, open the door to new operational approaches. COLMENA is the pathfinder of one such avenue: using swarms of micro-rovers for scientific exploration, resource prospecting or, eventually, mining. The first COLMENA mission will deploy 5 micro-rovers (56 grams each) on the lunar surface by the end of this year, flying on board a private spacecraft. In the talk I will briefly explain the context, technical characteristics, and objectives of the mission, as well as its future.

A short bio.

Dr. Gustavo MEDINA TANCO is a Professor at the Institute of Nuclear Sciences of UNAM in Mexico, where he leads the ultra-high-energy cosmic rays group and is the Head of the Laboratory for Space Instrumentation, LINX, which he created in 2009. He also created, and is responsible for, the National Laboratory for Space Access (LANAE) in the state of Hidalgo, Mexico, which will start operation in 2022. For 10 years he was the science coordinator of the International JEM-EUSO Collaboration and a member of its executive board, and, as such, he led the Mexican participation in the development of several instruments under the coordination of CNES, NASA, ASI and ROSCOSMOS.

Lockheed Martin gets $59 million order for Stryker cyber and electronic warfare suite

Lockheed Martin has been busy this year. In April of 2022, the Defense Advanced Research Projects Agency (DARPA) and its U.S. Air Force partner announced that they had completed a free flight test of the Lockheed Martin version of the Hypersonic Air-breathing Weapon Concept (HAWC).

Then just last month, the U.S. Department of Defense (DoD) awarded the company a contract to construct the nation’s first megawatt-scale long-duration energy storage system. Under the direction of the U.S. Army Engineer Research and Development Center’s (ERDC) Construction Engineering Research Laboratory (CERL), the new system, called “GridStar Flow,” will be set up at Fort Carson, Colorado.

In the same time frame, General Motors and the firm announced their plans to produce a series of electric moon rovers for future commercial space missions. The companies said they aim to test the batteries developed by GM in space later this year. They also set the ambitious goal of testing a prototype vehicle on the moon by 2025.

UC Berkeley and Google AI Researchers Introduce ‘Director’: a Reinforcement Learning Agent that Learns Hierarchical Behaviors from Pixels by Planning in the Latent Space of a Learned World Model

Director builds a world model from pixels that allows effective planning in a latent space. The world model first maps images to compact model states and then predicts future model states given future actions. Based on the predicted trajectories of these model states, Director optimizes two policies: every fixed number of steps, the manager selects a new goal, and the worker learns to reach those goals using primitive actions. The manager would face a difficult control problem if it had to choose goals directly in the high-dimensional continuous representation space of the world model. Instead, the researchers learn a goal autoencoder that compresses model states into smaller discrete codes. After the manager selects a code, the goal autoencoder decodes it back into a model state and passes it to the worker as its goal.
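To make the division of labor concrete, here is a minimal, self-contained Python sketch of the control loop described above. The class names, dimensions, and random stand-in networks are illustrative assumptions, not Director’s actual implementation; the point is only the structure: pixels are encoded into a latent model state, the manager picks a discrete goal code every K steps, the goal autoencoder decodes it into a goal state, and the worker chooses primitive actions conditioned on the state and the goal.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 8            # assumed: the manager picks a new goal every K steps
LATENT_DIM = 32  # assumed size of a latent model state
NUM_CODES = 64   # assumed size of the goal autoencoder's discrete codebook
ACTION_DIM = 4   # assumed size of a primitive action vector


class WorldModel:
    """Stand-in encoder: maps a pixel observation to a latent model state."""
    def __init__(self):
        self.proj = rng.normal(size=(64 * 64 * 3, LATENT_DIM)) * 0.01

    def encode(self, image):
        return image.reshape(-1) @ self.proj


class GoalAutoencoder:
    """Stand-in codebook: decodes a discrete code into a latent goal state."""
    def __init__(self):
        self.codebook = rng.normal(size=(NUM_CODES, LATENT_DIM))

    def decode(self, code):
        return self.codebook[code]


class Manager:
    """Picks a discrete goal code; a trained manager would maximize task reward."""
    def select_code(self, state):
        return rng.integers(NUM_CODES)


class Worker:
    """Picks a primitive action conditioned on the current state and the goal."""
    def __init__(self):
        self.proj = rng.normal(size=(2 * LATENT_DIM, ACTION_DIM)) * 0.1

    def select_action(self, state, goal):
        return np.tanh(np.concatenate([state, goal]) @ self.proj)


world_model, goal_ae = WorldModel(), GoalAutoencoder()
manager, worker = Manager(), Worker()
goal = None

for step in range(32):
    image = rng.random((64, 64, 3))              # placeholder pixel observation
    state = world_model.encode(image)            # pixels -> latent model state
    if step % K == 0:                            # the manager acts on a slower timescale
        goal = goal_ae.decode(manager.select_code(state))
    action = worker.select_action(state, goal)   # the worker pursues the current goal
```

In a real system the random projections would be learned networks and both policies would be trained from the world model’s predicted trajectories, but the timescale split between manager and worker is the structural idea.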

Advances in deep reinforcement learning have accelerated the study of decision-making in artificial agents. In contrast to generative ML models like GPT-3 and Imagen, artificial agents can actively affect their environment, for example by moving a robot arm based on camera inputs or clicking a button in a web browser. Although such agents have the potential to assist humans more and more, existing approaches are limited by the need for precise feedback in the form of frequently given rewards in order to learn effective strategies. For instance, even powerful systems like AlphaGo, despite access to massive computing resources, are restricted to a certain number of moves before receiving their next reward.

In contrast, complex activities like preparing a meal require decision-making at every level, from planning the menu, to navigating to the store to buy supplies, to properly executing the fine motor skills needed at each step along the way based on high-dimensional sensory inputs. Hierarchical reinforcement learning (HRL) automatically breaks complicated tasks down into achievable subgoals, allowing artificial agents to complete tasks more autonomously from sparse rewards. Research on HRL has been difficult, however, because there is no general solution, and existing approaches rely on manually defined goal spaces or subtasks.
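As a rough illustration of why subgoals help under sparse rewards, the hypothetical Python snippet below contrasts a task reward that fires only at completion with a dense, goal-conditioned worker reward (here, negative distance to the current subgoal). The reward shapes and variable names are assumptions for illustration, not the formulas used by Director or any specific HRL method.

```python
import numpy as np

def task_reward(state, final_goal, tol=0.05):
    # Sparse: the environment pays off only when the full task is completed.
    return 1.0 if np.linalg.norm(state - final_goal) < tol else 0.0

def worker_reward(state, subgoal):
    # Dense: the worker is rewarded for progress toward its current subgoal,
    # so it receives a learning signal at every step even while task_reward is 0.
    return -np.linalg.norm(state - subgoal)

state = np.zeros(3)
final_goal = np.ones(3)
subgoal = np.array([0.2, 0.0, 0.1])    # an intermediate target chosen by a higher level
print(task_reward(state, final_goal))  # 0.0 -> no signal from the task itself yet
print(worker_reward(state, subgoal))   # negative distance -> informative at every step
```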
