
If people are worried that ChatGPT could be taking their jobs, they haven’t seen Auto-GPT yet.


Auto-GPT is an AI chatbot similar to ChatGPT and others. It is based on OpenAI’s GPT-4 language model, the same LLM that powers ChatGPT. But, as its full name, “Autonomous Artificial Intelligence Chat Generative Pre-trained Transformer,” implies, it goes a step further. What exactly is it? Let us go through what Auto-GPT is and how it works.

What is Auto-GPT?

Essentially, Auto-GPT is a chatbot: you ask it questions, and it answers them smartly. But unlike ChatGPT and other GPT-based chatbots, which need a prompt for every single step, Auto-GPT can automate the whole task, so you do not need to keep prompting it. Once given a task, Auto-GPT will figure out the steps on its own to reach the goal.
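To make that loop concrete, here is a minimal, illustrative sketch in Python. It is not Auto-GPT’s actual code, and the `llm()` helper is a hypothetical placeholder for a GPT-4 API call; the point is only that a single goal drives repeated model calls, with each result fed back in, instead of a human typing a new prompt at every step.

```python
# Minimal sketch of an Auto-GPT-style loop (illustrative only).
# llm() is a hypothetical stand-in for a real GPT-4 API call.

def llm(prompt: str) -> str:
    """Placeholder for a call to a GPT-4-class model."""
    raise NotImplementedError("wire this up to your model provider")

def auto_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        # Ask the model for the single next step toward the goal,
        # given everything it has done so far.
        next_step = llm(
            f"Goal: {goal}\nCompleted so far: {history}\n"
            "What is the single next step? Reply DONE if finished."
        )
        if next_step.strip() == "DONE":
            break
        # In a real agent this step would now be executed (web search,
        # file write, code run, ...); here we only record it.
        history.append(next_step)
    return history
```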

The data centers that help train ChatGPT-like AI are very ‘thirsty,’ finds a new study.

A new study has uncovered how much water is consumed when training large AI models like OpenAI’s ChatGPT and Google’s Bard. The estimates of AI water consumption were presented by researchers from the University of California, Riverside and the University of Texas at Arlington in a pre-print article titled “Making AI Less ‘Thirsty.’”

Of course, the water used to cool these data centers doesn’t just disappear into the ether but is usually removed from water courses like rivers. The researchers distinguish between water “withdrawal” and “consumption” when estimating AI’s water usage.



Withdrawal involves physically removing water from a river, lake, or other source, whereas consumption refers mainly to the water lost to evaporation when it is used in data centers. The consumption side of that equation, where the study claims “water cannot be recycled,” is where most of its analysis of AI’s water use is concentrated.

Tesla is about to launch a big new software update that includes a few new features and a lot of user interface upgrades.

As a Tesla owner, it’s always a good day to get a notification that a new software update is available. You start wondering what new features or improvements you are getting that day.

Well, now we have a good preview of the next Tesla software update: Teslascope, a service that tracks Tesla software updates, has found a new update that the automaker is pushing to employee vehicles, which generally means it will be coming to the customer fleet soon as well.

According to Chinese state media, a domestically developed ground-effect “wingship” has completed 30 critical sea trials, opening the door for further development.

The South China Morning Post claims that China’s new ground-effect “wingship” has just completed 30 sea trials. According to the newspaper, the new vehicle could be used to airdrop supplies on islands and beaches and to conduct quick search-and-rescue missions.

The “wingship” appears to be a ground-effect vehicle, or wing-in-ground-effect craft, which combines features of an air-cushion vehicle and an aircraft and glides close to the surface.



A fascinating proposal for a methodology.


Models are scientific models, theories, hypotheses, formulas, equations, naïve models based on personal experiences, superstitions (!), and traditional computer programs. In a Reductionist paradigm, these Models are created by humans, ostensibly by scientists, and are then used, ostensibly by engineers, to solve real-world problems. Model creation and Model use both require that these humans Understand the problem domain, the problem at hand, the previously known shared Models available, and how to design and use Models. A Ph.D. degree could be seen as a formal license to create new Models[2]. Mathematics can be seen as a discipline for Model manipulation.

But now, by avoiding the use of human-made Models and switching to Holistic Methods, data scientists, programmers, and others do not themselves have to Understand the problems they are given. They are no longer asked to provide a computer program or to otherwise solve a problem in a traditional Reductionist or scientific way. Holistic Systems like DNNs can provide solutions to many problems by first learning about the domain from data and solved examples, and then, in production, matching new situations to this gathered experience. These matches are guesses, but with sufficient learning the results can be highly reliable.
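As a toy illustration of that contrast (not taken from the original proposal), the sketch below encodes a Reductionist Model as an explicit, human-written formula, while the “holistic” route never sees the formula: it only stores solved examples and, in production, matches a new situation to the nearest stored experience and returns a guess.

```python
# Illustrative contrast between an explicit Model and a holistic guess.

def reductionist_model(celsius: float) -> float:
    # Explicit Model: a human Understood the domain and wrote the formula.
    return celsius * 9 / 5 + 32

# "Training data": solved examples only, no formula attached.
examples = [(0.0, 32.0), (10.0, 50.0), (25.0, 77.0), (100.0, 212.0)]

def holistic_guess(celsius: float) -> float:
    # Match the new situation to the closest gathered experience.
    _, answer = min(examples, key=lambda pair: abs(pair[0] - celsius))
    return answer  # a guess; more examples make the guess more reliable

print(reductionist_model(30.0))  # 86.0
print(holistic_guess(30.0))      # 77.0 -- the nearest stored example
```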

We will initially use computer-based Holistic Methods to solve individual and specific problems, such as self-driving cars. Over time, increasing numbers of Artificial Understanders will be able to provide immediate answers — guesses — to wider and wider ranges of problems. We can expect to see cellphone apps with such good command of language that it feels like talking to a competent co-worker. Voice will become the preferred way to interact with our personal AIs.

Language models can speed up and automate many tasks in areas such as text or code. What happens when they run themselves?

This new trend in generative AI is also called “self-prompting” or “auto-prompting”. The language model develops and executes prompts that can lead to new prompts based on an initial input.

This approach becomes truly powerful when combined with tools such as web search or the ability to test written code. The language model becomes an automatic assistant that can do much more than just generate text or code.
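A rough sketch of how such a loop might be wired up, assuming a hypothetical `llm()` placeholder and a simple code-running tool; no specific product’s API is implied. The model’s reply either finishes the task or requests a tool call, and the tool’s output becomes the next prompt.

```python
# Sketch of a "self-prompting" loop with one tool: a Python code runner.
# llm() and the RUN: protocol are hypothetical placeholders.

import subprocess
import sys

def llm(prompt: str) -> str:
    """Placeholder for a language-model call."""
    raise NotImplementedError("wire this up to your model provider")

def run_python(code: str) -> str:
    """Tool: execute a snippet and return its output (or the error)."""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, timeout=10)
    return result.stdout if result.returncode == 0 else result.stderr

def self_prompting(task: str, rounds: int = 3) -> str:
    prompt = task
    for _ in range(rounds):
        reply = llm(prompt)
        if reply.startswith("RUN:"):
            # The model asked to test some code; run it and prompt again
            # with the observed output so it can refine its next attempt.
            prompt = f"Task: {task}\nOutput of your code:\n{run_python(reply[4:])}"
        else:
            return reply  # the model considers the task finished
    return reply
```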

Founded in 2005 as an Ohio-based environmental newspaper, EcoWatch is a digital platform dedicated to publishing quality, science-based content on environmental issues, causes, and solutions.

While the electric vehicle market expands, some drivers remain hesitant to switch to a fuel-free car or truck because of range anxiety, the fear that their EV’s battery won’t have enough power to reach the next charging station. But researchers have found an approach that could give EV batteries a substantial boost, extending vehicle range by more than 10 times.