Stanford’s OceanOne uses haptic feedback to let human pilots safely explore the briny deep.
The odds are now better than ever that future explorers, both robotic and human, will be able to take samples of the Moon's hidden interior in deep impact basins like Crisium and Moscoviense. This gives planners more options for where to establish the first science colony.
Finding and sampling the Moon's ancient mantle — one of the science drivers for sending robotic spacecraft and future NASA astronauts to the Moon's South Pole–Aitken basin — is likely just as achievable at similar deep impact basins scattered around the lunar surface.
At least that’s the view reached by planetary scientists who have been analyzing the most recent data from NASA’s Gravity Recovery And Interior Laboratory (GRAIL) and its Lunar Reconnaissance Orbiter (LRO) missions as well as from Japan’s SELENE (Kaguya) lunar orbiter.
The consensus is that the lunar crust is actually thinner than previously thought.
I do love Nvidia!
During the past nine months, an Nvidia engineering team built a self-driving car with one camera, one Drive-PX embedded computer and only 72 hours of training data. Nvidia published an academic preprint of the DAVE2 project's results, entitled End to End Learning for Self-Driving Cars, on arXiv.org, hosted by the Cornell University Library.
The Nvidia project called DAVE2 is named after a 10-year-old Defense Advanced Research Projects Agency (DARPA) project known as DARPA Autonomous Vehicle (DAVE). Although neural networks and autonomous vehicles seem like a just-invented-now technology, researchers such as Google's Geoffrey Hinton, Facebook's Yann LeCun and the University of Montreal's Yoshua Bengio have collaboratively researched this branch of artificial intelligence for more than two decades. And the DARPA DAVE project's application of neural network-based autonomous vehicles was preceded by the ALVINN project developed at Carnegie Mellon in 1989. What has changed is that GPUs have made building on their research economically feasible.
Neural networks and image recognition applications such as self-driving cars have exploded recently for two reasons. First, Graphics Processing Units (GPUs) used to render graphics in mobile phones became powerful and inexpensive. GPUs densely packed onto board-level supercomputers are very good at solving massively parallel neural network problems and are inexpensive enough for every AI researcher and software developer to buy. Second, large, labeled image datasets have become available to train massively parallel neural networks implemented on GPUs to see and perceive the world of objects captured by cameras.
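To make the end-to-end idea concrete, below is a minimal sketch of the kind of network the DAVE2 preprint describes: raw camera frames in, a single steering value out, with no hand-engineered lane detection in between. The layer sizes loosely follow the architecture reported in the paper, but everything else here (the PyTorch framing, the names, the toy training step) is an illustrative assumption, not Nvidia's actual code.

```python
# Illustrative end-to-end steering network, loosely following the convolutional
# architecture described in "End to End Learning for Self-Driving Cars".
# A sketch only: framework choice, names and training loop are assumptions.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),   # 66x200 -> 31x98
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),  # -> 14x47
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),  # -> 5x22
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),            # -> 3x20
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),            # -> 1x18
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),  # single output: the steering command
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# One toy training step: minimize the gap between predicted and recorded steering.
model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.randn(8, 3, 66, 200)      # batch of normalized camera frames
recorded_steering = torch.randn(8, 1)    # steering angles logged from human driving
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(frames), recorded_steering)
loss.backward()
optimizer.step()
```

In the actual project the recorded steering angles came from human drivers, so the network learns to imitate them directly from pixels rather than from hand-coded rules.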
Closing the instability gap.
(Phys.org)—It might be said that the most difficult part of building a quantum computer is not figuring out how to make it compute, but rather finding a way to deal with all of the errors that it inevitably makes. Errors arise because of the constant interaction between the qubits and their environment, which can result in photon loss, which in turn causes the qubits to randomly flip to an incorrect state.
In order to flip the qubits back to their correct states, physicists have been developing an assortment of quantum error correction techniques. Most of them work by repeatedly making measurements on the system to detect errors and then correct them before they can proliferate. These approaches typically have a very large overhead, with a large portion of the computing power going to correcting errors.
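As a toy picture of that measure-and-correct cycle (and its overhead), here is a small classical simulation of the three-qubit bit-flip repetition code: one logical bit is stored redundantly, parity-check "syndrome" measurements locate a single flipped qubit, and a correction is applied before errors can accumulate. This is a deliberately simplified sketch of active error correction in general, not Kapit's passive scheme and not a real quantum simulation; all names are invented for illustration.

```python
# Toy model of active error correction with a three-qubit bit-flip repetition code.
# Purely illustrative: a classical simulation with assumed names, not real hardware.
import random

def encode(logical_bit):
    """Redundantly encode one logical bit across three physical 'qubits'."""
    return [logical_bit] * 3

def noisy_channel(qubits, flip_prob=0.1):
    """Each physical qubit independently flips with some probability."""
    return [q ^ 1 if random.random() < flip_prob else q for q in qubits]

def measure_syndrome(qubits):
    """Parity checks between neighbouring qubits reveal where an error sits
    without revealing the logical value itself."""
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def correct(qubits):
    """Use the syndrome to flip the single qubit most likely to be in error."""
    s01, s12 = measure_syndrome(qubits)
    if s01 and not s12:
        qubits[0] ^= 1
    elif s01 and s12:
        qubits[1] ^= 1
    elif s12 and not s01:
        qubits[2] ^= 1
    return qubits

# Repeated measure-and-correct cycles: the overhead is all the extra qubits and
# syndrome measurements spent protecting a single logical bit.
failures = 0
trials = 10000
for _ in range(trials):
    block = correct(noisy_channel(encode(0)))
    if sum(block) >= 2:  # majority vote decides the logical value
        failures += 1
print("logical error rate:", failures / trials)
```

Single flips get caught and undone; only the rarer case of two or more flips in one cycle survives as a logical error, which is the basic trade the active schemes make in exchange for their measurement overhead.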
In a new paper published in Physical Review Letters, Eliot Kapit, an assistant professor of physics at Tulane University in New Orleans, has proposed a different approach to quantum error correction. His method takes advantage of a recently discovered unexpected benefit of quantum noise: when carefully tuned, quantum noise can actually protect qubits against unwanted noise. Rather than actively measuring the system, the new method passively and autonomously suppresses and corrects errors, using relatively simple devices and relatively little computing power.
Excellent read, and a true point about the need for some additional data laws in our ever-exploding, information-overloaded world.
Laws for Mobility, IoT, Artificial Intelligence and Intelligent Process Automation
If you are the VP of Sales, it is quite likely you want and need to know up-to-date sales numbers, pipeline status and forecasts. If you are meeting with a prospect to close a deal, it is quite likely that having up-to-date business intelligence and CRM information would be useful. Likewise, traveling to a remote job site to check on the progress of an engineering project is an obvious trigger that you will need the latest project information. A large share of future productivity gains will come from developing solutions, integrated with mobile applications, that can anticipate your needs based upon your Code Halo data (the information that surrounds people, organizations, projects, activities and devices) and act upon it automatically.
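As a rough illustration of what "anticipating your needs" could look like in code, here is a minimal sketch of a context-triggered prefetch rule: when an upcoming calendar item looks like a sales meeting or a site visit, the matching CRM or project data is fetched ahead of time. The event fields and the fetch_crm_summary / fetch_project_status helpers are hypothetical, invented for this example rather than taken from any Code Halo product.

```python
# Illustrative context-triggered prefetch: push the data a mobile user will need
# based on their upcoming activities. All names and helpers are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CalendarEvent:
    title: str
    kind: str        # e.g. "sales_meeting" or "site_visit"
    related_id: str  # e.g. a CRM account id or a project id

def fetch_crm_summary(account_id: str) -> str:
    # Placeholder for a call into a CRM / business-intelligence system.
    return f"[CRM] latest pipeline and account history for {account_id}"

def fetch_project_status(project_id: str) -> str:
    # Placeholder for a call into a project-management system.
    return f"[PM] latest schedule and progress for {project_id}"

def prefetch_for(event: CalendarEvent) -> Optional[str]:
    """Decide from context which data to push to the user's device ahead of time."""
    if event.kind == "sales_meeting":
        return fetch_crm_summary(event.related_id)
    if event.kind == "site_visit":
        return fetch_project_status(event.related_id)
    return None  # no anticipated data need for other kinds of events

upcoming = [
    CalendarEvent("Close deal with Acme", "sales_meeting", "acme-corp"),
    CalendarEvent("Inspect bridge build", "site_visit", "bridge-042"),
]
for event in upcoming:
    print(event.title, "->", prefetch_for(event))
```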
There needs to be a law, like Moore's famous law, that states, "The more data that is collected and analyzed, the greater the economic value it has in aggregate." I believe this law is accurate, and my colleagues at the Center for the Future of Work wrote a book titled Code Halos that documents evidence of its truthfulness as well. I would also like to submit an additional law: "Data has a shelf-life, and the economic value of data diminishes over time." In other words, if I am negotiating a deal today but can't get the critical business data I need for another week, the data will not be as valuable to me then. The same is true if I am trying to optimize, in real time, the schedules of 5,000 service techs but don't have up-to-date job status information. Receiving job status information tomorrow does not help me optimize schedules today.
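One simple way to picture that second law is to give data an explicit half-life, so that its value decays the longer it sits undelivered. The exponential model and the numbers below are illustrative assumptions, not figures from the article.

```python
# Illustrative "data shelf-life" model: value decays exponentially with age.
# The decay model and the half-life parameter are assumptions, chosen only to
# illustrate the claim that the same data is worth less the later it arrives.
def data_value(initial_value: float, age_hours: float, half_life_hours: float = 24.0) -> float:
    """Value of a piece of data that loses half its worth every half_life_hours."""
    return initial_value * 0.5 ** (age_hours / half_life_hours)

# A job-status update worth 100 (arbitrary units) when fresh:
for age in (0, 6, 24, 72):
    print(f"age {age:>2} h -> value {data_value(100, age):6.1f}")
```

With a 24-hour half-life, the same job-status update delivered three days late retains roughly one-eighth of its fresh value, which is the intuition behind "receiving job status information tomorrow does not help me optimize schedules today."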
Luv this.
He too wears a golden robe and sings chants.
Standing at just over half a metre tall, Xian’er is the product of a collaboration between Longquan temple on the outskirts of Beijing, a technology company and local universities specializing in artificial intelligence.
Hmmm;
The latest figures are a clear sign that India’s largest outsourcing firms are succeeding at ‘non-linear’ growth, where revenues increase disproportionately compared with hiring.
While the numbers are good news for an industry that is trying to defend profit margins, they raise concerns over the future of hiring and the availability of engineering jobs in a sector that employs over three million people.
“What you’re seeing now is about 200,000 people being hired in the IT industry — it’s not the 4–5 lakhs that they used to hire 10 years ago. And that’s because the growth has shrunk from 35–40% and the competition was for resources. Even now the competition is for resources, but it’s for slightly more experienced resources — people who can work on automation, artificial intelligence, machine languages, data sciences. So, it’s not hiring for Java coding anymore,” said Infosys co-founder Kris Gopalakrishnan in a recent interview.
China’s robot revolution
China’s love of robots.
The Ying Ao sink foundry in southern China’s Guangdong province does not look like a factory of the future. The sign over the entrance is faded; inside, the floor is greasy with patches of mud, and a thick metal dust — the by-product of the stainless-steel polishing process — clogs the air. As workers haul trolleys across the factory floor, the cavernous, shed-like building reverberates with a loud clanging.
Guangdong is the growth engine of China’s manufacturing industry, generating $615bn in exports last year — more than a quarter of the country’s total. In this part of the province, the standard wage for workers is about Rmb4,000 ($600) per month. Ying Ao, which manufactures sinks destined for the kitchens of Europe and the US, has to pay double that, according to deputy manager Chen Conghan, because conditions in the factory are so unpleasant. So, four years ago, the company started buying machines to replace the ever more costly humans.
Kurzweil, I and others have been saying for a while now that devices will eventually be phased out. However, I do not believe the phase-out will be due to AI. I believe it will be based on how humans use and adopt NextGen technology, and that AI will only be a supporting technology for humans, used in conjunction with AR, BMI, etc.
My real question around the phasing out of devices is: will we jump from the smartphone directly to BMI, or see a migration from the smartphone to AR contacts and glasses, and then eventually to BMI?…
(Bloomberg) — Forget personal computer doldrums and waning smartphone demand. Google thinks computers will one day cease being physical devices.
“Looking to the future, the next big step will be for the very concept of the ‘device’ to fade away,” Google Chief Executive Officer Sundar Pichai wrote Thursday in a letter to shareholders of parent Alphabet Inc. “Over time, the computer itself — whatever its form factor — will be an intelligent assistant helping you through your day.”
Instead of online information and activity happening mostly on the rectangular touch screens of smartphones, Pichai sees artificial intelligence powering increasingly formless computers. “We will move from mobile first to an AI first world,” he said.