
Movidius’ Myriad 2 vision processing chip (Photo: Movidius)

The branch of artificial intelligence called deep learning has given us new wonders such as self-driving cars and instant language translation on our phones. Now it’s about to inject smarts into every other object imaginable.

That’s because makers of silicon processors, from giants such as Intel Corp. and Qualcomm Technologies Inc. to a raft of smaller companies, are starting to embed deep learning software into their chips, particularly for mobile vision applications. In fairly short order, that’s likely to lead to much smarter phones, drones, robots, cameras, wearables and more.

Read more

Oxford Nanopore Technologies is changing the course of genomics through the development of its small and portable DNA sequencer, the MinION, which makes use of nanopore technology.

The handheld, portable tricorder from Star Trek was essentially able to scan and record biological data from almost anything, and it could do it anytime and anywhere. Recent technology has been pulling the device out of science fiction and turning it into reality, but nothing has come close to capturing genetic information with the same portability…except for the British company Oxford Nanopore Technologies.

The USB-powered device, called the MinION, is only 10.16 cm (4 in) long and 2.54 cm (1 in) wide, and weighs about 87 grams. It’s smaller than most smartphones, easy to forget in your jacket pocket, and it’s changing the DNA sequencing industry.

Read more

New tech from Carnegie Mellon makes it much easier to play ‘Angry Birds’ on your wrist.

Smartwatches walk a fine line between functionality and fashion, but new SkinTrack technology from Carnegie Mellon University’s Future Interfaces Group makes the size of the screen a moot point. The SkinTrack system consists of a ring that emits a continuous high-frequency AC signal and a sensing wristband that goes under the watch. The wristband tracks the finger wearing the ring and senses whether the digit is hovering or actually making contact with your arm or hand, turning your skin into an extension of the touchscreen.

The tech is so precise that you’re able to use the back of your hand to dial a phone number, draw letters for navigation shortcuts, scroll through apps, play Angry Birds or select an item from a list. Researchers at the Future Interfaces Group say the tech is 99 percent accurate when it comes to touch.
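
To make the sensing idea concrete, here is a minimal, purely illustrative sketch in Python: it assumes the wristband reports a phase-difference reading for each of two electrode axes and looks up the nearest entry in a small calibration table to recover a 2D skin position. None of the names or numbers come from the actual SkinTrack system.

```python
import numpy as np

# Hypothetical SkinTrack-style localization: the ring emits a continuous
# high-frequency AC signal, and electrode pairs on the wristband measure
# its phase. Phase differences shift as the finger moves, so comparing a
# reading against a calibration table gives a rough 2D skin position.
# All values below are made up for illustration.
CALIBRATION = {
    # (phase_diff_x, phase_diff_y) -> known skin position in cm
    (0.00, 0.00): (0.0, 0.0),
    (0.15, 0.02): (1.0, 0.0),
    (0.02, 0.14): (0.0, 1.0),
    (0.16, 0.15): (1.0, 1.0),
}

def estimate_position(phase_x: float, phase_y: float) -> tuple:
    """Return the calibrated skin position nearest the observed phase pair."""
    observed = np.array([phase_x, phase_y])
    nearest = min(CALIBRATION, key=lambda k: np.linalg.norm(observed - np.array(k)))
    return CALIBRATION[nearest]

# A reading close to the (1 cm, 0 cm) calibration point:
print(estimate_position(0.14, 0.03))  # -> (1.0, 0.0)
```

A real system would interpolate between calibration points and classify hover versus touch from the signal as well, but the lookup captures the core idea: position is inferred from phase, not from a touchscreen.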

Read more

Ask an information architect, CDO, or data architect (enterprise or otherwise) and they will tell you they have known all along that information and data are a basic staple, like electricity, and that they are glad folks are finally realizing it. So the same view we apply to utilities as core to our infrastructure and survival, we should also apply to information. In fact, information in some areas can be even more important than electricity, when you consider that information can launch missiles, cure diseases, make you poor or wealthy, and take down a government or even a country.


What is information? Is it energy, matter, or something completely different? Although we take this word for granted and without much thought in today’s world of fast Internet and digital media, this was not the case in 1948 when Claude Shannon laid the foundations of information theory. His landmark paper interpreted information in purely mathematical terms, a decision that dematerialized information forevermore. Not surprisingly, there are many nowadays who claim — rather unthinkingly — that human consciousness can be expressed as “pure information”, i.e. as something immaterial graced with digital immortality. And yet there is something fundamentally materialistic about information that we often ignore, although it stares us — literally — in the eye: the hardware that makes information happen.
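
As background (this is standard textbook material, not quoted from the article), the mathematical interpretation Shannon introduced boils down to one formula: the information content of a source is its entropy, the average number of bits needed per symbol.

```latex
% Shannon entropy of a source X that emits symbol i with probability p_i:
% the average information content, in bits per symbol.
H(X) = -\sum_{i} p_i \log_2 p_i
```

Nothing in the formula refers to any physical substrate, which is precisely the dematerialization described above.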

As users we constantly interact with information via a machine of some kind, such as our laptop, smartphone or wearable. As developers or programmers we code via a computer terminal. As computer or network engineers we often have to wade through the sweltering heat of a server farm, or deal with the material properties of optical fibre or copper in our designs. Hardware and software are the fundamental ingredients of our digital world, both necessary not only for engineering information systems but also for interacting with them. But this status quo is about to be massively disrupted by Artificial Intelligence.

A decade from now, the postmillennial youngsters of the late 2020s will find it hard to believe that once upon a time the world was full of computers, smartphones and tablets, and that people had to interact with these machines in order to access information or build information systems. For them, information will be more like electricity: it will always be there, always available to power whatever you want to do. And this will be possible because artificial intelligence systems will be able to manage information complexity so effectively that they can deliver the right information to the right person at the right time, almost instantly. So let’s see what that would mean, and how different it would be from what we have today.

I do love Nvidia!


During the past nine months, an Nvidia engineering team built a self-driving car with one camera, one Drive-PX embedded computer and only 72 hours of training data. Nvidia published an academic preprint of the DAVE2 project’s results, entitled “End to End Learning for Self-Driving Cars,” on arXiv.org, hosted by the Cornell University Library.

The Nvidia project called DAVE2 is named after a 10-year-old Defense Advanced Research Projects Agency (DARPA) project known as DARPA Autonomous Vehicle (DAVE). Although neural networks and autonomous vehicles seem like a just-invented technology, researchers such as Google’s Geoffrey Hinton, Facebook’s Yann LeCun and the University of Montreal’s Yoshua Bengio have collaboratively researched this branch of artificial intelligence for more than two decades. And the DARPA DAVE project’s application of neural network-based autonomous vehicles was preceded by the ALVINN project developed at Carnegie Mellon in 1989. What has changed is that GPUs have made building on their research economically feasible.

Neural networks and image recognition applications such as self-driving cars have exploded recently for two reasons. First, the graphics processing units (GPUs) used to render graphics in mobile phones became powerful and inexpensive. GPUs densely packed onto board-level supercomputers are very good at solving massively parallel neural network problems, and they are inexpensive enough for every AI researcher and software developer to buy. Second, large, labeled image datasets have become available to train massively parallel neural networks implemented on GPUs to see and perceive the world of objects captured by cameras.
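
To make “end to end” concrete, here is a minimal sketch in Python (PyTorch): a small convolutional network maps a raw camera frame directly to a single steering value, which is the essence of the DAVE2 approach. The layer sizes and names here are illustrative assumptions, not Nvidia’s published architecture.

```python
import torch
import torch.nn as nn

# Sketch of end-to-end steering prediction: pixels in, steering angle out.
# No lane detection or path planning is hand-coded; the network is trained
# on (camera frame, recorded human steering angle) pairs.
class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single output: predicted steering angle
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frames))

model = SteeringNet()
frame = torch.randn(1, 3, 66, 200)  # one RGB frame from the single camera
print(model(frame).shape)  # torch.Size([1, 1])
```

Training and running such a network is exactly the kind of massively parallel arithmetic GPUs are built for, which is why a single Drive-PX in the car suffices.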

A video of a fully bendable smartphone with a graphene touch display debuts at a Chinese trade show.

A Chinese company just showed off a fully bendable smartphone with a graphene screen during a trade show at the Nanping International Convention Center in Chongqing. Videos of the incredibly flexible phone are making the rounds, and no wonder, as it looks rather impressive.

It isn’t yet known which company developed the bendable smartphone, and very few details have emerged about it. What we do know is that it weighs 200 g, can be worn around the wrist, and has a fully touch-enabled screen.

Read more

LeEco is known as the “Netflix of China” due to its very popular video streaming service, but the conglomerate also has interests in a much wider range of sectors including smartphones, TVs and electric vehicles.

Ding Lei, LeEco’s auto chief and a former top official at General Motors’ China venture with SAIC Motor, says part of LeEco’s advantage in tomorrow’s auto industry is that it carries no baggage from today’s.

This, Ding said, is the future of cars, and the Chinese consumer electronics company LeEco is going to make that future a reality.

Read more

Kurzweil, I, and others have been saying for a while now that devices will eventually be phased out. However, I do not believe the phase-out will be due to AI. I believe it will be based on how humans will use and adopt NextGen technology. I believe that AI will only be a supporting technology for humans and will be used in conjunction with AR, BMI (brain-machine interfaces), etc.

My real question around the phasing out of devices is: will we jump from smartphones directly to BMI, or see a migration from smartphones to AR contacts and glasses, and then eventually to BMI?…


(Bloomberg) — Forget personal computer doldrums and waning smartphone demand. Google thinks computers will one day cease being physical devices.

“Looking to the future, the next big step will be for the very concept of the ‘device’ to fade away,” Google Chief Executive Officer Sundar Pichai wrote Thursday in a letter to shareholders of parent Alphabet Inc. “Over time, the computer itself — whatever its form factor — will be an intelligent assistant helping you through your day.”

Instead of online information and activity happening mostly on the rectangular touch screens of smartphones, Pichai sees artificial intelligence powering increasingly formless computers. “We will move from mobile first to an AI first world,” he said.

Read more