
AR smart glasses: 2029. They will look like a normal pair of sunglasses, with all the usual smartphone-type features, built-in AI systems, support for some VR functionality, and a built-in earbud/mic for calls, music, talking to AI, and so on. They may need a battery pack; we'll see in 2029.


The smart glasses will soon come with a built-in assistant.

Without accounting for how gravity affects time, the GPS location on your phone would grow progressively less accurate, eventually putting you in entirely the wrong place.
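The size of this effect can be estimated from standard textbook physics: GPS satellite clocks run fast relative to ground clocks because they sit higher in Earth's gravity well (general relativity), and slightly slow because they move quickly (special relativity). The sketch below uses approximate orbital parameters to show the net drift and the resulting ranging error if it went uncorrected.

```python
# Hedged back-of-the-envelope estimate of relativistic clock drift for GPS
# satellites. Constants and orbital radius are approximate textbook values.
import math

c = 299_792_458.0    # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # Earth mass, kg
R_earth = 6.371e6    # Earth radius, m
r_sat = 2.6571e7     # GPS orbital radius, ~26,571 km

# Orbital speed for a circular orbit
v = math.sqrt(G * M_earth / r_sat)

# Special relativity: the moving satellite clock runs slow by ~v^2 / (2 c^2)
sr_rate = -v**2 / (2 * c**2)

# General relativity: the clock higher in the gravity well runs fast
gr_rate = (G * M_earth / c**2) * (1 / R_earth - 1 / r_sat)

seconds_per_day = 86_400
net_us_per_day = (sr_rate + gr_rate) * seconds_per_day * 1e6
range_error_km_per_day = (sr_rate + gr_rate) * seconds_per_day * c / 1000

print(f"net clock drift: {net_us_per_day:+.1f} microseconds/day")
print(f"ranging error if uncorrected: ~{range_error_km_per_day:.1f} km/day")
```

The two effects partially cancel, leaving a net drift of roughly 38 microseconds per day; because position is computed from signal travel time, that translates to a positioning error on the order of 10 km per day, which is why GPS satellite clocks are deliberately offset to compensate.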

The demonstration at 22 Bishopsgate was part of the Lord Mayor of London Alderman Professor Michael Mainelli’s mayoral theme, ‘Connect to Prosper’.

The demonstration was the first in a series of showpiece exercises, which will run for the duration of the Lord Mayor’s tenure. The Experiment Series seeks to showcase innovation and invention in the City of London and promote and celebrate the many ‘knowledge miles’ within the Square Mile.

OLED panels have been around for quite some time, but now we are starting to see them come to gaming monitors, raising concerns about burn-in issues.

OLED pixel technology has been used in smartphones and TVs for many years now, and with each iteration the quality of the panels improves, particularly in reducing known problems. Now the gaming industry is getting gorgeous QD-OLED panels of its own, and the brands behind these new gaming monitors are rolling out features such as MSI’s OLED Care technology to reduce the chances of debilitating problems such as burn-in.

Imagine performing a sweep around an object with your smartphone and getting a realistic, fully editable 3D model that you can view from any angle. This is fast becoming reality, thanks to advances in AI.

Researchers at Simon Fraser University (SFU) in Canada have unveiled new AI technology for doing exactly this. Soon, rather than merely taking 2D photos, everyday consumers will be able to take 3D captures of real-life objects and edit their shapes and appearance as they wish, just as easily as they would with regular 2D photos today.

In a new paper appearing on the arXiv preprint server and presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in New Orleans, Louisiana, researchers demonstrated a new technique called Proximity Attention Point Rendering (PAPR) that can turn a set of 2D photos of an object into a cloud of 3D points that represents the object’s shape and appearance.
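To give a feel for the "proximity attention" idea, here is a hypothetical toy sketch (not the authors' implementation, and all names and parameters here are illustrative assumptions): each 3D point's contribution to a query location is weighted by a softmax over its negative squared distance, so nearby points dominate the blended appearance feature.

```python
# Toy illustration of proximity-weighted attention over a point cloud.
# This is a simplified sketch of the general idea, not the PAPR method itself.
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(100, 3))    # toy 3D point positions
features = rng.normal(size=(100, 4))  # per-point appearance features

def proximity_attention(query, points, features, temperature=0.1):
    """Blend point features with softmax weights that fall off with distance."""
    d2 = np.sum((points - query) ** 2, axis=1)  # squared distance to each point
    logits = -d2 / temperature
    weights = np.exp(logits - logits.max())     # numerically stable softmax
    weights /= weights.sum()
    return weights @ features                   # attended feature at the query

feat = proximity_attention(np.zeros(3), points, features)
print(feat.shape)  # one blended feature vector of dimension 4
```

Because the weights are a smooth function of point positions, gradients flow back to the points themselves, which is what makes point-based representations like this trainable from 2D photos.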

Apple quietly submitted a research paper last week related to its work on a multimodal large language model (MLLM) called MM1. Apple doesn’t explain the meaning behind the name, but it may stand for MultiModal 1.

Being multimodal, MM1 is capable of working with both text and images. Overall, its capabilities and design are similar to the likes of Google’s Gemini or Meta’s open-source LLM Llama 2.

An earlier report from Bloomberg said Apple was interested in incorporating Google’s Gemini AI engine into the iPhone. The two companies are reportedly still in talks to let Apple license Gemini to power some of the generative AI features coming to iOS 18.

Apple is looking to team up with Google in a mega-deal to leverage the Gemini AI model for features on the iPhone, Bloomberg reported. This would put Google in a commanding position, as the company already has a deal with Apple to be the preferred search engine for the Safari browser on iPhones.

The publication cited people familiar with the matter as saying that Apple is looking to license Google’s AI tech to introduce AI-powered features in iOS updates later this year. Apple has also held discussions with OpenAI about potentially using its GPT models, Bloomberg said.