
Aiming to be first in the world to develop the most advanced forms of artificial intelligence while also maintaining control over more than a billion people, elite Chinese scientists and their government have turned to something new, and very old, for inspiration: the human brain.

Equipped with surveillance and visual processing capabilities modelled on human vision, the new “brain” will be more effective and less energy-hungry, and will “improve governance,” its developers say. “We call it bionic retina computing,” Gao Wen, a leading artificial intelligence researcher, wrote in the paper “City Brain: Challenges and Solution.”

The API-AI nexus isn’t just for tech enthusiasts; its influence has widespread real-world implications. Consider the healthcare sector, where APIs can allow diagnostic AI algorithms to access patient medical records while adhering to privacy regulations. In the financial sector, advanced APIs can connect risk-assessment AIs to real-time market data. In education, APIs can provide the data backbone for AI algorithms designed to create personalized, adaptive learning paths.
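
To make the healthcare example concrete, here is a minimal Python sketch of the pattern: the AI client pulls a patient record through an API and lets the API, not the model, enforce privacy policy. Every URL, header and field name below is hypothetical, invented purely for illustration; a real system would build on a standard such as FHIR and its own consent model.

```python
# A minimal sketch of the API-AI pattern described above: a diagnostic
# AI service fetching a patient record through a privacy-aware API.
# All endpoints, scopes and field names are hypothetical.
import requests  # third-party: pip install requests

API_BASE = "https://ehr.example.com/api/v1"  # hypothetical EHR API


def fetch_record_for_ai(patient_id: str, token: str) -> dict:
    """Fetch a patient record, letting the API enforce access policy."""
    resp = requests.get(
        f"{API_BASE}/patients/{patient_id}/record",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    # The API, not the AI client, is the privacy gatekeeper: a 403 here
    # would mean the patient has not consented to AI-assisted review.
    resp.raise_for_status()
    record = resp.json()
    # Strip direct identifiers before the record ever reaches the model.
    for field in ("name", "address", "ssn"):
        record.pop(field, None)
    return record
```

Keeping the policy check on the API side means the same guardrails apply to every AI consumer of the data, rather than trusting each model integration to behave.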

However, this fusion of AI and APIs also raises critical questions about data privacy, ethical use and governance. As we continue to knit together more aspects of our digital world, these concerns will need to be addressed to foster a harmonious and responsible AI-API ecosystem.

We stand at the crossroads of a monumental technological paradigm shift. As AI continues to advance, APIs are evolving in parallel to unlock and amplify its potential. If you’re in the realm of digital products, the message is clear: The future is not just automated; it’s API-fied. Whether you’re a developer, a business leader or an end user, this new age promises unprecedented levels of interaction, personalization and efficiency. But it’s on us to navigate it responsibly.

In a groundbreaking development, Google’s forthcoming generative AI model, Gemini, has been reported to outshine even the most advanced GPT-4 models on the market. The revelation comes courtesy of SemiAnalysis, a semiconductor research company, which anticipates that by the close of 2024, Gemini could exhibit a staggering 20-fold increase in potency compared to ChatGPT. Gemini…

This may be a great idea; several countries are trying it, and it’s worth watching.


The race is on to build the world’s first floating city.


Enterprises have quickly recognized the power of generative AI to uncover new ideas and increase both developer and non-developer productivity. But pushing sensitive and proprietary data into publicly hosted large language models (LLMs) creates significant risks in security, privacy and governance. Businesses need to address these risks before they can start to see any benefit from these powerful new technologies.

As IDC notes, enterprises have legitimate concerns that LLMs may “learn” from their prompts and disclose proprietary information to other businesses that enter similar prompts. Businesses also worry that any sensitive data they share could be stored online and exposed to hackers or accidentally made public.

Don’t put anything into an AI tool that you wouldn’t want to show up in someone else’s query or fall into hackers’ hands. While it’s tempting to feed every bit of information you can think of into an innovation project, you have to be careful. Oversharing proprietary information with a generative AI platform is a growing concern for companies: you risk inconsistent messaging and branding, and you may expose information that shouldn’t be available to the public. We’re also seeing more cybercriminals hacking into generative AI platforms.
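
As a minimal sketch of that advice, the Python below scrubs a prompt of obvious identifiers and secrets before it leaves your network for a hosted model. The regex patterns are illustrative assumptions, not a complete safeguard; production deployments typically route prompts through a dedicated DLP or PII-detection service instead.

```python
# Illustrative prompt-scrubbing pass: redact obvious secrets and
# identifiers before sending text to a publicly hosted LLM.
# These few patterns are a sketch, not comprehensive PII detection.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),          # US SSNs
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),        # card numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),  # email addresses
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*[^\s,]+"), "[REDACTED-KEY]"),  # API keys
]


def scrub_prompt(prompt: str) -> str:
    """Replace obviously sensitive substrings before calling a hosted LLM."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt


print(scrub_prompt("Contact jane@corp.com, api_key=sk-123, SSN 123-45-6789"))
# -> Contact [REDACTED-EMAIL], [REDACTED-KEY], SSN [REDACTED-SSN]
```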

Generative AI’s knowledge isn’t up to date, so your query results shouldn’t be taken at face value. The model probably won’t know about recent competitive pivots, legislation or compliance updates. Use your expertise to vet AI-generated insights and make sure what you’re getting is accurate. And remember, AI bias is prevalent, so it’s just as essential to cross-check the output for that, too. Again, this is where having smart, meticulous people on board helps: they know your industry and organization better than the AI does and can treat its answers as a helpful starting point for something bigger.

The promise of AI in innovation is huge, unlocking unprecedented efficiency and head-turning output. We’re only seeing the tip of the iceberg of what this technology can do, so lean into it. But do so with governance; no one wants snake tail for dinner.

The Biden administration announced on Friday a voluntary agreement with seven leading AI companies, including Amazon, Google, and Microsoft. The move, ostensibly aimed at managing the risks posed by AI and protecting Americans’ rights and safety, has provoked a range of questions, the foremost being: What does the new voluntary AI agreement mean?

At first glance, the voluntary nature of these commitments looks promising. Regulation in the technology sector is always contentious, with companies wary of stifling growth and governments eager to avoid making mistakes. By sidestepping the direct imposition of command-and-control regulation, the administration can avoid the pitfalls of imposing…


That said, it’s not an entirely hollow gesture. It does emphasize important principles of safety, security, and trust in AI, and it reinforces the notion that companies should take responsibility for the potential societal impact of their technologies. Moreover, the administration’s focus on a cooperative approach, involving a broad range of stakeholders, hints at a potentially promising direction for future AI governance. However, we should also not forget the risk of government growing too cozy with industry.

Still, let’s not mistake this announcement for a seismic shift in AI regulation. We should consider this a not-very-significant step on the path to responsible AI. At the end of the day, what the government and these companies have done is put out a press release.


The regulatory crackdown that has shaken up China’s fintech industry since late 2020 appears to be coming to a close with the imposition of hefty fines on the country’s two digital payments giants.

Tencent, along with its payments subsidiary Tenpay, has been fined approximately 2.99 billion yuan ($410 million) by the People’s Bank of China for “its past regulatory breaches in relation to the provision of payment services in the mainland of China,” the company said in a filing on Friday.

On the same day, the central bank announced it will slap a 7.123 billion yuan (roughly $1 billion) fine on Ant Group, the fintech affiliate of Alibaba, for a range of illegal activities, including those concerning corporate governance, consumer protection, banking and insurance, payments and settlement, anti-money laundering practices and fund sales.