
According to Klaus Schwab, the founder and executive chair of the World Economic Forum (WEF), the Fourth Industrial Revolution (4-IR) follows the first, second, and third Industrial Revolutions: the mechanical, electrical, and digital, respectively. The 4-IR builds on the digital revolution, but Schwab sees the 4-IR as an exponential takeoff and convergence of existing and emerging fields, including Big Data; artificial intelligence; machine learning; quantum computing; and genetics, nanotechnology, and robotics. The consequence is the merging of the physical, digital, and biological worlds. The blurring of these categories ultimately challenges the very ontologies by which we understand ourselves and the world, including “what it means to be human.”

The specific applications that make up the 4-IR are too numerous and sundry to treat in full, but they include a ubiquitous internet, the internet of things, the internet of bodies, autonomous vehicles, smart cities, 3D printing, nanotechnology, biotechnology, materials science, energy storage, and more.

While Schwab and the WEF promote a particular vision for the 4-IR, the developments he announces are not his brainchildren, and there is nothing original about his formulations. Transhumanists and Singularitarians (or prophets of the technological singularity), such as Ray Kurzweil and many others, forecast these and even more revolutionary developments long before Schwab heralded them. The significance of Schwab and the WEF’s take on the new technological revolution is the attempt to harness it to a particular end, presumably “a fairer, greener future.”

Trust in AI: if you were a clinician or a physician, would you trust this AI?

Clearly, sepsis treatment deserves to be focused on, which is what Epic did. But in doing so, they raised several thorny questions. Should the model be recalibrated for each discrete implementation? Are its workings transparent? Should such algorithms publish a confidence score along with each prediction? Are humans sufficiently in the loop to ensure that the algorithm’s outputs are being interpreted and implemented correctly?
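
One of those questions, whether to publish a confidence score with every prediction, is easy to illustrate in isolation. The sketch below is a minimal, hypothetical example (synthetic data, Python with scikit-learn; it has nothing to do with Epic’s actual model): a classifier is wrapped in a probability calibrator so each alert carries a calibrated risk estimate rather than a bare yes/no.

```python
# Minimal sketch: emit a calibrated probability with every prediction instead
# of a bare label (synthetic data; illustrative only, not the Epic sepsis model).
from sklearn.datasets import make_classification
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Calibrate the raw classifier so its scores behave like probabilities.
clf = CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=5)
clf.fit(X_tr, y_tr)

for label, prob in zip(clf.predict(X_te[:5]), clf.predict_proba(X_te[:5])[:, 1]):
    print(f"prediction={label}  risk={prob:.2f}")  # confidence published with each call
```

Whether such a score remains well calibrated for a particular hospital’s patient mix is exactly the recalibration question raised above.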


Earlier this year, I wrote about fatal flaws in algorithms that were developed to mitigate the COVID-19 pandemic. Researchers found two general types of flaws. The first is that model makers used small data sets that didn’t represent the universe of patients the models were intended to serve, leading to sample-selection bias. The second is that modelers failed to disclose data sources, data-modeling techniques, and the potential for bias in either the input data or the algorithms used to train their models, leading to design-related bias. As a result of these fatal flaws, such algorithms were inarguably less effective than their developers had promised.
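
The first flaw, sample-selection bias, can be reproduced in miniature. The following sketch uses entirely synthetic data (Python with scikit-learn; all names are hypothetical): a model is trained and validated only on patients from one hospital whose risk drivers differ from the wider population, so its held-out score looks respectable while performance on the full population drops sharply.

```python
# Minimal sketch of sample-selection bias with synthetic data: validation on a
# narrow sample overstates how well the model serves the full population.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000

group = rng.integers(0, 2, n)        # 0 = hospital A, 1 = hospital B (hypothetical)
x1 = rng.normal(0, 1, n)             # e.g. a lab value
x2 = rng.normal(0, 1, n)             # e.g. a vital sign
# In hospital A the outcome tracks x1; in hospital B it tracks x2.
logit = np.where(group == 0, 2.0 * x1, 2.0 * x2)
y = rng.random(n) < 1 / (1 + np.exp(-logit))
X = np.column_stack([x1, x2])

# The modelers only had data from hospital A (the biased sample).
a = group == 0
X_tr, X_te, y_tr, y_te = train_test_split(X[a], y[a], random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("AUC on held-out hospital-A data:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
print("AUC on the full population:     ", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```

The held-out score on the biased sample says little about the population the model was supposed to serve, which is precisely the failure mode the researchers described.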

Now comes a flurry of articles on an algorithm developed by Epic to provide an early warning tool for sepsis. According to the CDC, “sepsis is the body’s extreme response to an infection. It is a life-threatening medical emergency and happens when an infection you already have triggers a chain reaction throughout your body. Without timely treatment, sepsis can rapidly lead to tissue damage, organ failure, and death. Nearly 270,000 Americans die as a result of sepsis.”

A need exists to accurately estimate overdose risk and improve understanding of how to deliver treatments and interventions in people with opioid use…


The Microsoft 365 Defender security research team discovered a new vulnerability in macOS that allows an attacker to bypass System Integrity Protection (SIP), a critical security feature in macOS that uses kernel permissions to limit the ability to write to critical system files. Microsoft explains that it also found a similar technique […].

This new category of algorithms has proved its power to mimic human skills simply by learning from examples. Deep learning is a technology representing the next era of machine learning: the algorithms are created by programmers, but they take on the responsibility of learning from data, and decisions are made on the basis of that data.
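
As a concrete, minimal illustration of “learning through examples” (a toy sketch, unrelated to any specific system discussed here), the snippet below trains a tiny neural network to compute XOR from four labelled examples rather than from hand-written rules.

```python
# Minimal sketch: a tiny neural network learns XOR purely from labelled
# examples, with no hand-coded rules (scikit-learn, toy data).
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR: no single linear rule separates these

model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                      max_iter=5000, random_state=0)
model.fit(X, y)             # the "learning through examples" step

print(model.predict(X))     # typically recovers [0 1 1 0]
```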

Some AI experts say there will be a shift in AI trends. For instance, the late 1990s and early 2000s saw the rise of machine learning, neural networks gained popularity in the early 2010s, and growth in reinforcement learning has come to light more recently.

Well, these are just a couple of caveats we’ve experienced over the past few years.

Artificial Intelligence is rapidly improving and has recently gotten to a point where it can outperform humans in several highly competitive job markets, including the media. OpenAI and Intel are working on some of the most advanced AI algorithms, which are actually starting to understand the world in a way similar to how we experience it. These models include OpenAI CLIP, Codex, GPT-4, and others, each of which is good at certain things. Now they’re trying to combine them to improve their generality and maybe create a real, working Artificial General Intelligence for our future. Whether AI supremacy will happen before the singularity is unclear, but one thing is for sure: AI and machine learning will take over many jobs in the very near future.

If you enjoyed this video, please consider rating it and subscribing to our channel for more frequent uploads. Thank you!

TIMESTAMPS:
00:00 The Rise of AI Supremacy.
01:15 What Text-Generation AI is doing.
03:28 OpenAI is not open at all?
06:12 The Image AI: CLIP
08:52 Is AI taking over every job?
10:32 Last Words.

#ai #agi #intel

Experts in the AI and Big Data sphere consider October 2021 to be a dark month. Their pessimism isn’t fueled by rapidly shortening days or chilly weather in much of the country—but rather by the grim news from Facebook on the effectiveness of AI in content moderation.

This is unexpected. The social media behemoth has long touted tech tools such as machine learning and Big Data as answers to its moderation woes. As CEO Mark Zuckerberg told CBS News, “The long-term promise of AI is that in addition to identifying risks more quickly and accurately than would have already happened, it may also identify risks that nobody would have flagged at all—including terrorists planning attacks using private channels, people bullying someone too afraid to report it themselves, and other issues both local and global.”



Scientists from Heidelberg and Bern have succeeded in training spiking neural networks to solve complex tasks with extreme energy efficiency. The advance was enabled by the BrainScaleS-2 neuromorphic platform, which can be accessed online as part of the EBRAINS research infrastructure.

Developing a machine that processes information as efficiently as the human brain has been a long-standing research goal towards true artificial intelligence. An interdisciplinary research team at Heidelberg University and the University of Bern led by Dr Mihai Petrovici is tackling this problem with the help of biologically-inspired artificial neural networks.

Spiking neural networks, which mimic the structure and function of a natural nervous system, represent promising candidates because they are powerful, fast, and energy-efficient. One key challenge is how to train such complex systems. The German-Swiss research team has now developed and successfully implemented an algorithm that achieves such training.
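
To make the idea of a spiking neuron concrete, here is a minimal leaky integrate-and-fire simulation in Python. This is a standard textbook model offered purely as an illustration; it is not the neuron model, parameters, or training algorithm used on BrainScaleS-2.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit of
# many spiking neural networks (illustrative textbook parameters only).
import numpy as np

dt, T = 1e-4, 0.1                 # time step and duration (seconds)
tau_m, v_rest = 20e-3, -65e-3     # membrane time constant, resting potential (V)
v_thresh, v_reset = -50e-3, -70e-3
R = 1e7                           # membrane resistance (ohms)

v = v_rest
spikes = []
for step in range(int(T / dt)):
    i_in = 2e-9 if step * dt > 0.02 else 0.0   # step input current (A)
    # Leaky integration: the potential drifts toward rest plus the driven offset.
    v += dt / tau_m * (v_rest - v + R * i_in)
    if v >= v_thresh:             # threshold crossing emits a discrete spike
        spikes.append(step * dt)
        v = v_reset               # and the potential is reset

print(f"{len(spikes)} spikes, first at t = {spikes[0]:.3f} s" if spikes else "no spikes")
```

Information is carried by the timing of these discrete spikes rather than by continuous activations, which is what makes such networks fast and energy-efficient, and also what makes them hard to train.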

A commonly available oral diuretic pill approved by the U.S. Food and Drug Administration may be a potential candidate for an Alzheimer’s disease treatment for those who are at genetic risk, according to findings published in Nature Aging. The research included analysis showing that those who took bumetanide — a commonly used and potent diuretic — had a significantly lower prevalence of Alzheimer’s disease compared to those not taking the drug. The study, funded by the National Institute on Aging (NIA), part of the National Institutes of Health, advances a precision medicine approach for individuals at greater risk of the disease because of their genetic makeup.

The research team analyzed information in databases of brain tissue samples and FDA-approved drugs, performed mouse and human cell experiments, and explored human population studies to identify bumetanide as a leading drug candidate that may potentially be repurposed to treat Alzheimer’s.

“Though further tests and clinical trials are needed, this research underscores the value of big data-driven tactics combined with more traditional scientific approaches to identify existing FDA-approved drugs as candidates for drug repurposing to treat Alzheimer’s disease,” said NIA Director Richard J. Hodes, M.D.