БЛОГ

Archive for the ‘information science’ category: Page 203

Apr 8, 2019

QC — Cracking RSA with Shor’s Algorithm

Posted by in categories: cybercrime/malcode, encryption, information science

With new advances in technology, it all comes down to simple factoring. Classical systems are outmatched: some factoring problems would take 80 billion years to solve, but newer hardware such as the D-Wave 2 could, it is claimed, bring the same problems down to about 2 seconds. Shor's algorithm also shows that, with hardware powerful enough and code robust enough, such encryption could be broken. Basically, with new infrastructure we could…


RSA is the standard cryptographic algorithm on the Internet. The method is publicly known but extremely hard to crack. It uses two keys for encryption. The public key is openly published, and the client uses it to encrypt a random session key. Anyone who intercepts the encrypted key must use the second key, the private key, to decrypt it; otherwise, it is just garbage. Once the session key is decrypted, the server uses it to encrypt and decrypt further messages with a faster symmetric algorithm. So, as long as we keep the private key safe, the communication will be secure.
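The key-exchange step described above can be sketched with textbook RSA. The primes below are tiny toy values chosen for illustration (real keys use 2048-bit moduli); this is a minimal sketch of the math, not a secure implementation.

```python
# Toy RSA key exchange with textbook-sized numbers.
# Real deployments use 2048-bit moduli and padding schemes; these tiny
# primes only illustrate the mechanism.
p, q = 61, 53                 # two secret primes
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

session_key = 65                       # a "random" session key, as an integer
ciphertext = pow(session_key, e, n)    # client encrypts with the public key
recovered = pow(ciphertext, d, n)      # server decrypts with the private key
print(ciphertext, recovered)
```

Without `d`, the intercepted `ciphertext` is just a number; recovering the session key requires factoring `n` back into `p` and `q`, which is exactly what Shor's algorithm threatens.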

RSA encryption is based on a simple idea: prime factorization. Multiplying two prime numbers is easy, but factoring the result is hard. For example, what are the factors of 507,906,452,803? Answer: 566,557 × 896,479.
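The asymmetry is easy to see in code: the multiplication is one operation, while even this 12-digit example takes hundreds of thousands of trial divisions to undo, and the cost grows exponentially with the number of digits. A quantum computer running Shor's algorithm would factor in polynomial time instead. A minimal sketch:

```python
# Brute-force factoring by trial division -- fine for a 12-digit number,
# hopeless for the 600+-digit moduli of real RSA keys.
def factor(n):
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2  # the target is odd, so even candidates can be skipped
    return None

a, b = factor(507_906_452_803)
print(a, b)        # recovers the article's two factors
print(a * b)       # multiplying them back is instant
```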

Continue reading “QC — Cracking RSA with Shor’s Algorithm” »

Apr 5, 2019

Using AI to Make Better AI

Posted by in categories: information science, robotics/AI, space travel

Next month, however, a team of MIT researchers will be presenting a so-called "proxyless" neural architecture search algorithm that can speed up the AI-optimized AI design process by 240 times or more. That would put faster and more accurate AI within practical reach for a broad class of image recognition algorithms and other related applications.

“There are all kinds of tradeoffs between model size, inference latency, accuracy, and model capacity,” says Song Han, assistant professor of electrical engineering and computer science at MIT. Han adds that:

“[These] all add up to a giant design space. Previously people had designed neural networks based on heuristics. Neural architecture search tried to free this labor intensive, human heuristic-based exploration [by turning it] into a learning-based, AI-based design space exploration. Just like AI can [learn to] play a Go game, AI can [learn how to] design a neural network.”

Continue reading “Using AI to Make Better AI” »

Apr 5, 2019

Agriculture: Machine learning can reveal optimal growing conditions to maximize taste, other features

Posted by in categories: biotech/medical, chemistry, food, genetics, information science, robotics/AI

What goes into making plants taste good? For scientists in MIT’s Media Lab, it takes a combination of botany, machine-learning algorithms, and some good old-fashioned chemistry.

Using all of the above, researchers in the Media Lab's Open Agriculture Initiative report that they have created basil plants that are likely more delicious than any you have ever tasted. No genetic modification is involved: the researchers used computer algorithms to determine the optimal growing conditions to maximize the concentration of flavorful molecules known as volatile compounds.

But that is just the beginning for the new field of “cyber agriculture,” says Caleb Harper, a principal research scientist in MIT’s Media Lab and director of the OpenAg group. His group is now working on enhancing the human disease-fighting properties of herbs, and they also hope to help growers adapt to changing climates by studying how crops grow under different conditions.

Continue reading “Agriculture: Machine learning can reveal optimal growing conditions to maximize taste, other features” »

Apr 5, 2019

Artificial intelligence can now emulate human behaviors – soon it will be dangerously good

Posted by in categories: information science, media & arts, robotics/AI

When artificial intelligence systems start getting creative, they can create great things – and scary ones. Take, for instance, an AI program that let web users compose music along with a virtual Johann Sebastian Bach by entering notes into a program that generates Bach-like harmonies to match them.

Run by Google, the app drew great praise for being groundbreaking and fun to play with. It also attracted criticism, and raised concerns about AI’s dangers.

Continue reading “Artificial intelligence can now emulate human behaviors – soon it will be dangerously good” »

Apr 3, 2019

A Mathematician Just Solved a Deceptively Simple Puzzle That Has Boggled Minds for 64 Years

Posted by in category: information science

A mathematician in England just solved a decades-old Diophantine equation for the number 33. Now, only 42 remains.
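The puzzle is expressing a whole number k as a sum of three integer cubes, x³ + y³ + z³ = k. For many values of k a small search succeeds immediately; 33 resisted because its smallest solution involves 16-digit integers, found only with a massive computation. A minimal brute-force sketch of the easy cases (the variable names and search bound here are illustrative, not from the actual search):

```python
# Search for integer solutions of x^3 + y^3 + z^3 = k in a small cube
# of candidates. Easy targets like k = 29 fall out instantly; k = 33
# has no solution anywhere near this range, which hints at why it
# stood unsolved for 64 years.
def three_cubes(k, bound=20):
    candidates = range(-bound, bound + 1)
    for x in candidates:
        for y in candidates:
            for z in candidates:
                if x**3 + y**3 + z**3 == k:
                    return x, y, z
    return None

print(three_cubes(29))   # finds a solution, e.g. 1 + 1 + 27 = 29
print(three_cubes(33))   # None -- no solution with entries in [-20, 20]
```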

Read more

Mar 30, 2019

An artificial neuron implemented on an actual quantum processor

Posted by in categories: information science, quantum physics, robotics/AI

Artificial neural networks are the heart of machine learning algorithms and artificial intelligence. Historically, the simplest implementation of an artificial neuron traces back to Rosenblatt's classical "perceptron", but its long-term practical applications may be hindered by the fast scaling of computational complexity, especially relevant for the training of multilayered perceptron networks. Here we introduce a quantum information-based algorithm implementing the quantum computer version of a binary-valued perceptron, which shows exponential advantage in storage resources over alternative realizations. We experimentally test a few-qubit version of this model on an actual small-scale quantum processor, which gives answers consistent with the expected results. We show that this quantum model of a perceptron can be trained in a hybrid quantum-classical scheme employing a modified version of the perceptron update rule, and used as an elementary nonlinear classifier of simple patterns, as a first step towards practical quantum neural networks efficiently implemented on near-term quantum processing hardware.
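For context, the classical Rosenblatt perceptron that the quantum model generalizes fits in a few lines; the quantum version encodes the input and weight vectors in the amplitudes of a few qubits, which is where the exponential storage advantage comes from. A minimal classical sketch on toy OR-gate data (not the paper's code or its modified update rule):

```python
# Minimal Rosenblatt perceptron trained on OR-gate data -- the classical
# binary-valued model whose quantum analogue the paper implements.
def train(samples, epochs=10):
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # classical perceptron update rule
            w[0] += err * x1
            w[1] += err * x2
            b += err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print(w, b)  # learned weights linearly separate the OR classes
```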

Read more

Mar 26, 2019

This ‘mind-reading’ algorithm can decode the pictures in your head

Posted by in categories: computing, information science, neuroscience

New computer program uses brain activity to draw images of airplanes, leopards, and stained-glass windows.

Read more

Mar 24, 2019

A New Must-Read Book on the AI Singularity from Barnes & Noble

Posted by in categories: cosmology, engineering, information science, nanotechnology, quantum physics, robotics/AI, singularity

Hot off the press…


Barnes & Noble Press releases a new non-fiction book, The Syntellect Hypothesis: Five Paradigms of the Mind's Evolution by Alex M. Vikoulov, in hardcover (press release: San Francisco, CA, USA, March 22, 2019, 11:00 AM PST).

Named "The Book of the Year" by futurists and academics alike, and a "#1 Hot New Release" in Amazon's charts for Physics of Time, Phenomenology, and Phenomenological Philosophy, the book has now been released by Barnes & Noble Press in hardcover, in addition to the ebook and paperback released earlier this year. In one volume, the author covers it all: from quantum physics to your experiential reality, from the Big Bang to the Omega Point, from the 'flow state' to psychedelics, from 'Lucy' to the looming AI Singularity, from natural algorithms to the operating system of your mind, from geo-engineering to nanotechnology, from anti-aging to immortality technologies, from oligopoly capitalism to Star-Trekonomics, from the Matrix to Universal Mind, from Homo sapiens to Holo syntellectus.

Continue reading “A New Must-Read Book on the AI Singularity from Barnes & Noble” »

Mar 23, 2019

Why a Humanist Ethics of Datafication Can’t Survive a Posthuman World

Posted by in categories: ethics, information science, surveillance

https://paper.li/e-1437691924#/


Geoffrey Rockwell and Bettina Berendt's (2017) article calls for ethical consideration around big data and digital archives, asking us to reconsider whether everything that can be digitized should be. In outlining how digital archives and algorithms structure potential relationships with those whose testimony has been digitized, Rockwell and Berendt highlight how data practices change the relationship between researcher and researched. They make a provocative and important argument: datafication and open access should, in certain cases, be resisted. They champion the careful curation of data rather than large-scale collection, pointing to the ways in which these data are used to construct knowledge about the research subject and fundamentally limit their agency by controlling the narratives told about them. Rockwell and Berendt, drawing on Aboriginal Knowledge (AK) frameworks, amongst others, argue that some knowledge is simply not meant to be openly shared: information is not an inherent good, and access to information must be earned instead. This approach was prompted, in part, by their own work scraping #gamergate Twitter feeds and the ways in which these data could be used to speak for others without their consent.

From our vantage point, Rockwell and Berendt's renewed call for an ethics of datafication is a timely one, as we are mired in media reports of social media surveillance and electoral tampering on one side. Thanks, Facebook. On the other side, academics fight for the right to collect and access big data in order to reveal how gender and racial discrimination are embedded in the algorithms that structure everything from online real estate listings, to loan interest rates, to job postings (American Civil Liberties Union 2018). As surveillance studies scholars, we deeply appreciate how Rockwell and Berendt take a novel approach: they turn to a discussion of Freedom of Information (FOI), Freedom of Expression (FOE), Free and Open Source software, and Access to Information. In doing so, they unpack assumptions commonly held by librarians, digital humanists, and academics in general, to show that accumulation and datafication are not an inherent good.

Read more

Mar 23, 2019

Blue Brain solves a century-old neuroscience problem

Posted by in categories: information science, mathematics, neuroscience

A team led by Lida Kanari now reports a new system for distinguishing cell types in the brain, an algorithmic classification method that the researchers say will benefit the entire field of neuroscience. Blue Brain founder Professor Henry Markram says, “For nearly 100 years, scientists have been trying to name cells. They have been describing them in the same way that Darwin described animals and trees. Now, the Blue Brain Project has developed a mathematical algorithm to objectively classify the shapes of the neurons in the brain. This will allow the development of a standardized taxonomy [classification of cells into distinct groups] of all cells in the brain, which will help researchers compare their data in a more reliable manner.”

The team developed an algorithm to distinguish the shapes of the most common type of neuron in the neocortex: the pyramidal cell. Pyramidal cells are distinctively tree-like cells that make up 80 percent of the neurons in the neocortex and, like antennas, collect information from other neurons in the brain. Basically, they are the redwoods of the brain forest. They are excitatory, sending waves of electrical activity through the network as people perceive, act, and feel.

The father of modern neuroscience, Ramón y Cajal, first drew pyramidal cells over 100 years ago, observing them under a microscope. Yet until now, scientists had not reached a consensus on the types of pyramidal neurons. Anatomists have been assigning names and debating the different types for the past century, while neuroscience has had no way to tell for sure which proposed types are real, because the neurons have been characterized only subjectively. Even for visibly distinguishable neurons, there has been no common ground for consistently defining morphological types.

Continue reading “Blue Brain solves a century-old neuroscience problem” »