BLOG

Archive for the ‘information science’ category: Page 166

Jun 21, 2020

The case for self-explainable AI

Posted by in categories: biotech/medical, information science, robotics/AI

For instance, suppose a neural network has labeled the image of a skin mole as cancerous. Is it because it found malignant patterns in the mole or is it because of irrelevant elements such as image lighting, camera type, or the presence of some other artifact in the image, such as pen markings or rulers?

Researchers have developed various interpretability techniques that help investigate decisions made by various machine learning algorithms. But these methods are not enough to address AI’s explainability problem and create trust in deep learning models, argues Daniel Elton, a scientist who researches the applications of artificial intelligence in medical imaging.

Elton discusses why we need to shift from techniques that interpret AI decisions to AI models that can explain their decisions by themselves as humans do. His paper, “Self-explaining AI as an alternative to interpretable AI,” recently posted to the arXiv preprint server, expands on this idea.

Jun 19, 2020

Scientists built a new quantum computer. It’s made of five atoms and “self-destroys” after each use

Posted by in categories: computing, information science, particle physics, quantum physics

Scientists have managed another breakthrough: they built a quantum computer that can execute Shor’s notoriously difficult factoring algorithm. It’s just five atoms big, but the researchers claim it will be easy to scale up.
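The quantum speedup in Shor’s algorithm lies entirely in finding the period of a^x mod N; the rest is classical number theory. Below is a minimal Python sketch of that classical reduction, with the period found by brute force (the step a quantum computer accelerates), applied to the textbook demonstration case N = 15. This is an illustration of the math, not the five-atom experiment itself.

```python
from math import gcd

def find_period(a, N):
    """Brute-force the period r of a^x mod N -- the step a quantum
    computer does exponentially faster via the quantum Fourier transform."""
    x, val = 1, a % N
    while val != 1:
        val = (val * a) % N
        x += 1
    return x

def shor_classical(N, a):
    """Classical reduction: factor N given a base a coprime to N."""
    assert gcd(a, N) == 1
    r = find_period(a, N)
    if r % 2 != 0:
        return None  # odd period: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root: retry with another a
    return sorted((gcd(y - 1, N), gcd(y + 1, N)))

print(shor_classical(15, 7))  # factoring 15 with base 7
```

With a = 7 the period is 4, and gcd(7² ± 1, 15) recovers the factors 3 and 5.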

Jun 18, 2020

OpenAI’s New Text Generator Writes Even More Like a Human

Posted by in categories: information science, robotics/AI

The data came from Common Crawl, a non-profit that scans the open web every month, downloads content from billions of HTML pages, and makes it available in a special format for large-scale data mining. In 2017 the average monthly “crawl” yielded over three billion web pages. Common Crawl has been doing this since 2011, and has petabytes of data in over 40 different languages. The OpenAI team applied some filtering techniques to improve the overall quality of the data, including adding curated datasets like Wikipedia.

GPT stands for Generative Pretrained Transformer. The “transformer” part refers to a neural network architecture introduced by Google in 2017. Rather than looking at words in sequential order and making decisions based on a word’s positioning within a sentence, text or speech generators with this design model the relationships between all the words in a sentence at once. Each word gets an “attention score,” which is used as its weight and fed into the larger network. Essentially, this is a complex way of saying the model is weighing how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on the other words in the sentence.
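As a rough illustration of the attention idea described above, here is a minimal self-attention step in NumPy: every token scores its relationship to every other token at once, and each output vector is a score-weighted mix of the whole sequence. This is a sketch, not GPT-3’s actual implementation; real transformers add learned query/key/value projections, multiple attention heads, and positional information, all omitted here.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word vectors.
    Each row of X is one token's embedding; every token attends to all
    others simultaneously rather than reading left to right."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)              # pairwise "attention scores"
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ X                          # each output mixes all tokens

X = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8-dim embeddings
out = self_attention(X)
print(out.shape)  # (4, 8): one mixed vector per token
```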

Through finding the relationships and patterns between words in a giant dataset, the algorithm ultimately ends up learning from its own inferences, in what’s called unsupervised machine learning. And it doesn’t end with words—GPT-3 can also figure out how concepts relate to each other, and discern context.

Jun 16, 2020

The Higgs Boson –“Gateway” to the Dark Universe?

Posted by in categories: cosmology, information science, particle physics

The cosmos contains a Higgs field—similar to an electric field—generated by Higgs bosons in the vacuum. Particles interact with the field to gain energy and, through Albert Einstein’s iconic equation, E=mc², mass. The Standard Model of particle physics, although successful at describing elementary particles and their interactions at low energies, does not include a viable candidate for the hotly debated dark-matter particle. The model’s only possible candidates, neutrinos, do not have the right properties to explain the observed dark matter.
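As a quick worked instance of the mass-energy relation the paragraph invokes, the snippet below converts the Higgs boson’s measured mass-energy of roughly 125 GeV into kilograms via m = E/c². The 125 GeV figure is the publicly reported LHC measurement; the rest is unit arithmetic.

```python
# Convert the Higgs boson's measured mass-energy (~125 GeV) to kilograms
# via E = m c^2, i.e. m = E / c^2.
GEV_TO_JOULES = 1.602176634e-10   # 1 GeV in joules (1e9 * elementary charge)
C = 299_792_458.0                 # speed of light, m/s

E_higgs = 125.0 * GEV_TO_JOULES   # ~125 GeV, the LHC-measured value
m_higgs = E_higgs / C**2
print(f"{m_higgs:.3e} kg")        # on the order of 2e-25 kg
```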

“One particularly interesting possibility is that these long-lived dark particles are coupled to the Higgs boson in some fashion—that the Higgs is actually a portal to the dark world. We know for sure there’s a dark world, and there’s more energy in it than there is in ours. It’s possible that the Higgs could actually decay into these long-lived particles,” said LianTao Wang, a University of Chicago physicist, in 2019. Wang was referring to the Higgs boson, the last holdout particle in physicists’ grand theory of how the universe works; its discovery at the LHC in 2012 filled the final gap in the Standard Model of fundamental particles and forces. Since then, the Standard Model has stood up to every test, yielding no hints of new physics.

The dark world makes up more than 95 percent of the universe, but scientists only know it exists from its effects—“like a poltergeist you can only see when it pushes something off a shelf.” We know there’s dark matter because, like the poltergeist, we can see gravity acting on it, keeping galaxies from flying apart.

Jun 15, 2020

New Algorithm Is a Lot Like the “Enhance!” Feature In “CSI”

Posted by in category: information science

https://youtube.com/watch?v=e1H6QSmzAtM

Pixel-be-gone!

Jun 15, 2020

Measuring the spin of a black hole

Posted by in categories: cosmology, information science, singularity

A black hole, at least in our current understanding, is characterized by having “no hair”: it is so simple that it can be completely described by just three parameters, its mass, its spin and its electric charge. Even though it may have formed out of a complex mix of matter and energy, all other details are lost when the black hole forms. Its powerful gravitational field creates a surrounding surface, a “horizon,” and nothing that crosses that horizon (not even light) can escape. Hence the black hole appears black, and any details about the infalling material are likewise lost, digested into the three knowable parameters.

Astronomers are able to measure the masses of black holes in a relatively straightforward way: by watching how matter (including other black holes) moves in their vicinity under the influence of the gravitational field. The charges of black holes are thought to be insignificant, since positive and negative infalling charges are typically comparable in number. The spins of black holes are more difficult to determine, and the two main methods both rely on interpreting the X-ray emission from the hot inner edge of the accretion disk around the black hole. One method models the shape of the X-ray continuum, and it relies on good estimates of the mass, distance, and viewing angle. The other models the X-ray spectrum, including observed atomic emission lines that are often seen in reflection from the hot gas, and it does not depend on knowing as many other parameters. The two methods have in general yielded comparable results.
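The “no hair” picture means the horizon geometry follows directly from the measured parameters. As a small illustrative script, the standard formula for the event-horizon radius of an uncharged, spinning (Kerr) black hole shows how the horizon shrinks as the dimensionless spin a* grows; the 9.4-solar-mass input is the value the text says was assumed in the 4U 1543-47 analyses, used here purely as an example.

```python
import math

G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0      # speed of light, m/s
M_SUN = 1.98892e30     # solar mass, kg

def kerr_horizon_radius(mass_kg, spin):
    """Event-horizon radius of an uncharged spinning (Kerr) black hole.
    `spin` is the dimensionless spin a* in [0, 1]; a* = 0 recovers the
    Schwarzschild radius 2GM/c^2, a* = 1 is the maximal (extremal) case."""
    rg = G * mass_kg / C**2                     # gravitational radius GM/c^2
    return rg * (1.0 + math.sqrt(1.0 - spin**2))

m = 9.4 * M_SUN   # mass assumed in the 4U 1543-47 analyses
for a_star in (0.0, 0.5, 0.9):
    print(f"a* = {a_star}: r+ = {kerr_horizon_radius(m, a_star) / 1e3:.1f} km")
```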

CfA astronomer James Steiner and his colleagues reanalyzed seven sets of spectra obtained by the Rossi X-ray Timing Explorer of an outburst from a stellar-mass black hole in our galaxy called 4U 1543-47. Previous attempts to estimate the spin of the object using the continuum method resulted in disagreements between papers that were considerably larger than the formal uncertainties (the papers assumed a mass of 9.4 solar-masses and a distance of 24.7 thousand light-years). Using careful refitting of the spectra and updated modeling algorithms, the scientists report a spin intermediate between the previous estimates, moderate in magnitude, and established at a 90% confidence level. Since only a few dozen well-confirmed black-hole spins have been measured to date, the new result is an important addition.

Jun 14, 2020

DeepCoder from Microsoft can leave programmers without work

Posted by in categories: information science, robotics/AI

Artificial intelligence (AI) is a broad field constituted of many disciplines, such as robotics and machine learning. The aim of AI is to create machines capable of performing tasks and cognitive functions that are otherwise only within the scope of human intelligence. To get there, machines must be able to acquire these capabilities automatically, rather than having each of them explicitly programmed end-to-end.

Another task for AI is writing programs. Such a technology was developed by Microsoft in conjunction with Cambridge University: a program able to create other programs by borrowing code. The invention is called DeepCoder. The software takes developers’ requirements into account and finds matching code fragments in a large database. You can see the work of the scientists here.
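DeepCoder’s underlying idea, searching for a program consistent with input-output examples (with a neural network prioritizing which operations to try), can be caricatured in a few lines. The toy below brute-forces compositions from a made-up four-function DSL until one matches all the examples; the function names are illustrative and are not DeepCoder’s actual DSL, which is far richer and neurally guided.

```python
from itertools import product

# A toy DSL of list-to-list functions, loosely in the spirit of a
# program-synthesis search (hypothetical names, not DeepCoder's DSL).
DSL = {
    "sort":    sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "double":  lambda xs: [2 * x for x in xs],
    "drop1":   lambda xs: xs[1:],
}

def synthesize(examples, max_len=3):
    """Return the shortest composition of DSL functions consistent
    with all (input, output) examples, found by brute-force search."""
    for length in range(1, max_len + 1):
        for names in product(DSL, repeat=length):
            def run(xs, names=names):
                for name in names:
                    xs = DSL[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return list(names)
    return None  # no program of length <= max_len fits

prog = synthesize([([3, 1, 2], [6, 4, 2]), ([5, 4], [10, 8])])
print(prog)  # a 3-step program: sort, then reverse, then double
```

A neural guide like DeepCoder’s reorders this same search so that likely functions are tried first, which is where the reported speedups come from.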

“The potential for the automation of writing software code is just incredible. This means a reduction of the huge amount of effort that is required to develop code. Such a system will be much more productive than any man. In addition, you can create systems that were previously impossible to build.”

Jun 14, 2020

AI makes blurry faces look 64 times sharper

Posted by in categories: information science, robotics/AI

A new algorithm takes pixelated images of faces and creates realistic-looking versions with up to 64 times the resolution.
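“64 times the resolution” refers to pixel count, i.e. an 8× enlargement along each side. The sketch below shows the naive baseline such models improve on, a detail-free nearest-neighbor blow-up; the actual algorithm, reported to use a generative model to synthesize plausible detail, is far more involved and is not reproduced here.

```python
import numpy as np

def upscale_nn(img, factor=8):
    """Nearest-neighbor upscaling: repeat each pixel `factor` times along
    both axes. An 8x linear factor gives 8 * 8 = 64x the pixel count,
    which is what "64 times the resolution" means here. Unlike a learned
    method, this adds no new detail -- it only enlarges the blocks."""
    return np.kron(img, np.ones((factor, factor), dtype=img.dtype))

low = np.arange(16, dtype=np.uint8).reshape(4, 4)   # a tiny 4x4 "image"
high = upscale_nn(low)
print(high.shape, high.size // low.size)  # (32, 32) 64
```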

Jun 13, 2020

MIT’s Tiny New Brain Chip Aims for AI in Your Pocket

Posted by in categories: information science, robotics/AI

The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.

That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.

The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.
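A memristor crossbar is interesting because it computes a neural layer’s matrix-vector product physically: Ohm’s law multiplies each input voltage by a crosspoint conductance, and Kirchhoff’s current law sums the results down each column. Here is an idealized NumPy sketch of that computation; real devices are noisy, nonlinear, and drift over time, none of which is modeled.

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Idealized memristor crossbar: applying voltages V to the rows of a
    grid whose crosspoint conductances are G yields column currents
    I_j = sum_i G[i, j] * V[i] -- Ohm's law per device, Kirchhoff's
    current law per column. The analog array thus performs the
    matrix-vector multiply at the heart of a neural-network layer in
    one step, without shuttling weights to a separate processor."""
    return conductances.T @ voltages

G = np.array([[1.0, 0.5],     # conductances in arbitrary units;
              [0.2, 0.8]])    # each entry plays the role of one synapse
V = np.array([0.3, 1.0])      # input voltages (the layer's activations)
print(crossbar_mvm(G, V))
```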

Jun 13, 2020

Israeli researchers explain how they are healing the world with precision

Posted by in categories: biotech/medical, computing, health, information science

Data governs our lives more than ever. But when it comes to disease and death, every data point is a person, someone who became sick and needed treatment.

Recent studies have revealed that people suffering from the same disease category may have different manifestations. As doctors and scientists better understand the reasons underlying this variability, they can develop novel preventive, diagnostic and therapeutic approaches and provide optimal, personalized care for every patient.

To accomplish this goal often requires broadscale collaborations between physicians, basic researchers, theoreticians, experimentalists, computational biologists, computer scientists and data scientists, engineers, statisticians, epidemiologists and others. They must work together to integrate scientific and medical knowledge, theory, analysis of medical big data and extensive experimental work.

Continue reading “Israeli researchers explain how they are healing the world with precision” »