Despite a flood of Sunday morning hype, it’s questionable whether computers crossed an artificial intelligence threshold last weekend.
 However, the news about a chatbot with the personality of a 13-year-old
 Ukrainian boy passing the Turing test did get us thinking: Is tricking 
every third human in a text exchange really the best way to measure 
computer intelligence?
 Computers are already smart, just in their own ways. They catalogue the
 breadth of human knowledge, find meaning in mushroom clouds of data, 
and fly spacecraft to other worlds. And they’re getting better. Below 
are four domains of computing where the machines are rising.
Information retrieval
 Given the right set of rules, computers are the ultimate librarians. Google’s search algorithm shakes down 50 billion web pages every
 time you need to prove your boyfriend wrong about his latest baseless 
assertion. It’s so good at its job that many people consider clicking to
 the second page of search results an act of desperation.
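For a feel of the underlying mechanics at toy scale, here is a minimal Python sketch of term-based ranking, the kernel of the idea behind search. The three "pages" and the simple TF-IDF weighting are illustrative assumptions; Google's production system layers hundreds of additional signals (links, freshness, location) on top.

```python
# Toy term-frequency ranking over a tiny three-page "web."
# Illustrates the basic idea of search ranking, not Google's algorithm.
import math
from collections import Counter

docs = {
    "page1": "babe ruth hit 714 career home runs",
    "page2": "home run records in baseball history",
    "page3": "how to bake sourdough bread at home",
}

def tf_idf_score(query, doc_text, corpus):
    """Score one document against a query: term frequency x rarity (IDF)."""
    counts = Counter(doc_text.split())
    score = 0.0
    for term in query.split():
        df = sum(1 for text in corpus.values() if term in text.split())
        if df == 0:
            continue                      # term appears nowhere; no signal
        score += counts[term] * math.log(len(corpus) / df)
    return score

query = "career home runs"
ranked = sorted(docs, key=lambda name: tf_idf_score(query, docs[name], docs),
                reverse=True)
print(ranked)  # ['page1', ...] -- the page matching the rare terms wins
```

Note that common words like "home" carry no weight here, since they appear everywhere; rare terms like "career" decide the ranking.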
 Where it’s headed:
Understanding human language is one of the most difficult things for a computer to do. Beyond basic subject/verb agreement, decades of bots
have mostly failed at figuring out the vagaries of the written word. 
Unlike us, computers struggle to understand how a word can change 
meaning depending on its neighbors, says Russ Altman, a biomedical 
informatics researcher at Stanford.
 Solving this problem is Altman’s obsession. Since 2000, he and his 
colleagues have been teaching a machine how to get meaning from some of 
the densest language on the planet: medical journalese. The Pharmacogenomics Knowledgebase (PharmGKB)
 has read 26 million scientific abstracts to create a searchable index 
of different effects that various drugs have on individual genes. The 
program understands things like clauses and how the meaning of a word 
can be modified by the words around it (which is important for parsing 
dense phrasing that might send a confusing message about whether a drug 
activates a gene), and also knows many synonyms and antonyms. The 
resulting database is hugely important to pharmaceutical companies, which
use it to save time and money on basic research when they are searching 
for new drug combinations.
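As a rough illustration of the kind of rule this takes, here is a minimal Python sketch of relation extraction, in which neighboring words like "does not" flip a verb's meaning and synonyms map to a single label. The patterns, vocabulary, and gene names are invented for the example; PharmGKB's real pipeline is far more sophisticated.

```python
# A minimal sketch of rule-based drug-gene relation extraction, in the
# spirit of (but far simpler than) PharmGKB's text mining. The patterns
# and the example sentence are illustrative assumptions.
import re

SYNONYMS = {"upregulates": "activates", "induces": "activates",
            "suppresses": "inhibits", "downregulates": "inhibits"}

PATTERN = re.compile(
    r"(?P<drug>\w+)\s+(?P<neg>does not\s+|fails to\s+)?"
    r"(?P<verb>activates?|inhibits?|upregulates?|induces?|suppresses?|downregulates?)\s+"
    r"(?P<gene>[A-Z0-9]+)"
)

def extract_relations(sentence):
    """Return (drug, relation, gene) triples, flipping the label on negation."""
    triples = []
    for match in PATTERN.finditer(sentence):
        verb = match.group("verb").rstrip("s") + "s"  # normalize verb form
        verb = SYNONYMS.get(verb, verb)               # collapse synonyms
        if match.group("neg"):                        # neighbors flip meaning
            verb = "NOT_" + verb
        triples.append((match.group("drug"), verb, match.group("gene")))
    return triples

print(extract_relations(
    "Warfarin inhibits VKORC1, but aspirin does not activate CYP2C9."))
# [('Warfarin', 'inhibits', 'VKORC1'), ('aspirin', 'NOT_activates', 'CYP2C9')]
```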
Robotics
Robots that work in controlled environments, like car manufacturing plants,
 are impressive enough. But getting them to do programmed tasks 
alongside humans, who have complex behaviors, is one of the most 
difficult challenges in computing.
The vanguard of intelligent robotics is droids that let humans do the tasks that require creative thought or fine manipulation, while the robots fill in the organization and heavy lifting where needed. For example, Amazon
already has armies of organizational droids that shuttle items for packing from a Manhattan-like grid of shelving towers to human packers.
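For a flavor of the routing problem those droids solve, here is a toy breadth-first search over a Manhattan-style grid of aisles and shelving. The grid and coordinates are made up for illustration; Amazon's real system coordinates entire fleets at once.

```python
# Toy route-finding for a single shelf-carrying drive unit:
# breadth-first search over open grid cells. Grid is an assumption.
from collections import deque

GRID = ["..#..",
        "..#..",
        ".....",
        "#.#..",
        "....."]          # '#' = shelving tower, '.' = open aisle

def shortest_path(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-way moves
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and GRID[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

print(shortest_path((0, 0), (4, 4)))  # aisle-by-aisle route to the packer
```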
 Where it’s headed:
 Researchers are getting better at teaching robots how to read the 
syntax of human movement, so they can work more closely on more 
complicated projects. David Bourne, a roboticist at Carnegie Mellon
University’s Robotics Institute, says the key is to play to both the 
human and robot strengths. “A person is actually more dexterous, but a
robot can move to an exact position really well.” Bourne made a robotic 
arm that assists automobile welders. In a trial, the human-robot team 
assembled a Hummer frame. The robot had a video projector that showed 
the human exactly where to put different parts and then made perfect, 
5-second welds. For more difficult welds, it deferred to its partner. 
“Together they were able to do the project 10 times faster than a team 
of three human professionals,” says Bourne.
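A hedged sketch of that division of labor might look like the following Python. The weld attributes and thresholds are invented for illustration; they are not taken from Bourne's system.

```python
# A sketch of "play to both strengths" task allocation: the robot takes
# seams it can reach and execute precisely, and defers anything awkward
# to its human partner. Fields and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Weld:
    name: str
    reachable: bool       # within the arm's work envelope
    curvature: float      # 0 = straight seam, 1 = tight compound curve

def assign(welds, max_curvature=0.3):
    plan = {}
    for w in welds:
        # Straight, reachable seams suit the robot's repeatable precision;
        # tight curves need human dexterity.
        robot_ok = w.reachable and w.curvature <= max_curvature
        plan[w.name] = "robot" if robot_ok else "human"
    return plan

frame = [Weld("rail_left", True, 0.1), Weld("wheel_arch", True, 0.8),
         Weld("crossmember", False, 0.2)]
print(assign(frame))
# {'rail_left': 'robot', 'wheel_arch': 'human', 'crossmember': 'human'}
```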
Machine learning
 Machine learning is a sub-discipline of AI that uses trial-and-error to
 figure out complex problems. For example, a cloud service might spend a
 weekend feeding House of Cards to half a million people,
 or run through millions of iterations to help a lending bank evaluate 
credit risk scenarios. Getting data to flow to the right places requires
 constant adaptation to respond to the network’s shifting bandwidth 
bottlenecks. Cloud providers like Amazon use algorithms that learn from the varying demands, so the bitrate stays high.
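In miniature, that kind of adaptation can look like the following Python sketch, which smooths throughput measurements and picks the highest bitrate the estimate safely supports. The encodings, smoothing factor, and safety margin are assumptions, not Amazon's actual controller.

```python
# A minimal sketch of bandwidth-adaptive streaming: track a running
# throughput estimate, then pick the highest safe bitrate. All numbers
# here are illustrative assumptions.
BITRATES_KBPS = [400, 1200, 3000, 6000]    # available encodings

class BitrateController:
    def __init__(self, alpha=0.3):
        self.alpha = alpha                 # how fast we adapt to change
        self.estimate = BITRATES_KBPS[0]   # conservative starting guess

    def observe(self, measured_kbps):
        # Exponentially weighted moving average smooths out spikes.
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * measured_kbps

    def choose(self):
        safe = 0.8 * self.estimate         # leave headroom for jitter
        candidates = [b for b in BITRATES_KBPS if b <= safe]
        return candidates[-1] if candidates else BITRATES_KBPS[0]

ctrl = BitrateController()
for sample in [2500, 5000, 5200, 900]:     # simulated throughput samples
    ctrl.observe(sample)
    print(sample, "->", ctrl.choose())
```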
 Where it’s headed:
 Machine learning isn’t just keeping the cloud clutter-free; it’s going 
to turn smartphones into geniuses. Current machine learning programs
can require hundreds or thousands of iterations, but researchers are 
building animal-inspired algorithms that can learn good from bad after 
only a few trials.
 Tony Lewis is the lead developer at Qualcomm’s Zeroth Project, an 
R&D lab that’s building next-gen chipsets and programs that run on 
them. “We’ve been able to demonstrate in a very simple application how 
you can use reinforcement learning to teach a robot to do the right 
thing,” he says.
 Eventually he sees this technology making its way into phones and 
tablets. Instead of having to access the settings to change your ringtone or turn off your alarm on the weekend, you could just give it
positive or negative reinforcement, like giving a dog a treat, and it would learn.
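Stripped to its core, that kind of reinforcement learning can be sketched in a few lines of Python. The contexts, actions, and update rule below are illustrative assumptions, not Qualcomm's Zeroth code.

```python
# A toy version of the reward-driven learning Lewis describes: the phone
# tries an action, the user gives a thumbs-up or thumbs-down, and after
# a few trials the right habit sticks. All names here are assumptions.
import random

ACTIONS = ["ring", "silent"]

class PreferenceLearner:
    def __init__(self, lr=0.5, epsilon=0.1):
        self.values = {}          # (context, action) -> learned value
        self.lr, self.epsilon = lr, epsilon

    def act(self, context):
        if random.random() < self.epsilon:          # occasionally explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.values.get((context, a), 0.0))

    def reinforce(self, context, action, reward):
        # Nudge the stored value toward the reward (+1 treat, -1 scold).
        old = self.values.get((context, action), 0.0)
        self.values[(context, action)] = old + self.lr * (reward - old)

phone = PreferenceLearner()
for _ in range(5):                                  # a few weekend mornings
    action = phone.act("saturday_7am")
    reward = 1.0 if action == "silent" else -1.0    # user hates early alarms
    phone.reinforce("saturday_7am", action, reward)
print(phone.act("saturday_7am"))                    # almost surely 'silent'
```

Note how few trials it takes: with only two actions and direct feedback, the preference settles after a handful of mornings, which is the point of Lewis's animal-inspired approach.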
Better brains
Computers have come a long way in interpreting complex inputs like sound, movement, and images. But there’s room to grow: Siri
still makes mistakes, Kinect hasn’t totally revolutionized gaming, and 
Google needed 16,000 processors to train a computer to identify cat videos on YouTube.
 This is mostly because things like language and kittens can’t be easily
reduced to binary equations. But new processors could use logic more akin to the way neurons work, passing along many different information streams in parallel.
 Where it’s headed:
Several researchers (including Lewis) are trying to create chips that work more like brains than
calculators. This field is called neuromorphic computing. Like a brain, a
 neural processing unit (NPU) processes many different data streams at 
the same time. The end goal is to have devices that can read complex 
sensory information (like voices and flailing limbs) at a fraction of 
the computational cost of traditional chips. This means that Siri’s 
daughter will be able to answer your questions faster, with less 
prompting, and without being as much of a drain on your battery. These 
NPUs will run alongside traditional, binary CPUs, which will still be 
essential for running things like operating systems and tip calculators.
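To make the contrast concrete, here is a back-of-the-envelope Python sketch of a leaky integrate-and-fire neuron, the textbook unit that neuromorphic designs build on. All constants are illustrative, and real NPUs implement this behavior in silicon rather than software.

```python
# A sketch of the neuron-like processing style the article describes:
# a leaky integrate-and-fire unit accumulates input from several
# streams at once and spikes only when the evidence crosses a
# threshold. Constants and inputs are illustrative assumptions.

def lif_neuron(input_streams, threshold=1.0, leak=0.9):
    """Integrate parallel input streams; yield 1 on each spike, else 0."""
    potential = 0.0
    for inputs_at_t in zip(*input_streams):   # all streams, one time step
        potential = leak * potential + sum(inputs_at_t)
        if potential >= threshold:
            yield 1                           # spike!
            potential = 0.0                   # reset after firing
        else:
            yield 0

# Three weak input streams that are individually sub-threshold:
streams = [
    [0.2, 0.0, 0.3, 0.0, 0.4],
    [0.1, 0.2, 0.3, 0.0, 0.4],
    [0.0, 0.1, 0.5, 0.0, 0.4],
]
print(list(lif_neuron(streams)))  # fires only when the streams line up
```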