DARPA and Google seem to be joined at the hip these days… From the New York Times:
Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.
Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.
Since the start of the year, a team of researchers at Carnegie Mellon University, supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo, has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.
“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.
The Never-Ending Language Learning system, or NELL, has made an impressive showing so far…
[continues in the New York Times]