British scientists have created a computer that can learn sign language by watching television shows that are both subtitled and signed.
Researchers at the University of Oxford came up with an algorithm to recognize gestures made by people. The team exposed the computer to around 10 hours of TV footage that was both signed and subtitled, tasking the software with learning 210 nouns and adjectives that appeared in the footage; it correctly learned 136 of them.
The software uses a person’s arms to estimate the rough location of the fast-moving hands, then identifies flesh-colored pixels in those areas to work out the precise hand shape. Difficulty often arises, however, when a word is signed differently depending on context: cutting a tree, for example, has a different sign from cutting a rose.
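The core idea described above, restricting a search region using body cues and then flagging flesh-colored pixels inside it, can be sketched roughly as follows. This is a minimal illustration, not the researchers' actual system: the `is_skin_rgb` heuristic is a well-known rule-of-thumb RGB skin classifier, and the rectangular `region` stands in for whatever bounds the arm-tracking stage would produce.

```python
def is_skin_rgb(r, g, b):
    """Rough RGB skin-color heuristic (an assumption for illustration,
    not the classifier used in the Oxford work)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15
            and r > g and r > b)

def skin_mask(image, region):
    """Flag skin-colored pixels inside a rectangular search region.

    image  -- 2-D list of (r, g, b) tuples
    region -- (top, left, bottom, right) bounds, e.g. a hypothetical
              box estimated from the tracked arm position
    """
    top, left, bottom, right = region
    return [
        [bool(top <= y < bottom and left <= x < right
              and is_skin_rgb(*pixel))
         for x, pixel in enumerate(row)]
        for y, row in enumerate(image)
    ]

# Tiny 2x2 "frame": left column is skin-toned, right column is dark.
frame = [[(220, 150, 120), (10, 10, 10)],
         [(220, 150, 120), (220, 150, 120)]]

# Search only the left column, as if the arm tracker pointed there.
mask = skin_mask(frame, (0, 0, 2, 1))
# mask -> [[True, False], [False, False]] would be wrong; the whole
# left column is skin-toned and inside the region, so:
# mask -> [[True, False], [True, False]]
```

A real pipeline would work in a color space less sensitive to lighting (e.g. HSV or YCbCr) and cluster the flagged pixels into a hand shape, but the region-then-classify structure is the same.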
Although this is a clever piece of technology, it still has a long way to go. Rather than working round the problem, maybe the researchers should concentrate their efforts more on making deaf people hear again. What do you think?