
Turning senses into media: Can we teach artificial intelligence to perceive?


Credit: Pixabay/CC0 Public Domain

Humans perceive the world through different senses: we see, feel, hear, taste and smell. These senses form multiple channels of information, which is why perception is called multimodal. Does this mean that what we perceive can be treated as multimedia?

Xue Wang, Ph.D. candidate at LIACS, translates perception into multimedia and uses artificial intelligence (AI) to extract information from multimodal processes, much as the brain processes information from the senses. In her research, she tested AI learning processes in four different ways.

Putting words into vectors

First, Xue looked into word embedding learning: the translation of words into vectors. A vector is a quantity with two properties, namely a direction and a magnitude. Specifically, this part deals with how the classification of information can be improved. Xue proposed a new AI model that links words to images, making it easier ...
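To give a rough idea of what "words as vectors" means, the minimal Python sketch below uses invented three-dimensional vectors for a few words. It is not the model described in the research; real embeddings are learned from large text (and, in multimodal models, image) collections, but the principle that related words end up with similar vectors is the same.

```python
import numpy as np

# Toy word embeddings: each word is mapped to a vector (here 3-dimensional).
# These particular numbers are made up purely for illustration.
embeddings = {
    "cat":   np.array([0.9, 0.1, 0.3]),
    "dog":   np.array([0.8, 0.2, 0.4]),
    "piano": np.array([0.1, 0.9, 0.7]),
}

def cosine_similarity(a, b):
    """Similarity of two word vectors, based on the angle between them."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words with related meanings get similar vectors,
# so "cat" scores closer to "dog" than to "piano".
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high
print(cosine_similarity(embeddings["cat"], embeddings["piano"]))  # lower
```

In a multimodal setting, the same idea is extended so that word vectors and image features live in a shared space, which is what makes linking words to images for classification possible.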


Copyright of this story solely belongs to phys.org.