A Korean research team announced on Feb. 3 that it had developed a computer program that can extract the information contained in videos and invent words or dialogue suited to each scene shown on the screen.
A research team headed by Jang Byung-tak, a professor in the Department of Computer Science and Engineering at Seoul National University, fed the 1,232-minute Korean animation Pororo into the program. The team found that the program taught itself to recognize scenes, lines, stories, and characters using an associative memory that resembles the neural network of a human brain.
Given a specific scene, the program can create dialogue appropriate to each character, and the generated lines may differ from the original ones. The program also produces different versions depending on whether 100 minutes or 10,000 minutes of the cartoon are entered, likely because the personalities of the characters change as the series progresses.
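The article does not describe the team's model, and its associative memory is far more sophisticated than anything shown here. Purely as a hypothetical illustration of the general idea, learning per-character dialogue patterns from a script and replaying them to invent new lines, here is a toy bigram generator; all names and sample lines below are made up:

```python
import random
from collections import defaultdict

# Toy sketch only: NOT the SNU team's method. A bigram model per character
# records which word tends to follow which, then samples new utterances.

def train(lines):
    """lines: list of (character, utterance) pairs from a cartoon script."""
    model = defaultdict(lambda: defaultdict(list))
    for character, utterance in lines:
        words = ["<s>"] + utterance.split() + ["</s>"]
        for prev, nxt in zip(words, words[1:]):
            model[character][prev].append(nxt)
    return model

def generate(model, character, seed=0):
    """Invent a new line for a character from its learned word transitions."""
    rng = random.Random(seed)
    word, out = "<s>", []
    while True:
        word = rng.choice(model[character][word])
        if word == "</s>" or len(out) > 20:  # stop at end token or length cap
            break
        out.append(word)
    return " ".join(out)

# Hypothetical miniature "script" standing in for real training video/subtitles.
script = [
    ("Pororo", "let's go play outside"),
    ("Pororo", "let's go fly today"),
    ("Crong", "crong crong"),
]
model = train(script)
print(generate(model, "Pororo"))
```

With more training script, the transition tables change, which loosely mirrors the article's observation that the output differs depending on how many minutes of the cartoon are entered.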
The research team believes the program could be used in English education for children: after uploading an English cartoon, parents and children could predict and build a story in English together.
In addition, the program could see wide use, since it can be installed on robots, computers, and smartphones.
Professor Jang said, “Our research is significant in that we showcased the industry's first technology that enables a machine to teach itself knowledge from animation.” He added, “I hope that our study will lay the groundwork for developing artificial intelligence based on big data.”
The research findings were presented at the 29th AAAI Conference on Artificial Intelligence (AAAI-15), held from Jan. 25 to 30 at the Hyatt Regency in Austin, Texas.