Has Google's AI invented its own language?

Posted by $ WilliamShipley 7 years, 11 months ago to Technology

We’ve had a number of discussions about whether computers can be creative or whether they just do “what they were programmed to do”. Google Translate revamped its algorithms last September. Previously, the system had been programmed with phrase pairs between specific languages, but with 103 languages the number of pairs needed was growing beyond Google's ability to maintain them.
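A rough, purely illustrative Python sketch of that scaling problem (only the 103-language figure comes from above; nothing here is Google's code):

# Back-of-the-envelope: direct per-pair systems vs. a shared internal
# representation. The 103-language figure is from the post above; the
# rest is purely illustrative.

languages = 103

# One hand-built system per ordered language pair (French->Russian and
# Russian->French count separately), as in the old phrase-pair approach.
direct_pairs = languages * (languages - 1)     # 10,506 systems to maintain

# Roughly one encoder and one decoder per language if everything passes
# through a single shared internal representation.
shared_components = 2 * languages              # 206 components

print(f"Per-pair systems: {direct_pairs}")
print(f"Shared-representation components: {shared_components}")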

They switched from a programmed translation system to a neural network and fed it the vast database of translations they already had. The resulting translation utility is now live, and it appears to have developed an internal representation language: it translates the source language into that representation and then into the destination language. It was not specifically programmed to do this.

This allows it to translate between pairs of languages for which it has no examples. A human who knew the French-to-English translation of a phrase and the English-to-Russian one might translate French to Russian by going through English. The Google system instead translates through the language it made up.
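For the curious, the linked Google paper describes steering the single shared network by prepending an artificial token that names the target language to the source sentence. The sketch below only illustrates that convention; MultilingualNMT and its methods are hypothetical placeholders, not Google's actual API:

# Conceptual sketch of zero-shot translation with one multilingual model.
# The target-language token convention ("<2xx>" prepended to the source
# sentence) is described in the linked Google paper; MultilingualNMT and
# its methods here are hypothetical placeholders, not a real API.

class MultilingualNMT:
    def translate_tagged(self, tagged_source: str) -> str:
        # A real model would encode the tagged sentence into its shared
        # internal representation and decode it in the requested language.
        raise NotImplementedError

def translate(model: MultilingualNMT, text: str, target_lang: str) -> str:
    # The only per-pair "programming" is the token naming the target
    # language; the same weights can handle French->Russian even if that
    # pair never appeared in training (zero-shot).
    tagged = f"<2{target_lang}> {text}"
    return model.translate_tagged(tagged)

# Hypothetical usage: French input, Russian output, no French-Russian data.
# translate(model, "Bonjour le monde", "ru")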
SOURCE URL: https://research.googleblog.com/2016/11/zero-shot-translation-with-googles.html



  • Posted by $ jlc 7 years, 11 months ago
    This makes me wonder about the structure of the internal language. We have all seen the studies where the perception of beauty is enhanced by averaging tens/hundreds of pictures into a composite human face; now we can postulate a situation where a hundred human languages are averaged to make a single artificial human language - one that might be better than all existing natural languages.

    Jan
    • Posted by ewv 7 years, 11 months ago
      A computer representation of language for automatic translation serves a fundamentally different purpose than man's conceptual language of understanding. The representation is more efficient for what it does in machine manipulation, but it is not superior for the cognitive role of concepts in man's conscious awareness -- including the 'crow epistemology' principle.
      • Posted by $ jlc 7 years, 11 months ago
        Crow epistemology is interesting, but not what I was referring to. In facial research, a composite image made from 10 pics is found to be more attractive than any individual face; a composite made from 100 pics is selected as more attractive than the 10-fold composite, etc. (https://en.wikipedia.org/wiki/Average...)

        If there is a composite language created from all of the languages extant, then it might actually be better at expressing all of the concepts we need, and it will probably be more logically constructed. Whether or not it is suitable for humans to use is, as you point out, another question entirely.

        Jan
        • Posted by ewv 7 years, 11 months ago
          If it's not suitable for humans to use, it's not a better language. But there is no doubt that language, meaning human language, could be better and more logically structured than any of the world's languages are now.
  • Posted by CircuitGuy 7 years, 11 months ago
    This is amazing, if true.
    I imagine Univac to Univac was written in this language, and I read an English translation.
    • Posted by ewv 7 years, 11 months ago
      I began learning computers with machine language (literally in 0s and 1s, then progressing to octal representation) for an old Univac 1105. They did not speak English, and English probably isn't the internal language representation Google evolved either. You can listen (in English translation) to Univac's 1958 fears of a takeover of the world by man at https://www.youtube.com/watch?v=G6Wz_...

      The Google report is interesting but does not show consciousness, focus, or creativity by an independent 'mind'. It is the result of how the neural net was programmed and the data fed into it. The internal state of a neural net evolves over time as the data accumulates, in accordance with whatever is defined as 'success'. The evolved state of the net is never directly programmed.

      In this case the internal representation that evolved converged on the essence of the language structure common to all the languages. It is interesting that it wound up with so efficient a state, and that it worked at all; but if it worked at all, that state had to be some representation of language -- and the net had no means of distinguishing between languages beyond what it was fed to help classify them, which did not completely restrict it. So in a crude sense, the different languages are all ways of saying the same thing, just as there are different ways of saying the same thing within a single language, in both grammar and vocabulary.
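      One hypothetical way to picture that claim: if the net really converged on a shared internal representation, then encoder states for the same sentence in different source languages should land close together. The Google post checks this by visualizing encoder states, not with code like the following; encode() here is only a stand-in, not a real call:

# Hypothetical check of the shared-representation idea: encoder states for
# the same sentence in different source languages should land close together.
# encode() is a stand-in for the model's encoder, not a real call.
import numpy as np

def encode(sentence: str) -> np.ndarray:
    # Placeholder: a real encoder would return the net's internal vector.
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Expected under the shared-representation hypothesis:
# cosine_similarity(encode("The cat sleeps."), encode("Le chat dort."))
# should be much higher than the similarity to an unrelated sentence.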

      The representation seems to have succeeded where the direct attempt by man did not: Univacs couldn't understand Esperanto (which probably contributed to their fears of man).
      • Posted by CircuitGuy 7 years, 11 months ago
        This is cool. I started a grad class on neural nets in 2001, but the class, or rather the video feed from the university to my employer, was cancelled, so I never finished it. It was going to be about how machines can learn things like backing up a truck by trial and error, similar to how human truck drivers learn. I know it's not a mind, but it feels more human-like than a PID loop.
        • Posted by ewv 7 years, 11 months ago
          A basic neural net is represented by a matrix of weights associating the input and output data. As more data is fed into it, the matrix elements are adjusted in accordance with successful outputs. So the representation is a matrix that is dynamically evolving. That is how it "learns", through an accumulation of trial and error or more systematic changes. The methods and complexities for doing this have developed far beyond what I have kept up with.
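          In toy form (made-up data, a single layer, nothing like the scale or structure of Google's net), that adjustment of a weight matrix toward 'success' looks roughly like this:

# Toy illustration of a weight matrix "learning" by adjustment toward
# success (here, lower squared error). Made-up data; nothing like the
# scale or structure of Google's translation net.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # 200 examples, 4 input features
true_W = rng.normal(size=(4, 2))       # the mapping we want to recover
Y = X @ true_W                         # "correct" outputs

W = np.zeros((4, 2))                   # the evolving weight matrix
learning_rate = 0.05

for epoch in range(200):
    error = X @ W - Y
    # Nudge each weight in the direction that reduces the error.
    W -= learning_rate * (X.T @ error) / len(X)

print("Remaining mean squared error:", float(np.mean((X @ W - Y) ** 2)))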
