NLP a.k.a. AI

Well, it’s been a while since I posted something, but I guess I’ve been thinking…

Thinking about my high school project, now named Natural Language Processing (which changed the overall goal quite a bit). After a few weeks of research, I’ve become familiar with the known problems of bringing NLP to life, and I must admit, these problems look a lot like the problems within machine visual, acoustic, taste and tactile recognition. This really leaves me wondering: why are these areas of research regarded as so completely different, when all of them basically require the same steps – observing, storing, prioritising and querying the data? (Excuse me if I missed some, but you get my point 🙂)

Why are we trying to achieve each one of these goals on its own (where they do seem extremely complicated), instead of creating a unified solution which can be used across the different areas of concern? I guess the reason for this is the frightful name of such a solution -> artificial intelligence.
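To make the idea concrete, here is a minimal sketch of what such a unified pipeline might look like – one shared observe/store/prioritise/query loop, where only the observing channel differs per modality. All the names here (Channel, Recognizer and so on) are hypothetical, purely illustrative:

```python
# A minimal sketch of the "one pipeline, many channels" idea.
# Everything here is hypothetical and purely illustrative.
from abc import ABC, abstractmethod
from collections import Counter


class Channel(ABC):
    """An observing channel: the only part that differs per modality."""

    @abstractmethod
    def observe(self, raw):
        """Turn raw input into discrete features (words, shapes, sounds...)."""


class TextChannel(Channel):
    def observe(self, raw):
        return raw.lower().split()          # words as features


class VisionChannel(Channel):
    def observe(self, raw):
        return list(raw)                    # pretend 'raw' is already a list of shapes


class Recognizer:
    """The shared part: storing, prioritising and querying, whatever the channel."""

    def __init__(self, channel):
        self.channel = channel
        self.memory = Counter()             # storing

    def learn(self, raw):
        self.memory.update(self.channel.observe(raw))   # observing + storing

    def prioritise(self, n=3):
        return self.memory.most_common(n)   # most familiar features first

    def query(self, feature):
        return self.memory[feature]         # how familiar is this feature?


text = Recognizer(TextChannel())
text.learn("the cat sat on the mat")
print(text.prioritise())                    # [('the', 2), ('cat', 1), ('sat', 1)]

vision = Recognizer(VisionChannel())
vision.learn(["circle", "square", "circle"])
print(vision.query("circle"))               # 2
```

The point is not the toy code itself, but that nothing in Recognizer knows or cares which sense it is serving.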

However, if you look at the world one detail at a time, as we do with the different aspects of AI, it seems extremely complex and chaotic; but if we could find one simple truth, one simple rule, everything would become simple and harmonic. In just the same way, I have come to believe that recognising words and language is exactly the same as recognising shapes and colors – the only difference is the observing channel. What we have now is different terms for essentially the same thing, and that, in my mind, seems utterly stupid.

This leads me to my scary conclusion -> The easiest way to make something that truly can understand and speak like a human is to create something that is fundamentally the same as a human, a so-called artificial intelligence.

On babies

Another aspect of AI development that concerns me is how advanced we expect the systems to be right from their starting point.

It is simple to understand that a 1-year-old baby cannot understand a Stanford professor: first it needs to learn, observe and understand, and it must do so in a context close enough to the professor’s, if we want it to follow along the same lines of thought. So why is it so tough to understand that, without having an idea of the world, an AI cannot come up with answers that are complete and meaningful to us?

What we do nowadays is create agents that read through the Oxford dictionary, create a semantic net of the meanings of words in the English language, and rely on statistical calculations for everything else. This is like locking a kid up in a basement and forcing him/her to read and memorise the dictionary. That creates a relatively good and smart source of information, but it has no understanding outside of that dictionary, which scares us – we are social beings, and we send people who lock their kids up to prison (well, most societies do at least). And one small detail: how does the kid know how to read in the first place?
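To illustrate what that dictionary-reading agent roughly amounts to, here is a toy sketch. The mini-dictionary below is made up, and real systems use far richer resources (WordNet, for instance), but the principle is the same: words become nodes, and a word links to every other headword that appears in its definition.

```python
# A toy version of the "read the dictionary, build a semantic net" approach.
# The mini-dictionary is made up; real systems use resources like WordNet.

toy_dictionary = {
    "cat":    "a small domesticated animal that catches mice",
    "mouse":  "a small animal hunted by cats",
    "mice":   "plural of mouse",
    "animal": "a living creature that can move and feed",
}

# Build the net: an edge from each headword to every other headword
# that appears somewhere in its definition.
semantic_net = {
    word: {token for token in definition.lower().split()
           if token in toy_dictionary and token != word}
    for word, definition in toy_dictionary.items()
}

print(sorted(semantic_net["cat"]))    # ['animal', 'mice']
print(sorted(semantic_net["mouse"]))  # ['animal']
```

Everything the agent “knows” about a cat is just pointers to other dictionary entries – exactly the locked-in-the-basement problem: there is no way out of the book.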

Recently a Googler posted on the official Google blog that she believed Google has achieved roughly 90% efficiency in finding what we want; however, the last 10% require 90% of the work, and here she’s essentially talking about correctly and legitimately understanding the human and the knowledge that we’ve preserved on the world wide web. But NLP in information querying is a whole other post 🙂

So perhaps babies hold the answer to the world’s problems. If developers want to create a true AI, they should create babies, not professors 🙂
