
Where is AI Heading?

Recommended Articles : 【Evolutionary Behavioral Science】 Evolutionary Behavioral Science Contents, 【Philosophy】 Lecture 1. What is Knowledge?


a. AI for Generating Hypotheses, The Beginning of the Idea



(Image generated by DALL·E. Note that the image may contain typos.)


Recently, Google launched Gemini, and the term Artificial General Intelligence (AGI) has been coming up more and more. It suddenly made me curious: where does AI currently stand? Some describe AI development in 3 stages, others in 5 or 7. For each model below, keep in mind the stage that ChatGPT has currently reached.


Case 1. AI 3-stage description model

Stage 1. ANI(artificial narrow intelligence) : A model that handles specific tasks

Stage 2. AGI(artificial general intelligence) : A model that, like humans, handles all tasks

Stage 3. ASI(artificial super intelligence) : A model of AI surpassing humans, like in science fiction


Case 2. AI 5-stage description model

Stage 1. Specialized AI (narrow AI) : A model that handles specific tasks

Stage 2. Advanced Specialized AI (advanced narrow AI) : Can work across multiple tasks, but still limited to specific domains

Stage 3. Human-Level AI : AI at a level similar to humans

Stage 4. Superhuman AI : AI at a level superior to humans

Stage 5. Self-Improving AI : AI that evolves and learns by itself


Case 3. AI 7-stage description model

Stage 1. rule-based AI system : A computing system that operates based on predefined rules, like a calculator

Stage 2. context awareness and retention system : AI that understands and retains context, like ChatGPT or Siri

Stage 3. domain-specific mastery system : AI that masters a specific domain, like IBM Watson or Google DeepMind’s AlphaGo

Stage 4. thinking and reasoning AI system : AI capable of logical thinking

Stage 5. AGI(artificial general intelligence) : Human-level AI. Has not yet been achieved.

Stage 6. ASI(artificial super intelligence) : AI surpassing humans

Stage 7. AI Singularity : AI transcending human cognitive abilities, irreversible state


First of all, it is interesting that ChatGPT has only reached around the second stage in these models. It also means we have much more to be amazed by in the future. For example, ChatGPT is not very good at logical thinking: it is well known that it struggles to solve math problems and tends to write personal-statement essays by cobbling together bits and pieces rather than structuring the writing carefully. This suggests that ChatGPT is still operating at the level of pattern matching, and just as the advent of the transformer made this stage possible, another paradigm shift will be needed for AI to leap to the next one.

People often say that the next stage of AI is one capable of logical thinking: solving math problems, for example, or writing logically coherent essays. So what is logical thinking? Logical thinking is the process of creating new knowledge by putting existing knowledge into order. For instance, how did Einstein arrive at the theory of relativity? Electromagnetic forces are mediated by electromagnetic waves, i.e., light, which travels at c = 3 × 10⁸ m/s. If the speed of light varied with the motion of the reference frame, electromagnetic phenomena would appear inconsistent from frame to frame. Einstein therefore adopted the principle of the constancy of the speed of light, which leads to the conclusion that space and time are relative. Ultimately, Einstein’s logical thinking resulted in the creation of new knowledge: the theory of relativity.
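To make that reasoning step concrete, here is the standard light-clock derivation (a textbook illustration added here, not part of the original argument): a light pulse bounces between two mirrors a distance L apart, so one tick takes Δt′ = 2L/c in the clock’s rest frame; in a frame where the clock moves at speed v, the same pulse travels a longer diagonal path at the same speed c, which forces the tick to lengthen.

```latex
% Light-clock sketch: the constancy of c alone implies time dilation.
\[
  \frac{c\,\Delta t}{2} \;=\; \sqrt{L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2}},
  \qquad L = \frac{c\,\Delta t'}{2}
  \;\;\Longrightarrow\;\;
  \Delta t \;=\; \frac{\Delta t'}{\sqrt{1 - v^{2}/c^{2}}}.
\]
```

Moving clocks run slow purely as a consequence of the postulate that c is the same in every frame; no property of the clock itself enters the argument.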

Therefore, the next stage of AI, capable of logical thinking, would also be an AI that creates its own knowledge. So, what new algorithms would need to be introduced to realize such AI? One idea I previously proposed on my blog is to define logical thinking itself as a pattern. This involves compiling all human knowledge and organizing the meta-knowledge that corresponds to logical thinking methodologies. By establishing logical thinking itself as a pattern and having transformers match these patterns, we could potentially create an AI capable of logical thinking.
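As a toy illustration of what treating logical thinking itself as a pattern could mean, here is a small sketch (my own hand-rolled example with made-up names, not the transformer-based system the idea ultimately calls for): inference rules are stored as data, i.e. meta-knowledge, alongside the ordinary knowledge base, and a generic matcher applies them to derive triples that were never explicitly written down.

```python
# Toy sketch: reasoning patterns as meta-knowledge that a generic matcher applies.
from itertools import product

# Knowledge base: simple (subject, relation, object) triples.
knowledge = {
    ("light", "propagates_at", "c_in_every_frame"),
    ("electromagnetism", "is_mediated_by", "light"),
}

# Meta-knowledge: each reasoning pattern is a list of premise templates plus a
# conclusion template; names starting with "?" are variables to be matched.
patterns = [
    {
        "name": "transitivity",
        "premises": [("?x", "is_mediated_by", "?y"), ("?y", "propagates_at", "?z")],
        "conclusion": ("?x", "constrained_by", "?z"),
    },
]

def match(template, fact, bindings):
    """Unify one premise template with one fact; return extended bindings or None."""
    new = dict(bindings)
    for t, f in zip(template, fact):
        if t.startswith("?"):
            if new.get(t, f) != f:      # variable already bound to something else
                return None
            new[t] = f
        elif t != f:                    # constant that does not match the fact
            return None
    return new

def substitute(triple, bindings):
    """Fill a conclusion template with the bindings found while matching."""
    return tuple(bindings.get(t, t) for t in triple)

def apply_patterns(knowledge, patterns):
    """One round of 'logical thinking as pattern matching': derive new triples."""
    derived = set()
    for pattern in patterns:
        for facts in product(knowledge, repeat=len(pattern["premises"])):
            bindings = {}
            for template, fact in zip(pattern["premises"], facts):
                bindings = match(template, fact, bindings)
                if bindings is None:
                    break
            if bindings is not None:
                derived.add(substitute(pattern["conclusion"], bindings))
    return derived - knowledge

print(apply_patterns(knowledge, patterns))
# -> {('electromagnetism', 'constrained_by', 'c_in_every_frame')}
```

In the actual proposal the matcher would not be a hand-written unifier but a transformer trained on such meta-knowledge; the sketch only makes the point that reasoning methodologies can themselves be represented, stored, and matched as patterns.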

The second method is to create an AI with a self. From a physics perspective, the self can be viewed as a naturally occurring state in which negative entropy is densely concentrated. However, this is a result-oriented explanation and does not specify the structure of the neural network. But what if a self-recurrent model could create a self? Some might point out that RNNs also have recurrent elements, so why don’t those networks have a self? The difference is that RNNs are only recurrent internally; their inputs and outputs are still provided externally. In contrast, the inputs of an AI with a self would come from the self itself. If this idea were realized, every action plan of such a model would originate from the self, and all of its factual judgments and value judgments could be attributed to the self. Nevertheless, by Gödel’s incompleteness theorems, no sufficiently powerful formal system can be both consistent and complete, so the model could keep uncovering contradictions and attempting logical thinking, which could lead to the creation of new knowledge. Could this be the next paradigm shift?
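To pin down what "the inputs come from the self" could mean structurally, here is a minimal numerical sketch (my own illustration with arbitrary random weights, not an established architecture and not a claim about how a self would actually arise): an ordinary RNN step consumes an input handed to it from outside, while the self-recurrent step below has no external input at all; its next input is its own previous output, so every trajectory originates from within.

```python
# Minimal sketch: external-input recurrence (ordinary RNN) vs. a closed self-loop.
import numpy as np

rng = np.random.default_rng(0)
dim = 8                                          # size of the internal state (the "self")
W_rec = rng.normal(scale=0.5, size=(dim, dim))   # hypothetical recurrent weights
W_out = rng.normal(scale=0.5, size=(dim, dim))   # hypothetical readout weights

def ordinary_rnn_step(state, external_input):
    """Classic RNN flavour: the next state depends on data supplied from outside."""
    return np.tanh(W_rec @ state + external_input)

def self_recurrent_step(state):
    """'Self'-driven flavour: the only input is the network's own previous output."""
    own_output = np.tanh(W_out @ state)   # the model produces an output ...
    return np.tanh(W_rec @ own_output)    # ... and feeds that output back to itself

state = rng.normal(size=dim)
for step in range(5):                     # closed loop: no external data is ever supplied
    state = self_recurrent_step(state)
    print("self-loop", step, np.round(state[:3], 3))

# For contrast, the ordinary RNN step cannot advance without something from outside:
external = rng.normal(size=dim)
print("rnn step  ", np.round(ordinary_rnn_step(state, external)[:3], 3))
```

The contrast is purely structural: the ordinary step is driven by whatever the environment feeds it, whereas the closed loop generates its own activity indefinitely, which is the property the idea of a self-originating model is pointing at.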

○ An unsolved mystery: Why don’t humans experience hallucinations as frequently as Large Language Models (LLMs)? It seems as though humans possess a solid knowledge system that could be called truth, unlike AI. I wonder if this has anything to do with the concept of self.

What does the emergence of AI that creates its own knowledge mean? It is uncertain whether this is the next step or the final one. But it might mean that we no longer need an Einstein, and, paradoxically, that people will have fewer opportunities to ponder knowledge themselves. From my experience with university admissions, it seems that every department (mathematics, statistics, physics, chemistry, biology, aerospace engineering, computer science, psychology, and economics) is researching AI, and that all of them are selecting people who are good at math, perhaps because they need them to study AI. One thing is clear: the traditional concept of academic disciplines is disintegrating, and knowledge organized along those traditional lines is disappearing. I’m glad I’ve been able to document a large amount of that knowledge on my blog before it vanishes.

The Singularity refers to the birth of an intellectual being beyond the collective intelligence of humanity. This being would have a self, play Go better than any Go player, solve math problems effectively, create its own academic fields, and render human labor unnecessary.

But, as always, humanity will find the answers. Just as I am preparing for the Singularity.



Posted : 2023.12.05 23:50
