Formal net
ChatGPT demonstrated significant value in ...
Includes only formulated ideas, i.e., Questions, Hypotheses, Predictions, and Observations.
Major disruptions in businesses will start to appear around 2024, i.e., profitable (by free cash flow), hyper-growth companies relying on large-scale neural networks as their main technological strength.
64.0%
2025-10-26
2024-06-26
27.3%
AGI
Human brains have no biological apparatus for logical thinking (or, more broadly, System 2 thinking). This is in contrast with the fact that human language faculties are most likely supported by prewired circuitry.
All human languages are logical in the sense that the meaning of linguistic expressions corresponding to disjunction (e.g., English *or*, Chinese *huozhe*, Japanese *ka*) conforms to the meaning of the logical operator in classical logic, inclusive-*or*. It is highly implausible, we argue, that children acquire the (logical) meaning of disjunction by observing how adults use disjunction. [https://doi.org/10.1111/j.1468-0017.2009.01380.x](https://doi.org/10.1111/j.1468-0017.2009.01380.x)
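For concreteness, inclusive-*or* is true whenever at least one disjunct is true, unlike exclusive-*or*. A quick illustrative check (Python's `or` happens to implement the inclusive reading):

```python
# Truth table for inclusive-or: false only when both disjuncts are false.
for a in (True, False):
    for b in (True, False):
        print(f"{a!s:5} or {b!s:5} -> {a or b}")
# True  or True  -> True
# True  or False -> True
# False or True  -> True
# False or False -> False
```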
More precisely, LLMs model the concepts in natural languages using the language itself (albeit in a different syntax). Obviously, LLMs don't need to learn concepts from scratch; the words that encode them already exist. More importantly, they don't need to learn a representation for the kinds of relationships between concepts; those are also encoded in words, such as "is," "belongs to," and "causes." Here comes the more speculative part. To perform cognitive tasks, LLMs need to learn the specific relationships between specific concepts, and those relationships can be connections between a group of words, e.g., "swan," "black," "is," where "swan" and "black" are concepts and "is" is the relationship between them. However, such groups of words can have multiple interpretations. To truly encode a specific relationship, LLMs therefore need connections embedded with syntax: they need to encode the relationship between "black" and "swan" as "swan is black," i.e., a language phrase encodes the relationship. Thus one might say that LLMs model the world in language. The grammar may be totally different from that of natural language, but it is a syntax nonetheless, and it is quite possible that this syntax is inspired by the syntax of natural language. The ability to model concepts using words, phrases, and even sentences combined with syntax is critical. [It might be the reason we humans reached our level of intelligence](https://www.themind.net/hypotheses/8yof9E9YTYu4vHQI4qgBcw).
This is how humans reached this level of intelligence. In some sense, language and its syntax provide a programming language for brains and reduce the need for specialized neural circuitry. To reach the same level of intelligence without a vastly larger number of neural circuits, artificial neural networks need to be able to do the same thing.
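To make the ambiguity point above concrete, here is a toy sketch (the triple encoding and all names are illustrative assumptions, not anything from the hypothesis): the same unordered bag of words admits multiple interpretations, while an ordered, syntax-bearing phrase encodes exactly one relationship.

```python
# Illustrative only: an unordered bag of words underdetermines the
# relationship, while a syntactic ordering pins it down.
bag = frozenset({"dog", "chases", "cat"})  # ambiguous: who chases whom?

# Two distinct "phrases" built from the same bag of words; the ordered
# (subject, relation, object) triple stands in for "a language phrase
# encodes the relationship".
phrase_a = ("dog", "chases", "cat")  # dog chases cat
phrase_b = ("cat", "chases", "dog")  # cat chases dog

assert frozenset(phrase_a) == frozenset(phrase_b) == bag  # same words...
assert phrase_a != phrase_b                               # ...different facts

# A tiny world model: a set of syntax-bearing phrases.
world_model = {phrase_a, ("swan", "is", "black")}

def holds(subject: str, relation: str, obj: str) -> bool:
    """Check whether the model encodes a specific relationship."""
    return (subject, relation, obj) in world_model

print(holds("swan", "is", "black"))   # True
print(holds("cat", "chases", "dog"))  # False: word order carries the syntax
```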
High-level concepts and the relationships between them exist linguistically in our brains, and cognitive functions based on these concepts and relationships are also encoded in sentence-like linguistic memories. Our brains can (a) store models of the world in sentence-like linguistic memory, e.g., "Deer come to this spot when there is a drought," and (b) construct new knowledge/predictions by composing new sentences following syntax rules, e.g., "There is a drought now; if we go to this spot we might find deer." High-level human cognitive functions are the enterprise of our brains employing these two faculties. We don't need dedicated circuitry for each model expressed in linguistic memory; we only need the basic circuitry for language processing. Note that this hypothesis is different from linguistic determinism.
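A minimal sketch of faculties (a) and (b), under heavy simplifying assumptions (the sentence format, the `predict` helper, and the exact-match rule are invented for illustration):

```python
# (a) Models of the world stored as sentence-like memories of the form
#     "<consequent> when <condition>".
linguistic_memory = [
    "deer come to this spot when there is a drought",
]

# A current observation, also a sentence.
observation = "there is a drought"

# (b) One composition rule: if an observation matches the condition of a
#     stored sentence, compose a new sentence (a prediction) from it.
def predict(memory: list[str], observed: str) -> list[str]:
    predictions = []
    for sentence in memory:
        if " when " in sentence:
            consequent, condition = sentence.split(" when ", 1)
            if condition == observed:
                predictions.append(f"{observed} now, so {consequent}")
    return predictions

print(predict(linguistic_memory, observation))
# ['there is a drought now, so deer come to this spot']
```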
The brain uses a mental language to represent and organize complex ideas and concepts. This mental language is thought to be distinct from natural languages like English or Spanish, and it is believed to be the medium through which we think and process information. According to the Language of Thought Hypothesis (LOTH), the structure and content of this mental language are shaped by the structure and content of the natural languages we learn, but it is not identical to any one natural language. Instead, it is thought to be a universal language used by all humans to represent and process complex ideas. Link: https://plato.stanford.edu/entries/language-thought/
Because this self-supervised learning process mimics the brain's learning mechanism: make predictions and learn from prediction errors.
50.0%
From unconscious learning in the brain, to individual scientific inquiry, to collective scientific inquiry.
87.8%
The brain learns by constantly making predictions and making corrections (to its wiring) to bring its predictions closer to the actual results.
50.0%
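As a minimal illustration of this predict-compare-correct loop, here is a toy delta-rule learner (the data, the linear model, and the learning rate are assumptions made up for the example, not part of the hypothesis):

```python
# World: output is roughly 2 * input. The learner doesn't know this.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

weight = 0.0        # the "wiring" to be corrected
learning_rate = 0.05

for epoch in range(50):
    for x, actual in data:
        prediction = weight * x              # make a prediction
        error = actual - prediction          # compare with the actual result
        weight += learning_rate * error * x  # correct the wiring

print(round(weight, 3))  # ~2.0: corrections driven purely by prediction errors
```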
Social intelligence is the driving evolutionary pressure responsible for the development of human intelligence.
79.2%