Formal net
Includes only formulated ideas, i.e. Questions, Hypotheses, Predictions, and Observations
Based on
An alternative to
The logic instinct
All human languages are logical in the sense that the meanings of linguistic expressions corresponding to disjunction (e.g. English *or*, Chinese *huozhe*, Japanese *ka*) conform to the meaning of the logical operator in classical logic, inclusive-*or*. It is highly implausible, we argue, that children acquire the (logical) meaning of disjunction by observing how adults use disjunction.
[https://doi.org/10.1111/j.1468-0017.2009.01380.x](https://doi.org/10.1111/j.1468-0017.2009.01380.x)
Based on
LLMs model the world using language.
More precisely, LLMs model the concepts in natural languages using language itself (albeit with a different syntax).
Obviously, LLMs don't need to learn the concepts from scratch; the words are already encoded. More importantly, they don't need to learn a representation of the kinds of relationships between concepts; those are also encoded as words in the language, such as "is," "belongs to," "causes," etc.
Here comes the more speculative part.
To perform cognitive tasks, LLMs need to learn the specific relationships between specific concepts, and those relationships can be connections between a group of words, e.g., "swan," "black," and "is," where "swan" and "black" are two concepts while "is" is the relationship between them.
However, such a group of words can have multiple interpretations. Thus, to truly encode a specific relationship, LLMs need connections embedded with syntax: e.g., the relationship between "black" and "swan" needs to be encoded as "swan is black." In other words, a language phrase encodes the relationship (sketched below).
Thus one might say that LLMs model the world in language. It might be a grammar totally different from natural language, but a syntax nonetheless, and it is quite possible that this syntax is inspired by the syntax of natural language.
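Here is a minimal, purely illustrative sketch of that point (toy Python, not an actual LLM mechanism): an unordered group of words leaves the relationship ambiguous, while an ordered phrase, standing in for syntax, pins it down.

```python
# Illustrative toy only: why a syntax-ordered phrase, rather than a bare group of
# words, is needed to encode a specific relationship between concepts.

# An unordered group of words is ambiguous about who relates to what.
bag = {"dog", "bites", "man"}        # could mean "dog bites man" or "man bites dog"

# An ordered phrase (a minimal stand-in for syntax) removes the ambiguity.
def relation(phrase):
    """Read a (subject, relation, object) triple off a syntactically ordered phrase."""
    subject, rel, obj = phrase
    return f"{rel}({subject}, {obj})"

print(relation(("dog", "bites", "man")))   # bites(dog, man)
print(relation(("man", "bites", "dog")))   # bites(man, dog) -- same words, different fact
print(relation(("swan", "is", "black")))   # is(swan, black), i.e. "swan is black"
```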
The ability to model concepts using words, phrases and even sentences combined with syntax is critical. [It might be the reason we humans reached our level of intelligence](https://www.themind.net/hypotheses/8yof9E9YTYu4vHQI4qgBcw).
Based on
To reach human-level intelligence, AI needs to be able to model the world in language rather than numeric ...
This is how humans reached this level of intelligence. In some sense, language and its syntax provide the programming language for brains and reduce the need for specialized neural circuitry. To reach the same level of intelligence without a vastly larger number of neural circuits, artificial neural networks need to be able to do the same thing.
Based on
High-level cognitive functions are linguistic in our brains.
High-level concepts and the relationships between them exist linguistically in our brains, and cognitive functions based on these concepts and relationships are also encoded in sentence-like linguistic memories.
Our brains can
a) store models of the world in sentence-like linguistic memory, e.g., "Deer come to this spot when there is a drought," and
b) construct new knowledge/predictions by composing new sentences following syntax rules, e.g., "There is a drought now; if we go to this spot, we might find deer."
High-level human cognitive functions are the enterprise of our brains employing these two faculties (a toy sketch follows below). We don't have dedicated circuitry for each model expressed in linguistic memory; we just need the basic circuitry for language processing.
Note that this hypothesis is different from linguistic determinism.
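As a purely illustrative toy (hypothetical sentence templates and matching rule, not a model of actual neural circuitry), the two faculties might be sketched like this: stored sentence-like memories plus a generic syntactic rule that composes a new predictive sentence whenever a memory's condition matches the current observation.

```python
# Toy illustration of the two faculties above (illustrative templates, not neuroscience).

# a) World models stored as sentence-like memories of the form "<consequent> when <condition>".
memories = [
    "deer come to this spot when there is a drought",
    "the river floods when it rains for a week",
]

# b) New knowledge/predictions constructed by a generic syntactic rule: if the
#    <condition> part of a stored sentence matches the current observation,
#    compose a new sentence predicting the <consequent>.
def predict(observation: str) -> list[str]:
    predictions = []
    for sentence in memories:
        consequent, _, condition = sentence.partition(" when ")
        if condition == observation:
            predictions.append(f"{observation} now, so {consequent}")
    return predictions

print(predict("there is a drought"))
# ['there is a drought now, so deer come to this spot']
```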
Based on
Based on
Language shapes our perception of the world.
Boroditsky has conducted a number of studies that have shown how the language we speak can influence our perception of time, space, and other aspects of our environment. For example, speakers of languages that use different words for different types of snow (e.g. "wet snow" versus "dry snow") are better at discriminating between different types of snow than speakers of languages that do not make this distinction.
Based on
The language of thought hypothesis (LOTH)
The brain uses a mental language to represent and organize complex ideas and concepts. This mental language is thought to be distinct from natural languages like English or Spanish, and it is believed to be the medium through which we think and process information.
According to the LOTH, the structure and content of this mental language are shaped by the structure and content of the natural languages that we learn, but it is not identical to any one natural language. Instead, it is thought to be a universal language that is used by all humans to represent and process complex ideas.
Link: https://plato.stanford.edu/entries/language-thought/
Based on
Based on
Supported by
The brain learns about the world through its controlled hallucination process
Supported by
Modern scientific research since Bacon follows the hypothetico-deductive process.
Based on
Based on
The brain is nothing but a prediction machine.
As suggested in [Being You](https://www.anilseth.com/being-you/), perception is a top-down "controlled hallucination": the brain predicts the conceived "things" it will see and the sensory input they would cause, then verifies or corrects those conceived things against the sensory input it actually receives (a minimal sketch follows below).
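A minimal numerical sketch of that loop, assuming a toy scalar belief and a noiseless sensor (a generic predictive-processing update, not Seth's actual model):

```python
# Toy predictive-processing loop (illustrative numbers, not Seth's actual model):
# hold a belief about a hidden quantity, predict the sensory input it would cause,
# and correct the belief in proportion to the prediction error.

belief = 0.0            # the conceived "thing" (e.g., estimated brightness of an object)
learning_rate = 0.3     # how strongly prediction errors correct the belief
true_brightness = 5.0   # the state of the world that actually causes the input

for step in range(10):
    predicted_input = belief                     # top-down prediction of the sensory input
    actual_input = true_brightness               # bottom-up sensory input (noise omitted)
    prediction_error = actual_input - predicted_input
    belief += learning_rate * prediction_error   # verify/correct the conceived thing

print(round(belief, 3))  # approaches 5.0 as prediction errors are corrected away
```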
61.0%
Based on
Predicts
Advances in AI R&D will bring another tech revolution on the same scale as the internet.
Validated by
ChatGPT demonstrated significant value in advancing my understanding of a topic.
Here is an example:
**Me**: I have a hypothesis. Cortical columns make classifications based on raw sensory inputs as well as classification signals from other columns. They learn by predicting incoming raw sensory inputs as well as classification signals from other columns. One column's prediction error helps its own learning, as well as other columns' learning when the prediction was made with input from their classification signals. What do you think of my hypothesis?
**ChatGPT**: Your hypothesis describes a process that is similar to how the brain is thought to process and classify sensory information. Cortical columns, which are groups of neurons in the cerebral cortex, are thought to play a role in processing and classifying sensory information. They do this by receiving input from sensory organs and then making predictions about what this input represents. These predictions are then compared to the actual input, and any discrepancies between the two are used to update the column's understanding of the sensory information.
It is also thought that cortical columns communicate with each other, sharing information about the sensory input they have received and the predictions they have made. This can help to improve the overall accuracy of the predictions made by the brain, as different columns may have different strengths and areas of expertise in processing different types of sensory information.
Overall, your hypothesis captures some of the key principles of how the brain processes and classifies sensory information, and it is generally consistent with current understanding of this process. However, it is important to note that the brain is a complex and multifaceted organ, and our understanding of how it works is still evolving. There may be additional factors and processes at play that are not captured in your hypothesis.
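For concreteness, here is a toy sketch of the hypothesis quoted in the prompt above (the weights, names, and learning rule are illustrative assumptions, not claims about real cortical circuitry): a column classifies from raw sensory input plus peer columns' classification signals, updates its own weights from the prediction error, and returns each peer's share of that error.

```python
# Toy sketch of the column hypothesis above; everything here is illustrative.

def classify(weights: dict[str, float], inputs: dict[str, float]) -> float:
    """A column's classification: weighted sum of raw input and peer classification signals."""
    return sum(weights.get(k, 0.0) * v for k, v in inputs.items())

def learn(weights: dict[str, float], inputs: dict[str, float],
          actual: float, lr: float = 0.1) -> dict[str, float]:
    """Update this column's weights from its prediction error and return the error
    shares to feed back to the peer columns that supplied classification signals."""
    error = actual - classify(weights, inputs)
    for k, v in inputs.items():
        weights[k] = weights.get(k, 0.0) + lr * error * v           # the column's own learning
    return {k: error * v for k, v in inputs.items() if k != "raw"}  # peers learn from this too

# One column classifies from raw sensory input plus a peer column's signal, then learns.
weights = {"raw": 0.2, "column_B": 0.1}
inputs = {"raw": 0.8, "column_B": 0.5}
peer_error_shares = learn(weights, inputs, actual=1.0)
print(weights, peer_error_shares)
```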
100.0%
Validated by
Predicts
LLMs will be instrumental in achieving at least one human-level AI task beyond NLP within 3 years
Validated by
LLM-enabled robotics in 2022