Formal net
LLM / Foundation models can provide the structural ...
Includes only formulated ideas, i.e. Questions, Hypotheses, Predictions, and Observations
Human brains have no dedicated biological apparatus for logical thinking (or, more broadly speaking, System 2 thinking). This stands in contrast to human language faculties, which are more likely supported by prewired circuitry.
All human languages are logical in the sense that the meaning of linguistic expressions corresponding to disjunction (e.g. English *or*, Chinese *huozhe*, Japanese *ka*) conforms to the meaning of the logical operator in classical logic, inclusive-*or*. It is highly implausible, we argue, that children acquire the (logical) meaning of disjunction by observing how adults use disjunction. [https://doi.org/10.1111/j.1468-0017.2009.01380.x](https://doi.org/10.1111/j.1468-0017.2009.01380.x)
More precisely, LLMs model the concepts in natural languages using language itself (albeit in a different syntax). Obviously, LLMs don't need to learn the concepts from scratch; they already have encoded words. More importantly, they don't need to learn a representation of the kinds of relationships between concepts; those are also encoded in words of the language, such as "is", "belongs to", and "causes". Here comes the more speculative part. To perform cognitive tasks, LLMs need to learn the specific relationships between specific concepts, and those relationships can be connections among a group of words, e.g. "swan", "black", "is", where "swan" and "black" are two concepts while "is" is the relationship between them. However, such a group of words can have multiple interpretations. Thus, to truly encode a specific relationship, LLMs need connections embedded with syntax, e.g. they need to encode the relationship between "black" and "swan" as "swan is black"; that is, a language phrase encodes the relationship. Thus one might be able to say that LLMs model the world in language. It might be a totally different grammar from natural language, but it is a syntax nonetheless, and it is quite possible that this syntax is inspired by the syntax of natural language. The ability to model concepts using words, phrases, and even sentences combined with syntax is critical. [It might be the reason we humans reached our level of intelligence](https://www.themind.net/hypotheses/8yof9E9YTYu4vHQI4qgBcw).
This is how humans reached this level of intelligence. In some sense, language and its syntax provide the programming language for brains and reduce the need for specialized neural circuitry. To reach the same level of intelligence without a vastly larger number of neural circuits, artificial neural networks need to be able to do the same thing.
High-level concepts and the relationships between them exist linguistically in our brains, and cognitive functions based on these concepts and relationships are also encoded in sentence-like linguistic memories. Our brains can a) store models of the world in this sentence-like linguistic memory, e.g. "Deer come to this spot when there is a drought," and b) construct new knowledge/predictions by composing new sentences following syntax rules, e.g. "There is a drought now; if we go to this spot we might find deer." High-level human cognitive functions are the enterprise of our brains employing these two faculties. We don't need dedicated circuitry for each model expressed in linguistic memory; we just need the basic circuitry for language processing. Note that this hypothesis is different from linguistic determinism.
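To make the two faculties above concrete, here is a minimal, hypothetical sketch (my own illustration, not part of the hypothesis itself): world models are stored as sentence-like memories, and a trivial syntactic template composes a stored model with a current observation into a new prediction sentence. The names `MEMORY` and `predict_from_observation` and the template format are purely illustrative assumptions.

```python
# Minimal, hypothetical sketch: world models stored as sentence-like memories,
# and new predictions composed from them by a simple syntax rule.
# All names and the rule format are illustrative assumptions, not a real system.

# a) Sentence-like linguistic memories: "X happens when Y" world models.
MEMORY = [
    ("deer come to this spot", "there is a drought"),   # (consequent, condition)
    ("the river floods",       "it rains for a week"),
]

def predict_from_observation(observation: str) -> list[str]:
    """b) Compose new prediction sentences by matching the observed condition."""
    predictions = []
    for consequent, condition in MEMORY:
        if condition == observation:
            # A fixed syntactic template turns a stored model plus an
            # observation into a new sentence (a prediction).
            predictions.append(f"{observation} now, so {consequent}.")
    return predictions

print(predict_from_observation("there is a drought"))
# ['there is a drought now, so deer come to this spot.']
```

The point of the toy is only that no new circuitry is needed per model: the same matching-and-composition machinery works for every sentence stored in memory.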
That is, we assign language tokens to objects, not just for communication, e.g. when young children name their dolls, or when someone comes up with a new concept and is eager to find a linguistic name for it, even before there is any need to communicate it.
Boroditsky has conducted a number of studies that have shown how the language we speak can influence our perception of time, space, and other aspects of our environment. For example, speakers of languages that use different words for different types of snow (e.g. "wet snow" versus "dry snow") are better at discriminating between different types of snow than speakers of languages that do not make this distinction.
The brain uses a mental language to represent and organize complex ideas and concepts. This mental language is thought to be distinct from natural languages like English or Spanish, and it is believed to be the medium through which we think and process information. According to the LOTH, the structure and content of this mental language are shaped by the structure and content of the natural languages that we learn, but it is not identical to any one natural language. Instead, it is thought to be a universal language that is used by all humans to represent and process complex ideas. Link: https://plato.stanford.edu/entries/language-thought/
Because this self-supervised learning process mimics the brain's learning mechanism: make predictions and learn from prediction errors.
50.0%
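A toy illustration of "make predictions and learn from prediction errors" (my own sketch, not from the source): a single-parameter predictor nudged in proportion to its prediction error, the same error-driven principle that self-supervised LLM training applies at vastly larger scale with gradients.

```python
# Toy error-driven learning: predict, compare with the actual outcome,
# and nudge the internal parameter by a fraction of the prediction error.
# Purely illustrative; real self-supervised models do this with gradients
# over billions of parameters.

weight = 0.0          # the "model": predicts outcome = weight * signal
learning_rate = 0.1

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)] * 50   # (signal, actual outcome)

for signal, actual in data:
    prediction = weight * signal
    error = actual - prediction                 # prediction error
    weight += learning_rate * error * signal    # learn from the error (delta rule)

print(round(weight, 2))   # converges near 2.0, the true signal-to-outcome relation
```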
From unconscious learning in the brain to individual scientific inquiries to collective scientific inquiries.
87.8%
The controlled hallucination process is also a hypothesis-prediction-observation-correction learning process. Brains can learn brand-new concepts by generating new hypotheses about the world, making predictions about the sensory input, and correcting them according to observation (the actual sensory input).
Although the definition of "hypothesis" is narrower in science - a hypothesis in science usually refers to a theory that explains **how** things work. Propositions such as "It's okay to eat a bruised apple" or "Mr. Huckabee is a bad politician" are not regarded as hypotheses.
96.2%
The brain learns by constantly making predictions and making corrections (in its wiring) to bring its predictions closer to the actual results.
50.0%
As suggested in [Being You](https://www.anilseth.com/being-you/), perception is a top-down "controlled hallucination": the brain first conceives the "things" it expects to see, then predicts the sensory input they would cause, and finally verifies or corrects those conceived things against the sensory input it actually receives.
61.0%
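A minimal sketch of this hypothesis-prediction-observation-correction loop (my own illustrative code, not from Being You): the "conceived thing" is a latent estimate that generates a top-down prediction of the sensory input and is corrected by the prediction error. The linear generative model and the variable names are assumptions chosen only for brevity.

```python
# Illustrative predictive-processing loop: a latent "conceived thing" generates
# a top-down prediction of the sensory input; the mismatch with the actual
# input (prediction error) is used to correct the conceived thing.

def generate_sensory_prediction(conceived_brightness: float) -> float:
    """Top-down generative model: what input would this 'thing' cause?"""
    return 2.0 * conceived_brightness         # assumed sensor gain of 2.0

actual_sensory_input = 6.0                    # what the senses actually report
conceived_brightness = 0.0                    # initial "hallucinated" estimate
correction_rate = 0.2

for step in range(30):
    predicted_input = generate_sensory_prediction(conceived_brightness)
    prediction_error = actual_sensory_input - predicted_input
    # Correct the conceived thing, not the raw input: perception as a
    # controlled hallucination constrained by prediction error.
    conceived_brightness += correction_rate * prediction_error

print(round(conceived_brightness, 2))         # ~3.0, since 2.0 * 3.0 matches the input
```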
AI
Social intelligence is the driving evolutionary pressure responsible for the development of human intelligence
79.2%
Major business disruptions will start to appear around 2024 - i.e. profitable (by free cash flow), hyper-growth companies relying on large-scale neural networks as their main technical strength.
64.0%
2025-10-26
2024-06-26
Here is an example:

**Me**: I have a hypothesis. Cortical columns make classifications based on raw sensory inputs as well as classification signals from other columns. They learn by predicting incoming raw sensory inputs as well as classification signals from other columns. One column's prediction error helps its own learning, as well as other columns' learning when the prediction was made with input from their classification signals. What do you think of my hypothesis?

**ChatGPT**: Your hypothesis describes a process that is similar to how the brain is thought to process and classify sensory information. Cortical columns, which are groups of neurons in the cerebral cortex, are thought to play a role in processing and classifying sensory information. They do this by receiving input from sensory organs and then making predictions about what this input represents. These predictions are then compared to the actual input, and any discrepancies between the two are used to update the column's understanding of the sensory information.

It is also thought that cortical columns communicate with each other, sharing information about the sensory input they have received and the predictions they have made. This can help to improve the overall accuracy of the predictions made by the brain, as different columns may have different strengths and areas of expertise in processing different types of sensory information.

Overall, your hypothesis captures some of the key principles of how the brain processes and classifies sensory information, and it is generally consistent with current understanding of this process. However, it is important to note that the brain is a complex and multifaceted organ, and our understanding of how it works is still evolving. There may be additional factors and processes at play that are not captured in your hypothesis.
100.0%
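The structure of the hypothesis in this exchange can be caricatured in code. This is only a toy of my own construction, not a model of real cortex: column A predicts its next raw input from its current raw input plus column B's classification signal, and learns from its own prediction error; the error attributable to B's signal is the term the hypothesis suggests could also drive B's learning (B's own learning is not implemented here). All names and numbers are illustrative.

```python
import numpy as np

# Toy caricature (illustrative only): one "column" learns to combine its own
# raw input with a peer column's classification signal via prediction error.

rng = np.random.default_rng(0)
w_raw, w_peer, lr = 0.0, 0.0, 0.05   # column A's weights on its own input / B's signal

for _ in range(3000):
    raw_a = rng.normal()             # A's current raw input (uninformative noise here)
    signal_b = rng.normal()          # B's classification signal
    next_raw_a = signal_b            # ...which happens to determine A's next input

    prediction = w_raw * raw_a + w_peer * signal_b
    error = next_raw_a - prediction              # A's prediction error
    w_raw  += lr * error * raw_a                 # A's own learning
    w_peer += lr * error * signal_b
    error_for_b = error * w_peer                 # cue the hypothesis would send to B

print(round(w_raw, 2), round(w_peer, 2))         # ~0.0 and ~1.0: A learned to rely on B
```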
It's powered by LLMs. The value it brings is being demonstrated widely.
The art market is relatively small but the stock image market might be much larger.
GitHub Copilot is promising, but it's not at human level yet. It might be possible for self-driving systems to use LLMs as foundations to build models that can predict the social behavior of humans on the street.
98.4%
2025-10-27
In SayCan, we showed how we can connect robot learning pipelines to large language models, bringing a lot of common-sense knowledge to robotics. The hope was that as the LLMs become better (which they seem to be consistently doing), this will have a positive effect on robotics.
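Roughly, SayCan's published selection rule combines how useful the LLM thinks a candidate skill is for the instruction with how feasible (afforded) that skill is in the current state, and picks the skill with the best combined score. A minimal sketch of that rule is below; the skill names and all the numbers are made up for illustration and do not come from the paper.

```python
# Rough sketch of a SayCan-style selection rule: combine the LLM's usefulness
# score for each skill with an affordance (value-function) score for the
# current state, and pick the best product. Placeholder values only.

llm_usefulness = {            # LLM: how useful is this skill for the instruction?
    "pick up the sponge": 0.6,
    "go to the kitchen":  0.3,
    "pick up the apple":  0.1,
}
affordance = {                # value function: how likely is this skill to succeed now?
    "pick up the sponge": 0.1,   # sponge not reachable from the current position
    "go to the kitchen":  0.9,
    "pick up the apple":  0.2,
}

best_skill = max(llm_usefulness, key=lambda s: llm_usefulness[s] * affordance[s])
print(best_skill)   # "go to the kitchen": useful enough and actually feasible
```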