High-level cognitive functions are linguistic ...
Everything relevant to the thought

... We do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST). ...

... We find that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. ...

... With this insight, we turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. ...

... Given this apparent trend, we use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. ...

... We believe that doing so can enable the development of artificial agents which are not only more generally intelligent but are also consistent with multiple current theories of conscious function. ...

More precisely, LLMs model the concepts in natural languages using the language itself (albeit in a different syntax). Obviously, LLMs don't need to learn the concepts from scratch; they already have words encoding them. More importantly, they don't need to learn a representation of the kinds of relationships between concepts; those are encoded in words of the language as well, such as "is", "belongs to", "causes", etc.

Here comes the more speculative part. To perform cognitive tasks, LLMs need to learn the specific relationships between specific concepts, and those relationships can be connections between a group of words, e.g., "swan", "black", "is", where "swan" and "black" are two concepts while "is" is the relationship between them. However, such a group of words can have multiple interpretations. Thus, to truly encode a specific relationship, LLMs need to have connections embedded with syntax; e.g., they need to encode the relationship between "black" and "swan" as "swan is black". That is, a language phrase encodes the relationship. Thus one might say that LLMs model the world in language. It might be a grammar totally different from that of natural language, but it is a syntax nonetheless, and it is quite possible that this syntax is inspired by the syntax of natural language. The ability to model concepts using words, phrases, and even sentences combined with syntax is critical. [It might be the reason we humans reached our level of intelligence](https://www.themind.net/hypotheses/8yof9E9YTYu4vHQI4qgBcw).
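
As a rough illustration of this point (my own toy example, not an actual LLM mechanism): an unordered group of words underdetermines a relationship, while a syntactically ordered phrase pins it down, much like a (subject, relation, object) triple.

```python
# Toy illustration (hypothetical, not how an LLM actually stores relations):
# a bare group of words admits several readings, while a syntactic ordering
# of the same words encodes one specific relationship.
from itertools import permutations

words = {"swan", "is", "black"}

# Without syntax: every ordering of the same words is a candidate reading.
candidate_readings = [" ".join(p) for p in permutations(words)]
print(candidate_readings)  # includes "swan is black", "black is swan", ...

# With syntax: the phrase fixes which word is subject, relation, and object,
# i.e. it encodes one specific (subject, relation, object) triple.
def parse_svo(phrase: str) -> tuple[str, str, str]:
    subject, relation, obj = phrase.split()
    return subject, relation, obj

print(parse_svo("swan is black"))  # ('swan', 'is', 'black')
```
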

... **Observation:** An observed empirical fact, indisputable for reasonable people. **Hypothesis:** A conjecture that is generalized from observations (induction), or deducted from other hypotheses (deduction). Collectively, hypotheses constitute our model (or theory) of the world. ...

... A hypothesis, often together with other hypotheses, can produce predictions. We use the broadest sense of this word, not limited to untested theories of how something works. It encompasses assumption, theory, putative knowledge, and the narrow sense of hypothesis. ...

... **Prediction:** A yet-to-be-made observation (or observational categorical, per Quine). It's usually based on one or more hypotheses. The relationships between these categories of thoughts are fixed. In other words, there is an algebra over these elements of thought. ...

... This theory (or "hypothesis" as defined in itself) has [a philosophical basis](https://www.themind.net/hypotheses/M1qolEkbTje29ze62yEfQg): our knowledge of the world consists solely of prediction models. ...

... Our brain's intelligence is solely neurons, trained to predict inputs, working collectively. ...
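
A minimal sketch of this "algebra" as data structures (the class names and fields are my own illustration, not anything defined on themind.net): observations ground hypotheses, hypotheses may be deduced from other hypotheses, and together they produce predictions, i.e., yet-to-be-made observations.

```python
# Minimal, hypothetical sketch of the observation / hypothesis / prediction
# relationships described above. Names and fields are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Observation:
    """An observed empirical fact."""
    statement: str


@dataclass
class Hypothesis:
    """A conjecture generalized from observations or deduced from other hypotheses."""
    statement: str
    induced_from: list[Observation] = field(default_factory=list)
    deduced_from: list["Hypothesis"] = field(default_factory=list)


@dataclass
class Prediction:
    """A yet-to-be-made observation, produced by one or more hypotheses."""
    statement: str
    based_on: list[Hypothesis]


swans_seen = Observation("Every swan observed so far has been white.")
all_white = Hypothesis("All swans are white.", induced_from=[swans_seen])
next_swan = Prediction("The next swan we observe will be white.", based_on=[all_white])
```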

... The brain predicts conceived "things" it will see and then "the sensory input" caused by them. Then the brain verifies or corrects the conceived things with the sensory input it actually receives. ...
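
A schematic version of this predict-verify-correct loop (purely illustrative; the scalar "belief" and the simple error-driven update are my assumptions, not a claim about actual cortical machinery):

```python
# Schematic predict-verify-correct loop: a belief is nudged toward the actual
# sensory input in proportion to its prediction error. Illustrative only.

def predict(belief: float) -> float:
    """The guess about the upcoming sensory input, given the current belief."""
    return belief


def update(belief: float, sensory_input: float, learning_rate: float = 0.1) -> float:
    """Correct the belief in proportion to the prediction error."""
    error = sensory_input - predict(belief)
    return belief + learning_rate * error


belief = 0.0
for sensory_input in [1.0, 1.2, 0.9, 1.1]:  # a stream of observations
    belief = update(belief, sensory_input)
    print(round(belief, 3))
```
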


... For example, she has shown that speakers of languages that use different words for different types of spatial relationships (e.g. "left" versus "right") are better at remembering the location of objects than speakers of languages that do not make this distinction. ...

... The brain learns by constantly making predictions and making corrections (in its wiring) to approach close to the actual results. ...

... The brain uses a mental language to represent and organize complex ideas and concepts. This mental language is thought to be distinct from natural languages like English or Spanish, and it is believed to be the medium through which we think and process information. ...

... All (actual and putative) knowledge has the same standards of success—the prediction of observation sentences. ...

... E.g. visual models of microscopic things. Physical symbols for abstract concepts. ...

... Here is an example: **Me**: I have a hypothesis. Cortical columns make classifications based on raw sensory inputs as well as classification signals from other columns. They learn by predicting incoming raw sensory inputs as well as classification signals from other columns. ...

... One column's prediction error helps its own learning, as well as other columns' learning when the prediction was made with input from their classification signals. What do you think of my hypothesis? ...

... **ChatGPT**: Your hypothesis describes a process that is similar to how the brain is thought to process and classify sensory information. Cortical columns, which are groups of neurons in the cerebral cortex, are thought to play a role in processing and classifying sensory information. ...

... They do this by receiving input from sensory organs and then making predictions about what this input represents. These predictions are then compared to the actual input, and any discrepancies between the two are used to update the column's understanding of the sensory information. ...

... It is also thought that cortical columns communicate with each other, sharing information about the sensory input they have received and the predictions they have made. ...

... This can help to improve the overall accuracy of the predictions made by the brain, as different columns may have different strengths and areas of expertise in processing different types of sensory information. ...

... Overall, your hypothesis captures some of the key principles of how the brain processes and classifies sensory information, and it is generally consistent with current understanding of this process. ...

... However, it is important to note that the brain is a complex and multifaceted organ, and our understanding of how it works is still evolving. There may be additional factors and processes at play that are not captured in your hypothesis. ...

This is how humans reached this level of intelligence. In some sense, language and its syntax provide a programming language for brains and reduce the need for specialized neural circuitry. To reach the same level of intelligence without a vastly larger amount of neural circuitry, artificial neural networks need to be able to do the same thing.

That is, humans assign language tokens to objects, and not just for communication, e.g., when young children name their dolls, or when someone comes up with a new concept and is eager to find a linguistic name for it, even before there is any need to communicate it.

Boroditsky has conducted a number of studies that have shown how the language we speak can influence our perception of time, space, and other aspects of our environment. For example, speakers of languages that use different words for different types of snow (e.g. "wet snow" versus "dry snow") are better at discriminating between different types of snow than speakers of languages that do not make this distinction.

It's hard to imagine higher-level cognitive faculties without some form of hierarchical information processing. ATB proposed that such a hierarchy corresponds to the hierarchy of objects in the real world. This might be a bit too speculative. Columns learn from prediction errors. They can predict raw sensory inputs; they can also predict signals produced by other columns from sensory inputs. Thus, learning can happen when there are prediction errors on raw sensory inputs as well as on other columns' signals. Learning in columns can easily be hierarchical, and naturally, the models (or knowledge) learned from them are hierarchical. That being said, there is no reason to believe the hierarchy is a neat pyramid with clear-cut divisions between layers. Any column can learn from any other column as long as its signals are useful. It's just that learning, and thus models, can happen many steps removed from the raw input signals.
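
Below is a small, purely hypothetical sketch of that idea: "columns" that learn from prediction errors on raw input and on other columns' signals, wired with arbitrary (non-pyramidal) connectivity. The class, the delta-rule update, and the wiring are my own illustrations, not a model of real cortex.

```python
# Hypothetical sketch: columns learn from prediction errors on raw sensory
# input and on other columns' signals; connectivity need not form a neat
# pyramid. Illustrative only, not a model of real cortical columns.
import random


class Column:
    def __init__(self, name, sources=None):
        self.name = name
        self.sources = sources or []   # other columns whose signals feed this one
        self.weight = random.uniform(-0.1, 0.1)
        self.signal = 0.0

    def step(self, raw_input, learning_rate=0.05):
        # Combined input: raw sensory input plus signals from source columns.
        context = raw_input + sum(c.signal for c in self.sources)
        prediction = self.weight * context
        error = context - prediction            # prediction error drives learning
        self.weight += learning_rate * error * context
        self.signal = prediction                # output signal available to other columns


# Arbitrary connectivity, not a strict hierarchy: c3 listens to c1 and c2,
# while c2 listens back to c3.
c1, c2 = Column("c1"), Column("c2")
c3 = Column("c3", sources=[c1, c2])
c2.sources = [c3]

for _ in range(5):
    x = random.random()                         # one raw sensory sample
    for col in (c1, c2, c3):
        col.step(x)
    print(round(c3.signal, 3))
```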