Everything relevant to the thought: "Self-supervised learning, a prominent AI ..."

... With this insight, we turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. ...

... Given this apparent trend, we use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. ...

... Where causality comes in is when we venture to model the 1/2 https://t.co/AmgYUOenjq ...

... Thread… Here’s my basic understanding of the model: the economy has some industries that are capital intensive and others that are not. When the central bank makes interest rates artificially low, it makes capital investment cheap and skews the economy toward capital intensive sectors. ...

... The period through the end of 2007 arguably fits the Austrian model. There was arguably overinvestment in residential home construction. In 2006 and 2007 the home-building industry was contracting while other industries were still growing. But in mid-2008, the situation changed. ...

... I can’t figure out how to explain this period with an Austrian model. I don’t see why anyone would consider this kind of mass unemployment necessary or how it set us up for stronger growth later. ...

As DALL-E synthesis comes to Shutterstock, Getty Images makes countermoves.

... Yes, I know the HODLers see it as a buying opportunity, and they could be right — not doing price predictions, just trying to think this through 1/ First: crypto faithful comparing this to "crypto winter" of 2017-18, which was comparable in percentage terms. ...

... Columns learn from prediction errors. They can predict raw sensory inputs; they can also predict signals that other columns produce from sensory inputs. Thus, learning can happen when there are raw sensory input prediction errors as well as when there are prediction errors on other columns' signals. ...

... Learning in columns can easily be hierarchical - naturally, models (or knowledge) learned from them are hierarchical. That being said, there is no reason to believe that the hierarchy is a neat pyramid with clear-cut division between layers. ...

... Any column can learn from any other column as long as its signals are useful. It's just that learning, and thus the resulting models, can happen many steps removed from the raw input signals. ...
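
A minimal sketch of this idea, assuming nothing from the source beyond "learn from prediction errors": column A predicts the next raw sensory input, while column B never sees the raw input and instead predicts column A's signal, so it learns one step removed from the senses. The linear predictors, dimensions, and the synthetic sensory stream are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, lr = 8, 0.05
W_a = rng.normal(scale=0.1, size=(dim, dim))  # column A: current input -> next input
W_b = rng.normal(scale=0.1, size=(dim, dim))  # column B: A's signal -> A's next signal

x = rng.normal(size=dim)
prev_a = W_a @ x
for t in range(1000):
    # Stand-in sensory stream: a rotated copy of the last input plus noise.
    x_next = np.roll(x, 1) + 0.01 * rng.normal(size=dim)

    # Column A learns from raw sensory prediction errors.
    err_a = x_next - W_a @ x
    W_a += lr * np.outer(err_a, x)

    # Column B learns from errors in predicting column A's signal instead.
    a_signal = W_a @ x_next
    err_b = a_signal - W_b @ prev_a
    W_b += lr * np.outer(err_b, prev_a)

    prev_a, x = a_signal, x_next
```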

... More precisely, LLMs model the concepts in natural languages using the language itself (albeit in a different syntax). Obviously, LLMs don't need to learn the concepts from scratch; they already have the concepts encoded as words. ...

... More importantly, they don't need to learn a representation of the kinds of relationships between concepts; those are also encoded as words in the language, such as "is", "belongs to", "causes", etc. Here comes the more speculative part. ...

... To perform cognitive tasks, LLMs need to learn the specific relationships between specific concepts, and those relationships can be connections between a group of words, e.g., "swan", "black", "is": here "swan" and "black" are two concepts while "is" is the relationship between them. ...

... Thus one might be able to say that LLMs model the world in language. It might be a totally different grammar from natural language, but it is a syntax nonetheless, and it's quite possible that this syntax is inspired by the syntax of natural language. ...

... The ability to model concepts using words, phrases and even sentences combined with syntax is critical. [It might be the reason we humans reached our level of intelligence](https://www.themind.net/hypotheses/8yof9E9YTYu4vHQI4qgBcw). ...
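
As a toy illustration of "relationships as connections between a group of words" (my own sketch, not the author's implementation), a specific relationship can be stored literally as a triple of words, with the relation itself being a word like "is" or "causes". The Triple class and the tiny knowledge store are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str   # a concept, encoded as a word ("swan")
    relation: str  # a relationship word ("is", "belongs to", "causes")
    obj: str       # another concept ("black")

# A handful of specific relationships between specific concepts.
knowledge = {
    Triple("swan", "is", "black"),
    Triple("smoke", "causes", "alarm"),
}

def holds(subject: str, relation: str, obj: str) -> bool:
    """Check whether this specific relationship is part of the model."""
    return Triple(subject, relation, obj) in knowledge

print(holds("swan", "is", "black"))   # True
print(holds("swan", "is", "white"))   # False
```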

... It enables the learning cycle of observations -> hypotheses -> predictions -> correct with observations. This can be argued for [philosophically](https://www.themind.net/hypotheses/W2wRBi5mSeGueEYevUjMzw) and [neuroscientifically](https://www.themind.net/hypotheses/M4p8C9lOTRu8ipf5zGtEJA). ...
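
The cycle can be made concrete with a deliberately tiny sketch (an assumption for illustration, not anything from the linked hypotheses): the "hypothesis" here is just a running estimate that produces a prediction, which is then corrected by the next observation.

```python
def learning_cycle(observations, lr=0.1):
    hypothesis = 0.0                 # induced from observations over time
    for obs in observations:
        prediction = hypothesis      # the hypothesis produces a prediction
        error = obs - prediction     # compare the prediction with the observation
        hypothesis += lr * error     # correct the hypothesis
    return hypothesis

# Converges toward the level of the observed signal (about 5 here).
print(learning_cycle([5.0, 5.2, 4.8, 5.1] * 50))
```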

... **Hypothesis:** A conjecture that is generalized from observations (induction) or deduced from other hypotheses (deduction). Collectively, hypotheses constitute our model (or theory) of the world. A hypothesis, often together with other hypotheses, can produce predictions. ...

... **Prediction:** A yet-to-be-made observation (or observation categorical, per Quine). It's usually based on one or more hypotheses. The relationships between these categories of thought are fixed. In other words, there is an algebra over these elements of thought. ...

... This theory (or "hypothesis" as defined in itself) has [a philosophical basis](https://www.themind.net/hypotheses/M1qolEkbTje29ze62yEfQg): our knowledge of the world consists solely of prediction models. ...

... and [a neuroscience one](https://www.themind.net/hypotheses/M4p8C9lOTRu8ipf5zGtEJA) (and [independently](https://www.themind.net/hypotheses/n3Tx6wlrSWOjsXHSYggrFQ)). Our brain's intelligence is solely neurons trained to predict inputs, working collectively. ...
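
One way to read the "algebra" over these elements of thought is as a small set of types with fixed relationships: hypotheses are induced from observations or deduced from other hypotheses, and hypotheses produce predictions, which are yet-to-be-made observations. The sketch below is an assumed encoding of those definitions, not an implementation from the source.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    description: str

@dataclass
class Prediction:
    description: str                                   # a yet-to-be-made observation
    based_on: list = field(default_factory=list)       # the Hypotheses it rests on

@dataclass
class Hypothesis:
    statement: str
    induced_from: list = field(default_factory=list)   # Observations (induction)
    deduced_from: list = field(default_factory=list)   # other Hypotheses (deduction)

    def predict(self, description: str) -> Prediction:
        return Prediction(description, based_on=[self])

o = Observation("every swan seen so far is white")
h = Hypothesis("all swans are white", induced_from=[o])
p = h.predict("the next swan observed will be white")
```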

... Because this self-supervised learning process mimics the brain's learning mechanism: make predictions and learn from prediction errors. ...
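
A minimal sketch of that recipe (the model, data, and learning rule are all illustrative assumptions): the target is simply the next element of the signal, so there are no labels, only predictions and prediction errors.

```python
import numpy as np

seq = np.sin(np.linspace(0, 20 * np.pi, 2000))  # unlabeled signal
w, b, lr = 0.0, 0.0, 0.01

for t in range(len(seq) - 1):
    x, target = seq[t], seq[t + 1]  # the "label" is just the next input
    pred = w * x + b                # make a prediction
    err = target - pred             # the prediction error is the only teaching signal
    w += lr * err * x               # learn from the error
    b += lr * err
```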

... The brain predicts conceived "things" it will see and then "the sensory input" caused by them. Then the brain verifies or corrects the conceived things with the sensory input it actually receives. ...
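
Read as a two-step loop, this can be sketched roughly as follows (a simplified assumption, not the source's model): the brain holds a conceived "thing" as a latent guess, generates the sensory input that thing would cause, and then corrects the guess with the error between predicted and actual input.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(16, 4))        # how a conceived thing would cause sensory input
true_thing = rng.normal(size=4)     # the thing actually out in the world
conceived = np.zeros(4)             # the brain's current guess
lr = 0.02

for _ in range(300):
    sensed = G @ true_thing + 0.01 * rng.normal(size=16)  # sensory input actually received
    predicted = G @ conceived                              # input the conceived thing would cause
    error = sensed - predicted
    conceived += lr * G.T @ error   # verify/correct the conceived thing
```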

... All the things we think of as intelligence—from seeing, to touching, to language, to high-level thought—are fundamentally the same: making predictive models of the world and then correcting the model based on prediction errors. ...
