I believe this depends on the state/local laws.
I got glasses for the first time about 5 years ago. Since I work on a computer all day, I asked them to add the blue light blocking coating to help reduce eye strain (if you’re not familiar, it looks mostly clear but it has a very mild yellow tint). My frequency of getting migraines while working has dropped significantly.
I really hate that people keep treating these LLMs as if they're actually thinking. They absolutely are not. Under the hood, they're just really complicated statistical models. They don't think about or understand anything; they calculate the most likely response to a given input based on their training data.
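To make the "statistical model" point concrete, here's a toy sketch of next-token prediction. The vocabulary and scores below are completely made up (a real model has tens of thousands of tokens and billions of learned weights), but the basic mechanism is the same: score every token, convert the scores to probabilities, pick from the distribution.

```python
import math

# Toy next-token prediction: the model assigns a score (logit) to every
# token in its vocabulary, softmax turns the scores into probabilities,
# and the highest-probability token gets emitted.
vocab = ["the", "cat", "sat", "mat"]      # hypothetical tiny vocabulary
logits = [1.2, 0.3, 2.5, 0.1]             # hypothetical scores for the next token

exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]         # softmax: normalize to sum to 1

best = max(range(len(vocab)), key=lambda i: probs[i])
print(vocab[best], round(probs[best], 3))  # "sat", the highest-probability token
```

There's no reasoning step anywhere in that loop, just arithmetic over learned scores, which is the whole point.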
That becomes really obvious when you look at where they often fall down: math questions and questions about the actual words they’re using.
They do well on standardized math assessments, but change the questions just a little, to something outside their training data (often different numbers or slightly different phrasing is enough), and they fail spectacularly.
They often can't answer questions about the words themselves (how many 'r's in 'strawberry', for instance) because they don't even have a concept of the word; they just have a token (or a few sub-word tokens) that represents it, plus a pile of learned associations for deciding when to use it.
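You can see this for yourself with OpenAI's tiktoken library (a minimal sketch, assuming tiktoken is installed; the exact split varies by encoding):

```python
# pip install tiktoken -- cl100k_base is the encoding used by several
# OpenAI models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)  # a short list of integer IDs, not ten individual letters
print([enc.decode_single_token_bytes(t) for t in tokens])  # the sub-word chunks

# Counting letters is trivial on the string itself...
print("strawberry".count("r"))  # 3
# ...but the model never sees the string, only the integer IDs above.
```

The model gets the integer IDs as input, so "how many 'r's" is asking about letters it was never shown.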
LLMs are complex, and by design the specifics (exactly which associations they've learned, how those are weighted, and so on) are opaque to us. But that doesn't mean we don't know how they work, despite that being a big talking point when they first came out. I really wish people would stop treating them like something they're not.