

It’s something of a law of averages. At its core, an LLM is a sophisticated text-prediction algorithm: it boils the entire corpus of human language down into numeric tokens, averages over them, and builds entire sentences by repeatedly picking the most likely next word to fill the space.
Given enough data, and you need a tremendous amount of it for an LLM, patterns start to emerge, and many of those patterns are exactly what we see coming out of LLMs.
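To make that a bit more concrete, here’s a toy sketch of the idea in Python. It uses a simple bigram count model over a made-up corpus rather than a neural network over subword tokens, which is what actual LLMs use, but the core loop is the same: tally what tends to follow what, then build a sentence by repeatedly choosing the most likely next word. The corpus and the `next_word` helper are entirely made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "the entire corpus of human language".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Return the single most likely word to follow `word`."""
    candidates = following.get(word)
    if not candidates:
        return "."  # dead end: just end the sentence
    return candidates.most_common(1)[0][0]

# Build a "sentence" by repeatedly filling in the next most likely word.
word, sentence = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    sentence.append(word)
    if word == ".":
        break

print(" ".join(sentence))  # e.g. "the cat sat on the ..."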




Or stuff that is really difficult to get. Part of my psychiatrist’s diagnostic process requires me to arrange a 30-minute interview with a family member (which only works if you have a family member who is willing to do it, believes that ADHD isn’t just a personal failing, and has the time for such a thing), or to provide reports from primary school, which most people aren’t likely to have kept around by the time they’re an adult in university.
If you don’t have either of those, no diagnosis for you, and you’re out several hundred dollars for nothing.