Sometimes, even in a technical discussion, you can present clear evidence, working examples, and credible sources, and still the only reply, before any real back-and-forth can start, is a block. I was ready to talk openly and willing to find common ground. It’s a pity: I used to hold this well-known person in high regard, which makes the outcome all the more disappointing.
The topic was whether large language models (LLMs) can count letters. I brought a working demo from my own inference engine and cited peer-reviewed research showing it’s possible under the right setup. The evidence was on the table; the conversation should have been straightforward.
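To make the dispute concrete, here is a toy illustration of why the question is interesting at all. It is not the author’s demo or any real tokenizer: the subword split below is a hypothetical example, since actual BPE merges vary by model. The point it sketches is that a token-based model sees chunks rather than characters, while counting over a character-level view of the same word is trivial.

```python
def count_letter(word: str, letter: str) -> int:
    """Ground truth: count occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

# Hypothetical subword split (real tokenizers differ).
subword_view = ["str", "aw", "berry"]  # chunks a model might "see"
char_view = list("strawberry")         # character-level view

# Counting is trivial once characters are exposed:
print(count_letter("strawberry", "r"))           # 3
# But none of the 3 subword chunks *is* the letter "r",
# so token-level counting gives no direct signal:
print(len(subword_view), len(char_view))         # 3 10
```

The “right setup” in the research cited above plausibly refers to arrangements like this, where letter boundaries are made explicit to the model instead of hidden inside subword tokens.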
Instead, the exchange ended before it began. It was a reminder that debates sometimes end not because the facts are lacking, but because engaging with them can be harder than avoiding them.