AI until proven human
The Simulation Hypothesis goes something like this:
Pixabay license from stux
- Because humans have the most advanced civilization that we're aware of, one of the following unlikely scenarios is almost certainly true:
  - Approximately 0 civilizations survive to post-human levels of sophistication;
  - The portion of post-human civilizations that want to simulate their own history is close to 0; or
  - The percentage of human-like civilizations that exist in simulated realities is close to 100%.
- If the third possibility is the true one, then humans are almost certainly living in a simulated reality.
There are arguments against the hypothesis, and for this article I'm not really interested in whether it's right. But I was reminded of it by the following video about scientific authorship.
In that video, Sabine Hossenfelder points out that science has a problem with AI authorship. Any substantial dataset can be used to produce thousands of AI-generated papers. The AI operator merely has to point the LLM at a dataset, and then ask it to create and test a hypothesis and to write a paper describing the methods and conclusions.
In a matter of minutes, you have a plausible-seeming paper that can be submitted to a journal - hallucinations and other errors be damned.
Contrast that with the effort involved when a human goes through the process. This might involve the use of AI, but it still requires human expertise and creativity (at least for now). With a human involved, the process takes days, weeks, or months.
Given that differential in cost and effort, as long as people have a monetary incentive to submit AI-generated articles, they'll keep doing it.
Obviously, the thing that Hossenfelder's observation about scientific literature has in common with the simulation hypothesis is the ease of constructing a fabrication vs. the difficulty of constructing the real thing.
This problem is far from unique to science. I saw a related article on the topic today, too: When AI fails, who is to blame? That article gave examples of AI failures showing up in fictional literature, business agreements, legal filings, medical transcriptions, and in Google's comical recommendation for LLM users to "put glue on pizza and eat small rocks."
Another legal failure in the press recently was from Anthropic, the company behind Claude. Anthropic's lawyers submitted legal filings with hallucinated citations.
(It's a little off topic, but speaking of AI failures, who can forget the chatbot that sold a $76,000 car for $1?😉)
We often talk about the difficulty of identifying AI on the Steem blockchain, but the problem is not unique to Steem. Rather, it is emerging throughout all of society. And the issue of scale that Hossenfelder describes means that it's not going away anytime soon.
Applying logic similar to the simulation hypothesis's reasoning, we need to be aware that for every unique human-written piece of text, there are probably thousands (or more) of AI-written texts.
If a single LLM could have written 1,000 articles in the time that it took me to write this one, then, unless you have some reason to believe that I'm a human, the base rate alone puts the odds at something like 1,000:1 in favor of this article having been composed by an AI.
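The base-rate reasoning above can be sketched numerically. The 1,000:1 production ratio and the assumption that AI and human authors are equally inclined to publish are illustrative, not measurements:

```python
# Illustrative base-rate calculation for the "AI until proven human" argument.
# Assumption: in the time a human writes 1 article, LLM operators could have
# produced 1,000. Absent other evidence, a randomly encountered article is
# then overwhelmingly likely to be AI-written.

human_articles = 1
ai_articles = 1_000

# Probability the article is AI-written, given no other evidence.
p_ai = ai_articles / (ai_articles + human_articles)

# The same figure expressed as odds.
odds_ai_to_human = ai_articles / human_articles

print(f"P(AI | no other evidence) = {p_ai:.4f}")
print(f"Odds in favor of AI: {odds_ai_to_human:.0f}:1")
```

The point isn't the exact numbers; it's that any plausible production ratio pushes the prior so far toward "AI" that the burden of proof shifts to demonstrating humanity.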
As we see above, this reasoning applies equally to things we find on news sites, scientific and medical literature, legal filings, social media sites, and - of course - here on the Steem blockchain.
So, maybe we need to outgrow our intuitions. Whenever we encounter a piece of text with uncertain provenance, the default assumption from today onwards should probably be that it was written by AI.
Going forward, maybe the problem to be addressed shouldn't be thought of as "AI detection". Maybe the easier problem to solve would be "proof of humanity".
Thank you for your time and attention.
As a general rule, I up-vote comments that demonstrate "proof of reading".
Steve Palmer is an IT professional with three decades of professional experience in data communications and information systems. He holds a bachelor's degree in mathematics, a master's degree in computer science, and a master's degree in information systems and technology management. He has been awarded 3 US patents.

Pixabay license, source
Reminder
Visit the /promoted page and #burnsteem25 to support the inflation-fighters who are helping to enable decentralized regulation of Steem token supply growth.
The AI I'm familiar with is marked by the gray vagueness of its texts; it usually has nothing to say on the topic. Could you share some Steemit texts that, in your view, raise doubts about their human origin?
Interestingly, it's harder to detect an AI fake in pictures and photographs. There, on the contrary, the vividness is excessive.
There's some truth to that, but it can be mitigated with the right prompting. And the AIs are getting better and better every month.
They're easy enough to find. Here's an AI poem that I posted a year ago. If I hadn't stated openly that it was written by AI, it probably would've been difficult to know.
It depends on the image, but yeah, this can be true of many.
In my opinion, there's no need to worry about clever text bots. Clever AI texts will surely be expensive to produce, so it's unlikely to be profitable to publish them on Steemit.
An interesting example of an AI text. As for poetry, our rhymesters could bury all of Steemit in verse even without AI's help.
Your example of using AI is touching. You're a long way from being a rhymester. A cute attempt.
You may also want to consider a "market for lemons" type of analysis. If market participants have a hard time distinguishing the quality of goods on an item-by-item basis then provenance can become more important.
Interesting. There's a big information asymmetry between "author" and reader w.r.t. whether or not the article is written by AI. In the context of science and law, there's less asymmetry w.r.t. whether the information is truthful, since the claims are verifiable with some effort. When it comes to general writing quality, there's not much asymmetry at all, since the reader can easily tell whether the writing is any good.
But yeah, it seems like the "brand" of the creator becomes more important. I guess this isn't very surprising since the strength of an account's follower networks is one of the focus areas that I have been exploring. Steem's reputation score isn't very useful (except in predicting votes), but I think the follower network strength metrics can be a decent predictor of quality.