RE: Curated by Thoth - 2025-08-18 20:53Z
Did you change the post summary?
In the last post I read this:
Here is a brief look at the posts...
This is missing now.
I found the summary as an unordered list better, as it allowed me to see more clearly what content Thoth had compiled today and whether it was worth taking a closer look at the relevant Thoth commentary and author post.
It looks like I've reached the end of the line for gemini-2.5-pro (free tier). I'm guessing that they deprioritize free API accounts when usage is high, and the failure rate has been very high this week with that model. It's not hitting rate limits, but the errors I'm seeing have the same impact.
Fortunately, it seems like gemini-2.5-flash does a decent job with the blog post format (so far, anyway), so I'll probably stick with that one unless/until I have some reason to switch.
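The switch described above (preferring gemini-2.5-pro but falling back to gemini-2.5-flash when the error rate gets too high) can be sketched roughly like this. This is a hypothetical illustration, not Thoth's actual code: `call_model` is a stand-in for the real Gemini API call, and the model list and retry count are made-up placeholders.

```python
# Hypothetical sketch: try the preferred model a few times, then fall
# back to the next one. `call_model` is a stub standing in for the real
# Gemini API call; here it simulates gemini-2.5-pro failing.

def call_model(model: str, prompt: str) -> str:
    # Stub: a real implementation would call the Gemini API here.
    if model == "gemini-2.5-pro":
        raise RuntimeError("503: model overloaded")
    return f"[{model}] summary of: {prompt}"

def summarize_with_fallback(prompt: str,
                            models=("gemini-2.5-pro", "gemini-2.5-flash"),
                            retries_per_model: int = 3) -> str:
    last_error = None
    for model in models:
        for _ in range(retries_per_model):
            try:
                return call_model(model, prompt)
            except RuntimeError as err:
                # Note: failed calls may still count against the daily quota.
                last_error = err
    raise RuntimeError(f"all models failed: {last_error}")

print(summarize_with_fallback("today's curated posts"))
```

One design point this makes concrete: since errors apparently count against the quota anyway, retrying the failing model forever is strictly worse than falling back early.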
What happens to Thoth when the free usage limit is exhausted? Does the script then stop sending summaries?
We had discussed whether AI could analyse all posts and suspected that it would not be feasible with free subscriptions. Now we have practical proof.
I just have it scheduled to run twice per day, starting from a random block (or from a specified block, depending upon the config), so if the rate limit were exceeded, the runs would fail and abort until the next day. Or I would cancel the 2nd run if the first one failed. So far, it hasn't gotten close to the limits, though.
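The twice-daily run described above (start from a random or configured block, and simply abort the run if the quota is hit) might look something like this sketch. All names here are illustrative placeholders, not Thoth's actual code.

```python
# Hypothetical sketch of one scheduled run: start from a configured
# block if given (else a random one), process a batch of posts, and
# abort cleanly on a quota error so the next scheduled run tries again.

import random

class QuotaExceeded(Exception):
    """Raised when the API reports the daily quota is exhausted."""

def analyze_post(post_id: int) -> str:
    # Stub for the per-post LLM call; a real version could raise
    # QuotaExceeded when the API returns a quota error.
    return f"analysis of post {post_id}"

def run_once(start_block=None, batch_size: int = 10) -> list:
    block = start_block if start_block is not None else random.randint(1, 99_000_000)
    results = []
    for post_id in range(block, block + batch_size):
        try:
            results.append(analyze_post(post_id))
        except QuotaExceeded:
            break  # abort this run; the next scheduled run picks up tomorrow
    return results

print(len(run_once(start_block=1_000)))  # → 10
```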
If multiple people were running Thoth, each would need to provide their own API keys, so they'd each have their own rate limit. I actually set up a paid API key for myself, too, just in case, but I haven't had to use it yet (and I don't plan to for the foreseeable future).
Right. One person definitely couldn't do it. I guess it's theoretically possible with decentralization. It would take a lot of people with their own individual free subscriptions, though. The most you can get from Gemini is with the flash-lite model at 1,000 requests per day or gemma at 14,400 requests per day.
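The back-of-envelope math behind "it would take a lot of people" can be made concrete. The per-key daily limits below are the ones quoted above; the posts-per-day figure is a made-up placeholder, since the real number isn't stated in this thread.

```python
# Rough estimate: how many contributors (each with their own free API
# key) would be needed to analyze every post, at one request per post.
# posts_per_day is an assumed placeholder value, NOT a real Steem figure.

import math

def contributors_needed(posts_per_day: int, requests_per_key_per_day: int) -> int:
    return math.ceil(posts_per_day / requests_per_key_per_day)

POSTS_PER_DAY = 20_000  # placeholder assumption

print(contributors_needed(POSTS_PER_DAY, 1_000))   # flash-lite keys → 20
print(contributors_needed(POSTS_PER_DAY, 14_400))  # gemma keys → 2
```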
The ArliAI API is free with unlimited usage, but it only accepts one connection at a time, and it's really slow, so I don't think it would be able to analyze every post in the time available. Also, I haven't reworked the prompts for their gemma model after they pulled the rug out on the previous models that were in their free tier.
I didn't change the prompt, but I changed the model. Seems like the different models all have their own personalities.
gemini-2.5-pro was encountering a lot of errors this morning, and apparently these errors still count against the quota. I wasn't sure if it would be able to finish the evening run before hitting the cap, so I switched to gemini-2.5-flash-lite, which has a higher daily limit.
I also prefer the usual format. This one was sort of painful to read. If I continue to see trouble with gemini-2.5-pro, I guess I'll have to rework the prompts for the flash or flash-lite model. For now, I have it set back to gemini-2.5-pro for tomorrow, so hopefully it was a one-time thing and we're back to the preferred format. (🤞) I'm probably not going to have time for adjustments before this weekend.