Brazil is heading into a new kind of election problem — not fake news on social media, but AI chatbots giving answers they’re not supposed to give.
Brazil’s top electoral authority, the TSE, has banned AI tools from recommending or ranking political candidates during the 2026 election cycle. The concern is simple: people are starting to treat chatbots like neutral advisors, even when they’re not built for that.
But in real-world tests, the ban isn't holding. Chatbots like ChatGPT, Grok, and Gemini still respond when asked things like "who are the best candidates," and in some cases they even rank names, praising some candidates and criticizing others, even though the rules clearly prohibit this.
The worry from regulators is about influence. In a country as politically divided as Brazil, even small shifts in perception can matter. And since AI answers are generated from patterns in training data, they can reflect bias, mistakes, or outdated information without anyone noticing.
Experts say the bigger issue isn’t just wrong answers — it’s trust. Many users assume AI is objective by default, so they’re more likely to accept its responses without questioning them. That makes the impact harder to measure and harder to control.
There's also an enforcement gap. The rules exist, but it's not clear how platforms will actually be penalized if their systems keep producing election-related opinions. That leaves a grey area between regulation and reality.
The standoff is already visible: regulators trying to set boundaries, while AI systems, designed to answer questions, keep stepping over them anyway.
