We’ve all been there. You’re typing away when a suggested word pops up that somehow fits exactly what you were trying to say. So you tap it. Of course you do. But new research suggests these little taps can do more than save a few seconds.
A Cornell Tech study published this week found that AI-powered autocomplete suggestions don’t just change how you write; they can shape what you think. And you won’t even notice it happening.

What did the research actually find?
Researchers conducted two large-scale experiments with more than 2,500 participants, asking them to write short essays on socially sensitive topics, including the death penalty, hydraulic fracturing, GMOs, and voting rights for felons.
Some participants received autocomplete suggestions that were secretly designed to lean in a particular direction, generated using large language models from the GPT-3 and GPT-4 families. Others received no suggestions at all.
The results? People who wrote with a biased AI gradually warmed to its position. Not because they were persuaded by an argument. Not because they read something compelling. Simply because their phones kept finishing their thoughts.

Knowing the trick didn’t break the spell.
Now here’s the part where you may want to put your phone down for a moment. The researchers warned some participants in advance that the AI had a bias problem, a sort of “don’t say we didn’t warn you” disclaimer; others were told only afterward. In most misinformation studies, this kind of disclosure works like a mental vaccine. Here, neither warning did anything.
“Their attitudes toward the issue still shifted,” said lead author Mor Naaman. And the reach of autocomplete has exploded: Gmail can now compose entire emails on your behalf.
So the next time your phone suggests that something is “fully supported,” take another look at those little blue words. With a single tap, your opinion can become someone else’s.