The digital craze for artificial intelligence seems to have cooled, with people quitting AI chatbots altogether or refusing to share personal information with them.
According to a new report from Malwarebytes, the fascination with so-called chatbots is slowly but steadily being replaced by a more aware and proactive public that is not only worried about their privacy, but is taking steps to address those concerns.

Will the AI spark disappear by 2026?
In a survey conducted by Malwarebytes, 90% of respondents said they were worried about AI (in some form) using their data without their consent, and a whopping 88% did not freely share personal information with ChatGPT or Gemini.
Surprisingly, 84% of respondents did not share their personal health information with these tools. That’s pretty amazing if you ask me, because I know at least five people who have submitted a recent health checkup report to ChatGPT or Gemini and asked for general help or guidance.
But here’s the most interesting part: 43% and 42% of survey participants stopped using ChatGPT and Gemini, respectively. Those are significant numbers.
Although I’m not part of the surveyed pool, I still rely on these AI tools to summarize a 100-page document or visualize something from text prompts, so both OpenAI and Google should take note of these numbers and the public’s growing concerns about chatbot usage.

Can we sustain the user-AI relationship with better privacy?
Respondents are already taking steps to protect their digital footprint or data from artificial intelligence. According to the survey report, 44% have stopped using Instagram and 37% no longer use Facebook.
The report doesn’t say whether people fear Meta AI using their photos, videos, and chats for training and improvement, but there may be a plausible connection.
On the positive side, 82% of respondents are opting out of data collection whenever possible, 71% are using ad blockers, and 46% are using VPNs. Users are becoming increasingly concerned about the privacy policies of the platforms they use and, where possible, are entering fake or dummy data or turning to personal data removal services.
“This study shows that many people are unclear about exactly how AI is being used for their benefit and its impact on their privacy, leading to distrust and confusion,” the survey report said.