A new study from MIT has found that relying too much on artificial intelligence (AI) tools like ChatGPT could hinder the ability to think clearly and remember information.
Researchers at the university’s Media Lab asked 54 people to complete writing tasks over four sessions. Each person used one of three methods: writing without help, using a search engine, or relying on ChatGPT.
In the final session, people who had used ChatGPT were asked to write without any tools, while those who had worked without help were told to try using the chatbot. The results showed that over 83% of the ChatGPT users could not remember parts of what they had just written.
In a June 18 post on X, Alex Vacca, co-founder of ColdIQ, said the AI tool might not be helping people work better but instead be weakening their thinking. He explained that when ChatGPT handles the task, people seem to forget what was written almost immediately.
To better understand what was happening in the brain, researchers used EEG machines to track brain activity during each task. They found that brain activity dropped the more someone relied on an AI tool.
People who wrote without help showed the highest level of mental effort. Those who used search engines were in the middle. Meanwhile, participants using ChatGPT showed the lowest levels of brain engagement.
The study also discusses a concept known as "cognitive debt": relying on AI tools may save mental effort in the moment, but it can carry long-term downsides. These include weaker problem-solving, less original thinking, and greater susceptibility to outside influence.
Meanwhile, a study published in Nature Human Behaviour on May 19 found that GPT-4 was more persuasive than humans in 64% of debates.