Artificial intelligence-powered writing assistants that autocomplete sentences or offer “smart replies” not only put words into people’s mouths, they also put ideas into their heads, according to new research.
Is that effect any different from the one you’d get from biased references, or biased search results, when doing the research for your writing?
Well of course it would be different. One case involves another author publishing questionable data; the other involves misreading someone else’s published data. Here, though, the implication is that using AI in writing leaves authors not fully in control of what they themselves publish.
All of these are bad, but they don’t necessarily arise on purpose. Still, let’s not add new ways to muddy the already muddied waters of science.
I’d bet it’s more pernicious, because AI suggestions are so easy to incorporate. If you do your own research, you have to think a bit about whether the references or search results might be bad, and you still have to put the information in your own words so that you don’t offend the copyright gods. With AI help, the spelling is correct, the sentences are perfectly formed, the information is plausible, and it’s probably not a straightforward copy, so why not just accept it?
I’ve just read the abstract of the study - but it doesn’t seem to be about people mindlessly copying the AI and producing biased text as a result. Rather, it’s about people seeing the points the AI makes, thinking “Good point!” and adjusting their own opinion accordingly.
So it looks to me like it’s just the effect of certain viewpoints getting more exposure.
Those seem like questions for more research.