“While these uses of GenAI are often neither overtly malicious nor explicitly violate these tools’ content policies or terms of services, their potential for harm is significant.”
Not sure what to make of this article. The statistics are nice to know, but something like this seems poorly investigated:

AI Overview answers in Google Search that tell users to eat glue
Google’s AI has a strength others lack: not only does it allow users to rate an answer, but it can also use Google’s search data to check whether people are laughing at or mocking its results.
The “fire-breathing swans”, the “glue on pizza”, and the “gasoline-flavored spaghetti” have all disappeared from Google’s AI.
Gemini now also uses a draft system in which it reviews and refines its initial answer several times before presenting the final result.