One of my former (and very long-term) freelance gigs, How Stuff Works, has replaced writers with ChatGPT-generated content and also laid off its excellent editorial staff.
It seems that going forward, when articles I wrote are updated by ChatGPT, my byline will still appear at the top of the article with a note at the bottom of the article saying that AI was used. So it will look as if I wrote the article using AI.
To be clear: I did not write articles using ChatGPT.
#AI #LLM #ChatGPT
Do you think that person was signing up for jobs writing for blogs or content farms?
Have you read some low-quality journalism? The whole yellow press can be replaced with GPT and no one would ever see a difference.
Ok, so do you wanna talk about your terrible writing partner in school? Or "yellow press"? Or maybe the topic of the article, which isn't journalism in the slightest? Or how about my point, which was, again, that even bad writers have context, as opposed to an LLM, which is just filling in the arbitrary patterns it's programmed to delineate? Readability is not what I'm talking about.
That’s how you get the room
Removed by mod
Dude, what's with the aggression? We're just having a conversation that floats along. I'm talking about the general capability of LLMs to write text, which is in my opinion comparable to human writing, since, again, a lot of people lack the same things LLM-generated texts are lacking. And I had some examples. No idea what made you so upset.
You brought up several different, unrelated topics and pretty much ignored anything I said to disprove something I never claimed. That is frustrating to deal with.
Except you are the one who responded to me. And if there is a point you made that I overlooked, I will gladly answer it. I also wasn't trying to disprove anything; I just voiced my opinion. I'm not interested in a debate club or winning arguments, just in sharing opinions and trying to understand others.
The top comment is about how LLMs don't comprehend what they're writing, and your first comment (as I read it) was about how LLMs work the way human brains do. My point was that they don't, and why, not about how good or bad humans or machines are at writing, which is what you kept bringing up, hence the frustration.
My first comment was that there are enough humans out there who don't really comprehend what they are writing and often also make shit up as they go. I was not talking about the underlying mechanism, which is rather speculative, since we have little idea how complex functions of the brain, like text generation, work. I was just making a humorous, lighthearted comparison.
Our conversation is a nice illustration of how maybe we as humans aren't as good at understanding text as we might think. (Again, that is a lighthearted comment and not some profound, complex observation.)
To be clear, I'm not talking about underlying mechanisms either, but about the approach to the task. A human writer, even one who is bad at writing and doesn't understand the topic, will approach the writing with a goal and write to that goal and topic. They can even research if they so choose, but even if they are just making things up, there is intent and context there.
An LLM doesn’t have any of that. It literally just generates words that match certain patterns, with no actual purpose or goal. It may have been programmed with a goal in mind, but it doesn’t have one of its own. It can’t reason, it can’t research, it can’t make decisions. I think that is an important distinction that people who are just saying “Who cares? It’s all bad writing anyways” are missing.
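If it helps to make "just generates words that match certain patterns" concrete, here is a rough sketch of a plain greedy decoding loop. It assumes the Hugging Face transformers library and the small public gpt2 checkpoint (not whatever model How Stuff Works is actually using); the point is only that the loop picks a likely next token, appends it, and repeats, with no goal, research, or decision-making step anywhere in it.

```python
# Rough sketch, not any site's actual pipeline: greedy next-token generation
# with the small public "gpt2" checkpoint from the Hugging Face transformers
# library. The model only ever scores possible next tokens; there is no goal,
# research, or decision about the topic anywhere in this loop.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Any prompt works; the loop below only continues the pattern it is given.
ids = tokenizer("The printing press changed publishing because", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                      # generate 20 tokens, one per step
        logits = model(ids).logits           # scores for every possible next token
        next_id = logits[0, -1].argmax()     # greedily take the single most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```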