• drosophila@lemmy.blahaj.zone · 1 day ago

    Being able to summarize and answer questions about a specific corpus of text was a use case I was excited about, even knowing that LLMs can’t really answer general questions or reason logically.

    But if Google search summaries are any indication, they can’t even do that. And I’m not just talking about the screenshots people post; this is my own experience with it.

    Maybe you could run the LLM in an entirely different way, such that you enter a question and it tells you which part of the source text statistically correlates most with the words you typed, instead of trying to generate new text. That way, in the worst-case scenario, it just points you to an irrelevant part of the source text instead of giving you answers that are subtly wrong or misleading.

    Even then, I’m not sure the huge computational requirements make it worth it over Ctrl-F or a slightly more sophisticated search algorithm.
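
    A minimal sketch of what that could look like, using a small sentence-embedding model (via the sentence-transformers library; the model name and sample passages are placeholders, not recommendations):

    ```python
    # Sketch: point to the best-matching passage instead of generating text.
    # Assumes: pip install sentence-transformers; model choice is illustrative.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly

    def best_passage(question: str, passages: list[str]) -> str:
        """Return the passage whose embedding is most similar to the question."""
        q_emb = model.encode(question, convert_to_tensor=True)
        p_emb = model.encode(passages, convert_to_tensor=True)
        scores = util.cos_sim(q_emb, p_emb)[0]  # cosine similarity per passage
        return passages[int(scores.argmax())]

    passages = [
        "Install the package with pip install foo.",          # placeholder text
        "Configuration lives in ~/.config/foo/config.toml.",  # placeholder text
        "The changelog is maintained in CHANGELOG.md.",       # placeholder text
    ]
    print(best_passage("where is the config file?", passages))
    ```

    Since nothing is generated, the failure mode is exactly the one described above: a pointer to the wrong passage rather than a fluent but subtly wrong answer.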

    • anomnom@sh.itjust.works · 4 hours ago

      you enter a question and it tells you which part of the source text statistically correlates most with the words you typed, instead of trying to generate new text. That way, in the worst-case scenario, it just points you to an irrelevant part of the source text instead of giving you answers that are subtly wrong or misleading.

      Isn’t this what the best search engines were doing before the AI summaries?

      The main problem now is the proliferation of AI “sources” that are really just keyword-stuffed junk websites that take over the first page of search results. And that’s apparently a difficult or unprofitable problem for the search algorithms to solve.

      • drosophila@lemmy.blahaj.zone · 3 hours ago

        That’s what Google was trying to do, yeah, but IMO they weren’t doing a very good job of it. Really old Google search was good if you knew how to structure your queries, but then they tried to let you ask plain-English questions instead of having to think about your keywords, and that ruined it. And you also couldn’t run it against your own documents.

        LLMs, on the other hand, are so good at statistical correlation that they’re able to pass the Turing test. They know what words mean in context (insofar as they “know” anything) instead of just matching keywords and a short list of synonyms. So there’s reason to believe that if you could see which parts of the source text the LLM considers most similar to a query, the results could be pretty good.

        There is also the possibility of running one locally to search your own notes and documents. But like I said, I’m not sure I want to max out my GPU just to do a document search.
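
        For what it’s worth, a retrieval-only setup wouldn’t need the full generative model. One way that could look, assuming plain-text notes and an illustrative model name and file layout: embed the corpus once and cache it, so each query costs one small embedding plus a dot product, which runs fine on CPU.

        ```python
        # Sketch: index a folder of notes once, then query cheaply, no generation.
        # Assumptions: plain-text Markdown notes, paragraph-level chunks,
        # illustrative model name and paths.
        from pathlib import Path
        import numpy as np
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("all-MiniLM-L6-v2")

        # One-time cost: embed every paragraph; cache so re-runs can skip this.
        chunks = []
        for f in Path("notes").rglob("*.md"):
            chunks += [(f, p) for p in f.read_text().split("\n\n") if p.strip()]
        vecs = model.encode([p for _, p in chunks], normalize_embeddings=True)
        np.save("notes_index.npy", vecs)

        # Per-query cost: one small embedding plus a dot product.
        def search(query: str, top_k: int = 3):
            q = model.encode(query, normalize_embeddings=True)
            best = np.argsort(vecs @ q)[::-1][:top_k]
            return [chunks[i] for i in best]

        for path, para in search("backup procedure"):
            print(path, "->", para[:80])
        ```

        The index only needs recomputing when the notes change, so the per-query cost stays far below running a generative model.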

    • AA5B@lemmy.world · 7 hours ago

      Even the success case is a failure. I’ve had several instances where Google returned a nice step-by-step how-to that answered a user’s question correctly, but I can’t forward the link and trust that they’ll see the same thing.

    • Flying Squid@lemmy.world · 1 day ago

      Multiple times now, I’ve seen people post AI summaries of articles on Lemmy that leave out really, really important points.

    • MagicShel@lemmy.zip · 1 day ago

      Well, an example of something I think it could solve would be: “I’m trying to set this application up to run locally. I’m getting this error message. Here are my configuration files. What is not set up correctly, or if that’s not clear, what steps can I take to provide more helpful information?”

      ChatGPT is always okay at that as long as everything is set up according to the most common scenarios, but it tells you a lot of things that don’t apply or are wrong in your specific case. I would like answers that are informed by our specific setup instructions, security policies, design standards, etc. I don’t want to have to repeat “this is a Java Spring Boot application running on GCP, integrating with Redis on Docker… blah blah blah”.
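
      As a rough sketch of that idea with the OpenAI Python client, the project context can be pinned in one reusable system message so it isn’t retyped per question (the model name and context text here are placeholders):

      ```python
      # Sketch: a reusable system message carrying the project context.
      # The model name and context text are placeholders.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      PROJECT_CONTEXT = (
          "Java Spring Boot application running on GCP, integrating with "
          "Redis on Docker. (Setup instructions, security policies, and "
          "design standards would go here.)"
      )

      def ask(question: str) -> str:
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {"role": "system", "content": PROJECT_CONTEXT},
                  {"role": "user", "content": question},
              ],
          )
          return resp.choices[0].message.content

      print(ask("I get this error on startup: <paste error>. What is misconfigured?"))
      ```

      A custom GPT is effectively this system message managed for you; the policy question of what’s allowed into that context is the same either way.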

      I can’t say whether it’s worth it yet, but I’m hopeful. I might do the same with ChatGPT and custom GPTs, but since I use my personal account for that, uploading company files to it is on very shaky ground, and I couldn’t share the result with the team anyway. It’s great for questions that don’t require specific knowledge, but I think I’d be violating company policy by uploading anything.

      We are encouraged to use NotebookLM, however.