"FreshLLMs"
Research summary of FreshLLMs, a paper introducing search engine-augmented prompting to improve LLM factuality and reduce hallucinations on current world knowledge questions.
13 posts
Towards minimizing harm from “hallucinations” and other baloney.
How might Google support users to calibrate trust within the tool itself?
Shares a response from Bard, with brief additional comments on running Will Knight's "hallucination test".
How do you share about misinformation without spreading it? How do you link to the outputs of chatbots and generative search engines without deceiving folks?
Exploring the challenges of differentiating LLM-generated content in search results and proposing possible actions.
Sharing an example hallucination test.
The briefest introduction.
Thinking about hallucination with Klosterman, Leahu, Munk et al., Rettberg, and Powles & Nissenbaum.
@simonw shares an example of misleading claims from Claude 2.
Claude 2 fails my Claude Shannon hallucination test, producing a summary of a non-existent publication.
Warnings and ClaimReview?