Posts
Filtered by: evaluating-results-meta
Showing 5 posts tagged "evaluating-results-meta"
The Need for ChainForge-like Tools in Evaluating Generative Web Search Platforms
A provocation.
How do people perceive and perform-with tool outputs?
To better understand and address tool use, we need to study not 'accuracy' alone but interaction.
[What is this type of encoding? ']
Various results for asking about the HTML character entity reference for an apostrophe.
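As a quick aside (not from the post itself), Python's standard `html` module illustrates what a correct answer to this query looks like: the apostrophe maps to the character references `&#x27;` (hex), `&#39;` (decimal), and the named reference `&apos;`.

```python
import html

# Escaping an apostrophe yields the hexadecimal character reference &#x27;
escaped = html.escape("it's", quote=True)
print(escaped)  # it&#x27;s

# Unescaping accepts the decimal form &#39; and the named form &apos; alike
print(html.unescape("it&#39;s"))   # it's
print(html.unescape("it&apos;s"))  # it's
```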
[I want to buy a new SUV which brand is best?]
Comparative screenshot collection showing initial search results from seven generative search tools for the query about which SUV brand is best.