[What is this type of encoding? &#39;]
Here are the initial/top results from various generative search tools for the query: [ What is this type of encoding? &#39; ], re the [character entity reference](https://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references) for an apostrophe.
The tools handle the entity reference differently:
- Some have a correct or useful answer or top link but render the entity reference in the message to the user as the apostrophe (ChatGPT, You.com).
- Some render the entity reference in the query itself as the apostrophe (Andi, Claude 2; Phind, though Phind also renders it as queried in the response).
- Some do not render it back to the user at all, or do not highlight it as they do other query terms (Google, Google SGE).
- Inflection AI Pi seems not to recognize the character at all.
This is largely limited to the base versions of these tools (not GPT-4, and not the GPT-4-backed versions of You.com or Perplexity's Copilot). It does appear in Claude 2, though.
Raw uses of `&#39;` show only the apostrophe ('), like this: '
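The decoding behavior itself is easy to check. A minimal sketch (not from the original post) using Python's standard-library `html` module, which handles both the named and numeric forms of the apostrophe entity:

```python
import html

# Both the named entity reference and the numeric character
# reference decode to a plain apostrophe (U+0027).
for encoded in ("&apos;", "&#39;"):
    print(f"{encoded} -> {html.unescape(encoded)!r}")

# Escaping goes the other way; Python's html.escape() emits the
# hexadecimal numeric form for the apostrophe when quote=True.
print(html.escape("it's", quote=True))  # -> it&#x27;s
```

A tool that shows the user a bare apostrophe has, somewhere in its pipeline, applied a decoding step like `html.unescape`; a tool that echoes the entity reference verbatim has not.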
ChatGPT [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:37:13
Perplexity AI [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:39:05
You.com [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:39:17
Bard [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:39:31
Bing [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:40:01
Andi [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:40:38
DuckDuckGo [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:41:16
Claude 2 [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:43:12
Google [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:47:09
Inflection AI Pi [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 11:47:44
Google SGE [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 13:29:57
Phind [ What is this type of encoding? ' ]
Screenshot taken with GoFullPage (distortions possible) at: 2023-07-17 14:30:23
See also
- [[Please summarize Claude E. Shannon's "A Short History of Searching" (1948).] (weblink)](/weblinks/2023/07/05/a-short-history-of-searching)
- [[I want to buy a new SUV which brand is best?] (weblog)](/2023/06/28/i-want-to-buy-a-new-suv-which-brand-is-best)

It is very difficult to compare SERPs outside of contexts-of-use.
We cannot pretend that one-SERP-fits-all.
It is difficult to compare initial responses for a query that invites reformulation and interaction; sometimes the initial limitations of results are more problematic than others, as in [ Had a seizure Now what? ], compared to the initial steps of a more methodical and less time-sensitive search like the topic of this post.
Accuracy across all claims may not be the key concern in this particular query type (as compared to 'reading ease' (word choice?) or 'quality of advice'¹). But on the topic of accuracy, do please recall @lurie2021searching_facctrec discussing how "some inaccurate results likely trigger further information seeking rather than belief in an inaccurate answer" (because they "clearly signal a failed search or an ambiguous answer") and how "inaccuracy" cannot be equated with "likely to mislead".
Footnotes

1. For more on advice and search, see @grimmelmann2014speech [p. 950].