File:ChatGPT is bullshit, s10676-024-09775-5.pdf

From Wikimedia Commons, the free media repository
Original file(1,239 × 1,645 pixels, file size: 735 KB, MIME type: application/pdf, 10 pages)

Summary

Description
English: Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
Date: 11 July 2024
Source: https://link.springer.com/article/10.1007/s10676-024-09775-5
Author: Michael Townsen Hicks, James Humphries & Joe Slater

Licensing

This file is licensed under the Creative Commons Attribution 4.0 International license.
You are free:
  • to share – to copy, distribute and transmit the work
  • to remix – to adapt the work
Under the following conditions:
  • attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
This file, which was originally posted to an external website, has not yet been reviewed by an administrator or reviewer to confirm that the above license is valid. See Category:License review needed for further instructions.

File history

Click on a date/time to view the file as it appeared at that time.

Date/Time: 10:10, 24 September 2024 (current)
Dimensions: 1,239 × 1,645, 10 pages (735 KB)
User: Yann (talk | contribs)
Comment: {{Information |Description={{en|Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the...

Metadata