User:Gertrude16/Chatgpt/Ebigras22 Peer Review

General info

 * Whose work are you reviewing?

Gertrude16, Doopdole, Aley.v17, Japhaplen


 * Link to draft you're reviewing
 * https://en.wikipedia.org/wiki/User:Gertrude16/Chatgpt?veaction=edit&preload=Template:Dashboard.wikiedu.org_draft_template


 * Link to the current version of the article (if it exists)
 * ChatGPT

Evaluate the drafted changes
- "Can machines think?" is a question that is...

- Is it a bit redundant to mention that it's a machine? Also, it's a predictive-model AI

- Remove the question mark

- Is the Turing test philosophical?

- The test is to see whether or not a machine can fool a human into thinking that it's human too, not to test whether the machine can think.

- Since Chat GPT is a predictive model, it only writes what it predicts to be the most probable string of words to follow.

- Take out one of the "howevers" in the second paragraph.

- Why "unfortunately"? No opinions in the article

- Repetition of "but"

- Repeating that Chat GPT may think in a different way to humans (but good argument)

- Turing responded... he's dead (also a good argument)

- Repetition of "thinking", I know it's hard to find a synonym though

- Individual paragraphs don't need conclusions (also no conclusions in wiki articles)

- Repetition of "objection"

- Maybe use a colon?

- First

- Typo

- Good argument

- Talk in absolutes

- ??

- Repetition of "arises" (https://www.thesaurus.com/)

- Punctuation

- "We"? Avoid first person

- Add hyperlink to solipsism article

- Repetition of "argument" and "arises"

- Repetition of "more than one test"

- Add hyperlinks to all the other tests

- Good article overall! Mainly grammar mistakes, but the content is solid.
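To illustrate the "predictive model" point above (that Chat GPT only emits what it predicts to be the most probable next words), here is a deliberately tiny sketch of my own: a bigram model that counts which word follows which in a toy corpus and always emits the most frequent follower. This is a hypothetical illustration, not how GPT actually works internally (GPT uses a neural network over subword tokens, not raw counts):

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which
# in a small corpus, then emit the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

The point for the review: a model like this has no opinion and no understanding, it just continues text statistically, which is why the draft's framing of whether such prediction counts as "thinking" matters.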

Can Chat GPT think?
If machines are able to think is a question that is consistently asked as machines are improving.* Chat GPT is one of these machines*, so as it produces written work for us many question if it is actually thinking?* Many philosophical tests can be applied to Chat GPT to understand if it can think. One of these tests is the Turing Test made by Alan Turing. This test was developed to see whether a machine can think*. The basic concept of this test is to see whether a machine can fool an investigator into thinking that he is a human and if it can, Turing suggests that machines can think*. According to the Turing Test, if Chat GPT can think, it would need to be able to have a conversation because this requires one to have to reflect on what to respond to, which would indicate that Chat GPT is thinking.*

Unfortunately, the Turing Test has many objections*. The first objection that arises is if the test is humancentric. The Turing Test seems to be forcing Chat GPT (or another machine) to think like a human, but perhaps Chat GPT can think, but* in a different way than a human does. This would cause Chat GPT to fail the Turing Test and would not be considered able to think, however perhaps Chat GPT does think just in a different way than a human does*. However, Turing responds* to this objection and states that he was only trying to offer a sufficient condition and not a necessary one for a machine to think. So, if the Chat GPT does not pass the test, we shouldn’t conclude that it couldn’t think because it is not a necessary condition for it to think. To conclude, even if Chat GPT would not pass the Turing Test, it still might be able to think*.

The second objection is Lady Lovelace’s objection which states that for Chat GPT to think it needs to use originality or creativity and if Chat GPT can pass the Turing Test it would only mean that it has good programming. There are two ways of interpreting this argument.* One*, Chat GPT (or another machine) would need to be able to do something new to prove that it can think. Turing’s argument to this interpretation is that even us as humans do not always do something new, but we are considered as being able to think. The second way of interpreting this argument is that Chat GPT only does things that its programmers have told it too*, so the Turing Test wouldn't be testing the machine, it would be testing the programmers*. Turing’s counter-argument is that humans are also programmed to do certain things, even if we don’t call it programming. For example, saying please and thank you. So it seems* that Chat GPT would still be considered able to think even after Lady Lovelace’s objection.

The final objection that arises is the argument from consciousness. Many believe that for something to think they have to have consciousness, however, the Turing Test only tests how Chat GPT would behave, and does not seem to test if it has consciousness or not. In this objection, it matters what is happening inside Chat GPT and it is not enough to say that Chat GPT would behave as if it was thinking*. So the Turing Test would not accurately test if Chat GPT can think or not because it does not test its consciousness. One argument that arises from this,* is that we* don’t really know what is happening inside Chat GPT. However, Turing explains that if we believe this argument, we might fall into believing solipsism*. He states that just like we don’t know what’s going on in Chat GPT and if it’s thinking or not, we also don’t know what’s going on in another human and if they are actually thinking or not. Another argument that arises from the argument of consciousness is Susan Scneider’s argument*. She states that when diagnosing a medical illness, doctors often use more than one test, so while trying to figure out if Chat GPT (or another machine) is conscious or not, we should try to use more than one test*. She makes a couple of suggestions on what tests we can use, one being the ACT (the AI Consciousness Test). The ACT tests if Chat GPT has developed its own views and experiences. To test this, we would first have to make sure Chat GPT hasn’t had any previous information programmed into it about consciousness, then we would be able to ask it questions about how it feels to exist, how it experiences certain things, etc.. Scneider also suggests the ACT as a sufficient condition and not a necessary one to consciousness. In other words, if Chat GPT doesn’t pass it does not mean that it does not have consciousness.

Sources to be added