Talk:Eugene Goostman

Tagged with "One Source"

This article seems to rely very heavily on the opinions, publications and related activities of Kevin Warwick, who is a controversial figure ([1]) in AI and cybernetics. This article also needs to cite acclaim from people, prizes, organisations, institutions and publications which are not affiliated with Kevin Warwick. It's not entirely one source, but it is very, very heavily skewed towards one source. Andrew Oakley (talk) 14:17, 9 June 2014 (UTC)[reply]

The NewScientist source is actually from 2012, although it's possibly taking a press release at face value in its definition of the Turing test - from the Turing test article itself, the 30% target was just something Turing expected to be possible by 2000, but NS claims "Turing said that a machine that fooled humans into thinking it was human 30 per cent of the time would have beaten the test.". I've cut the claim that Turing said this in his Computing Machinery and Intelligence, as the NS source doesn't mention this, but it looks like "30% success rate is the test" might be a problematic statement, and one that Warwick has perpetuated. --McGeddon (talk) 15:48, 9 June 2014 (UTC)[reply]

30% claim

If we can find a good source, I think the WP article should be quite clear that AMT (Alan Turing) never said that convincing 30% of an (unspecified) audience was the criterion for passing the test. Swedish mathematician Olle Häggström just added a footnote to his sceptical blog entry about the current topic (http://haggstrom.blogspot.se/2014/06/om-turingtestet.html), but he added it at my suggestion, so I’d feel uncomfortable about using that as a source. Thore Husfeldt (talk) 17:07, 9 June 2014 (UTC)[reply]

This is FAKE

Read this- http://www.neowin.net/news/that-claim-that-a-computer-passed-the-turing-test-was-crap---and-heres-why It's not a Princeton University project at all! 129.180.139.48 (talk) 11:13, 10 June 2014 (UTC)[reply]

It's a fake: [2] GermanX (talk) 06:04, 13 June 2014 (UTC)[reply]

Yes, the article already has four full paragraphs about how Warwick was wrong to announce that Eugene had "passed the Turing test". --McGeddon (talk) 08:32, 13 June 2014 (UTC)[reply]

Criticisms Category?

There seems to be a significant amount of debate about the abilities of this chatterbot; maybe a criticisms category in the article would be fitting? — Preceding unsigned comment added by Nlesbirel (talkcontribs) 14:32, 10 June 2014 (UTC)[reply]

There are many valid criticisms. For instance, how would a human reply with 57 typewritten (keyboarded) words in less than one second? 98.110.67.68 (talk) 13:36, 12 June 2014 (UTC)The Honourable Ronald Adair[reply]
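
A rough back-of-the-envelope calculation makes the point in the comment above concrete (the 57-word figure comes from that comment; the 40–80 words-per-minute range for an ordinary typist is an assumed benchmark, not from any cited source):

```python
# Rough arithmetic only: the typing speed implied by producing 57 words
# in under one second, compared with an assumed range for human typists.
words = 57
seconds = 1.0
implied_wpm = words / (seconds / 60)   # words per minute implied by the reply
typical_wpm_range = (40, 80)           # assumed range for an ordinary typist

print(f"Implied speed: {implied_wpm:.0f} wpm")                                        # 3420 wpm
print(f"Typical human range: {typical_wpm_range[0]}-{typical_wpm_range[1]} wpm")
print(f"Times faster than a fast typist: {implied_wpm / typical_wpm_range[1]:.0f}x")  # ~43x
```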
We do not normally use the word "criticism" because it is non-neutral. ViperSnake151  Talk  15:15, 12 June 2014 (UTC)[reply]

Is "critique" neutral?98.110.67.68 (talk) 16:24, 12 June 2014 (UTC)The Honourable Ronald Adair[reply]

"Reception" or "response" is usually the way to go. --McGeddon (talk) 16:33, 12 June 2014 (UTC)[reply]

Criticism is often used, it's neutral I think. WikipediaUserCalledChris (talk) 12:33, 3 January 2017 (UTC)[reply]

Please note the lang ...

... i.e. in what, how this thing is coded. Virtually certain it isn't any of the received AI langs. Lycurgus (talk) 17:44, 13 June 2014 (UTC)[reply]

Couldn't find anything behind it, guess it's super-sekrit. Prolly some inferences could be made running down the individuals concerned, what's available in eliza/bot FOSS, and the history thru the earlier competitions entered. Prolly also not worth spending more time on. 108.183.102.223 (talk) 05:26, 15 June 2014 (UTC)[reply]

Validity of pass of Turing Test

The validity of the pass of the Turing Test is not affected by any "exaggeration of the achievement", nor is the "bot's use of personality and humour in an attempt to misdirect users from its non-human tendencies and lack of actual intelligence" relevant. These are all irrelevant. It either passed the test or it did not. Other programmes may have passed the test in the past - that does not invalidate this test. Royalcourtier (talk) 20:54, 13 June 2014 (UTC)[reply]

Are you a sockpuppet of Warwick or something? It's disputed because these people are trying to frame a simple prediction made by a scientist as a test of apparent intelligence. This is not a supercomputer, it doesn't think. It just watches for keywords and responds appropriately, and pulls out a zinger if it doesn't know what you were talking about. ViperSnake151  Talk  01:34, 14 June 2014 (UTC)[reply]
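
For readers unfamiliar with the keyword-plus-fallback pattern described in the comment above, here is a minimal sketch in Python. It is not Eugene Goostman's actual code (which, per the thread above, does not appear to be public); the trigger words and canned lines are invented purely for illustration:

```python
import random

# Toy keyword-matching chatbot: scan the input for known trigger words and
# return a canned reply; if nothing matches, deflect with a "zinger" that
# steers the conversation back to the user instead of admitting confusion.
KEYWORD_REPLIES = {
    "weather": "I never discuss the weather - it's boring, isn't it?",
    "odessa": "Odessa is a great city. Have you ever been there?",
    "guinea pig": "My guinea pig Bill says hello, by the way.",
}

FALLBACK_ZINGERS = [
    "That's a tricky question. And what do you do for a living?",
    "I'd rather not answer that right now. Where are you from, by the way?",
]

def reply(user_input: str) -> str:
    text = user_input.lower()
    for keyword, canned in KEYWORD_REPLIES.items():
        if keyword in text:
            return canned
    # No keyword matched: misdirect rather than attempt an answer.
    return random.choice(FALLBACK_ZINGERS)

print(reply("What do you think about the weather today?"))
print(reply("Which is bigger, a shoebox or Mount Everest?"))
```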
Agree. I was surprised at the low bar set here. I understand the matter of Turing's prediction, but for one thing it's 15 years late. Second, two thirds of the judges saw it was a machine; you'd expect the reverse to be the standard, with only some low percentage of highly perspicacious (or randomly selecting) judges able to tell, not 67%. Seems like a massive cheat on the concept. And I'm pretty sure it's, as you say, a showcase piece of what people are saying is passing for AI but is in fact nothing more than cheap tricks, or, in the case of something like Watson, massive brute-force engineering on a narrow problem. This looks more like simple grifters gaming the letter of the Turing paper and their willing institutional and media dupes. 198.255.198.157 (talk) 05:57, 14 June 2014 (UTC)[reply]
WP:NOTFORUM. --Demiurge1000 (talk) 13:50, 15 June 2014 (UTC)[reply]
Acknowledged/Agreed Du1000, there is more than enough content on the controversy vs. the Test and nothing in re the thread above which I started. Effort on the former better directed to the latter I think. 108.183.102.223 (talk) 15:07, 15 June 2014 (UTC)[reply]

This sentence is confusing

"Each judge simultaneously participated in five textual conversations, each of them between a human and one of the five competing bots."

The first part says five simultaneous conversations, while the second part says that only a human and a bot were answering. Does this mean that the human was replying in four of the five conversations, or that someone made a mistake in writing this? Or does it mean that the five conversations were held between a judge and the five bots? If it's the first or last one, it needs to be made clearer. If it's the second one, it needs to be fixed entirely.

Assuming it's either the first or last one, I propose the following wording:

First option: "Each judge simultaneously participated in five textual conversations, each of them between a human, who was controlling four out of five conversations, and one of the five competing bots."

Second option: "Each judge simultaneously participated in five textual conversations, each of them between a judge and one of the five competing bots."

Blaziken (T-C) 20:10, 19 June 2014 (UTC)[reply]

Clear up the matter of fact with one or more sources and then make the needed grammatical correction ad libitum. 108.183.102.223 (talk) 14:33, 21 June 2014 (UTC)[reply]
I believe your user page showed you to be a Francophone and not a native speaker of English. If you have determined the matter of fact and want me to compose the text, advise. 108.183.102.223 (talk) — Preceding undated comment added 22:23, 21 June 2014 (UTC)[reply]
Where did you get that from? I haven't even created a user page, though you are right about me not being a native speaker of English. Regardless, this discussion is moot. Someone else fixed it.
Blaziken (T-C) 16:16, 1 July 2014 (UTC)[reply]

Example text

I think adding a short example of an exchange with the program would illustrate far more than a mere description could. For example:

Example conversation

Scott: Which is bigger, a shoebox or Mount Everest?
Eugene: I can’t make a choice right now. I should think it out later. And I forgot to ask you where you are from…
Scott: How many legs does a camel have?
Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Augurar (talk) 06:31, 12 September 2014 (UTC)[reply]

Oh, what the hell, I'll just add it and if anyone objects they can discuss it. Augurar (talk) 06:35, 12 September 2014 (UTC)[reply]
Scott Aaronson's credentials appear to pass WP:BLOGS. Because the program's behavior is the whole point of the article, an illustrating quote is valuable. I'd keep it. —Waldhorn (talk) 07:31, 12 September 2014 (UTC)[reply]
Personally I feel that example is so bad it is misleading. This from Time magazine might be a better representation. It is still very clearly a chatbot; I can't see how anyone could be fooled. https://time.com/2847900/eugene-goostman-turing-test/ 2.98.102.170 (talk) 14:24, 6 April 2023 (UTC)[reply]

Criticisms

At present some of the criticisms made with regard to Eugene Goostman's performance in the Turing test appear to be questionable. 1. PC Therapist was in no way a general (unrestricted) conversation machine - its name gives it away. 2. The impression is clearly given that Cleverbot tricked 59.3% of the assessors. This was not the case: in the competition the audience were asked to give a score out of 100 as to how human they thought a conversational entity was, and on average Cleverbot scored 59.3%. This is apparent on the Cleverbot page, but here it seems to be misrepresented on the Eugene Goostman page. There was also no parallel comparison of Cleverbot with a human. Whilst the results obtained provide interesting information, they are somewhat removed from the Turing test. Should they therefore appear on this page, given they have little to do with Eugene Goostman? TexTucker (talk) 09:12, 22 March 2017 (UTC)[reply]
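
The distinction drawn in point 2 above is easy to blur, so a toy calculation may help (the per-judge scores below are invented for illustration; only the 59.3% average for Cleverbot comes from the comment above):

```python
# Illustrative only: an "average humanness score of 59.3/100" is not the
# same claim as "59.3% of assessors were fooled into thinking it was human".
scores = [98, 96, 94, 92, 90, 30, 28, 25, 22, 18]   # hypothetical scores out of 100

mean_score = sum(scores) / len(scores)
fraction_rated_human = sum(1 for s in scores if s > 50) / len(scores)

print(f"Average score: {mean_score:.1f}/100")                                  # 59.3/100
print(f"Fraction rating it more human than not: {fraction_rated_human:.0%}")   # 50%
```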

Source

An IP editor has suggested that this NBC News article could be used as a source in the article: [3]. --JBL (talk) 12:43, 31 July 2020 (UTC)[reply]