An experiment that saw mental health support provided to about 4,000 people using an artificial intelligence (AI) chatbot has been met with severe criticism online, over concerns about informed consent.
On Friday, Rob Morris, co-founder of the social media app Koko, announced the results of an experiment his company had run using GPT-3.
Koko allows people to post their problems to other users, who then attempt to help them "rethink" the situation, in what has been likened to a form of cognitive behavioral therapy. For the experiment, Koko users could choose to have an AI helper compose responses to other human users, which they could then use (or alter or replace if necessary).
" Messages composed by AI ( and supervised by humans ) were rated significantly higher than those written by humans on their own ( p < .001 ) . Response times go down 50 % , to well under a minute , " Morris wrote on Twitter .
" And yet … we pulled this from our platform pretty speedily . Why?“he added . " Once people learned the message were co - created by a machine , it did n’t crop . faux empathy feel uncanny , empty . "
While he went on to suggest that this could be because language models – essentially really, really good autocomplete – do not have human lived experience and so their responses come across as inauthentic, a lot of people focused on whether the participants had provided informed consent.
In a later clarification, Morris stressed that they "were not pairing people up to chat with GPT-3, without their knowledge".
People have pointed out that it is not clear how the statement "everyone knew about the feature" squares with the claim "once people learned the messages were co-created by a machine, it didn't work".
Morris told Insider that the study was "exempt" from informed consent law, pointing to a previous study by the company which had also been exempt, and that "every individual has to provide consent to use the service".
" This bring down no further hazard to users , no deception , and we do n’t collect any in person identifiable information or personal health information ( no e-mail , phone turn , informatics , username , etc ) , " he lend .
A closer look at the methodology would help to clarify when informed consent was given and when the participants learned that responses could have been created by (human-supervised) AI. However, it is unclear at this stage whether the data will be published, with Morris now describing it to Insider as not a university study, "just a product feature explored".