A new report from plagiarism detector Copyleaks found that 60% of OpenAI’s GPT-3.5 outputs contained some form of plagiarism.
Why it matters: Content creators, from authors and songwriters to The New York Times, are arguing in court that generative AI trained on copyrighted material ends up spitting out exact copies.
A genuine question: How well do ChatGPT and other models add citations if asked?
ChatGPT itself doesn’t know where it got the info from, so it makes up links and names - it’s a language model, not a search engine.
On the other hand, if you find a reputable source yourself and give it the relevant metadata, it can format a nice citation for you, which does save some time.
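As a rough illustration of that workflow (a minimal sketch, not anyone's official method): the model name, prompt wording, and metadata fields below are all assumptions, using the OpenAI Python client.

```python
# Minimal sketch: ask a chat model to format a citation from metadata you found yourself.
# The metadata values here are placeholders, not a real reference.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

metadata = {
    "authors": "Doe, J.; Smith, A.",
    "title": "An Example Paper on Language Models",
    "journal": "Journal of Hypothetical Studies",
    "year": "2023",
    "doi": "10.0000/example.doi",
}

prompt = (
    "Format this source as an APA 7 reference list entry:\n"
    + "\n".join(f"{k}: {v}" for k, v in metadata.items())
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The key point is that you supply the source details; the model is only doing formatting, not recall.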
Badly. This burns my laziest students every semester. ChatGPT just adds nonsense citations.
Microsoft’s Copilot adds them, which is why I prefer to use it.
Copilot is GPT under the hood; it just starts with a search step that finds (hopefully) relevant content and then passes it to GPT for summarization.
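Copilot's internals aren't public, but here's a toy sketch of the general search-then-summarize pattern the comment describes. The search_web() helper is a hypothetical placeholder for whatever search API you have; the model name and prompt wording are also assumptions.

```python
# Toy sketch of search-then-summarize (not Copilot's actual implementation).
from openai import OpenAI

client = OpenAI()

def search_web(query: str) -> list[dict]:
    """Hypothetical placeholder: swap in a real search API that returns URLs and snippets."""
    return [
        {"url": "https://example.com/article", "snippet": "Placeholder snippet about the topic."},
    ]

def answer_with_citations(question: str) -> str:
    results = search_web(question)
    # Number each retrieved snippet so the model can cite it as [1], [2], ...
    sources = "\n".join(
        f"[{i}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results, start=1)
    )
    prompt = (
        "Answer the question using only the numbered sources below, "
        f"and cite them like [1].\n\nSources:\n{sources}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because the URLs come from the search step rather than the model's memory, the citations point at real pages instead of hallucinated ones.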
There are custom GPTs for that. ScholarAI and Consensus are OK.
Perplexity AI includes citations every time.