We strive to maximize the effectiveness of our detector while keeping our false positive rate (incorrectly identifying fully human-written text as AI-generated) under 1% for documents with over 20% AI writing. In other words, we might flag one out of every 100 fully human-written documents as AI-written. To bolster our testing framework and diagnose statistical trends of false positives, in April 2023 we performed additional tests on 800,000 academic papers written before the release of ChatGPT to further validate our less than 1% false positive rate.

In order to maintain this low 1% false positive rate, there is a chance that we might miss 15% of the AI-written text in a document. We're comfortable with that, since we do not want to incorrectly highlight human-written text as AI-written. For example, if we identify that 50% of a document is likely written by an AI tool, it could contain as much as 65% AI writing.

We're committed to safeguarding the interests of students while helping institutions maintain high standards of academic integrity. We will continue to adapt and optimize our model based on our learnings from real-world document submissions, and as large language models evolve, to ensure we maintain this less than 1% false positive rate.

Since the launch of our solution in April, we have tested 800,000 academic papers that were written before the release of ChatGPT. Based on the results of these tests, we made the updates below to our model in May to ensure we hold steadfast to our objective of keeping our false positive rate below 1% for a document.

- Increased the minimum word count from 150 to 300 words. Based on our data and testing, we increased the minimum word requirement from 150 to 300 words for a document to be evaluated by our AI writing detector. Results show that our accuracy increases with just a little more text, and our goal is to focus on long-form writing. We may adjust this minimum word requirement over time based on continuous evaluation of our model.
- Added an additional indicator for documents with less than 20% AI writing detected. We learned that AI writing detection scores under 20% have a higher incidence of false positives. This is inconsistent behavior, and we will continue to test to understand the root cause. To reduce the likelihood of misinterpretation, we have updated the AI indicator button in the Similarity Report to contain an asterisk for percentages below 20%, calling attention to the fact that the score is less reliable.
- Changed how we aggregate sentences at the beginning and end of a submission. We observed a higher incidence of false positives in the first few and last few sentences of a document, which are usually the introduction and conclusion.
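The two reporting rules described above (a 300-word minimum before a document is evaluated, and an asterisk on scores under 20%) can be sketched as a small gating function. This is a hypothetical illustration, not Turnitin's implementation: the function name, the label strings, and the word-splitting logic are all assumptions; only the 300-word and 20% thresholds come from the post.

```python
MIN_WORDS = 300        # minimum word count for evaluation (per the update above)
LOW_SCORE_CUTOFF = 20  # scores below 20% are marked as less reliable

def format_ai_indicator(text: str, ai_percent: float) -> str:
    """Return a hypothetical indicator label applying the two policies:
    skip documents under the word minimum, asterisk low scores."""
    word_count = len(text.split())
    if word_count < MIN_WORDS:
        return "not evaluated (under 300 words)"
    if ai_percent < LOW_SCORE_CUTOFF:
        # The asterisk calls attention to the higher false-positive
        # incidence observed for scores under 20%.
        return f"{ai_percent:.0f}%*"
    return f"{ai_percent:.0f}%"

short_doc = "too short " * 10  # 20 words
long_doc = "word " * 400       # 400 words
print(format_ai_indicator(short_doc, 55))  # not evaluated (under 300 words)
print(format_ai_indicator(long_doc, 12))   # 12%*
print(format_ai_indicator(long_doc, 55))   # 55%
```

Gating on length before scoring at all, rather than scoring and discarding, matches the post's rationale: short texts simply do not carry enough signal for a reliable score.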