A well-rounded approach to avoid and detect AI-generated content in your applications

Mar 17, 2023 | Article


The release of ChatGPT in November of 2022 sent ripples throughout the globe. Humans had done it: created an artificial intelligence-powered chatbot that uses a machine learning model to scan through billions of data points and spit back out relevant content in a conversational tone that sounds human.  

Excitement ensued. The powerful new chatbot saw more than one million users in its first five days.

By January, just two months after its launch, ChatGPT had reached 100 million monthly active users, becoming the fastest-growing consumer application in history. Data from Similarweb showed that the service had 13 million unique users per day in January, more than double its December levels.

To put that in context, it took TikTok nine months to reach 100 million users. It took Instagram more than two years.

But, as excitement has grown, so have new fears. Universities and academics across the globe are crying foul—how can they ever accept essay submissions again? How will scholarship candidates be reviewed? Will there be a way to protect original content? Will there be a way to detect AI-generated copy?

AI is the new plagiarism. But it’s a smarter, faster and better option for those looking to create content quickly.

It’s undoubtedly a scary situation for institutions that review and assess scholarly submissions. But, let’s stop for a moment. Take a deep breath. Yes, there are tools out there to catch AI-generated content. And yes, there is so much you can do to promote and demand original content.

Let’s take a look at AI-generated content and how to avoid and detect it in your applications.


What is AI-generated content?

First, let’s take a look at what AI-generated content is and how it works. Because, the more you know, the better you can understand and detect AI content in your own applications and submissions. 

ChatGPT is an artificial intelligence chatbot developed by OpenAI and financed in large part by Microsoft. It is built on top of OpenAI’s GPT-3 family of large language models and has been fine-tuned using both supervised and reinforcement learning techniques.

Source: ChatGPT

The chatbot uses complex learning models to predict the next word based on previous word sequences. It sounds like what our phones do when we start typing a message to a friend, but it’s actually different. 

Harry Guinness in his Zapier blog about ChatGPT explains it well: 

“[A] humongous dataset was used to form a deep learning neural network—a complex, many-layered, weighted algorithm modelled after the human brain—which allowed ChatGPT to learn patterns and relationships in the text data and tap into the ability to create human-like responses by predicting what text should come next in any given sentence.”
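To make "predicting what text should come next" concrete, here's a toy sketch. It is emphatically not how ChatGPT works—a simple bigram frequency model over a made-up corpus, versus a deep neural network over billions of data points—but it illustrates the core idea of predicting the next word from the words before it:

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a tiny corpus.
# A drastically simplified illustration of next-word prediction --
# real large language models use deep neural networks, not raw counts.
corpus = (
    "the revolution began in boston . "
    "the revolution spread quickly . "
    "the colonists fought bravely ."
).split()

following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "revolution" follows "the" twice, "colonists" once
```

Scale this idea up by many orders of magnitude—longer context, learned weights instead of counts—and you get the flavour of what the quote above describes.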

Generative AI can produce text and images, write blog posts, create program code, write poetry and even generate artwork.

To test it out, I gave ChatGPT a prompt. 

“Can you write a 1,000 word essay on the French involvement in the American Revolution?”

[Image: ChatGPT prompt sample]

It took fewer than 30 seconds for the essay to arrive. It was sensible, grammatically perfect and read naturally.

Then, I tested AI-generated images. I went to OpenAI’s DALL·E 2, where you can describe an image and the software creates it—in mere seconds.

I provided the following prompt, sticking with my theme of the American Revolution, sort of.

“A Banksy painting of Paul Revere.”

This was even faster. In about 10 seconds, DALL·E produced the following image.


[Image: DALL·E image of Paul Revere]

The downsides of AI-generated content 

While many users have experienced the joy (and fun) of implementing such powerful AI tools, many have begun raising alarms.

First, there is the increased threat of misinformation. AI tools can easily generate articles that contain factual errors. NewsGuard, a company that tracks online misinformation, called AI-powered chatbots “the most powerful tool for spreading misinformation that has ever been on the internet.”

Then, for application and submission programs, there is the opportunity for students or submitters to leverage AI for written work. You can see above how easy it was to create an essay in seconds without doing any actual research. AI tools can greatly diminish the quality of education, which could have far-reaching consequences.

If you run a submission program, AI-generated content can seriously undercut the integrity of your program and your review process, which could damage your organisation’s reputation and impact. Let’s go over how to mitigate it.

How to avoid and detect AI-generated content in your applications

First, employ an AI content detection tool

There is an easy way to detect AI-generated content: AI content detection tools. These new apps are popping up quickly, and as AI-powered tools advance, so too will these detection tools. Examples include CopyLeaks, Originality.ai and GPTZero, to name only a few. They allow you to copy and paste your content and scan it for originality.

I took the essay I created above in ChatGPT on the French involvement in the American Revolution and plugged it into the free AI content detector at CopyLeaks. Here was the response.
[Image: CopyLeaks sample of AI content detection]

Then, I ran it through GPTZero, and received the same response.

[Image: GPTZero AI content detection]
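Tools like GPTZero reportedly look at statistical signals such as "burstiness"—how much sentence length and structure vary, since human writing tends to mix long and short sentences. As a rough, hypothetical illustration only (not how any of these commercial tools actually work), here's a naive burstiness check:

```python
import re
import statistics

def sentence_length_burstiness(text):
    """Standard deviation of sentence lengths, in words.

    Very uniform sentence lengths are one weak, purely illustrative
    signal of machine generation. This is a toy heuristic, not a
    real detector.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The war began. The war went on. The war then ended."
varied = ("War came. After years of hardship, alliances, and shifting "
          "fortunes on both sides, it finally ended.")
print(sentence_length_burstiness(uniform) < sentence_length_burstiness(varied))  # True
```

Real detectors combine many stronger signals (such as model perplexity) and still produce false positives, which is why the next section argues against relying on detection alone.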

Communicate your content rules up front

Using an AI content detection tool is important, but it’s best employed as a single quality check in your application review process.

It’s important to create a submission process that promotes original content from the outset. 

Be transparent about your expectations for submitted content. Let your participants know if you don’t allow AI-generated copy. Set the expectation from the very beginning. 


Create an eligibility screener for submissions

Consider a qualification round where you can check submissions for eligibility. In Good Grants, for example, you can create an eligibility round to collect specific details that can be reviewed through auto-scoring, based on certain criteria you have created.

This can help filter out any weak submissions and weed out those who don’t meet your criteria from the start. 

(See: Automatically divert ineligible applicants with eligibility screening).
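The auto-scoring idea above can be sketched as a simple set of pass/fail rules. The field names and criteria below are invented for illustration—platforms like Good Grants configure this through their own interface rather than code:

```python
# Hypothetical rule-based eligibility screen. Field names
# ("word_count", "advisor_signoff", "country") are made up for
# this sketch, not taken from any real platform's schema.
def is_eligible(application):
    rules = [
        application.get("word_count", 0) >= 500,     # minimum essay length
        application.get("advisor_signoff", False),   # advisor has approved
        application.get("country") in {"US", "CA"},  # program region
    ]
    return all(rules)

applicant = {"word_count": 1200, "advisor_signoff": True, "country": "US"}
print(is_eligible(applicant))  # True
```

Submissions failing any rule are diverted before a human reviewer ever sees them, which is exactly the filtering the eligibility round provides.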


Require diversity in submission content format

It’s time to think outside the box. Submissions should contain more than text. Multimedia is a great way to diversify your application submissions. It also helps submitters provide a more well-rounded submission. 

If, for example, you are collecting essays on the French involvement in the American revolution, you could ask for: 

  • A video component to the application, where the submitter records themselves speaking about their essay or topic
  • Visuals or artwork from the time period, with sources attached
  • Photos of the research process, such as notes taken or a list of credible sources

Application or submissions management software can make all of this very easy. With Good Grants, you can accept unlimited files and file sizes, from video and audio to images and more.

There’s more to a high-quality application than good copy.

It’s now more important than ever before to request diverse formats as part of your application process. 


Build a multi-stage review process

If your submission program is complex or requires considerable review time, it’s a good idea to create a multi-step review process.

If you want more eyes on each submission, you could, for example, create a process where applications or submissions go through an approval process before landing in front of your final review team. 

So, for example, if you’re working with students on American Revolution essays, you could require sign-off from the student’s advisor on the submission.

Or, if you are managing a scholarship program, you could require sign-off or approval from an applicant’s previous teacher or instructor.  

A multi-stage review flow helps provide integrity checks along the early stages of review, which can help flag any questionable submissions or applications.
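A multi-stage flow like the one described above can be thought of as an ordered pipeline of stages. The stage names here are invented for illustration; real submission platforms let you configure stages rather than hard-code them:

```python
# Hypothetical multi-stage review pipeline. Stage names are made up
# for this sketch and carry no particular platform's terminology.
STAGES = ["submitted", "advisor_signoff", "integrity_check",
          "final_review", "decided"]

def advance(current_stage):
    """Move a submission to the next review stage, if one exists."""
    i = STAGES.index(current_stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else current_stage

stage = advance("submitted")
print(stage)  # advisor_signoff
```

The point is that a questionable submission must clear each earlier gate (advisor sign-off, integrity check) before reaching your final review team.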


Provide contractual agreements for authenticity 

Make them sign their name to it. There’s nothing like a signature requirement to agree that a submission is wholly that submitter’s, just before they click “Submit.” 

Create a policy around AI-generated content and the consequences of submitting non-original content. Then, make sure that the policy is transparent and clearly communicated to your submitters.

In Good Grants, you can create contracts directly in the platform, where submitters can sign and you can manage everything in one place.


Knowledge is power, just ask ChatGPT

Creating a well-rounded approach to avoid and detect AI-generated content is critical for the integrity of your application program. And learning about AI-generated content is the first step to acknowledging its power and the implications it has created for our world.

Take a moment to explore ChatGPT and other tools for yourself to see how they work. Try different prompts… or even the prompts you use in your submissions. Then, stay up to date on new developments; there will be many. The door to artificial intelligence is only just opening.

But by implementing a well-rounded, diverse submission process, you can start protecting your applications, program and organisation, today.
