AI chatbots are changing how we learn, work, and develop job skills across industries, transforming training, upskilling, and professional certification programs. While chatbots like ChatGPT and Claude have benefits for credentialing organizations and those focused on training and validating the job skills of current and future employees, they also have challenges and risks.
How AI Can Support Workforce Learning & Assessment
Organizations delivering workforce certification and training programs can use AI tools to:
- Generate job-specific training content
- Provide real-time coaching or microlearning
- Simulate interview or certification scenarios
- Summarize complex industry standards
- Support soft-skills development with scenario-based feedback
But Your Candidates Can Also Use Them, Which Creates Risks for Exam Integrity
Just as AI can assist administrators and trainers, it can also be misused by candidates during certification exams or high-stakes assessments.
Chatbots and AI with chatbot-like features can:
- Suggest answers to online exam questions
- Provide written responses to case-study situations
- Automatically generate code or calculations
- Generate real-time responses as interview questions are asked
Without a proactive approach to protecting exam integrity, these AI tools pose a serious risk to the credibility of your programs and to the safety and reliability of your workforce. The good news is that there are best practices and strategies to follow, and technology that can secure your exams and programs.
Bookmark this page in case you need to reference it later!
To bookmark, press Ctrl+D (Windows) or Cmd+D (Mac)
Part 1: Chatbots 101
What they are, how they work, and prompting tips.
Types of Chatbots & How They Work
Most chatbots fall under two broad categories:
- Rule-based chatbots
- AI chatbots
Each category covers a range of chatbot types, depending on what they’re designed to do. Some combine features from both. We’ll quickly cover rule-based chatbots, then focus on AI-based chatbots.
Rule-Based Chatbots
Rule-based chatbots follow set rules and give specific responses. They use an “if this, then that” model, which is simple, but can still handle complex interactions through a decision tree.
Some rule-based chatbots are very straightforward. For example, when you need support, the chatbot asks what it can help you with and provides buttons for technical support, login/password issues, or accessibility options. Based on what you click, it takes you to the next branch of the decision tree. Some decision trees only have a few branches; others can have several hundred.
While some use buttons, others basically identify keywords in your message that trigger the next step in the conversation. For example, if you say, “I need to reset my password,” the chatbot recognizes words like “reset” and “password” and takes you to that branch. If it doesn’t recognize what you’re saying/asking, it uses fallback responses to reroute the discussion, such as: “Can you rephrase that?” or “Are you looking for [topic]?”
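The keyword approach above can be sketched in a few lines of Python. This is a toy illustration, not code from any real support product: the branches, trigger words, and replies below are all made up, but real rule-based bots layer hundreds of branches and fallback flows on the same idea.

```python
# Minimal sketch of a keyword-matching, rule-based chatbot.
# All trigger keywords and replies are illustrative placeholders.

RULES = {
    frozenset({"reset", "password"}): "Let's reset your password. I'll walk you through the steps.",
    frozenset({"webcam", "test"}): "Let's check your webcam before the exam starts.",
}
FALLBACK = "Can you rephrase that?"

def respond(message: str) -> str:
    # Normalize the message into a set of lowercase words
    words = set(message.lower().replace("?", " ").replace(".", " ").split())
    for keywords, reply in RULES.items():
        # Take a branch only if every trigger keyword appears in the message
        if keywords <= words:
            return reply
    # No branch matched: reroute with a fallback response
    return FALLBACK
```

Notice how brittle this is compared to an AI chatbot: a user who types "I forgot my login" never hits the password branch, because no rule lists those words.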

(Image: example chatbot interface, created by ChatGPT-4o)
AI Chatbots
This section focuses on generative AI chatbots like ChatGPT, Claude, and Gemini. From here on out, we’ll refer to them generally as AI chatbots, or just chatbots, unless we’re talking about other specific types. We’ll also discuss AI with chatbot-like features and functionality, like Perplexity, NotebookLM, and Wolfram Alpha.
How Do AI Chatbots Work?
Instead of answering from a fixed list of responses like rule-based chatbots, AI chatbots generate new, custom responses based on the enormous amount of material they’ve been trained on. Although it may seem like they’re copying from those sources, they’re really just predicting what to say based on the patterns they’ve learned. That’s why they give you slightly different, custom responses even if you ask the same question twice.
Thanks to their underlying technologies, chatbots can understand what you’re saying, figure out what you really mean, and respond like a real person.
While AI chatbots each work a little differently, here’s a quick look at the foundational technologies they rely on:
- Large Language Models (LLMs): These are basically chatbots’ brains; they’re trained on tons of text to help chatbots understand and generate human-like responses.
- Machine Learning (ML): Allows chatbots to learn from examples and data so they can give better, more accurate responses over time.
- Natural Language Processing (NLP): Helps chatbots read and respond to your messages in a natural way.
- Natural Language Understanding (NLU): Helps chatbots figure out what you really mean. This goes beyond simple word-for-word interpretations to understand intent, sentiment, and context.
- Natural Language Generation (NLG): A part of NLP that helps chatbots generate human-like responses.
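The “predict the next word from learned patterns” idea can be shown with a toy bigram model. Real LLMs use neural networks with billions of parameters and much longer context windows, but the core mechanic sketched below is the same: learn which token tends to follow which, then predict. The corpus here is a made-up example.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training
# text, then predict the most frequent successor. Real LLMs learn far
# richer patterns, but the next-token-prediction idea is the same.

def train(text: str) -> dict:
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str):
    followers = model.get(word.lower())
    # Return the most common follower, or None if the word was never seen
    return followers.most_common(1)[0][0] if followers else None

corpus = "the exam starts at noon the exam ends at two the exam is online"
model = train(corpus)
```

After training, `predict_next(model, "the")` returns "exam", because "exam" follows "the" most often in the corpus. Nothing checks whether the prediction is *true*, only whether it is *likely*, which is exactly why chatbot output must be fact-checked.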
PLACEHOLDER FOR GEN-AI & RAG COMPARISON
Which Chatbot is Best for Professional Education Programs?
ChatGPT can help you with every part of the writing process, whether it’s brainstorming training plans and drafting presentation content or writing objective exam rules and clear assignment instructions. It can also generate different types of text, edit and improve clarity, and adapt content to fit the needs of diverse employees.
- Best for: Writing content, checking grammar and punctuation, summarizing information, and adjusting text for different employees’ accessibility needs.
- Limitations: Like any generative AI, it can produce generic or inaccurate content and requires fact-checking, especially for subject-specific accuracy.
Like ChatGPT, Claude is a versatile writing tool that stands out for its ability to interpret and respond to complex prompts with nuance and contextual awareness while avoiding cookie-cutter language.
- Best for: Developing comprehensive training materials and supplemental resources.
- Limitations: Can be less versatile than ChatGPT depending on the topic.
DeepSeek Chat is a relatively new chatbot that can handle technical writing tasks and simplify dense, complex information into well-organized content that’s easy to understand.
- Best for: Organizing technical content (e.g., performance reports, training manuals, etc.), summarizing information, and creating training guides and other resources for employees.
- Limitations: May struggle with more creative writing tasks and can have difficulty picking up on sarcasm, figures of speech, and other language nuances.
Whether you’re asking technical questions or looking for life advice, Pi is a great chatbot for the job.
Pi is more of a conversational partner than a typical AI assistant. It can help organize daily tasks and training schedules, process thoughts and emotions, and work through difficult situations at work or in life in general. It can support employees’ emotional well-being by offering thoughtful responses that encourage them, reduce stress, and help them feel heard.
Another surprisingly rare feature Pi offers is that it can listen and reply in natural-sounding voices. You can choose from a variety of voices with different accents, which makes the experience feel more personal and engaging.
Gemini, Google’s AI assistant, can generate text, images, and code. Its ability to generate different types of content is especially useful for creating training plans that include visuals (images, charts, tables, etc.), mock data for simulations and case studies, and coding exercises.
- Best for: technical writing, coding, and generating structured content.
- Limitations: Gemini is pretty good at writing, but it isn’t as advanced and natural as ChatGPT and Claude.
Microsoft Copilot is an AI assistant built into Microsoft apps (Word, Excel, etc.) and is also available as a standalone chatbot. It runs on the same large language model as ChatGPT but is geared more toward productivity and organization rather than creative writing.
- Best for: Creating training materials within Microsoft apps, summarizing documents, and assisting with administrative writing tasks and organization.
- Limitations: Not ideal for highly creative or in-depth content.
Perplexity looks and operates just like a chatbot, but it’s different from a technical perspective. It’s an AI-powered conversational search engine and answer engine, which means that it answers your questions by pulling together and citing information from reliable sources on the internet.
NotebookLM allows you to upload PDFs, websites, YouTube videos, audio files, Google Docs, or Slides and summarize them or turn them into detailed outlines, training or study guides, and FAQs.
You can use its preset summary tools (FAQ, training guide, briefing document), explore the questions it generates from the text, and save the responses. You can also ask specific questions, and it will answer with cited information from the text.
Wolfram Alpha isn’t a chatbot or a search engine, but it has elements of both. So, it’s a bit of something in between, plus a really smart calculator. Like a chatbot, it understands what you ask it, and it answers in a format similar to a search engine. But instead of chatting or showing links, it gives direct, computed answers based on built-in data and math.


Scholarcy is an AI summarizer with a few chatbot-like features. It quickly summarizes content, organizes it better than most chatbots, and creates flashcards to help you study. And, like chatbots, it lets you ask questions about the content and gives intelligent, helpful answers.
You can upload training manuals; certification study guides; assessment rubrics; compliance documentation; audio transcripts from webinars, meetings, or interviews; and more, in these formats: PDF, Word, PowerPoint, HTML, XML, LaTeX, TXT, CSV, RIS, BIB, NBIB.
Chatbot Prompting 101
Writing effective prompts is pretty simple. Just be clear, specific, and provide context.
What's a Prompt?
A prompt is the message you type into a chatbot or other generative AI tool to tell it what you want. A prompt can be a question, a request for help, or a task, like summarizing text, generating code, or even translating text into other languages.
What is Prompt Engineering?
Prompt engineering is simply how you write your requests to get the response you’re looking for from a chatbot. “Engineering” makes it sound really technical (and sometimes it can be), but for most programs’ use cases, you just need to be clear and specific about what you want and provide details to help the chatbot tailor its response.
There are a ton of terms that make prompting seem like an extremely complex and technical process, but they mostly just describe things people naturally do when they use AI.
Take shot prompting, for example. It includes zero-shot, one-shot, and few-shot prompting, and it just refers to the number of examples you provide in your prompt to show the chatbot what you want.
- Zero-shot = 0 examples
- One-shot = 1 example
- Few-shot = a few examples
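Here’s what zero-, one-, and few-shot versions of the same request might look like. The prompt wording and the exam questions are made up for illustration; you’d paste strings like these straight into any chatbot.

```python
# Zero-, one-, and few-shot prompts differ only in how many worked
# examples you include. All content below is illustrative.

zero_shot = "Write a multiple-choice question about food-safety temperatures."

one_shot = """Write a multiple-choice question about food-safety temperatures.

Example of the style I want:
Q: What is the minimum safe internal temperature for poultry?
A) 145°F  B) 155°F  C) 165°F (correct)  D) 175°F"""

few_shot = one_shot + """

Another example:
Q: Within how many hours must cooked food be cooled from 135°F to 70°F?
A) 1  B) 2 (correct)  C) 4  D) 6"""

def shot_count(prompt: str) -> int:
    # Count worked examples by counting "Q:" markers in the prompt
    return prompt.count("Q:")
```

More examples generally steer the chatbot toward your format and difficulty level, at the cost of a longer prompt.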
Chatbot Prompting Tips for Professional Education Programs
Provide Specific Context and Clear Instructions
Providing specific, relevant details can improve the chances of getting the response you want.
Think of writing a prompt like writing a grocery list for someone who’s shopping for you. The chatbot will usually get the items on your list, but if you want something specific, like a brand, flavor, or amount, you have to say so.
If you want chocolate ice cream, adding “dessert” to your list doesn’t really help. Adding “ice cream” is better, but that still doesn’t mean you’ll get the flavor you want. Instead, tell them the flavor, brand, and amount you want.
Details to Consider Including When Writing Prompts in Professional Education:
- Voice: Who should it write like? e.g., “Write as a certified HR professional,” “Respond like a compliance officer,” or “Write like Steve Jobs.” This helps align the response with industry expectations.
- Tone: Should the output be formal, instructional, persuasive, conversational, or technical? Choose a tone that fits your audience, whether it’s corporate trainees, certification candidates, or executive stakeholders.
- Audience: Who will be reading this? Are they frontline employees, mid-level managers, certification candidates, or technical professionals? What’s their familiarity with the topic or industry terminology?
- Format: How should the response be delivered? Do you need a training module outline, assessment question bank, onboarding checklist, or policy summary? If you prefer bullet points, tables, or slides, be specific about the structure (e.g., number of rows/columns, headers, etc.).
- Length: Specify the desired scope—e.g., a 200-word policy summary, a 5-question quiz, a 10-slide presentation outline, or a 3-paragraph response.
- Context: Provide relevant details such as industry, role-specific skills, standards (e.g., ISO, OSHA), or regulatory frameworks. This ensures the output is accurate and relevant to your professional setting.
- Examples: Share examples of acceptable vs. unacceptable content, tone, or format. This helps the chatbot understand your expectations and generate tailored responses that meet your standards.
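If you assemble prompts programmatically, say, inside a training portal or LMS integration, the checklist above maps naturally onto a small helper function. This is a sketch with field labels of our own invention, not any standard API:

```python
# Sketch of a prompt builder for the details above (voice, tone,
# audience, format, length, context). Labels and wording are illustrative.

def build_prompt(task: str, **details: str) -> str:
    labels = {
        "voice": "Write in the voice of",
        "tone": "Use a tone that is",
        "audience": "The audience is",
        "format": "Deliver the response as",
        "length": "Target length:",
        "context": "Relevant context:",
    }
    lines = [task]
    for key, label in labels.items():
        # Only include the details the caller actually provided
        if key in details:
            lines.append(f"{label} {details[key]}.")
    return "\n".join(lines)

prompt = build_prompt(
    "Draft onboarding checklist items for new hires.",
    voice="a certified HR professional",
    audience="frontline employees",
    length="10 bullet points",
)
```

Omitted fields simply drop out, so the same helper works for a quick one-line request or a fully specified prompt.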
Use these prompting examples and templates as a starting point, and customize your prompts with details specific to your training objectives, certification standards, or assessment criteria.
Template: Based on the text I provided, create [number] [question type] questions to assess [audience] on their understanding of [topic(s)]. [Q&A requirements and specifications.]
Example: Based on the text I provided, create 30 multiple-choice questions to assess participants preparing for a Food Safety Manager Certification exam on their understanding of methods to prevent cross-contamination. Each question should be objective, concise, and written in plain language. Provide four answer choices per question, with only one correct answer.
Template: Based on the following [question type] questions, create a [alternative question type] question(s) that assesses the same underlying concepts or skills. The new question should encourage [objective/outcomes (e.g., critical thinking and practical application rather than recall)]. [Add questions here.]
Provide a clear explanation of your reasoning for how you adapted the original questions into this format. Then, include a concise description of what an appropriate student response should contain.
Example of the 1st paragraph from above: Based on the following multiple-choice questions, create an essay question that assesses the same underlying concepts or skills. The new essay question should encourage critical thinking and practical application rather than recall. [Add questions here.]
Note: If you’re repurposing the question for a different audience or subject, briefly describe them so the chatbot can adapt its response.
Writing objective certification exam rules is difficult because rules that seem clear can still be misinterpreted.
Prompt to use: Create objective rules for a proctored online certification exam. The rules should comprehensively address:
*Behavior (e.g., no talking or using cell phones to look up answers)
*System requirements (e.g., computer with a functioning webcam and microphone)
*Test environment (e.g., quiet room, clear desk/surface, no other people present)
Use direct, unambiguous language that leaves no room for misinterpretation. Provide the rules in a numbered list that can be copied and pasted into the assessment platform/LMS.
Here are two examples of basic exam rules followed by improved versions. Use the improved versions as examples of the level of clarity and specificity expected as you create the set of rules:
*Rule 1 (basic): No talking during the test.
*Rule 1 (improved): No communicating with other individuals by any means, whether verbal, non-verbal, or electronic.
*Rule 2 (basic): Your desk must be clear of all items except for the device you use to take the exam.
*Rule 2 (improved): The testing area and any surface your device is placed on must be clear of all items except the device used to complete the exam. This includes books, papers, electronics, and other personal belongings.
Note: When you type an asterisk (*) at the start of a line, chatbots, generally speaking, get the gist that you’re trying to create a bullet point.
Template: Based on the case study instructions I provided below, create a rubric that evaluates [add what’s being assessed, e.g., specific job skills and competencies]. Each item should include [type of performance scale, e.g., a five-point numeric scale from 1 to 5 (1 = Novice, 5 = Excellent) or a three-point scale with the levels: Excellent, Developing, and Needs Improvement].
Format the rubric in a [type, e.g., table or chart] to use in [format, e.g., Word Doc, Google Sheets, LMS, etc.]. Here’s the case study and instructions: [add text and instructions]
Example: Based on the case study instructions I provided below, create a rubric that evaluates communication skills. Each item should include five performance levels using a numeric scale from 1 to 5 (1 = Novice, 5 = Excellent).
Format the rubric in a table to use in a Word Doc and include a column for written feedback. Here are the case study details and instructions: [add text and information]
Note: This prompt template is just that: a template—a starting point. You’ll really need to fill in the blanks with as much information and context as possible—learning goals, topics/subjects, specific characters, realistic issues and scenarios—to help the chatbot tailor the case study to your course needs.
Create a [length]-word case study about [specific scenario/situation] for [audience, e.g., developers, sales managers, participants in a certification program, etc.].
The case study should include:
*Comprehensive, realistic background information about [scenario/situation]
*Key characters, stakeholders, and/or groups involved
*Specific opportunities, challenges, and decisions to evaluate
*Five short-answer questions that require in-depth problem-solving and a proposed plan to address the situation
Prompts that summarize text—whether it’s an entire research paper or a paragraph from an article—can be as simple as, “Summarize this.”
But you can also ask the chatbot to summarize it in specific formats or focus on certain topics. Just copy and paste the text into the chatbot or upload the document, then start the prompt with something like “summarize” or “simplify” (or any other related terms), and then include details on the length, format, and focus.
Here are a few example prompts you can try:
- Summarize this document into 2-3 concise sentences, then provide bullet points on the important information in each section, especially the sections about data collection methods and results.
- Simplify this text and explain it in practical, easy-to-understand language.
- Explain what this text is saying in simple terms that [audience] can understand.
- Shorten this text into a [resource type, e.g., study guide, FAQ, etc.] for [audience].
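If you generate summarization prompts like these in code, the length/format/focus pattern can be wrapped in a small helper. The wording below is our own illustration; paste the result into any chatbot along with your document text.

```python
# Sketch of a summarization-prompt builder. The phrasing is illustrative;
# adjust it to match your program's terminology.

def summarize_prompt(text: str, length: str = "", fmt: str = "", focus: str = "") -> str:
    parts = ["Summarize the following text."]
    if length:
        parts.append(f"Length: {length}.")
    if fmt:
        parts.append(f"Format: {fmt}.")
    if focus:
        parts.append(f"Focus especially on {focus}.")
    # Append the source text after the instructions
    return " ".join(parts) + "\n\n" + text

p = summarize_prompt(
    "Lorem ipsum dolor sit amet.",
    length="2-3 sentences",
    focus="the results section",
)
```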
I’m a [job role] using this image in my [employee development module, training presentation, etc.] about [subject/topic]. Please write the following for the image:
- Descriptive alt text (approximately 125 characters or fewer)
- Detailed image description for accessibility
- Suggested file name (use lowercase letters and hyphens, no spaces)
- Text caption to display alongside the image
Note: You can upload images to most AI chatbots, such as ChatGPT, Gemini, and Claude.
Don't Overthink It
- Don’t worry about order: Chatbots process your whole prompt at once, so the order of the information you provide usually doesn’t matter much. Just focus on providing the right information.
- Write naturally: Chatbots can “understand” everyday language (even our slang and sarcasm) thanks to a technology called natural language processing, which is part of how large language models (LLMs) work. So, you don’t need to use special wording or commands. Just write to it like you would a real person, and it’ll usually get the gist of what you’re saying.
Fact-Check Everything
Always, always, always fact-check chatbot responses because they don’t generate text based on facts. They just predict what word is most likely to come next.
Treat All Chatbot Conversations Like Public Conversations
Whether it’s a public or private chatbot, don’t include any personally identifiable information (PII) or sensitive organizational details in your prompts or in the files you upload (Excel, Word, etc.).
Anonymize information to protect employees, candidates, yourself, and your organization. For example, use “Company ABC” instead of your company name, or “Employee A” instead of specific names.
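A simple pre-prompt scrubber can automate part of this anonymization. The sketch below assumes you maintain your own list of sensitive names; real PII redaction (IDs, addresses, phone numbers) needs more robust tooling, so treat this as a starting point, not a guarantee.

```python
import re

# Sketch of a pre-prompt anonymizer. You supply the list of names to
# mask; emails are caught with a simple pattern. Illustrative only.

def anonymize(text: str, names: list) -> str:
    # Replace each known name with a neutral placeholder (Employee A, B, ...)
    for i, name in enumerate(names):
        placeholder = f"Employee {chr(ord('A') + i)}"
        text = re.sub(re.escape(name), placeholder, text, flags=re.IGNORECASE)
    # Mask anything that looks like an email address
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[email]", text)
    return text

clean = anonymize(
    "Email Jane Doe at jane.doe@acme.com about Bob Ray's exam results.",
    ["Jane Doe", "Bob Ray"],
)
```

Run text through a scrubber like this before pasting it into any chatbot, then spot-check the output, since pattern-based masking will miss anything you didn’t anticipate.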
Part 2: Benefits & Risks of Chatbots in Professional Education Environments
Benefits of AI Chatbots in Professional Education
When used appropriately, chatbots can benefit programs, candidates, and employees. This section focuses on employee and candidate benefits, but program benefits and uses are covered in the next section.
Personalizes Training Experiences
Whether chatbots are used independently or integrated with other training tools, their ability to personalize learning experiences is a key benefit that overlaps with nearly every benefit we cover in this section.
Increases Engagement and Participation
AI chatbots’ ability to provide immediate quality feedback helps create more engaging and interactive learning experiences (Bhutoria, 2022). By adjusting their feedback to match each individual’s needs and preferences, chatbots help keep them engaged and make it easier for them to remember what they’ve learned (Labadze et al., 2023).
This personalized, interactive learning increases engagement and makes learning more enjoyable for participants (Song & Song, 2023; Walter, 2024). In some cases, chatbots can increase motivation, interest in the materials, and participation (Mai et al., 2024; Nguyen, 2023), which can encourage learners to seek out more information (Wood & Moss, 2024). This level of engagement can encourage employees and candidates to participate in in-depth interactive discussions, evaluate multiple perspectives, and engage in thoughtful debates. Participating in these activities promotes critical thinking and decision-making (Zhong et al., 2024).
It’s worth noting, though, that chatbots tend to have a greater impact on engagement and motivation in individuals who don’t know the material as well (Liu & Reinders, 2025). In professional learning environments, this could be because candidates and employees who are already knowledgeable and interested in a topic may not need chatbots to stay motivated and engaged.
Improves Training and Performance
AI chatbots can significantly improve training and performance in professional education settings, but the benefits depend on how individuals use chatbots and how often.
For example, Sanchez-Vera (2025) found that individuals who use chatbots moderately with their regular studying method were better prepared for exams and outperformed individuals who used them too little or too much. In other words, chatbots can be used to help candidates and employees learn, but there’s a point of diminishing returns when they’re overused. The sweet spot is when they’re used as a supplement to regular training and exam preparation.
Stojanov et al. (2024) found that individuals’ views of their performance can influence the ways they use chatbots and how often they rely on them:
- “High-achievers” don’t use chatbots that often, but when they do, it’s usually to reinforce what they learned or to learn more about a topic.
- “Average” performers use chatbots more than high-achievers, typically to help them complete specific parts of their training assignments.
- “Struggling” individuals rely heavily on chatbots to clarify confusing topics and simplify content rather than working to understand it on their own.
Most individuals use chatbots for basic things like defining terms and comparing concepts, but some rely on them to understand how those concepts apply to real-world situations (Sanchez-Vera, 2025). This aligns with findings from Zhong et al. (2024) where those who used ChatGPT had a better understanding of key terms and concepts related to specific topics/fields.
While AI chatbots seem to be the most beneficial for adult learners (Wu & Yu, 2023), a common challenge for learners at all levels (higher, primary, and secondary education) was that they didn’t fully understand what chatbots could do or know how to ask effective questions, which limited their ability to make the most of the tool (Sanchez-Vera, 2025; Wu & Yu, 2023).
Provides Real-Time, Quality Feedback and Support
Nearly every study we use in this guide emphasizes the benefits of chatbots’ abilities to provide personalized, quality feedback in real-time. This feedback can help employees and candidates learn, reflect, and stay engaged. They can tailor responses to each individual’s needs and help with many different types of learning activities, including improving their writing, learning through debates, and setting goals while tracking progress during training sessions.
Chatbots can also identify where employees and candidates are struggling and respond with diverse, personalized support, such as follow-up questions or quizzes, which provide an interactive and low-pressure way to reinforce learning (Deng & Yu, 2023).
While some individuals find chatbot feedback easier to read and more organized than human feedback, it’s not a good idea to rely on it for assessment purposes since it usually lacks nuance and doesn’t always match the feedback that knowledgeable experts provide (Dai et al., 2023; Neuman, 2021; Schei et al., 2024). For example, chatbot responses can be too long, generalized, and overly positive in some cases.
Overall, chatbots are effective tools that provide adult learners with immediate feedback to support their learning, development, and training.
Acts as a Tutor and Training Partner
Chatbots’ feedback strengths also make them great tutors and training partners for adult learners. They can simplify complex information, organize text, and turn different content into valuable training resources for professionals. They also create a more relaxed, nonjudgmental environment where people feel comfortable asking questions (Klos et al., 2021).
Encourages Independent Learning
Chatbots help learners take ownership of their progress and play a more active role in directing their learning and development (Creely, 2024; Sánchez-Vera, 2025). Because chatbot responses adapt to individual needs, they offer a personalized, flexible experience that supports self-paced learning (Creely, 2024; Deng & Yu, 2023).
They also support independent learning in several ways, offering self-assessment, self-monitoring, goal setting, and progress tracking. For example, chatbots can provide quizzes or engage in debates and discussions to help adult learners test their knowledge and reflect on their understanding. They can also help plan training or study sessions by setting specific objectives and checking in along the way to keep learners on track.
Reduces Cognitive Load
When workloads are high, people are more likely to use software and other AI tools to help manage everything (Hasebrook et al., 2023; Koudela-Hamila et al., 2022). Chatbots can help reduce cognitive load by handling smaller tasks and simplifying complex information (Imundo et al., 2024; Pellas, 2025; Pergantis et al., 2025).
For example, adult learners can reduce cognitive load and cut down on busy work by prompting chatbots to:
- Summarize training content or certification materials in their preferred format (e.g., bullet points, executive summaries)
- Explain complex industry concepts in plain language
- Identify and highlight key information to prioritize for assessments
- Edit written responses for grammar, clarity, and professionalism
- Format references, citations, or documentation to meet organizational or certification standards
Supports Stress Management and Mental Wellness
Whether candidates and employees are overwhelmed while preparing for an exam or stressed about work deadlines, chatbots can provide emotional support to help manage stress and anxiety. They also offer people an outlet to vent, ask for help, and process their emotions without judgment (Klos et al., 2021; Wu & Yu, 2024).
Some AI tools can even detect signs of stress in users. AI chatbots use natural language processing (NLP), sentiment analysis, and emotion-recognition models to spot stress in user messages. These techniques help the chatbot pick up on emotional cues and adjust its responses, such as offering calming strategies, motivational messages, or directions to support resources. For professionals under pressure, this kind of instant, private support can be an invaluable tool for staying focused and resilient.
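As a toy illustration of the adjust-the-response idea, here’s a keyword-based sketch. Production systems use trained sentiment and emotion-recognition models rather than a word list; the word list and replies below are entirely made up.

```python
# Toy stress detector: real systems use trained emotion-recognition
# models, not a keyword list. Words and replies are illustrative.

STRESS_WORDS = {"overwhelmed", "stressed", "anxious", "panicking"}

def stress_score(message: str) -> int:
    # Count how many stress-related words appear in the message
    words = set(message.lower().replace(",", " ").split())
    return len(words & STRESS_WORDS)

def reply(message: str) -> str:
    if stress_score(message) > 0:
        # Emotional cue detected: shift to a supportive response
        return "That sounds stressful. Want to try a short breathing exercise?"
    return "Got it. How can I help with your prep?"
```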
Pi AI has pre-built conversation tracks, like “Let it rip,” which encourages you to vent and provides extra resources to help you learn how to regulate your emotions.
Enhances Accessibility and Inclusivity
AI chatbots also help make professional training and assessments more accessible, which is beneficial for individuals with disabilities, language barriers, or diverse learning needs. They can transcribe spoken content into text, simplify complex terminology, or provide translations and vocabulary support in real time (Evmenova et al., 2024).
For example, a candidate preparing for a technical certification in a non-native language could use a chatbot to translate terminology or clarify unfamiliar concepts, or to get side-by-side translations that strengthen vocabulary and reading skills (Evmenova et al., 2024). Individuals with dyslexia might use chatbots to reformat content for easier reading or summarize lengthy documents into digestible points. These capabilities not only remove barriers to success, but also build confidence and self-sufficiency, core components of effective adult learning and performance.
Risks of AI in Professional Education Environments
Overreliance on AI Tools
While many are initially enthusiastic about using chatbots, that enthusiasm can fade, especially when interactions feel shallow, vague, or repetitive (Wu & Yu, 2024; Deng & Yu, 2023). These surface-level conversations aren’t engaging and typically lead to frustration and disengagement. The concern here is that employees and certification candidates could stop using the chatbot to learn and just rely on it to complete tasks for them.
Chatbots can also unintentionally replace meaningful dialogue with trainers, mentors, or colleagues. These human connections are often what sustain motivation and engagement in professional learning environments. Without them, learners may become increasingly isolated, relying on AI for quick answers instead of participating in collaborative or reflective learning experiences.
Inaccurate or Misleading Information
Although AI models are steadily improving, they still frequently produce inaccurate or misleading information, known as “hallucinations” (Susnjak & McIntosh, 2024). These errors can be especially problematic in professional contexts where accuracy is critical.
There are two main reasons this happens:
1. AI generates what sounds right, not what is right. Chatbots predict words based on patterns, not verified facts—much like an advanced form of your phone’s predictive text. Even when the content is wrong, it may still sound plausible and authoritative.
2. AI is only as reliable as its training data. Chatbots replicate patterns from the content they were trained on. If those sources contain inaccuracies, those errors can be repeated with confidence.
This risk is especially concerning in fields like healthcare, finance, law, or engineering, where misinformation can have serious consequences.
Assessment Integrity and Credential Validity
In high-stakes professional assessments, integrity isn’t just about fairness; it’s about public trust, reputation, and the ability to prepare individuals for their careers. With AI tools now widely accessible, some candidates may attempt to use them to complete certification exams, skills tests, or pre-hire evaluations.
Even when done unintentionally, this kind of misuse can compromise the validity of your certifications. In regulated industries, it may also violate compliance standards or introduce liability. Beyond these risks, it undermines your program’s credibility and the value of the credentials you award.
Effective proctoring solutions that can manage and block AI use are more important than ever to keep certification programs reliable and to ensure employee training builds the job skills needed in the workplace.
Part 3: Using AI to Support Instructional Design in Professional Education
Main Menu
AI chatbots can streamline content development, reduce repetitive tasks, and improve accessibility in professional learning and credentialing environments. Here’s how instructional designers, training developers, and program administrators can use AI.
Support Modular Training and Assessment Design
AI can generate content aligned with certification objectives and specific job competencies:
- Draft or refine learning objectives based on required competencies or standards
- Generate sample assessment questions for formative or summative evaluations
- Repurpose learning content into microlearning formats or mobile-friendly summaries
Improve Accessibility and Inclusivity
AI tools can help make training content more accessible by:
- Generating alt text for multimedia content
- Simplifying technical language for diverse audiences or ESL learners
- Organizing content into formats that support learners with dyslexia or cognitive challenges
- Translating content or localizing language (e.g., “licensure” vs. “registration,” “certification” vs. “qualification”)
- Prompts can be as simple as: “Translate this to [language]: [add your text]” or “Localize this text for [audience]: [add your text]”
Note: While chatbots can support these efforts, any final content should be reviewed by a subject matter expert or accessibility specialist to ensure compliance and quality.
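The translation and localization prompts above can be kept as reusable templates so they stay consistent across your training content. A minimal sketch in Python, where the function names are illustrative and not part of any chatbot’s API:

```python
# Reusable prompt templates mirroring the examples above. These strings are
# what you would paste or send to a chatbot; the helper functions simply fill
# in the placeholders.

TRANSLATE_TEMPLATE = "Translate this to {language}: {text}"
LOCALIZE_TEMPLATE = "Localize this text for {audience}: {text}"

def build_translate_prompt(language: str, text: str) -> str:
    """Fill the translation template with a target language and source text."""
    return TRANSLATE_TEMPLATE.format(language=language, text=text)

def build_localize_prompt(audience: str, text: str) -> str:
    """Fill the localization template with a target audience and source text."""
    return LOCALIZE_TEMPLATE.format(audience=audience, text=text)

# Example: localize credentialing terminology for a different region.
prompt = build_localize_prompt(
    "a UK audience", "Candidates must hold a valid certification."
)
```

Centralizing prompts this way makes it easier to review them for the compliance and quality checks noted above.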
That said, you can use chatbots to summarize and outline an essay before you read it as a way to get a quick overview of what’s discussed or identify areas to focus on. However, there are fair ethical concerns about using public or unsecured chatbots: anything you submit, such as candidate or employee work, might be retained by the AI provider and used to train future models.
Summarize and Repurpose Content
Professional learning often involves technical manuals, compliance documents, or recorded webinars.
Chatbots can help:
- Summarize transcripts or training sessions into key takeaways
- Create discussion prompts or quiz questions from source materials
- Convert content into policy summaries, quick-reference guides, or new employee onboarding materials
Use Caution with Grading or Evaluation
Chatbots can help organize and summarize written responses from certification exam essays and scenario-based training assessments. However, they shouldn’t be used to score final work on their own, because AI lacks the judgment to reliably assess complex reasoning or the handling of ethical situations.
That said, you can still use AI to assist with:
- Pre-reading summaries of candidate responses
- Highlighting potential gaps based on predefined rubrics
Important: Avoid submitting confidential or regulated content (e.g., candidate answers or internal documentation) to public AI tools without data privacy controls.
Part 4: Protecting Credentialing Integrity From AI
Main Menu
The Challenge of AI-Generated Content
AI chatbots are getting better every day, which makes it difficult to detect AI-generated text. While AI detectors can identify content that’s entirely AI-generated, their accuracy is significantly reduced when candidates make a few manual edits or use AI paraphrasing tools. Even experienced reviewers struggle to determine if a response was written by a person or AI.
Common AI Risks in Professional Education Environments
While chatbots offer benefits, they also introduce new risks in high-stakes settings like certification exams, employee training, and pre-hire assessments.
Candidate Overreliance on AI Tools
Professionals may use AI as a shortcut rather than engaging with the material and learning to problem-solve. This is especially common in time-sensitive scenarios or when facing unfamiliar content.
Inaccurate or Misleading Outputs
Chatbots generate plausible-sounding but often incorrect information. This can introduce errors into learning and development materials, as well as content for certification programs and exams. These errors can also leave employees or candidates misinformed and unprepared, which ultimately hurts their job performance.
Real-World Examples of AI Misuse in the Workplace:
- Candidates using chatbots to answer questions during IT certification exams or to auto-generate code or scripts for programming tasks.
- Healthcare credentialing candidates entering clinical scenarios into AI tools to receive diagnostic suggestions.
- Job applicants using AI to “game” personality or ethics-based pre-hire assessments by crafting manipulated responses.
Implementing Online Proctoring
Integrating a comprehensive online proctoring solution is a crucial element of counteracting the challenges that chatbots create. Honorlock’s hybrid approach combines AI monitoring with live proctors to secure testing environments.
Honorlock’s proctoring solution can:
- Restrict Access to Unauthorized Resources: Prevent the use of chatbots, unauthorized websites, and applications during assessments.
- Monitor Candidate Behavior: Detect attempts to copy/paste content, use secondary devices, or seek external assistance (including voice AI assistants).
- Customize Assessment Settings: Allow specific tools or resources as permitted, ensuring flexibility without compromising security.
For instance, in a pre-employment test for an accounting position, a candidate might be required to prepare a balance sheet in Excel. Honorlock can prevent the candidate from copying formulas from external sources and from receiving unauthorized assistance during the task.
Designing Authentic Assessments
- Incorporating Real Job Tasks/Situations: Design tasks that require candidates to apply their knowledge and skills to real work situations.
- Using Industry-Specific Tools: Require the use of software and tools that are actually used in that job/industry.
- Requiring Reflective Exercises: Include reflective exercises like self-assessment and peer review of fellow employees’ work to encourage deeper engagement and social learning.
Addressing Emerging Threats: Cluely AI
Cluely AI is a new threat to the integrity of exams and interviews. It’s an AI application that sits on screen as a transparent overlay and provides real-time answers based on what’s on the screen or being asked by an interviewer.
In other words, it’s a chatbot that sees and hears everything, and it’s almost impossible to detect because it:
- Bypasses Keyboard Logging: Uses hidden shortcuts to avoid detection.
- Masks Tab Activity: Employs overlays that prevent monitoring tools from recognizing tab switches.
- Remains Invisible During Screen Sharing: Ensures its presence isn’t apparent during proctored sessions.
Honorlock’s proctoring platform can block Cluely and other AI tools because it:
- Blocks Unauthorized Applications: Prevents Cluely and any other unauthorized tools from being launched during certification exams, interviews, presentations, etc.
- Allows Necessary Tools: Admins can configure proctoring settings to allow certain applications that are required for the test, such as Microsoft Word or Excel.
- Monitors for Suspicious Behavior: AI detects suspicious behavior and alerts a live proctor to review it and intervene if necessary.
By combining advanced proctoring technology with thoughtfully designed assessments, professional education providers can uphold the integrity of their programs, ensuring that credentials reflect true competence and readiness for the workforce.
Part 5: Establishing Clear AI Policies in Professional Education
Main Menu
AI is becoming more common in workplace learning and certification, and organizations need to develop clear policies on when and how it can be used. These policies should strike a balance that supports productivity without compromising integrity, accessibility, regulatory compliance, or privacy.
AI Policies Are Necessary, But Not Sufficient
Unfortunately, though, creating AI policies alone won’t prevent misuse. While policies can help define rules and expectations, enforcement relies on organizational leaders and culture, along with proactive measures like using proctoring software to block or manage AI use.
Candidates preparing for certification exams, employees completing security training, and applicants completing pre-hire assessments may use generative AI tools without fully understanding the ethical or professional implications.
What Professional Education AI Policies Should Cover
According to recent research (An et al., 2025; McDonald et al., 2024), the most effective policies address both the opportunities and risks of generative AI. For professional learning and credentialing environments, policies should clearly define:
- Permitted Uses: Specify which AI tools can be used and how they may be used in training and content development.
- Prohibited Uses: Define what constitutes cheating or credential fraud (e.g., using AI to complete an exam, simulate a skill, or impersonate a candidate).
- Disclosure Requirements: Require learners to cite or declare when AI tools are used during assessments or projects.
- Privacy and Security: Be clear that entering or uploading confidential or regulated information into public AI tools isn’t allowed for any reason.
- Data Integrity: Reinforce the importance of verifying all AI-generated content for accuracy.
Tiered AI Use Framework
Organizations can provide clear definitions of AI use in their programs, such as these tiers adapted from the University of Florida:
- AI-permitted: AI use is encouraged for certain tasks (e.g., brainstorming policy drafts or analyzing training transcripts). Disclosure is required.
- Limited AI: AI may be used with limitations (e.g., summarizing background material but not writing final deliverables). Clear boundaries are defined per module.
- No AI: AI tools are prohibited during certain activities, such as certification exams, compliance testing, or live job simulations. Misuse is considered a policy violation.
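A tiered framework like this can also be encoded so that policy checks are consistent across programs. A minimal sketch, where the activity names and tier assignments are hypothetical examples, not an official framework implementation:

```python
# Illustrative mapping of program activities to the three AI-use tiers
# described above. Activity names and tier assignments are hypothetical.

from enum import Enum

class AITier(Enum):
    AI_PERMITTED = "AI-permitted"  # encouraged for certain tasks; disclosure required
    LIMITED_AI = "Limited AI"      # allowed with clear, per-module boundaries
    NO_AI = "No AI"                # prohibited; misuse is a policy violation

# Hypothetical activity-to-tier policy table.
POLICY = {
    "brainstorming_policy_drafts": AITier.AI_PERMITTED,
    "summarizing_background_material": AITier.LIMITED_AI,
    "certification_exam": AITier.NO_AI,
    "compliance_testing": AITier.NO_AI,
}

def ai_allowed(activity: str) -> bool:
    """Return True if any AI use is allowed; unknown activities default to No AI."""
    return POLICY.get(activity, AITier.NO_AI) is not AITier.NO_AI
```

Defaulting unknown activities to the most restrictive tier reflects the conservative stance these policies generally take toward high-stakes assessments.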
Training for Administrators and Candidates
For Trainers, Managers, and Program Leaders
- Understand how AI works: Focus on practical knowledge of how tools generate content and where they introduce risks (e.g., hallucinations, bias).
- Design secure assessments: Build assessments that reduce reliance on AI, such as task-based evaluations, simulations, or multi-step problem-solving.
- Develop talking points: Provide examples of acceptable vs. unacceptable AI use and how to communicate policies to learners or employees.
For Candidates and Learners
- Clarify ethical use: Explain what constitutes misuse or misconduct, especially in high-stakes certification or compliance scenarios.
- Teach AI prompting best practices: Provide examples of how to responsibly use AI to draft study guides or practice questions, not to complete assessments.
- Fact-check and validate: Reinforce the need to verify AI-generated content, especially for technical or regulated information.
Don’t Rely Solely on AI Detectors
AI detectors can be useful, but they’re more of a gut-check than legitimate evidence: their effectiveness and accuracy suffer when AI-generated text is edited or paraphrased by other AI tools. A better approach is to combine them with live proctoring, real-time behavior monitoring, and clear communication of exam policies.
Use AI as a Workforce Support Tool—Not a Replacement
Organizations should position AI as a supplemental tool to increase productivity and problem-solving, not as a substitute for human judgment, training, or certification requirements. For example, AI can be used to draft reports, summarize documents, or assist with administrative tasks, but not to simulate live skills or complete high-stakes assessments.
How Proctoring Technology Like Honorlock Protects Certification Integrity
Honorlock allows organizations and professional education programs to maintain trust and fairness, even when AI creates new integrity risks. Here’s how:
- Blocks Generative AI Tools: Honorlock’s browser extension detects and flags attempts to access AI tools like ChatGPT during exams, helping prevent misuse before it compromises exam validity.
- Combines AI with Live Human Proctoring: Our hybrid approach combines AI with trained, empathetic human proctors who review flagged behaviors in real time.
- Built for Global, Remote Certification Environments: No scheduling. No hassles. Honorlock supports on-demand testing across time zones and devices, which is ideal for remote learners and global credentialing programs.
- Validates Workforce-Ready Credentials: Credentialing leaders like the Smart Automation Certification Alliance (SACA) use Honorlock to secure certifications so they retain their value and reflect real job skills instead of AI-generated shortcuts.
Part 6: Launching AI in Your Certification or Workplace Program
Main Menu
Why AI is Worth the Investment
Whether you’re incorporating AI into your upskilling curriculum or using it to develop assessments, here’s why it’s worth it:
- Prepares Candidates and Improves Retention
Equipping learners and employees with AI literacy helps them build confidence and feel more prepared for their current and future roles, which is key to improving engagement and completion rates in training or certification programs.
- Attracts Talent and Keeps You Competitive
Using AI to improve learning and assessments shows that your organization is committed to innovation, helps you stand out in your industry, and attracts strong candidates and partners.
- Supports Evolving Workforce Needs
Integrating AI into your programs supports your organization’s learning and development while aligning with industry trends and expectations.
Building the Right Team for AI Integration
Successful AI initiatives require cross-functional collaboration across technical, compliance, training, and leadership teams. Here are common roles involved in ProEd AI strategy:
Program Owner/AI Champion
Drives the initiative, aligns goals with organizational priorities, and builds momentum across departments.
Learning & Development Leadership
Identifies use cases for AI in content creation, test design, and learner engagement; oversees instructional design adaptations.
IT & Security Teams
Makes sure any AI tools are compliant, secure, and scalable across platforms. Evaluates infrastructure requirements and integration points.
Compliance/Legal Officers
Validates that AI use aligns with data privacy regulations (e.g., ADA, FERPA, GDPR), credentialing standards, and internal governance.
Talent Development/HR
Aligns AI initiatives with broader workforce development goals, e.g., future skill mapping, performance management, DEI.
Marketing & Communications
Helps position your AI efforts as innovative and trustworthy, internally to staff and externally to candidates, regulators, and partners.
Industry or Credentialing Partners
Engage third-party partners (e.g., certification bodies, industry boards) early to validate alignment with standards and ensure credibility.
Drive Buy-In and Adoption
AI adoption requires trust and clarity. Learners, trainers, and hiring managers need to understand the value and feel confident in how tools will be used.
- Normalize AI Through Familiar Examples
Many users already engage with AI (e.g., Grammarly, Microsoft Copilot, chatbots) without labeling it as such. Use these everyday tools as an entry point for conversation.
- Start Small and Scale in Phases
Test different AI tools for specific use cases, such as automating content tagging, summarizing policy updates, or generating practice questions. Use early wins to build momentum.
- Create Feedback Loops
Conduct surveys and/or focus groups with candidates, employees, and trainers to gather input that can help shape your organization’s policies and practices.
- Host Demo Sessions & AI Workshops
Offer virtual or in-person events where users can ask questions and try out approved tools, which can spark interest and drive greater adoption.
Ensure Workforce Readiness with Valid, Secure Credentials
To secure your certification and training programs and protect reputation and credibility, you need a comprehensive proctoring solution that’s flexible and agile enough to keep pace.
Honorlock offers all the features and services to protect exam integrity while creating a candidate-first experience from start to finish. Whether your organization offers professional certifications and certificates or wants to expand internal upskilling programs and pre-hire assessments, Honorlock helps confirm that the skills you certify are the skills they’ll use on the job.
More Reading & Learning
References
Aad, S., & Hardey, M. (2025). Generative AI: Hopes, controversies and the future of faculty roles in education. Quality Assurance in Education, 33(2), 267–282. https://doi.org/10.1108/QAE-02-2024-0043
Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21(1), 10–22. https://doi.org/10.1186/s41239-024-00444-7
An, Y., Yu, J. H., & James, S. (2025). Investigating the higher education institutions’ guidelines and policies regarding the use of generative AI in teaching, learning, research, and administration. International Journal of Educational Technology in Higher Education, 22(1), 10–23. https://doi.org/10.1186/s41239-025-00507-3
Bhutoria, A. (2022). Personalized education and Artificial Intelligence in the United States, China, and India: A systematic review using a Human-In-The-Loop model. Computers and Education: Artificial Intelligence, 3, 100068.
Chang, D. H., Lin, M. P.-C., Hajian, S., & Wang, Q. Q. (2023). Educational design principles of using ai chatbot that supports self-regulated learning in education: Goal setting, feedback, and personalization. Sustainability, 15(17), 12921-. https://doi.org/10.3390/su151712921
Chapman, E., Zhao, J., & Sabet, P. G. P. (2024). Generative artificial intelligence and assessment task design: Getting back to basics through the lens of the AARDVARC Model. Education, Research and Perspectives, 51, 1–36.
Creely, E. (2024). Exploring the role of generative ai in enhancing language learning: opportunities and challenges. International Journal of Changes in Education, 1(3), Article 3. https://doi.org/10.47852/bonviewIJCE42022495
Dai, W., Lin, J., Jin, H., Li, T., Tsai, Y.-S., Gašević, D., & Chen, G. (2023). Can large language models provide feedback to students? A case study on ChatGPT. 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), 323–325. https://doi.org/10.1109/ICALT58122.2023.00100
Deng, X., & Yu, Z. (2023). A meta-analysis and systematic review of the effect of chatbot technology use in sustainable education. Sustainability, 15(4), Article 4. https://doi.org/10.3390/su15042940
Deschenes, A., & McMahon, M. (2024). A survey on student use of generative ai chatbots for academic research. Evidence Based Library and Information Practice, 19(2), Article 2. https://doi.org/10.18438/eblip30512
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
Evmenova, A. S., Borup, J., & Shin, J. K. (2024). Harnessing the power of generative ai to support all learners. TechTrends, 68(4), 820–831. https://doi.org/10.1007/s11528-024-00966-x
Gruenhagen, J. H., Sinclair, P. M., Carroll, J.-A., Baker, P. R. A., Wilson, A., & Demant, D. (2024). The rapid rise of generative AI and its implications for academic integrity: Students’ perceptions and use of chatbots for assistance with assessments. Computers and Education: Artificial Intelligence, 7, 100273. https://doi.org/10.1016/j.caeai.2024.100273
Hasebrook, J. P., Michalak, L., Kohnen, D., Metelmann, B., Metelmann, C., Brinkrolf, P., Flessa, S., & Hahnenkamp, K. (2023). Digital transition in rural emergency medicine: Impact of job satisfaction and workload on communication and technology acceptance. PLOS ONE, 18(1), e0280956. https://doi.org/10.1371/journal.pone.0280956
Imundo, M. N., Watanabe, M., Potter, A. H., Gong, J., Arner, T., & McNamara, D. S. (2024). Expert thinking with generative chatbots. Journal of Applied Research in Memory and Cognition, 13(4), 465–484. https://doi.org/10.1037/mac0000199
Jin, Y., Yan, L., Echeverria, V., Gašević, D., Martinez-Maldonado, R. (2024). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. arXiv:2405.11800v1. https://doi.org/10.48550/arXiv.2405.11800
Klos, M. C., Escoredo, M., Joerin, A., Lemos, V. N., Rauws, M., & Bunge, E. L. (2021). Artificial intelligence–based chatbot for anxiety and depression in university students: Pilot randomized controlled trial. JMIR Formative Research, 5(8), e20678. https://doi.org/10.2196/20678
Kofinas, A. K., Tsay, C. H.-H., & Pike, D. (2025). The impact of generative AI on academic integrity of authentic assessments within a higher education context. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13585
Koudela-Hamila, S., Santangelo, P. S., Ebner-Priemer, U. W., & Schlotz, W. (2022). Under which circumstances does academic workload lead to stress? Journal of Psychophysiology, 36(3), 188–197. https://doi.org/10.1027/0269-8803/a000293
Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education, 20(1), 56. https://doi.org/10.1186/s41239-023-00426-1
Liu, J. Q. J., Hui, K. T. K., Al Zoubi, F., Zhou, Z. Z. X., Samartzis, D., Yu, C. C. H., Chang, J. R., & Wong, A. Y. L. (2024). The great detectives: Humans versus AI detectors in catching large language model-generated medical writing. International Journal for Educational Integrity, 20(1), 8–14. https://doi.org/10.1007/s40979-024-00155-6
Liu, L., Subbareddy, R., & Raghavendra, C. G. (2022). AI Intelligence Chatbot to Improve Students Learning in the Higher Education Platform. Journal of Interconnection Networks, 22(Supp02), 2143032. https://doi.org/10.1142/S0219265921430325
Liu, M., & Reinders, H. (2025). Do AI chatbots impact motivation? Insights from a preliminary longitudinal study. System (Linköping), 128, 103544-. https://doi.org/10.1016/j.system.2024.103544
Mai, D. T. T., Da, C. V., & Hanh, N. V. (2024). The use of ChatGPT in teaching and learning: A systematic review through SWOT analysis approach. Frontiers in Education, 9. https://doi.org/10.3389/feduc.2024.1328769
McDonald, N., Johri, A., Ali, A., & Hingle, A. (2024). Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. arXiv:2402.01659.
Neumann, A. T., Arndt, T., Köbis, L., Meissner, R., Martin, A., de Lange, P., Pengel, N., Klamma, R., & Wollersheim, H.-W. (2021). Chatbots as a Tool to Scale Mentoring Processes: Individually Supporting Self-Study in Higher Education. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.668220
Nguyen Thanh, B., Vo, D. T. H., Nguyen Nhat, M., Pham, T. T. T., Thai Trung, H., & Ha Xuan, S. (2023). Race with the machines: Assessing the capability of generative AI in solving authentic assessments. Australasian Journal of Educational Technology, 39(5), 59–81. https://doi.org/10.14742/ajet.8902
Pellas, N. (2025). The role of students’ higher-order thinking skills in the relationship between academic achievements and machine learning using generative AI chatbots. Research and Practice in Technology Enhanced Learning, 20, 36-. https://doi.org/10.58459/rptel.2025.20036
Pergantis, P., Bamicha, V., Skianis, C., & Drigas, A. (2025). AI Chatbots and Cognitive Control: Enhancing Executive Functions Through Chatbot Interactions: A Systematic Review. Brain Sciences, 15(1), 47-. https://doi.org/10.3390/brainsci15010047
Sánchez-Vera, F. (2025). Subject-specialized chatbot in higher education as a tutor for autonomous exam preparation: Analysis of the impact on academic performance and students’ perception of its usefulness. Education Sciences, 15(1), 26-. https://doi.org/10.3390/educsci15010026
Schei, O. M., Møgelvang, A., & Ludvigsen, K. (2024). Perceptions and Use of AI Chatbots among Students in Higher Education: A Scoping Review of Empirical Studies. Education Sciences, 14(8), 922-. https://doi.org/10.3390/educsci14080922
Shankar, S. K., Pothancheri, G., Sasi, D., & Mishra, S. (2025). Bringing Teachers in the Loop: Exploring Perspectives on Integrating Generative AI in Technology-Enhanced Learning. International Journal of Artificial Intelligence in Education, 35(1), 155–180. https://doi.org/10.1007/s40593-024-00428-8
Song, C., & Song, Y. (2023). Enhancing academic writing skills and motivation: Assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.1260843
Stojanov, A., Liu, Q., & Koh, J. H. L. (2024). University students’ self-reported reliance on ChatGPT for learning: A latent profile analysis. Computers and Education: Artificial Intelligence, 6, 100243. https://doi.org/10.1016/j.caeai.2024.100243
Susnjak, T., & McIntosh, T. (2024). ChatGPT: The end of online exam integrity? Education Sciences, 14(6), 656-. https://doi.org/10.3390/educsci14060656
Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 15. https://doi.org/10.1186/s41239-024-00448-3
Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), Article 2. https://doi.org/10.15678/EBER.2023.110201
Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), 26–39. https://doi.org/10.1007/s40979-023-00146-z
Wood, D., & Moss, S. H. (2024). Evaluating the impact of students’ generative AI use in educational contexts. Journal of Research in Innovative Teaching & Learning, 17(2), 152–167. https://doi.org/10.1108/JRIT-06-2024-0151
Wu, R., & Yu, Z. (2024). Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. British Journal of Educational Technology, 55(1), 10–33. https://doi.org/10.1111/bjet.13334
Zhong, T., Zhu, G., Hou, C., Wang, Y., & Fan, X. (2024). The influences of ChatGPT on undergraduate students’ demonstrated and perceived interdisciplinary learning. Education and Information Technologies, 29(17), 23577–23603. https://doi.org/10.1007/s10639-024-12787-9