Hands-On Training: Doctoral Supervision in the Age of AI

One-Day Doctoral Research Training | 10 May 2026 | Malta (In-Person)

Overview

This one-day intensive training is part of the Doctoral Faculty Residency – Innovation and Transformation in Doctoral Education and is offered as a stand-alone training module.

The training addresses one of the most significant transformations currently shaping doctoral education: the integration of Artificial Intelligence into doctoral research and supervision.

AI tools are already embedded in how doctoral candidates explore literature, collect and analyse data, and develop written work. The key question is not whether AI will be used, but how supervisors maintain academic judgement, research quality, and integrity in an AI-mediated research environment.

This training is practical, applied, and supervisor-centred. It focuses on what supervisors must actually do when AI becomes part of the doctoral research process.

Who This Training Is For

This training is ideal for:

  • Doctoral supervisors and faculty involved in doctoral education

  • Experienced doctoral practitioners transitioning into academia or supervision

  • Doctoral candidates who want to better understand how AI is transforming doctoral research practice

No advanced technical AI expertise is required.

 

Learning Outcomes

Participants will:

  • Understand how AI is transforming doctoral research practice

  • Strengthen their ability to supervise AI-supported research responsibly

  • Develop practical strategies to safeguard research quality and integrity

  • Gain confidence addressing authorship, data, and methodological risks

  • Turn AI from a source of uncertainty into a supervised research tool

Training Agenda (Sunday 10 May 2026)

Training Objective: Build concrete supervisory and methodological capability across the full research lifecycle, with AI as a central—but controlled—tool. This is not a lecture about AI. This is a working day where you will use AI, critique AI, and leave with both sharper skills and a shared institutional stance on what responsible AI-mediated supervision looks like.

Training Host: Dr. Jim Wagstaff, GSBM Faculty, Educator & AI Innovator, Singapore

09:00–09:15 | Calibration: Where Are We Starting From?

AI experience in the room will vary. Before we can build together, we need to know what we’re working with.

  • Hands-on pulse check: Everyone completes the same task in ChatGPT, Claude, etc.
  • What did you notice? What surprised you?
  • Framing the day: You will experience AI as your candidates will, then judge it through the lens a supervisor must apply

Outcome: A shared baseline of experience. No assumptions. No one left behind or held back.

09:15–10:45 | Applied Lab I

AI Across the Research Lifecycle

Your doctoral candidates will use AI. The question is not whether but how—and whether they will use it well or badly. This lab puts you in the shoes of the doctoral candidates.

Station A: AI and the Literature (30 min)

  • Tools: NotebookLM, Claude, ChatGPT, etc.
  • Upload papers. Generate synthesis. Query across sources. Create an audio overview.
  • The hard question: What does AI surface that you might have missed—and what does it flatten, oversimplify, gloss over, or get wrong?

Station B: AI and Qualitative Analysis (30 min)

  • Tools: Claude, ChatGPT, NotebookLM
  • Feed in interview transcripts. Ask for thematic coding. Request pattern identification.
  • The hard question: How does AI’s coding compare to your instincts? What would you accept, revise, or reject?

Station C: AI and Quantitative Analysis (30 min)

  • Tools: Claude, ChatGPT, Code Interpreter
  • Work with real data outputs—regression tables, survey results
  • The hard question: Could a candidate use this to understand their data better—or to avoid understanding it altogether?

Outcome: Direct experience with AI’s capabilities and limitations. Initial instincts about where AI helps and where it misleads.

10:45–11:00 | Break

11:00–12:00 | Applied Lab II

AI, Mixed Methods, and the Integration Problem

Mixed methods research is where AI gets interesting—and dangerous. Integration requires judgement that AI can simulate but not possess.

  • Scenario: A candidate has qualitative themes AND quantitative survey data on the same phenomenon
  • Task: Use AI to explore integration—joint displays, convergence and divergence, narrative synthesis
  • The hard question: Where might AI manufacture false coherence? What must a supervisor probe?

Outcome: Understanding of AI’s potential and pitfalls in the most complex methodological territory your candidates will navigate.

12:00–12:30 | Supervisor Reflection

When to Say: “Read the F***ing Papers”

Now put on your supervisor hat. Review outputs from the morning—yours or a colleague’s—as if a candidate submitted them.

  • What would you want to know about how this was produced?
  • What capability must the candidate still demonstrate themselves?
  • What’s your green light, yellow light, red light?

Human judgement is the non-automatable core of supervision. This session is about locating exactly where that judgement must be applied.

Outcome: The shift from experiencing AI to supervising its use. Surfaced tensions that will inform the afternoon.

12:30–13:30 | Lunch

13:30–14:15 | Data Clinic

Ethics, Integrity, and Real-World Data Problems

AI does not solve the fundamental problems of research integrity—it complicates them. This session addresses what supervisors must enforce regardless of the tools candidates use.

  • Data management failures: What goes wrong, and why it matters
  • Auditability and reproducibility: Can you trace how the results were produced?
  • AI-assisted data collection: Code generation, scraping, survey automation—where are the risks?
  • Institutional responsibility: What must the institution guarantee, and what falls to the supervisor?

Outcome: Clear understanding of the ethics and integrity issues that AI amplifies but does not create.

14:15–15:00 | Writing & Integrity Lab

AI-Supported Writing Without Academic Fraud

The difference between editing and authoring matters. So does the difference between using AI to clarify your thinking and using it to avoid thinking altogether.

  • Editing vs. authoring: Where is the line?
  • Detecting over-automation: What does AI-written prose look like, and why does it matter?
  • Making authorship defensible: What documentation and transparency should candidates provide?

Hands-on exercise:

  • Use AI to draft, summarise, or translate findings for non-academic audiences
  • Compare: What happens when AI drafts versus when AI refines your draft?
  • Pair critique: Swap outputs. Score on accuracy, insight, originality, voice.

Outcome: Practical capability in AI-supported research communication. A critical eye for detecting over-automation and defending authorship.

15:00–15:15 | Break

15:15–16:30 | Synthesis Workshop

What Supervision Must Now Enforce

AI will be used regardless of policy. The question is whether your institution has a defensible, transparent stance—or leaves every supervisor to figure it out alone.

Gallery walk (15 min): Review the tensions and insights surfaced throughout the day.

Sorting exercise (35 min): Working in groups, categorise AI use across the research lifecycle:

Research Phase               | Encouraged | Permitted with Transparency | Discouraged or Prohibited
Literature synthesis         |            |                             |
Qualitative coding           |            |                             |
Quantitative interpretation  |            |                             |
Mixed methods integration    |            |                             |
Data collection & management |            |                             |
Writing & communication      |            |                             |

Draft guidance (25 min): Together, we will produce a working draft of AI guidance for supervision: what candidates should be told at the start of their journey, and what supervisors must enforce throughout.

Outcome: A co-created institutional asset. Not imposed from above—built from shared experience.

16:30–17:30 | Presentation of Findings

From Working Groups to Shared Perspective

Each group presents their draft guidance to the full room.

  • Where do we align?
  • Where do we differ—and why?
  • What tensions remain unresolved?

This is not about forcing consensus. It is about making differences visible so faculty can make deliberate choices rather than drift into inconsistency.

Outcome: A consolidated draft of AI-related guidance for doctoral supervision: co-created, debated, and owned by the people who will use it.

17:30–18:00 | Closing

Commitments: What Will You Do Differently?

  • Which tools and use cases were most valuable? Least?
  • One thing you will try with your next supervisee
  • One thing you will watch out for
  • Who owns the draft guidance? When do we revisit it?

Outcome: Personal accountability. Institutional ownership. A path forward.

Exit Condition

Participants leave the training with:

  • Hands-on experience using AI tools across the full doctoral research lifecycle, not just watching someone else use them;
  • Sharpened judgement about when AI helps candidates learn and when it lets them avoid learning;
  • Clear understanding of ethics, integrity, and auditability requirements in AI-mediated research;
  • A co-created guidance document that makes expectations explicit and useful;
  • Personal commitments to supervise differently in an AI-mediated world.

 

What to Bring

  • A laptop with internet access
  • A Google account (for NotebookLM and Gemini)
  • Access to ChatGPT, Gemini, Perplexity, Claude, etc. (free tiers work; paid tiers unlock more capabilities)
  • Willingness to experiment, make mistakes, and think critically

Training Highlights

DURATION

1 Day
This training forms Day 3 of the Doctoral Faculty Residency and can be attended either as part of the full four-day residency or as a stand-alone one-day training module.

DATE

Sunday 10 May 2026
The full Doctoral Faculty Residency runs from Friday 8 to Monday 11 May 2026.
Learn more about the full residency

FEES

Standard fee: 490€ | Early Bird rate: 290€ (until 31 March 2026) | Special Rate for Young Doctoral Researchers: 150€
*The following discounts apply:
● 70% discount for Young Doctoral Researchers
● 40% discount for Doctoral Faculty Development Programme members
● Free for DoctorateHub GSBM Faculty as part of the Continuous Professional Development allowance

STRUCTURE

The training combines Masterclass sessions, Applied supervisory labs, Case-based discussion, and Structured reflection.
All sessions are practice-oriented and grounded in real doctoral supervision contexts.

LOCATION

Malta
Venue to be announced

MINIMUM REQUIREMENTS

The training is targeted at persons engaged in doctoral-level research and doctoral education.
No advanced technical AI expertise is required.

FAQs

What are the admission requirements?

The training is targeted at persons engaged in doctoral-level research and doctoral education.

What are the training fees?

The standard fee for the training is 490€. The Early Bird rate is 290€ (until 31 March 2026).

Do I need to be a member of the Doctoral Faculty Development programme to attend this training?

No. However, members of the Doctoral Faculty Development Programme will be admitted first.

Are there any discounts available?

Yes. The following discounts apply:

● 70% discount for Young Doctoral Researchers
● 40% discount for Doctoral Faculty Development Programme members
● Free for DoctorateHub GSBM Faculty as part of the Continuous Professional Development allowance

Note: Discounts cannot be combined with the Early Bird rate.

Want to get Ready for Doctoral Supervision in the Age of AI?

Join us in Malta on 10 May 2026 for a hands-on, one-day training on integrating Artificial Intelligence into doctoral research and supervision.