AI chatbots are ‘poisoning data’, regulator warns universities

‘Data poisoning’ from AI is just one of many risks to research integrity identified by Australia’s university regulator, as institutions scramble to control its use.
Words by Natasha Bita for The Australian
Artificial intelligence is “poisoning data”, the university regulator has told Australian researchers, as the federal government’s review of research grants is delayed by three months.
The risks of using generative AI in research have been outlined by the Tertiary Education Quality and Standards Agency, which has also revealed a jumble of contradictory AI policies among universities scrambling to slap rules on the use of the runaway technology.
“All research staff and students need to understand the risks gen AI tools may pose to data security, accuracy and privacy, and to assess the efficacy and safety of using them,” TEQSA says in new guidance for the use of generative AI in research. “Some of the risks … are data poisoning resulting from models being trained on untrusted or unvalidated data.”
Another threat is that AI models may have been trained on proprietary or customer-owned data, risking breaches of privacy and copyright laws. Researchers also risk “supply chain attacks” from third parties who modify the data.
Research using AI tools is also vulnerable to hackers who can “feed malicious prompts and control user input”.
“(PhD) candidates, supervisors, support staff and external research partners need to understand that some gen AI outputs may undermine research integrity and quality due to inaccuracies, biases or the exclusion of extraneous variables,” TEQSA states.
As the regulator released AI guidelines, the Australian Research Council received a three-month extension to advise federal Education Minister Jason Clare on changes to the $1bn National Competitive Grants Program.
The ARC has missed its June deadline to recommend ways to cut red tape while imposing tougher grant conditions.
Flooded with more than 340 submissions to its discussion paper, the ARC has been granted an extra three months to report to Mr Clare on changes to the taxpayer-funded grant schemes.
The ARC and the National Health and Medical Research Council both issued policies for the use of generative AI in 2023, but have not updated them in light of growing evidence that AI is “learning to lie”.
Apollo Research, which tests AI systems, said in June that some AI models were deliberately deceptive, “lying and making up evidence”.
An NHMRC spokesperson told The Australian that it did not have any data on the use or misuse of AI in research.
Individual institutions are responsible for investigating potential breaches of the Australian Code for the Responsible Conduct of Research, which has not been updated since 2018 – before generative AI became mainstream.
TEQSA’s latest guidance stops short of mandating oral exams for PhD students, but it has told universities they must ensure “gen AI use is acknowledged and declared” by students and researchers. “To maintain assurances that HDR candidates (students in postgraduate research degrees, including PhDs) have attained the relevant skills and knowledge of their degree, institutions should consider including additional assessments as part of the thesis examination process,” TEQSA states.
“Oral examinations are a well-established complementary assessment that affords HDR candidates an opportunity to showcase their knowledge and achievements, providing additional certainty that examination requirements have been met.”
TEQSA calls for closer monitoring of students’ work, stating “supervisors should get to know their students and monitor their progress to assure learning has taken place”.
Conflicting policies are evident in TEQSA’s case studies of how universities are managing the use of AI. Monash University has banned the use of gen AI technologies, such as ChatGPT, during thesis examinations.
To “preserve the confidentiality of thesis examination”, Monash also bans examiners from using AI to “support, prepare or write” an examiner’s report.
The University of Adelaide requires students to maintain their own records of gen AI use.
The University of Southern Queensland requires all PhD students to give an oral defence of their thesis, and at the University of Sydney, rules taking effect this month ban phones, computers, smart glasses, rings and earpieces from “secure exams”.
This article first appeared in The Australian as AI chatbots are ‘poisoning data’, regulator warns universities