In the context of Generative Artificial Intelligence (AI) tools - such as ChatGPT - it is important for both staff and students to explore the tools through the lens of their disciplinary area of expertise and practice in order to better comprehend the benefits and limitations the tools afford.
Beyond the discipline-specific concerns of validity, reliability and appropriateness, additional broader concerns that warrant consideration in the context of Academic Integrity in Higher Education include:
- Ethics – using generative AI without appropriate declaration or referencing must be considered in terms of presenting work that is not one's own.
For reference, at the time of writing, MIC’s Academic Integrity Policy considers that academic dishonesty includes, among other factors:
‘falsely representing the work of others as one’s own in an assignment.’ and ‘using co-authoring assistance in individual academic work, including the commissioning or purchasing of essay writing services, i.e. syndication.’
However, in this fast-evolving space misuse is very hard to prove and detection remains unreliable, even with the aid of detection tools such as Turnitin.
Further ethical implications of the inappropriate use of Generative AI tools include the well-documented fact that the content may not always be reliable - something termed by AI technologists as 'hallucinations' [1]. Such inaccuracies do not appear to be a critical concern for AI and Big Data companies [2]. A further factor that warrants concern is that, at present, Generative AI is trained on already dated material [as, for example, with GPT-4, trained on data up to September 2021; 3] and on data that may perpetuate data-bound biases [4].
As a result, students will need to critically review any content generated by Generative AI tools, form their own opinions and validate these through appropriate referencing.
- Integrity of Assessment – there is a risk that, where Generative AI is used to complete assessment tasks without an appropriate understanding of the assessment topic or a critique of the generated material, the learning outcomes of the module may not be met.
Traditional written assignments are particularly prone to this risk, though it can be mitigated through a considered review of the methods and approaches used in assessments.
- Data Protection or Breaches – using Generative AI tools raises concerns around data privacy and protection [5]. All users - staff, students and others - need to have regard for the appropriateness and potential later use of any details submitted to AI tools. Caution should be applied, as it is currently not possible to trace submitted information or track its further usage. See MIC’s Data Protection Policy.