
Artificial intelligence (AI)

This research guide will support your research and learning journey in artificial intelligence.

Being AI literate

AI literacy refers to the skills and competencies needed to interact critically and meaningfully with AI technologies and applications. It includes understanding the embedded principles and limitations of each version of AI programs and being able to critically evaluate – and question, when necessary – their context, design and implementations.

(Silvano 2023, AI literacy for everyone, DigiCo, https://digico.global/ai-literacy-for-everyone/)


Evaluating AI tools

Being AI literate does not mean you need to understand the advanced mechanics of AI. It means that you are actively learning about the technologies involved and that you critically approach any texts you read that concern AI, especially news articles.

The following tool, ROBOT, can be used when reading about and using AI applications to help you consider the legitimacy of the technology.

ROBOT stands for: Reliability, Objective, Bias, Ownership, Type.


Reliability

  • How reliable is the information available about the AI technology?
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials and potential biases?
  • If it is produced by the party responsible for the AI, how much information are they making available?
    • Is information only partially available due to trade secrets?
    • How biased is the information that they produce?

Objective

  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform?
    • To convince?
    • To find financial support?


Bias 

  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
    • By the source of information?
    • By the party responsible for the AI?
    • By its users?


Ownership

  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
    • Is it a private company?
    • The government?
    • A think tank or research group?
  • Who has access to it?
  • Who can use it?


Type

  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention? 

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
To cite in APA: Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry.
https://thelibrairy.wordpress.com/2020/03/11/the-robot-test


Legal, social and ethical considerations

It's important to think critically about why and how you use any new digital tool or source of information and to consider its limitations.

Bias and discrimination: 

AI algorithms can perpetuate and amplify unfair outcomes, both through societal biases embedded in the massive datasets used to train AI systems and through the algorithms that process that data.


Transparency and accountability:

AI systems often operate as a "black box": there is limited or no information about the datasets used to train them or where those datasets were sourced. The purpose of transparent AI is to ensure that AI models can be explained and communicated, and that those responsible for them can be held accountable for errors or harm caused.


Creativity and ownership:

Generative AI can pose a significant risk to individuals' intellectual property, as some models have been trained on data that was not lawfully obtained. Soundbites, art, music, and literature are transformed into patterns and relationships, which are used to derive rules and then to make judgments and predictions when responding to a prompt.

Ownership raises further questions: Who owns AI-generated art? Who can commercialize it? Who is at risk of infringement?
