Using AI to Enhance Accessibility

The Langara Accessibility Handbook for Teaching and Learning has a new chapter on AI Generated Alt Text. Generative AI chatbots have rapidly improved their ability to recognize and describe images. Copilot, ChatGPT, and Gemini have proven to be useful tools for describing images and providing a starting point for writing alternative text (alt text). This new resource explains:

  • How to upload images to Copilot, ChatGPT, and Google Gemini.
  • Effective prompts to get image descriptions.
  • How to refine output to write effective alt text.
  • What alt text is, when to use it, and how to write it.

In addition to this new resource, EdTech is offering a workshop on using AI to enhance accessibility. Join EdTech on May 15th for this interactive session, perfect for anyone who creates content with visual elements. Adding alt text to images is essential to creating accessible content. AI can be a useful tool to help start the process of writing alt text, and this session will introduce effective tools and prompts.

The session offers a blend of theory and practice, with hands-on exercises in crafting prompts that yield precise and useful output. This workshop will equip Langarans with the skills to harness the potential of generative AI, paving the way for more accessible and inclusive content. EdTech AI and accessibility experts will be on hand to help. Participants are encouraged to bring visual material that needs alt text.

Register for Enhance Accessibility Using AI today!

Generative Artificial Intelligence (Gen AI) Resources

AI generated image of a humanoid robot teacher with a pointer in a classroom, standing in front of a blackboard with equations
Image generated by DALL-E.

Whether you are a superuser or a novice, the number of resources on generative artificial intelligence can be overwhelming. EdTech and TCDC have curated some that we’d like to recommend.

  • How to access Copilot (Microsoft)
    • Interested in trying a generative AI tool or using it in your course? ChatGPT and Copilot (formerly Bing Chat) are currently available in Canada. Langara College students and employees have access to a premium version of Copilot through Microsoft Enterprise and the Edge browser. Microsoft’s FAQs provide information on how to access Copilot through Microsoft Edge. 
  • Practical AI for Instructors and Students (Ethan Mollick/Wharton School, August 2023)
    • If you’re looking for a great primer on AI, this series of five videos is worth the watch. Each video is approximately 10 minutes so the whole series can be viewed in under an hour. Topics include: 1) an introduction to AI; 2) what large language model (LLM) platforms like ChatGPT are and how to start using them; 3) how to prompt AI; 4) how instructors can leverage AI; and 5) how students can use AI.
    • Note: this series references four LLM platforms: ChatGPT, Bing Copilot, Bard, and Claude. Bard and Claude are not yet available in Canada.
  • AI Primer by Educause
    • This article is a reading (and viewing) list that links to resources that do a deeper dive into generative AI. A good resource for those who know the basics and would like to learn more.  

EdTech and TCDC also regularly offer professional learning opportunities on AI topics. Check the PD Events Calendar for current offerings.

As always, if you’re planning to integrate AI into your course, please be aware that: 

  • There are privacy concerns with AI platforms. We recommend using caution when inputting – or having your students input – private, personal, or sensitive information (e.g. resumes or other identifying data).  
  • For those using assistive technology such as screen readers, some AI platforms are more accessible than others. For more information, please see Accessibility of AI Interfaces by Langara Assistive Technologist, Luke McKnight. 

If you would like more recommendations for AI resources, or any other AI-related support, please contact EdTech or TCDC.

Accessibility at Langara College

Empowering Accessibility: Register to Join Our Upcoming Workshops

EdTech is excited to announce a series of upcoming workshops dedicated to improving the accessibility of Microsoft Word documents and PowerPoint presentations.

Why Accessibility Matters

In today’s digital age, ensuring that everyone, regardless of their abilities, can access and understand information is crucial. This is where accessibility comes in. It’s about making sure that everyone has equal access to information and functionality.

What Our Workshops Offer

Our workshops are designed to provide you with the tools and knowledge to create accessible Word documents and PowerPoint presentations. We believe that with a little effort and the right guidance, we can make a significant difference in making information more accessible to all.

Who Should Attend

These workshops are for everyone! Whether you’re a content creator, an educator, or just someone looking to learn, these workshops are for you. No prior experience is required.

Let’s Make a Difference Together

By participating in these workshops, not only will you enhance your skills, but you’ll also contribute to a more inclusive and accessible digital world. So why wait? Join us to learn how to make information accessible to all.

Learning Lab: Create an Accessible Word Document

Date: January 19

Time: 10:30 AM – 12:00 PM

Location: C202

How to Create Accessible PowerPoint Slide Presentations

Date: January 26

Time: 10:30 AM – 12:00 PM

Location: Zoom

Learning Lab: Improve the Accessibility of Existing PowerPoint Slides

Date: February 16

Time: 10:30 AM – 12:30 PM

Location: C202

Learning Lab: Improve the Accessibility of Existing PowerPoint Slides Drop-In

Date: February 20

Time: 10:30 AM – 12:00 PM

Location: C203

Learning Lab: Create an Accessible Word Document

Date: April 5

Time: 10:30 AM – 12:00 PM

Location: C202

Explore the World of AI: Join Our Monthly AI Tinker Time Workshops

Are you curious about artificial intelligence and its applications but unsure where to begin? Look no further! Join us at AI Tinker Time, happening on the first Thursday of every month, and dive into the exciting world of AI tools.

Whether you’re new to AI or looking to expand your knowledge, these sessions are the perfect opportunity to experiment with various AI tools and discover how they can enhance your practice. Our EdTech faculty, staff, and TCDC colleagues will lead hands-on sessions, where we’ll test different AI tools for accuracy, quality, and effectiveness in addressing diverse teaching and learning needs.

Together, we’ll explore critical questions such as:

  • How accurate is the AI output?
  • Is the quality of AI output sufficient for practical application?
  • Can AI-generated content serve as a solid foundation for further refinement?
  • How do various AI tools compare in functionality and output?
  • Which AI tools can we recommend for specific needs based on our testing?

Don’t miss this chance to join a community of like-minded individuals eager to unlock the potential of AI in education. Whether you’re an educator, a technologist, or simply AI-curious, our AI Tinker Time workshops are the ideal space to learn, experiment, and collaborate.

Monthly topics:

  • January – Using AI to generate alternative text
  • February – Comparing output generated by Bing Chat’s different modes
  • March – Using AI audio generation and editing
  • April – Using AI to “break” assignments

Please register to join our AI Tinker Time.

EdTech Tools and Privacy

Generative AI Tools & Privacy

Generative AI applications generate new content, such as text, images, videos, music, and other forms of media, based on user inputs. These systems learn from vast datasets containing millions of examples to recognize patterns and structures, without needing explicit programming for each task. This learning enables them to produce new content that mirrors the style and characteristics of the data they trained on.

AI-powered chatbots like ChatGPT can replicate human conversation. Specifically, ChatGPT is a sophisticated language model that understands and generates language by identifying patterns of word usage. It predicts the next words in a sequence, which proves useful for tasks ranging from writing emails and blogs to creating essays and programming code. Its adaptability to different writing and coding styles makes it a powerful and versatile tool. Major tech companies, such as Microsoft, are integrating ChatGPT into applications like MS Teams, Word, and PowerPoint, indicating a trend that other companies are likely to follow.

Despite their utility, these generative AI tools come with privacy risks for students. As these tools learn from the data they process, any personal information included in student assignments could be retained and used indefinitely. This poses several privacy issues: students may lose control over their personal data, face exposure to data breaches, and have their information used in ways they did not anticipate, especially when data is transferred across countries with varying privacy protections. To maintain privacy, it is crucial to handle student data transparently and with clear consent.

Detection tools like Turnitin now include features to identify content generated by AI, but these tools also collect and potentially store personal data for extended periods. While Turnitin has undergone privacy and risk evaluations, other emerging tools have not been similarly vetted, leaving their privacy implications unclear.

The ethical landscape of generative AI is complex, encompassing data bias concerns that can result in discriminatory outputs, and intellectual property issues, as these models often train on content without the original creators’ consent. Labour practices also present concerns: for example, OpenAI has faced criticism for the conditions of the workers it employs to filter out harmful content from its training data. Furthermore, the significant environmental impact of running large AI models, due to the energy required for training and data storage, raises sustainability questions. Users must stay well-informed and critical of AI platform outputs to ensure responsible and ethical use.


This article is part of a collaborative Data Privacy series by Langara’s Privacy Office and EdTech. If you have data privacy questions or would like to suggest a topic for the series, contact Joanne Rajotte (jrajotte@langara.ca), Manager of Records Management and Privacy, or Briana Fraser, Learning Technologist & Department Chair of EdTech.

Accessibility Teaching Practices at Langara College

Accessibility of AI Interfaces


The rapid spread of AI tools like ChatGPT and Bing has consumed the attention of educators, students, and researchers. Since the explosion of AI tools in late 2022, we have researched, read about, and attended events in an attempt to understand the dangers and opportunities of AI. One topic missing from the deluge of information is the accessibility of AI interfaces for users of assistive technology.

To help fill this gap, Langara’s assistive technologist tested nine AI interfaces with automated testing tools and assistive technology.

To learn more about the evaluation process, the test results, and recommendations on which AI tools are more accessible to users of assistive technology, read Accessibility of AI Interfaces.

For further discussion, comments, or questions please contact assistivetech@langara.ca.

EdTech Tools and Privacy

Peer Assessment and Privacy Risks

Instructors, have you considered how privacy, security, and confidentiality apply to teaching and learning, specifically the data you gather as part of assessment?

To support teaching and learning, you gather and analyze data about students all year and in many ways, including anecdotal notes, test results, grades, and observations. The tools we commonly use in teaching and learning, including Brightspace, gather information. The analytics collected and reports generated by teaching and learning tools are sophisticated and constantly changing. We should, therefore, carefully consider how we can better protect student data.  

When considering privacy, instructors should keep in mind that all student personal information belongs to the student and should be kept private. Students trust their instructors to keep their data confidential and share it carefully. Instructors are responsible for holding every student’s data in confidence.  This information includes things like assessment results, grades, student numbers, and demographic information. 

Although most students are digital natives, they aren’t necessarily digitally literate. Instructors can ensure students’ privacy by coaching them about what is appropriate to share and helping them understand the potential consequences of sharing personal information. 

One area of teaching and learning in which you may not have adequately considered privacy or coached students to withhold personal information and respect confidentiality is peer assessment. Peer assessment or peer review provides a structured learning process for students to critique and provide feedback to each other on their work. It helps students develop lifelong skills in assessing and providing feedback to others and equips them with skills to self-assess and improve their own work. However, in sharing their work, students may also be sharing personal identifying information, such as student numbers, or personal experiences. To help protect students’ personal information and support confidentiality, we recommend that you consider the following points.

Privacy Considerations for Peer Assessment 

  • If student work will be shared with peers, tell students not to disclose sensitive personal information. Sensitive personal information may include, for example, medical history, financial circumstances, traumatic life experiences, or their gender, race, religion, or ethnicity. 
  • Inform students of ways in which their work will be assessed by their peers. 
  • Consider having students evaluate anonymous assignments for more objective feedback.  
  • Coach students to exclude all identifiable information, including student number. 
  • If students’ work is to be posted online, consider associated risks, such as
    • another person posting the work somewhere else online without their consent; and
    • the content being accessed by Generative AI tools like ChatGPT that trawl the internet to craft responses to users’ queries.

This article is part of a collaborative Data Privacy series by Langara’s Privacy Office and EdTech. If you have data privacy questions or would like to suggest a topic for the series, contact Joanne Rajotte (jrajotte@langara.ca), Manager of Records Management and Privacy, or Briana Fraser, Learning Technologist & Department Chair of EdTech.

Generative AI and STEM

Background

Artificial intelligence is not new. It has been part of our personal and work lives for a long time (autocorrect, facial recognition, satnav, etc.), and large language models like ChatGPT have been a big topic in education since version 3.5 was released in late November 2022. Large language models (LLMs) are trained on enormous amounts of data in order to recognize the patterns of and connections between words, and then produce text based on the probability of which word is most likely to come next. One thing that LLMs don’t do, however, is computation. That said, the most recent OpenAI release, GPT-4, seems to have made strides on standardized tests in many STEM areas, and GPT-4 now has a plug-in for Wolfram Alpha, which does do computation.
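
To make this concrete, here is a toy Python sketch of next-word prediction. Real LLMs use neural networks trained on vast corpora rather than simple word-pair counts, so treat this only as an illustration of the core idea: generating text one probable word at a time.

    import random
    from collections import Counter, defaultdict

    # Toy "language model": count which words follow which in a tiny corpus.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(word):
        # Sample the next word in proportion to how often it followed `word`.
        counts = following[word]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate a short continuation, one probable word at a time.
    word, output = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

An LLM does the same thing at a vastly larger scale, which is also why it can sound fluent without ever verifying a computation.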

Chart from OpenAI: exam result improvements from ChatGPT 3.5 to 4

Andrew Roberts (Math Dept) and Susan Bonham (EdTech) did some testing to see how ChatGPT (3.5), GPT-4, and GPT-4 with the Wolfram plugin would handle some questions from Langara’s math courses.

Test Details

Full test results are available; an accessible version of the problems and full details of the “chats,” with subsequent discussion of each AI response, are available at the link.

The following questions were tested (the problem statements are images; an accessible version is available at the link above):

  • Problem 1: (supplied by Langara mathematics instructor Vijay Singh)
  • Problem 2: (Precalculus)
  • Problem 3: (Calculus I)
  • Problem 4: (Calculus II)

Discussion

Responses from current versions of ChatGPT are not reliable enough to be accepted uncritically.

ChatGPT needs to be approached as a tool, and careful proofreading of responses is needed to check for errors in computation or reasoning. Errors may be blatant and readily apparent, or subtle and hard to spot without close reading and a solid understanding of the concepts.

Perhaps the biggest danger for a student learning a subject is in the “plausibility” of many responses even when they are incorrect. ChatGPT will present its responses with full confidence in their correctness, whether this is justified or not.

When a response contains errors or lacks clarity, further prompting is needed to correct and refine the initial response. This requires a certain amount of base knowledge on the part of the user in order to guide ChatGPT to the correct solution.

Algebraic computations cannot be trusted as ChatGPT does not “know” the rules of algebra but is simply appending steps based on a probabilistic machine-learning model that references the material on which it was trained. The quality of the answers will depend on the quality of the content on which ChatGPT was trained. There is no way for us to know exactly what training material ChatGPT is referencing when generating its responses. The average quality of solutions sourced online should give us pause.

Below is one especially concerning example of an error encountered during our testing sessions:

In its response to the optimization problem (Problem 3), GPT-3.5 attempts to differentiate the volume function, but it incorrectly differentiates the first term with respect to R while correctly differentiating the second term with respect to h. (The full working appears in the test results linked above.)

It is the plausibility of this solution (despite the serious error) that is dangerous for a student who may take the ChatGPT response at face value.

Access to the Wolfram plugin in GPT-4 should mean that algebraic computations sent to Wolfram can be trusted, but errors in reasoning and interpretation can still occur in the steps between those requests.

Concluding Thought

It will be important for us to educate our students about the dangers involved in using this tool uncritically, while acknowledging its potential benefits when used well.

Want to Learn More?

EdTech and TCDC run workshops on various AI topics. You can request a bespoke AI workshop tailored to your department or check out the EdTech and TCDC workshop offerings. For all other questions, please contact edtech@langara.ca.

A.I. Detection: A Better Approach 

Over the past few months, EdTech has shared concerns about A.I. classifiers, such as Turnitin’s A.I. detection tool, AI Text Classifier, GPTZero, and ZeroGPT. Both in-house testing and statements from Turnitin and OpenAI confirm that A.I. text classifiers cannot reliably differentiate between A.I.- and human-generated writing. Given that the tools are unreliable and easy to manipulate, EdTech discourages their use. Instead, we suggest using Turnitin’s Similarity Report to help identify A.I.-hallucinated and fabricated references.

What is Turnitin’s Similarity Report?

The Turnitin Similarity Report quantifies how similar a submitted work is to other pieces of writing, including works on the Internet and those stored in Turnitin’s extensive database, highlighting sections that match existing sources. The similarity score represents the percentage of writing that is similar to other works. 
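
As a rough illustration of the arithmetic behind the score, consider the Python sketch below. Turnitin’s actual matching compares spans of text against its database and the Internet; the function here is purely hypothetical and shows only how a percentage of matched writing is derived.

    # Hypothetical sketch: the similarity score as the share of words in a
    # submission that match known sources. Turnitin's real matching works on
    # spans of text, not individual words.
    def similarity_score(submission_words, matched_words):
        if not submission_words:
            return 0.0
        return 100 * len(matched_words) / len(submission_words)

    words = "the quick brown fox jumps over the lazy dog".split()
    matched = words[:4]  # suppose the first four words match a source
    print(f"{similarity_score(words, matched):.0f}% similar")  # prints "44% similar"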

A.I.-Generated References

A.I. researchers call the tendency of A.I. to make things up a “hallucination.” A.I.-generated responses can appear convincing but may include irrelevant, nonsensical, or factually incorrect content.

ChatGPT and other natural language processing programs do a poor job of referencing sources, often fabricating plausible references. Because the references seem real, students often mistake them for legitimate ones.

Common reference or citation errors include: 

  • Failure to include a Digital Object Identifier (DOI) or incorrect DOI 
  • Misidentification of source information, such as journal or book title 
  • Incorrect publication dates 
  • Incorrect author information 

Using Turnitin to Identify Hallucinated References 

To use Turnitin to identify hallucinated or fabricated references, do not exclude quotes and bibliographic material from the Similarity Report. Quotes and bibliographic information will be flagged as matching or highly similar to source-based evidence. Fabricated quotes, references, and bibliographic information will have zero similarity because they will not match source-based evidence.

Quotes and bibliographic information with no similarity to existing works should be investigated to confirm that they are fabricated.  
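
In code form, the heuristic amounts to flagging reference entries whose similarity score is zero for manual review. The function and data below are hypothetical (Turnitin reports these scores through its interface, not an API like this), but they capture the logic described above.

    # Hypothetical sketch of the heuristic: genuine quotes and references should
    # match existing sources, so entries with 0% similarity are candidates for
    # fabrication and deserve a manual check.
    def flag_suspect_references(references):
        # `references` maps each citation string to its similarity score (0-100).
        return [ref for ref, similarity in references.items() if similarity == 0]

    report = {
        "Smith, J. (2019). A real paper. Journal of Things, 4(2).": 87,
        "Doe, A. (2021). Plausible but fake. Imaginary Review, 9(1).": 0,
    }
    print(flag_suspect_references(report))  # flags the fabricated-looking entry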

References

Athaluri, S., Manthena, S., Kesapragada, V., et al. (2023). Exploring the boundaries of reality: Investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4), e37432. https://doi.org/10.7759/cureus.37432

Metz, C. (2023, March 29). What makes A.I. chatbots go wrong? The curious case of the hallucinating software. The New York Times. https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html

OpenAI. (2022, January 27). Aligning language models to follow instructions. https://openai.com/research/instruction-following

Weise, K., & Metz, C. (2023, May 1). When A.I. chatbots hallucinate. The New York Times. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html

Welborn, A. (2023, March 9). ChatGPT and fake citations. Duke University Libraries. https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/

screenshot of a Turnitin Similarity Report, with submitted text on the left and the report panel on the right

AI Classifiers — What’s the problem with detection tools?

AI classifiers don’t work!

Natural language processing AIs are meant to be convincing. They create content that “sounds plausible because it’s all derived from things that humans have said” (Marcus, 2023). The intent is to produce outputs that mimic human writing. The result: the world’s leading AI companies can’t reliably distinguish the products of their own machines from the work of humans.

In January, OpenAI released its own AI text classifier. According to OpenAI: “Our classifier is not fully reliable. In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives).”

A bit about how AI classifiers identify AI-generated content

GPTZero, a commonly used detection tool, identifies AI-created works based on two factors: perplexity and burstiness.

Perplexity measures the complexity of text. Classifiers identify text that is predictable and lacking complexity as AI-generated and highly complex text as human-generated.

Burstiness compares variation between sentences. It measures how predictable a piece of content is by the homogeneity of the length and structure of sentences throughout the text. Human writing tends to be variable, switching between long and complex sentences and short, simpler ones. AI sentences tend to be more uniform with less creative variability.

The lower the perplexity and burstiness score, the more likely it is that text is AI generated.
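
GPTZero does not publish its exact formulas, so the Python sketch below is only an approximation of the intuition. Perplexity requires a language model to score how predictable each word is, but a simple burstiness proxy, the variation in sentence length, can be computed directly.

    import re
    import statistics

    def burstiness(text):
        # Split into sentences and measure the spread of their lengths.
        sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
        lengths = [len(s.split()) for s in sentences]
        if len(lengths) < 2:
            return 0.0
        # Higher standard deviation = more "bursty", i.e. more human-like.
        return statistics.stdev(lengths)

    human = "I ran. Then, after a long and winding afternoon, we finally talked it through. Done."
    uniform = "The cat sat on the mat. The dog sat on the rug. The bird sat on the perch."
    print(burstiness(human))    # relatively high: varied sentence lengths
    print(burstiness(uniform))  # 0.0: every sentence is the same length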

Turnitin is a plagiarism-prevention tool that helps check the originality of student writing. On April 4th, Turnitin released an AI-detection feature.

According to Turnitin, its detection tool works a bit differently.

When a paper is submitted to Turnitin, the submission is first broken into segments of text that are roughly a few hundred words (about five to ten sentences). Those segments are then overlapped with each other to capture each sentence in context.

The segments are run against our AI detection model, and we give each sentence a score between 0 and 1 to determine whether it is written by a human or by AI. If our model determines that a sentence was not generated by AI, it will receive a score of 0. If it determines the entirety of the sentence was generated by AI it will receive a score of 1.

Using the average scores of all the segments within the document, the model then generates an overall prediction of how much text (with 98% confidence based on data that was collected and verified in our AI innovation lab) in the submission we believe has been generated by AI. For example, when we say that 40% of the overall text has been AI-generated, we’re 98% confident that is the case.

Currently, Turnitin’s AI writing detection model is trained to detect content from the GPT-3 and GPT-3.5 language models, which includes ChatGPT. Because the writing characteristics of GPT-4 are consistent with earlier model versions, our detector is able to detect content from GPT-4 (ChatGPT Plus) most of the time. We are actively working on expanding our model to enable us to better detect content from other AI language models.
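
Read as an algorithm, Turnitin’s description boils down to overlapping-window scoring followed by averaging. The sketch below uses stand-in window sizes and a stub classifier, since the real model is proprietary; it is meant only to show the shape of the pipeline.

    def score_sentence(sentence):
        # Stand-in for Turnitin's proprietary classifier: returns a value in
        # [0, 1], where 0 = human-written and 1 = AI-generated.
        return 0.0

    def detect_ai_share(sentences, window=8, step=4):
        # Overlapping windows of roughly 5-10 sentences, so each sentence is
        # scored in context; the window/step sizes here are assumptions.
        scores = {}
        for start in range(0, len(sentences), step):
            for i, sent in enumerate(sentences[start:start + window], start):
                # Keep the highest score seen for each sentence across windows.
                scores[i] = max(scores.get(i, 0.0), score_sentence(sent))
        # Overall prediction: average sentence score ~ estimated share of AI text.
        return sum(scores.values()) / max(len(scores), 1)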

The Issues

AI detectors cannot prove conclusively if text is AI generated. With minimal editing, AI-generated content evades detection.

L2 writers tend to write with less “burstiness.” Concern about this bias is one of the reasons UBC chose not to enable Turnitin’s AI-detection feature.

ChatGPT’s writing style may be harder to spot than some think.

Privacy violations are a concern with both generators and detectors as both collect data.

Now what?

Langara’s EdTech, TCDC, and SCAI departments are working together to offer workshops on four potential approaches: Embrace it, Neutralize it, Ban it, Ignore it. Interested in a bespoke workshop for your department? Complete the request form.


References
Marcus, G. (2023, January 6). Ezra Klein interviews Gary Marcus [Audio podcast episode]. In The Ezra Klein Show. https://www.nytimes.com/2023/01/06/podcasts/transcript-ezra-klein-interviews-gary-marcus.html

Fowler, G. A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-Turnitin/