Accessibility at Langara College

Empowering Accessibility: Register to Join Our Upcoming Workshops

EdTech is excited to announce a series of upcoming workshops dedicated to improving the accessibility of Microsoft Word documents and PowerPoint presentations.

Why Accessibility Matters

In today’s digital age, it is crucial that everyone, regardless of ability, can access and understand information. That is what accessibility is about: equal access to information and functionality for all.

What Our Workshops Offer

Our workshops are designed to provide you with the tools and knowledge to create accessible Word documents and PowerPoint presentations. We believe that with a little effort and the right guidance, we can make a significant difference in making information more accessible to all.

Who Should Attend

These workshops are for everyone! Whether you’re a content creator, an educator, or simply someone looking to learn, no prior experience is required.

Let’s Make a Difference Together

By participating in these workshops, not only will you enhance your skills, but you’ll also contribute to a more inclusive and accessible digital world. So why wait? Join us to learn how to make information accessible to all.

Learning Lab: Create an Accessible Word Document

Date: January 19

Time: 10:30 AM – 12:00 PM

Location: C202

How to Create Accessible PowerPoint Slide Presentations

Date: January 26

Time: 10:30 AM – 12:00 PM

Location: Zoom

Learning Lab: Improve the Accessibility of Existing PowerPoint Slides

Date: February 16

Time: 10:30 AM – 12:30 PM

Location: C202

Learning Lab: Improve the Accessibility of Existing PowerPoint Slides Drop-In

Date: February 20

Time: 10:30 AM – 12:00 PM

Location: C203

Learning Lab: Create an Accessible Word Document

Date: April 5

Time: 10:30 AM – 12:00 PM

Location: C202

Explore the World of AI: Join Our Monthly AI Tinker Time Workshops

Are you curious about artificial intelligence and its applications but unsure where to begin? Look no further! Join us at AI Tinker Time, happening on the first Thursday of every month, and dive into the exciting world of AI tools.

Whether you’re new to AI or looking to expand your knowledge, these sessions are the perfect opportunity to experiment with various AI tools and discover how they can enhance your practice. Our EdTech faculty, staff, and TCDC colleagues will lead hands-on sessions, where we’ll test different AI tools for accuracy, quality, and effectiveness in addressing diverse teaching and learning needs.

Together, we’ll explore critical questions such as:

  • How accurate is the AI output?
  • Is the quality of AI output sufficient for practical application?
  • Can AI-generated content serve as a solid foundation for further refinement?
  • How do various AI tools compare in functionality and output?
  • Which AI tools can we recommend for specific needs based on our testing?

Don’t miss this chance to join a community of like-minded individuals eager to unlock the potential of AI in education. Whether you’re an educator, a technologist, or simply AI-curious, our AI Tinker Time workshops are the ideal space to learn, experiment, and collaborate.

Monthly topics:

  • January – Using AI to generate alternative text
  • February – Comparing output generated by Bing Chat’s different modes
  • March – Using AI audio generation and editing
  • April – Using AI to “break” assignments

Please register to join our AI Tinker Time.

EdTech Monthly Tip

The New Quiz Experience

Brightspace has released a New Quiz Creation Experience, with an appearance similar to what you find in the Assignment tool. Over the coming weeks, we’ll highlight a couple of changes that you should be aware of.

Changes to Timing & Display View

By default, no time limit is set on new quizzes. Use Time Limit to set the amount of time students are given to complete the quiz once they have started it.

To set a time limit:

  • Check Set Time Limit to add a countdown clock to the quiz. If the box is left unchecked, no time limit is set. Be aware that setting a time limit does not, on its own, enforce the limit; it only shows the student a countdown clock.

Timer Settings

Timer settings become available once “Set Time Limit” is checked. Click on Timer Settings to control how a quiz behaves once students exceed the time limit.

Timer setting options include:

  • Automatically submit the quiz attempt
    • This is the default on all new quizzes when the “Set Time Limit” box is checked. Auto-submission hands in the attempt automatically when the set time runs out.
  • Flag as “exceeded time limit” and allow the learner to continue working
    • This option allows the student to continue working but adds an “exceeded time limit” notation to the quiz when submitted.
  • Do nothing: the time limit is not enforced
    • The countdown clock is made available to students, but no time limit is enforced.

Old and New Experience Comparison

Previously available option → New behaviour

  • Prevent the student from making further changes → Automatically submit the quiz attempt
  • Allow students to continue working but automatically score zero → Automatically submit the quiz attempt
  • Allow the student to continue working → Flag the attempt as “exceeded time limit” and allow the learner to continue working
  • A quiz that has a grace period → Grace periods are no longer available; the quiz now uses only the set time limit

Adding Time to a Quiz in Progress

Changes to the timer may mean you need to add time to a Brightspace quiz already in progress. Adding time is done through the Special Access feature and requires that students refresh their browsers for the new time setting to take effect.

To add time to a Quiz in progress:

  • Navigate to the Brightspace Manage Quizzes tab and click on the quiz name to edit.
  • Select Availability Dates & Conditions.
  • Click on the Manage Special Access link.
  • Ensure “Allow selected users special access to this quiz” is selected.
  • Click on Add Users to Special Access.
  • Scroll down to the Timing section and check the box for “Override time limit.”
  • Enter the new time limit in the minutes field.
  • Scroll down to the Users section and check all the students’ names.
  • Click Save.
  • Click Save and Close.
  • Tell your students to refresh their browsers.

Watch Changes to the Brightspace Quiz Experience (video, 8:56) to learn more about the recent tool updates.

EdTech Tools and Privacy

Generative AI Tools & Privacy

Generative AI applications generate new content, such as text, images, videos, music, and other forms of media, based on user inputs. These systems learn from vast datasets containing millions of examples to recognize patterns and structures, without needing explicit programming for each task. This learning enables them to produce new content that mirrors the style and characteristics of the data they trained on.

AI-powered chatbots like ChatGPT can replicate human conversation. Specifically, ChatGPT is a sophisticated language model that understands and generates language by identifying patterns of word usage. It predicts the next words in a sequence, which proves useful for tasks ranging from writing emails and blogs to creating essays and programming code. Its adaptability to different writing and coding styles makes it a powerful and versatile tool. Major tech companies, such as Microsoft, are integrating ChatGPT into applications like MS Teams, Word, and PowerPoint, indicating a trend that other companies are likely to follow.

Despite their utility, these generative AI tools come with privacy risks for students. As these tools learn from the data they process, any personal information included in student assignments could be retained and used indefinitely. This poses several privacy issues: students may lose control over their personal data, face exposure to data breaches, and have their information used in ways they did not anticipate, especially when data is transferred across countries with varying privacy protections. To maintain privacy, it is crucial to handle student data transparently and with clear consent.

Detection tools like Turnitin now include features to identify content generated by AI, but these tools also collect and potentially store personal data for extended periods. While Turnitin has undergone privacy and risk evaluations, other emerging tools have not been similarly vetted, leaving their privacy implications unclear.

The ethical landscape of generative AI is complex, encompassing data bias concerns that can result in discriminatory outputs, and intellectual property issues, as these models often train on content without the original creators’ consent. Labour practices also present concerns: for example, OpenAI has faced criticism for the conditions of the workers it employs to filter out harmful content from its training data. Furthermore, the significant environmental impact of running large AI models, due to the energy required for training and data storage, raises sustainability questions. Users must stay well-informed and critical of AI platform outputs to ensure responsible and ethical use.


This article is part of a collaborative Data Privacy series by Langara’s Privacy Office and EdTech. If you have data privacy questions or would like to suggest a topic for the series, contact Joanne Rajotte (jrajotte@langara.ca), Manager of Records Management and Privacy, or Briana Fraser, Learning Technologist & Department Chair of EdTech.

EdTech Tools and Privacy

Peer Assessment and Privacy Risks

Instructors, have you considered how privacy, security, and confidentiality apply to teaching and learning, specifically the data you gather as part of assessment?

To support teaching and learning, you gather and analyze data about students all year and in many ways, including anecdotal notes, test results, grades, and observations. The tools we commonly use in teaching and learning, including Brightspace, gather information. The analytics collected and reports generated by teaching and learning tools are sophisticated and constantly changing. We should, therefore, carefully consider how we can better protect student data.  

When considering privacy, instructors should keep in mind that all student personal information belongs to the student and should be kept private. Students trust their instructors to keep their data confidential and share it carefully. Instructors are responsible for holding every student’s data in confidence.  This information includes things like assessment results, grades, student numbers, and demographic information. 

Although most students are digital natives, they aren’t necessarily digitally literate. Instructors can ensure students’ privacy by coaching them about what is appropriate to share and helping them understand the potential consequences of sharing personal information. 

One area of teaching and learning in which you may not have adequately considered privacy or coached students to withhold personal information and respect confidentiality is peer assessment. Peer assessment or peer review provides a structured learning process for students to critique and provide feedback to each other on their work. It helps students develop lifelong skills in assessing and providing feedback to others and equips them with skills to self-assess and improve their own work. However, in sharing their work, students may also be sharing personal identifying information, such as student numbers, or personal experiences. To help protect students’ personal information and support confidentiality, we recommend that you consider the following points.

Privacy Considerations for Peer Assessment 

  • If student work will be shared with peers, tell students not to disclose sensitive personal information. Sensitive personal information may include, for example, medical history, financial circumstances, traumatic life experiences, or their gender, race, religion, or ethnicity. 
  • Inform students of ways in which their work will be assessed by their peers. 
  • Consider having students evaluate anonymous assignments for more objective feedback.  
  • Coach students to exclude all identifiable information, including student number. 
  • If students’ work is to be posted online, consider associated risks, such as
    • another person posting the work somewhere else online without their consent; and
    • the content being accessed by Generative AI tools like ChatGPT that trawl the internet to craft responses to users’ queries.

This article is part of a collaborative Data Privacy series by Langara’s Privacy Office and EdTech. If you have data privacy questions or would like to suggest a topic for the series, contact Joanne Rajotte (jrajotte@langara.ca), Manager of Records Management and Privacy, or Briana Fraser, Learning Technologist & Department Chair of EdTech.

Learning Labs

Introducing Learning Labs

Learning Labs are interactive, focused, and supported learning sessions where you can learn how to implement Langara’s teaching and learning technologies and tools. Capacity is limited to ensure all attendees have an opportunity to ask questions, try out tools, and receive support. Support in the room will reflect the technology, tool, and learning outcomes; however, you can expect to interact with EdTech Advisors, Specialists, and Technologists as well as TCDC Curriculum Consultants. The Labs are an opportunity to implement something new or improve what already exists with experts who can answer technical questions and provide advice.

Fall Learning Lab session topics include:

Brightspace HTML Templates

After participating in this lab, participants should be able to:

  • Use the Brightspace HTML editor.
  • Explain the benefits of using the Brightspace HTML templates.
  • Apply the templates to a new Brightspace HTML page.
  • Apply the latest version of the template to an existing Brightspace HTML page.
  • Mix and match HTML elements—such as image placement, accordions, callouts, tables, and tabs—from the various templates.

Adding closed captions to a video in Brightspace

After participating in this lab, participants should be able to:

  • Upload a video to MediaSpace.
  • Add closed captions to a video.
  • Use the MediaSpace captions editor.
  • Use the OneDrive captioning tool.
  • Embed a video in a Brightspace course file.

Creating an accessible Word document

After participating in this lab, participants should be able to:

  • Employ plain language.
  • Select styles that improve legibility of text.
  • Structure a document.
  • Create accessible hyperlinks and tables.
  • Add alternative text to visual content.
  • Use the built-in accessibility checker.

Improving the accessibility of existing PowerPoint slides

After participating in this lab, participants should be able to:

  • Avoid the most common PowerPoint accessibility mistakes.
  • Use the accessibility checker and make corrections.
  • Apply templates.
  • Apply structure.
  • Select accessible fonts and font styling.
  • Employ accessible use of colour.
  • Add alternative text to images.
  • Write meaningful hyperlink text.

Save time marking with Rubrics

After participating in this lab, participants should be able to: 

  • Define the purpose of the assignment or assessment.
  • Decide which type of rubric will be used with assignments.
  • Create statements of expected performance at each level of the rubric.
  • Transfer analog rubrics into a digital version on Brightspace.
  • Associate their rubric with the assignment in Brightspace.

 

Brightspace – New Quiz Experience

Brightspace has released a New Quiz Creation Experience, with an appearance similar to what you find in the Assignment tool. We want to highlight a couple of changes that you should be aware of:

  • Description is automatically visible – doesn’t need to be toggled on (but also can’t be hidden from students) 
  • Custom pagination is not possible – the available options are all questions on the same page, 1/5/10 question(s) per page, or one section per page.

Watch the New Quiz Experience video for more details.

A.I. Detection: A Better Approach 

Over the past few months, EdTech has shared concerns about A.I. classifiers, such as Turnitin’s A.I. detection tool, AI Text Classifier, GPTZero, and ZeroGPT. Both in-house testing and statements from Turnitin and OpenAI confirm that A.I. text classifiers cannot reliably differentiate between A.I.- and human-generated writing. Given that the tools are unreliable and easy to manipulate, EdTech discourages their use. Instead, we suggest using Turnitin’s Similarity Report to help identify A.I.-hallucinated and fabricated references.

What is Turnitin’s Similarity Report?

The Turnitin Similarity Report quantifies how similar a submitted work is to other pieces of writing, including works on the Internet and those stored in Turnitin’s extensive database, highlighting sections that match existing sources. The similarity score represents the percentage of writing that is similar to other works. 

A.I.-Generated References

A.I. researchers call the tendency of A.I. to make stuff up a “hallucination.” A.I.-generated responses can appear convincing but may include irrelevant, nonsensical, or factually incorrect content.

ChatGPT and other natural language processing programs do a poor job of referencing sources and often fabricate plausible references. Because the references seem real, students often mistake them for legitimate sources.

Common reference or citation errors include: 

  • Failure to include a Digital Object Identifier (DOI) or incorrect DOI 
  • Misidentification of source information, such as journal or book title 
  • Incorrect publication dates 
  • Incorrect author information 

Using Turnitin to Identify Hallucinated References 

To use Turnitin to identify hallucinated or fabricated references, do not exclude quotes and bibliographic material from the Similarity Report. Genuine quotes and bibliographic information will be flagged as matching or highly similar to existing sources. Fabricated quotes, references, and bibliographic information will show zero similarity because there is no existing source for them to match.

Quotes and bibliographic information with no similarity to existing works should be investigated to determine whether they are fabricated.

References

Athaluri, S., Manthena, S., Kesapragada, V., et al. (2023). Exploring the boundaries of reality: Investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4), e37432. https://doi.org/10.7759/cureus.37432

Metz, C. (2023, March 29). What makes A.I. chatbots go wrong? The curious case of the hallucinating software. New York Times. https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html

OpenAI. (2022, January 27). Aligning language models to follow instructions. https://openai.com/research/instruction-following

Weise, K., & Metz, C. (2023, May 1). When A.I. chatbots hallucinate. New York Times. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html

Welborn, A. (2023, March 9). ChatGPT and fake citations. Duke University Libraries. https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/

[Screenshot: a Turnitin Similarity Report, with submitted text on the left and the report panel on the right.]

AI Classifiers — What’s the problem with detection tools?

AI classifiers don’t work!

Natural language processing AIs are meant to be convincing. They create content that “sounds plausible because it’s all derived from things that humans have said” (Marcus, 2023). The intent is to produce outputs that mimic human writing. The result: the world’s leading AI companies can’t reliably distinguish the products of their own machines from the work of humans.

In January, OpenAI released its own AI text classifier. According to OpenAI, “Our classifier is not fully reliable. In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives).”

A bit about how AI classifiers identify AI-generated content

GPTZero, a commonly used detection tool, identifies AI created works based on two factors: perplexity and burstiness.

Perplexity measures the complexity of text. Classifiers identify text that is predictable and lacking complexity as AI-generated and highly complex text as human-generated.

Burstiness compares variation between sentences. It measures how predictable a piece of content is by the homogeneity of the length and structure of sentences throughout the text. Human writing tends to be variable, switching between long and complex sentences and short, simpler ones. AI sentences tend to be more uniform with less creative variability.

The lower the perplexity and burstiness score, the more likely it is that text is AI generated.
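
To make these two signals concrete, here is a minimal Python sketch. It is our own toy illustration, not GPTZero’s actual implementation: it approximates burstiness as the spread of sentence lengths, and “perplexity” as how surprised a simple unigram model of the text is by its own words (real classifiers use a neural language model).

```python
import math
import re

def sentences(text):
    # Naive sentence splitter; good enough for a demo.
    return [s for s in re.split(r"[.!?]+\s*", text) if s]

def burstiness(text):
    # Spread of sentence lengths: human writing tends to mix short and
    # long sentences, so a higher standard deviation is more "human-like".
    lengths = [len(s.split()) for s in sentences(text)]
    mean = sum(lengths) / len(lengths)
    return math.sqrt(sum((n - mean) ** 2 for n in lengths) / len(lengths))

def toy_perplexity(text):
    # Toy perplexity: how surprised a unigram model built from this text
    # is by the text itself. Repetitive, predictable wording scores low.
    words = text.lower().split()
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    avg_log_prob = sum(math.log(counts[w] / len(words)) for w in words) / len(words)
    return math.exp(-avg_log_prob)

sample = ("The cat sat. It watched the rain hammer the glass for hours, "
          "patient and unmoved. Then it slept.")
print(f"burstiness: {burstiness(sample):.2f}")
print(f"perplexity: {toy_perplexity(sample):.2f}")
```

The lower both numbers are, the more a classifier of this kind leans toward an AI-generated verdict.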

Turnitin is a plagiarism-prevention tool that helps check the originality of student writing. On April 4th, Turnitin released an AI-detection feature.

According to Turnitin, its detection tool works a bit differently:

When a paper is submitted to Turnitin, the submission is first broken into segments of text that are roughly a few hundred words (about five to ten sentences). Those segments are then overlapped with each other to capture each sentence in context.

The segments are run against our AI detection model, and we give each sentence a score between 0 and 1 to determine whether it is written by a human or by AI. If our model determines that a sentence was not generated by AI, it will receive a score of 0. If it determines the entirety of the sentence was generated by AI it will receive a score of 1.

Using the average scores of all the segments within the document, the model then generates an overall prediction of how much text (with 98% confidence based on data that was collected and verified in our AI innovation lab) in the submission we believe has been generated by AI. For example, when we say that 40% of the overall text has been AI-generated, we’re 98% confident that is the case.

Currently, Turnitin’s AI writing detection model is trained to detect content from the GPT-3 and GPT-3.5 language models, which includes ChatGPT. Because the writing characteristics of GPT-4 are consistent with earlier model versions, our detector is able to detect content from GPT-4 (ChatGPT Plus) most of the time. We are actively working on expanding our model to enable us to better detect content from other AI language models.
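
Turnitin’s per-sentence model is not public, but the averaging step it describes is simple to sketch. In the hypothetical Python example below, the per-sentence scores are invented inputs (0 = human, 1 = AI); only the aggregation mirrors the description above, and the 98% confidence claim is not modelled.

```python
from statistics import mean

def document_ai_estimate(sentence_scores):
    # Average per-sentence scores (0 = human-written, 1 = AI-generated)
    # into a document-level percentage, as in Turnitin's description.
    if not sentence_scores:
        return 0.0
    return 100 * mean(sentence_scores)

# Invented scores for a ten-sentence submission: the first four
# sentences look AI-generated, the remaining six look human-written.
scores = [1.0, 1.0, 0.9, 0.8, 0.1, 0.0, 0.0, 0.1, 0.0, 0.1]
print(f"Estimated AI-generated text: {document_ai_estimate(scores):.0f}%")
# Prints "Estimated AI-generated text: 40%"
```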

The Issues

AI detectors cannot prove conclusively if text is AI generated. With minimal editing, AI-generated content evades detection.

L2 writers tend to write with less “burstiness.” Concern about bias is one of the reasons UBC chose not to enable Turnitin’s AI-detection feature.

ChatGPT’s writing style may be less easy to spot than some think.

Privacy violations are a concern with both generators and detectors as both collect data.

Now what?

Langara’s EdTech, TCDC, and SCAI departments are working together to offer workshops on four potential approaches: Embrace it, Neutralize it, Ban it, Ignore it. Interested in a bespoke workshop for your department? Complete the request form.


References
Marcus, G. (2023, January 6). Ezra Klein interviews Gary Marcus [Audio podcast episode]. In The Ezra Klein Show. https://www.nytimes.com/2023/01/06/podcasts/transcript-ezra-klein-interviews-gary-marcus.html

Fowler, G. A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-Turnitin/

AI Detection Tool Testing — Initial Results

We’ve limited our testing to Turnitin’s AI detection tool. Why? Turnitin has undergone privacy and risk reviews and is a college-approved technology. Other detection tools haven’t been reviewed and may not meet recommended data privacy standards.

What We’ve Learned So Far

  • Unedited AI-generated content often receives a 100% AI-generated score, although more complex writing generated with GPT-4 can score far less than 100%.
  • Adding typos and grammar mistakes, or prompting the AI generator to include errors throughout a document, can change the AI-generated score from 100% to 0%.
  • Adding I-statements throughout a document dramatically lowers the AI score.
  • Interrupting the flow of text by replacing one word every couple of sentences with a less likely word increases the perplexity of the wording and lowers the AI-generated percentage. AI text generators act like text predictors, creating text by adding the most likely next word; if the detector is perplexed by a word because it is not the most likely choice, the text is more likely to be judged human-written (see the sketch after this list).
  • Unlike human-generated writing, AI sentences tend to be uniform. Changing the length of sentences throughout a document, making some sentences shorter and others longer and more complex, alters the burstiness and lowers the generated-by-AI score. 
  • By replacing one or two words per paragraph and modifying the length of sentences here and there throughout a chunk of text (i.e., minor tweaks to both perplexity and burstiness), the AI-generated score can change from 100% to 0%.
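
To illustrate the word-swap point above, here is a toy calculation with invented next-word probabilities (a real detector would get these from a language model). Swapping a likely word for an unlikely one sharply raises that word’s surprisal, which pushes the passage’s average perplexity up and the AI-generated score down.

```python
import math

# Invented probabilities for the word following "the next ...";
# a real language model would supply these.
p_next = {"word": 0.6, "step": 0.3, "walrus": 0.0001}

for word, p in p_next.items():
    # Surprisal = -log2 p(word | context); rare choices are "perplexing".
    print(f"'the next {word}': surprisal = {-math.log2(p):.1f} bits")
```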

To learn more about how AI detection tools work, read AI Classifiers — What’s the problem with detection tools?