Inclusive Content Delivery

Creating accessible content is important to ensure an inclusive and welcoming experience. What you do with that material is equally important. Enhance your delivery—whether in-person, online, or during conversation—with new Langara resources.

The Accessibility Handbook for Teaching and Learning now has a chapter on Inclusive Content Delivery.

Accessibility Handbook for Teaching and Learning

In this resource, you'll find information, tips, and tools to guide you with:

  • Setting up for an in-person, hybrid, or remote session
  • Speaking, presenting, or otherwise delivering content
  • Considering your audience
  • Accounting for disability and diversity
  • Bimodal delivery
  • Creating content that can be reused, remixed, and repurposed

Inclusive content delivery is not only a matter of respect and empathy, but a powerful way to increase the impact and reach of your material. Read Inclusive Content Delivery to learn more about creating inclusive and engaging experiences for everyone.

Generative Artificial Intelligence (Gen AI) Resources

Image (generated by DALL-E): a humanoid robot teacher with a pointer in a classroom, standing in front of a blackboard with equations.

Whether you are a superuser or a novice, the number of resources on generative artificial intelligence can be overwhelming. EdTech and TCDC have curated some that we’d like to recommend.

  • How to access Copilot (Microsoft)
    • Interested in trying a generative AI tool or using it in your course? ChatGPT and Copilot (formerly Bing Chat) are currently available in Canada. Langara College students and employees have access to a premium version of Copilot through Microsoft Enterprise and the Edge browser. Microsoft’s FAQs provide information on how to access Copilot through Microsoft Edge. 
  • Practical AI for Instructors and Students (Ethan Mollick/Wharton School, August 2023)
    • If you’re looking for a great primer on AI, this series of five videos is worth the watch. Each video is approximately 10 minutes so the whole series can be viewed in under an hour. Topics include: 1) an introduction to AI; 2) what large language model (LLM) platforms like ChatGPT are and how to start using them; 3) how to prompt AI; 4) how instructors can leverage AI; and 5) how students can use AI.
    • Note: this series references four LLMs: ChatGPT, Bing Copilot, Bard, and Claude. Bard and Claude are not yet available in Canada.
  • AI Primer by Educause
    • This article is a reading (and viewing) list that links to resources that do a deeper dive into generative AI. A good resource for those who know the basics and would like to learn more.  

EdTech and TCDC also regularly offer professional learning opportunities on AI topics. Check the PD Events Calendar for current offerings.

As always, if you’re planning to integrate AI into your course, please be aware that: 

  • There are privacy concerns with AI platforms. We recommend using caution when inputting – or having your students input – private, personal, or sensitive information (e.g. resumes or other identifying data).  
  • For those using assistive technology such as screen readers, some AI platforms are more accessible than others. For more information, please see Accessibility of AI Interfaces by Langara Assistive Technologist, Luke McKnight. 

If you would like more recommendations for AI resources, or any other AI-related support, please contact EdTech or TCDC.

Accessibility at Langara College

Empowering Accessibility: Register to Join Our Upcoming Workshops

EdTech is excited to announce a series of upcoming workshops dedicated to improving the accessibility of Microsoft Word documents and PowerPoint presentations.

Why Accessibility Matters

In today’s digital age, ensuring that everyone, regardless of their abilities, can access and understand information is crucial. This is where accessibility comes in. It’s about making sure that everyone has equal access to information and functionality.

What Our Workshops Offer

Our workshops are designed to provide you with the tools and knowledge to create accessible Word documents and PowerPoint presentations. We believe that with a little effort and the right guidance, we can make a significant difference in making information more accessible to all.

Who Should Attend

These workshops are for everyone: content creators, educators, and anyone looking to learn. No prior experience is required.

Let’s Make a Difference Together

By participating in these workshops, not only will you enhance your skills, but you’ll also contribute to a more inclusive and accessible digital world. So why wait? Join us to learn how to make information accessible to all.

Learning Lab: Create an Accessible Word Document

Date: January 19

Time: 10:30 AM – 12:00 PM

Location: C202

How to Create Accessible PowerPoint Slide Presentations

Date: January 26

Time: 10:30 AM – 12:00 PM

Location: Zoom

Learning Lab: Improve the Accessibility of Existing PowerPoint Slides

Date: February 16

Time: 10:30 AM – 12:30 PM

Location: C202

Learning Lab: Improve the Accessibility of Existing PowerPoint Slides Drop-In

Date: February 20

Time: 10:30 AM – 12:00 PM

Location: C203

Learning Lab: Create an Accessible Word Document

Date: April 5

Time: 10:30 AM – 12:00 PM

Location: C202

EdTech Tools and Privacy

Peer Assessment and Privacy Risks

Instructors, have you considered how privacy, security, and confidentiality apply to teaching and learning, specifically the data you gather as part of assessment?

To support teaching and learning, you gather and analyze data about students all year and in many ways, including anecdotal notes, test results, grades, and observations. The tools we commonly use in teaching and learning, including Brightspace, gather information. The analytics collected and reports generated by teaching and learning tools are sophisticated and constantly changing. We should, therefore, carefully consider how we can better protect student data.  

When considering privacy, instructors should keep in mind that all student personal information belongs to the student and should be kept private. Students trust their instructors to keep their data confidential and share it carefully. Instructors are responsible for holding every student’s data in confidence.  This information includes things like assessment results, grades, student numbers, and demographic information. 

Although most students are digital natives, they aren’t necessarily digitally literate. Instructors can ensure students’ privacy by coaching them about what is appropriate to share and helping them understand the potential consequences of sharing personal information. 

One area of teaching and learning in which you may not have adequately considered privacy or coached students to withhold personal information and respect confidentiality is peer assessment. Peer assessment or peer review provides a structured learning process for students to critique and provide feedback to each other on their work. It helps students develop lifelong skills in assessing and providing feedback to others and equips them with skills to self-assess and improve their own work. However, in sharing their work, students may also be sharing personal identifying information, such as student numbers, or personal experiences. To help protect students’ personal information and support confidentiality, we recommend that you consider the following points.

Privacy Considerations for Peer Assessment 

  • If student work will be shared with peers, tell students not to disclose sensitive personal information. Sensitive personal information may include, for example, medical history, financial circumstances, traumatic life experiences, or their gender, race, religion, or ethnicity. 
  • Inform students of ways in which their work will be assessed by their peers. 
  • Consider having students evaluate anonymous assignments for more objective feedback.  
  • Coach students to exclude all identifiable information, including student number. 
  • If students’ work is to be posted online, consider associated risks, such as
    • another person posting the work somewhere else online without their consent; and
    • the content being accessed by Generative AI tools like ChatGPT that trawl the internet to craft responses to users’ queries.

This article is part of a collaborative Data Privacy series by Langara’s Privacy Office and EdTech. If you have data privacy questions or would like to suggest a topic for the series, contact Joanne Rajotte (jrajotte@langara.ca), Manager of Records Management and Privacy, or Briana Fraser, Learning Technologist & Department Chair of EdTech.

Brightspace – Introducing “New Experience” Discussions

As of August 28, 2023, Brightspace Discussions has a new look and feel, as well as some changes to functionality. Below we summarize the most important changes to the new version of Discussions.

Look & Feel

New Experience Discussions has been changed to align with how Assignments and Quizzes look and function in Brightspace. This consistency across tools is meant to make things easier for users who are new to Brightspace.

On the create/edit topic page, the main settings (title, grade out of, description, etc.) are on the left side of the page, and the more advanced settings (availability dates, restrictions, evaluation settings, etc.) are in expandable tabs along the right.

Functionality

There are several significant changes to functionality and to the locations of settings in New Experience Discussions.

Automatically Create New Forum When Creating New Topic

All discussion topics need to sit within a forum (a container for topics). In New Experience Discussions, creating a new topic will automatically create a new forum of the same name. This eliminates the necessity of creating a forum prior to creating a topic. After the topic is created, instructors will be able to edit the name of the newly created forum or associate the current topic with another existing forum, if wanted.

Post and Completion

The Post and Completion settings are where you can allow anonymous posts and specify posting requirements. In New Experience Discussions, only one of the following three options is possible:

1. Default participation, a new option added so that the default settings are clearly stated. The default settings do NOT allow anonymous posts or require that users start a thread.

2. Allow learners to hide their name from other learners, which is the setting that allows anonymous posts.

3. Learners must start a thread before they can view or reply to other threads.

Manage Restrictions (replaces “Topic Type”)

The default for discussions is an “open topic” that all learners in the course can participate in; however, accessing the Manage Restrictions settings allows instructors to restrict discussions, if needed, so that learners can only see and reply to their own group or section’s posts. To set topic restrictions in New Experience, go to the Availability & Conditions settings on the right side of the edit page and look for Manage Restrictions.

Note: In Classic Experience, topic types could not be revised once set; however, in New Experience topic restrictions can now be revised up until a topic has an associated post, providing greater flexibility.

Restricting Topic and Separate Threads

To restrict a topic so that learners can only view threads from their group or section, go to Manage Restrictions and choose the option Restrict topic and separate threads. Then select which group category or section will have their threads separated.

Restrict Topic

To restrict a topic so that only selected groups or sections can view a topic and all threads, choose the radio option Restrict topic in the new Manage Restrictions workflow. Then select which sections/groups can see and participate in this discussion.

Availability Dates

Managing availability dates in Discussions is now similar to Assignments. Once a start or end date is added, additional settings can be adjusted to specify how learners see and access the topic outside of the availability dates.

Questions?

If you need assistance with Brightspace Discussions, please contact EdTech.

Accessibility Teaching Practices at Langara College

Accessible Teaching Practices

Accessible BC Act – Start acting now. 

On June 21st, 2021, the Accessible British Columbia Act came into effect. The intention of the act is to create accessibility standards that reduce barriers and promote inclusion throughout the province. The act is being implemented in a phased rollout, with education among the first sectors expected to comply. This means that course content, such as presentation material, communications, documents, and videos, will need to be made accessible to students with disabilities.

EdTech is publishing resources, offering workshops, and providing other learning opportunities for instructors and other employees to develop the skills needed to improve the accessibility of course materials. 

Improving accessibility in the classroom. 

When aiming to improve accessibility in the classroom, instructors need to consider learning spaces, course design, assessment, content, and delivery. Read Bridging the Gap to get a sense of the ways in which critical barriers to learning may be addressed. 

Langara’s Assistive Technologist is here to help. 

Langara instructors (and students) are uniquely supported in improving access with an Assistive Technologist. If you haven’t had the pleasure of meeting Luke McKnight, consider joining one of EdTech’s upcoming accessibility-focused learning opportunities. Luke will be on hand to offer expert advice and support in improving accessibility.

Participate in EdTech’s upcoming accessibility-focused learning opportunities. 

Start developing your accessibility skills and knowledge by joining us for: 

Learning Lab: Brightspace HTML Templates 

September 15th, 10:30 AM – 12:00 PM in C202 

How to Create Accessible PowerPoint Slide Presentations 

September 27th, 10:30 AM – 12:00 PM online 

Learning Lab: Adding Closed Captions to a Video in Brightspace 

October 13, 10:30 AM – 12:00 PM in C202 

Learning Lab: Create an Accessible Word Document 

November 3rd, 10:30 AM – 12:00 PM in C202 

Learning Lab: Improve the Accessibility of Existing PowerPoint Slides 

December 8th, 10:30 AM – 12:00 PM in C202 

Generative AI and STEM

Background

Artificial intelligence is not new; it has been part of our personal and work lives for a long time (autocorrect, facial recognition, satnav, etc.), and large language models like ChatGPT have been a big topic in education since version 3.5 was released in late November 2022. Large language models (LLMs) are trained on enormous amounts of data to recognize the patterns of and connections between words, and then produce text based on the probability of which word is most likely to come next. One thing that LLMs don’t do, however, is computation. The most recent OpenAI release, GPT-4, seems to have made strides on standardized tests in many STEM areas, and GPT-4 now has a plugin for Wolfram Alpha, which does do computation.
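To make the "next word" idea concrete, here is a toy sketch in Python. It is illustrative only: real LLMs use neural networks over billions of parameters and subword tokens, not word-pair counts, but the core idea of predicting the next token from learned probabilities is the same.

import random
from collections import defaultdict

# Count how often each word follows another in a tiny corpus, then pick
# the next word in proportion to those counts. This is the simplest
# possible "language model."
corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[current_word][next_word] += 1

def predict_next(word):
    candidates = bigram_counts[word]
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]  # probability-weighted pick

print(predict_next("the"))  # "cat" is most likely; "mat" and "fish" are possible

Note that nothing here "knows" grammar or arithmetic; the output is only as good as the patterns in the training text, which is exactly the limitation discussed below.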

Chart from OpenAI: exam result improvements from GPT-3.5 to GPT-4.

Andrew Roberts (Math Dept) and Susan Bonham (EdTech) did some testing to see how ChatGPT (3.5), GPT-4, and GPT-4 with the Wolfram plugin would handle some questions from Langara’s math courses.

Test Details

Full test results are available; the link includes an accessible version of the problems and full details of the “chats,” with discussion of each AI response.

The following questions were tested (the problem statements themselves are included in the full test results):

  • Problem 1 (supplied by Langara mathematics instructor Vijay Singh)
  • Problem 2 (Precalculus)
  • Problem 3 (Calculus I)
  • Problem 4 (Calculus II)

Discussion

Responses from current versions of ChatGPT are not reliable enough to be accepted uncritically.

ChatGPT needs to be approached as a tool and careful proof-reading of responses is needed to check for errors in computation or reasoning. Errors may be blatant and readily apparent, or subtle and hard to spot without close reading and a solid understanding of the concepts.

Perhaps the biggest danger for a student learning a subject is in the “plausibility” of many responses even when they are incorrect. ChatGPT will present its responses with full confidence in their correctness, whether this is justified or not.

When errors or a lack of clarity are noticed in a response, further prompting is needed to correct and refine the initial response. This requires a certain amount of base knowledge on the part of the user in order to guide ChatGPT to the correct solution.

Algebraic computations cannot be trusted as ChatGPT does not “know” the rules of algebra but is simply appending steps based on a probabilistic machine-learning model that references the material on which it was trained. The quality of the answers will depend on the quality of the content on which ChatGPT was trained. There is no way for us to know exactly what training material ChatGPT is referencing when generating its responses. The average quality of solutions sourced online should give us pause.

Below is one especially concerning example of an error encountered during our testing sessions:

In the response to the optimization problem (Problem 3), GPT-3.5 attempts to differentiate the volume function but computes the derivative incorrectly (the expressions are reproduced in the full test results): it differentiates the first term with respect to R while correctly differentiating the second term with respect to h.

It is the plausibility of the above solution (despite the bad error) that is dangerous for a student who may take the ChatGPT response at face value.
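One practical safeguard is to verify any symbolic step with a computer algebra system rather than trusting the chat output. A minimal sketch using SymPy, with a hypothetical stand-in volume function (the actual Problem 3 function is in the full test results):

import sympy as sp

# Hypothetical stand-in volume function with terms involving both R and h;
# not the actual Problem 3 function.
R, h = sp.symbols("R h", positive=True)
V = sp.pi * R**2 * h - sp.pi * h**3 / 3

# Differentiate with respect to h only; a CAS applies the rules of
# algebra consistently instead of mixing variables as described above.
dV_dh = sp.diff(V, h)
print(dV_dh)  # pi*R**2 - pi*h**2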

Access to the Wolfram plugin in GPT-4 should mean that algebraic computations sent to Wolfram can be trusted, but errors in reasoning and interpretation can still occur in the steps between those requests.

Concluding Thought

It will be important for us to educate our students about the dangers of using this tool uncritically, while acknowledging its potential benefits when used well.

Want to Learn More?

EdTech and TCDC run workshops on various AI topics. You can request a bespoke AI workshop tailored to your department or check out the EdTech and TCDC workshop offerings. For all other questions, please contact edtech@langara.ca.

A.I. Detection: A Better Approach 

Over the past few months, EdTech has shared concerns about A.I. classifiers, such as Turnitin’s A.I. detection tool, AI Text Classifier, GPTZero, and ZeroGPT. Both in-house testing and statements from Turnitin and OpenAI confirm that A.I. text classifiers cannot reliably differentiate between A.I.-generated and human-generated writing. Given that the tools are unreliable and easy to manipulate, EdTech discourages their use. Instead, we suggest using Turnitin’s Similarity Report to help identify A.I.-hallucinated and fabricated references.

What is Turnitin’s Similarity Report?

The Turnitin Similarity Report quantifies how similar a submitted work is to other pieces of writing, including works on the Internet and those stored in Turnitin’s extensive database, highlighting sections that match existing sources. The similarity score represents the percentage of writing that is similar to other works. 

AI Generated References 

A.I. researchers call the tendency of A.I. to make things up a “hallucination.” A.I.-generated responses can appear convincing but may include irrelevant, nonsensical, or factually incorrect content.

ChatGPT and other natural language processing programs do a poor job of referencing sources and often fabricate plausible references. Because the references seem real, students often mistake them for legitimate ones.

Common reference or citation errors include: 

  • Failure to include a Digital Object Identifier (DOI), or an incorrect DOI (see the resolution check sketched after this list)
  • Misidentification of source information, such as journal or book title 
  • Incorrect publication dates 
  • Incorrect author information 
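Many fabricated references fail the most basic test: the DOI does not resolve. Below is a minimal sketch of an automated check against the public doi.org resolver. A resolving DOI does not prove the citation is accurate, and some publishers reject automated requests after the redirect, so treat any failure as a prompt to check manually rather than as proof of fabrication.

import urllib.request
import urllib.error

def doi_resolves(doi: str) -> bool:
    """Return True if doi.org can resolve the DOI."""
    request = urllib.request.Request(
        f"https://doi.org/{doi}",
        method="HEAD",
        headers={"User-Agent": "reference-checker"},
    )
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status < 400  # redirects are followed automatically
    except urllib.error.HTTPError:
        return False  # e.g. 404: doi.org does not know this DOI

print(doi_resolves("10.7759/cureus.37432"))    # genuine DOI, from the references below
print(doi_resolves("10.9999/not.a.real.doi"))  # fabricated-style DOI -> False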

Using Turnitin to Identify Hallucinated References 

To use Turnitin to identify hallucinated or fabricated references, do not exclude quotes and bibliographic material from the Similarity Report. Quotes and bibliographic information will be flagged as matching or highly similar to source-based evidence. Fabricated quotes, references, and bibliographic information will have zero similarity because they will not match source-based evidence.

Quotes and bibliographic information with no similarity to existing works should be investigated, as they may be fabricated.

References

Athaluri, S., Manthena, S., Kesapragada, V., et al. (2023). Exploring the boundaries of reality: Investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4), e37432. https://doi.org/10.7759/cureus.37432

Metz, C. (2023, March 29). What makes A.I. chatbots go wrong? The curious case of the hallucinating software. New York Times. https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html 

Aligning language models to follow instructions. (2022, January 27). OpenAI. https://openai.com/research/instruction-following 

Weise, K., & Metz, C. (2023, May 1). When A.I. chatbots hallucinate. New York Times. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html

Welborn, A. (2023, March 9). ChatGPT and fake citations. Duke University Libraries. https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/ 

Image: screenshot of a Turnitin Similarity Report, with submitted text on the left and the report panel on the right.

AI Classifiers — What’s the problem with detection tools?

AI classifiers don’t work!

Natural language processor AIs are meant to be convincing. They are creating content that “sounds plausible because it’s all derived from things that humans have said” (Marcus, 2023). The intent is to produce outputs that mimic human writing. The result: The world’s leading AI companies can’t reliably distinguish the products of their own machines from the work of humans.

In January 2023, OpenAI released its own AI text classifier. According to OpenAI: “Our classifier is not fully reliable. In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives).”
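It is worth pausing on what those numbers mean in practice. A quick back-of-the-envelope calculation follows; the 20% AI-written share is an assumption for illustration, not a measured figure.

# OpenAI's published figures: 26% true-positive rate, 9% false-positive rate.
tpr, fpr = 0.26, 0.09
ai_share = 0.20  # assumed share of AI-written submissions, for illustration only

flagged_ai = tpr * ai_share           # AI-written and correctly flagged
flagged_human = fpr * (1 - ai_share)  # human-written but wrongly flagged

precision = flagged_ai / (flagged_ai + flagged_human)
print(f"{precision:.0%} of flagged texts are actually AI-written")  # ~42%

In other words, under these assumptions, more than half of flagged submissions would be human-written work wrongly accused.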

A bit about how AI classifiers identify AI-generated content

GPTZero, a commonly used detection tool, identifies AI created works based on two factors: perplexity and burstiness.

Perplexity measures the complexity of text. Classifiers identify text that is predictable and lacking complexity as AI-generated and highly complex text as human-generated.

Burstiness compares variation between sentences. It measures how predictable a piece of content is by the homogeneity of the length and structure of sentences throughout the text. Human writing tends to be variable, switching between long and complex sentences and short, simpler ones. AI sentences tend to be more uniform with less creative variability.

The lower the perplexity and burstiness score, the more likely it is that text is AI generated.
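Burstiness, at least, is simple enough to estimate directly. Here is a rough sketch; it is illustrative only, since tools like GPTZero combine this kind of signal with model-based perplexity.

import re
import statistics

def burstiness(text: str) -> float:
    """Rough proxy for burstiness: variation in sentence length."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: standard deviation relative to the mean.
    return statistics.stdev(lengths) / statistics.mean(lengths)

human = "I ran. The storm broke suddenly over the hills, and we scattered. Quiet followed."
ai = "The weather was bad today. The rain fell on the hills. The people went back inside."
print(burstiness(human), burstiness(ai))  # higher score = more human-like variation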

Turnitin is a plagiarism-prevention tool that helps check the originality of student writing. On April 4th, Turnitin released an AI-detection feature.

According to Turnitin, its detection tool works a bit differently.

When a paper is submitted to Turnitin, the submission is first broken into segments of text that are roughly a few hundred words (about five to ten sentences). Those segments are then overlapped with each other to capture each sentence in context.

The segments are run against our AI detection model, and we give each sentence a score between 0 and 1 to determine whether it is written by a human or by AI. If our model determines that a sentence was not generated by AI, it will receive a score of 0. If it determines the entirety of the sentence was generated by AI it will receive a score of 1.

Using the average scores of all the segments within the document, the model then generates an overall prediction of how much text (with 98% confidence based on data that was collected and verified in our AI innovation lab) in the submission we believe has been generated by AI. For example, when we say that 40% of the overall text has been AI-generated, we’re 98% confident that is the case.

Currently, Turnitin’s AI writing detection model is trained to detect content from the GPT-3 and GPT-3.5 language models, which includes ChatGPT. Because the writing characteristics of GPT-4 are consistent with earlier model versions, our detector is able to detect content from GPT-4 (ChatGPT Plus) most of the time. We are actively working on expanding our model to enable us to better detect content from other AI language models.
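The passage above amounts to a sliding window of sentence scores averaged into a document-level estimate. A minimal sketch of that aggregation logic is below; the scoring function is a placeholder, since Turnitin's actual model is proprietary.

from typing import Callable, List

def split_sentences(text: str) -> List[str]:
    return [s.strip() for s in text.split(".") if s.strip()]

def predict_ai_share(text: str, score_segment: Callable[[str], float],
                     window: int = 7) -> float:
    """Average per-segment AI scores (0 = human, 1 = AI) over overlapping windows."""
    sentences = split_sentences(text)
    # Overlapping windows of ~5-10 sentences, so each sentence is scored in context.
    segments = [" ".join(sentences[i:i + window])
                for i in range(max(1, len(sentences) - window + 1))]
    scores = [score_segment(segment) for segment in segments]
    return sum(scores) / len(scores)

# Toy scorer for demonstration only: pretends longer segments look more "AI".
demo_scorer = lambda segment: min(1.0, len(segment.split()) / 100)
print(predict_ai_share("A short sentence. " * 20, demo_scorer))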

The Issues

AI detectors cannot prove conclusively if text is AI generated. With minimal editing, AI-generated content evades detection.

L2 writers tend to write with less “burstiness.” Concern about bias is one of the reasons UBC chose not to enable Turnitin’s AI-detection feature.

ChatGPT’s writing style may be less easy to spot than some think.

Privacy violations are a concern with both generators and detectors as both collect data.

Now what?

Langara’s EdTech, TCDC, and SCAI departments are working together to offer workshops on four potential approaches: Embrace it, Neutralize it, Ban it, Ignore it. Interested in a bespoke workshop for your department? Complete the request form.


References
Marcus, G. (2023, January 6). Ezra Klein interviews Gary Marcus [Audio podcast episode]. In The Ezra Klein Show. https://www.nytimes.com/2023/01/06/podcasts/transcript-ezra-klein-interviews-gary-marcus.html

Fowler, G.A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-Turnitin/

AI tools & privacy

ChatGPT is underpinned by a large language model that requires massive amounts of data to function and improve. The more data the model is trained on, the better it gets at detecting patterns, anticipating what will come next and generating plausible text.

Uri Gal notes the following privacy concerns in The Conversation:

  • None of us were asked whether OpenAI could use our data. This is a clear violation of privacy, especially when data are sensitive and can be used to identify us, our family members, or our location.
  • Even when data are publicly available their use can breach what we call contextual integrity. This is a fundamental principle in legal discussions of privacy. It requires that individuals’ information is not revealed outside of the context in which it was originally produced.
  • OpenAI offers no procedures for individuals to check whether the company stores their personal information, or to request it be deleted. This is a guaranteed right in accordance with the European General Data Protection Regulation (GDPR) – although it’s still under debate whether ChatGPT is compliant with GDPR requirements.
  • This “right to be forgotten” is particularly important in cases where the information is inaccurate or misleading, which seems to be a regular occurrence with ChatGPT.
  • Moreover, the scraped data ChatGPT was trained on can be proprietary or copyrighted.

When we use AI tools, including detection tools, we are feeding data into these systems. It is important that we understand our obligations and risks.

When an assignment is submitted to Turnitin, the student’s work is saved as part of Turnitin’s database of more than 1 billion student papers. This raises privacy concerns that include:

  • Students’ inability to remove their work from the database
  • The indefinite length of time that papers are stored
  • Access to the content of the papers, especially personal data or sensitive content, including potential security breaches of the server

AI detection tools, including Turnitin, should not be used without students’ knowledge and consent. While Turnitin is a college-approved tool, using it without students’ consent poses a copyright risk (Strawczynski, 2004).  Other AI detection tools have not undergone privacy and risk assessments by our institution and present potential data privacy and copyright risks.

For more information, see our Guidelines for Using Turnitin.