Generative Artificial Intelligence (Gen AI) Resources

Image generated by DALL-E: a humanoid robot teacher with a pointer, standing in front of a blackboard with equations.

Whether you are a superuser or a novice, the number of resources on generative artificial intelligence can be overwhelming. EdTech and TCDC have curated some that we’d like to recommend.

  • How to access Copilot (Microsoft)
    • Interested in trying a generative AI tool or using it in your course? ChatGPT and Copilot (formerly Bing Chat) are currently available in Canada. Langara College students and employees have access to a premium version of Copilot through Microsoft Enterprise and the Edge browser. Microsoft’s FAQs provide information on how to access Copilot through Microsoft Edge. 
  • Practical AI for Instructors and Students (Ethan Mollick/Wharton School, August 2023)
    • If you’re looking for a great primer on AI, this series of five videos is worth the watch. Each video is approximately 10 minutes so the whole series can be viewed in under an hour. Topics include: 1) an introduction to AI; 2) what large language model (LLM) platforms like ChatGPT are and how to start using them; 3) how to prompt AI; 4) how instructors can leverage AI; and 5) how students can use AI.
    • Note: this series references four LLMs: ChatGPT, Bing Copilot, Bard, and Claude. Bard and Claude are not yet available in Canada.
  • AI Primer by Educause
    • This article is a reading (and viewing) list that links to resources that do a deeper dive into generative AI. A good resource for those who know the basics and would like to learn more.  

EdTech and TCDC also regularly offer professional learning opportunities on AI topics. Check the PD Events Calendar for current offerings.

As always, if you’re planning to integrate AI into your course, please be aware that: 

  • There are privacy concerns with AI platforms. We recommend using caution when inputting – or having your students input – private, personal, or sensitive information (e.g. resumes or other identifying data).  
  • For those using assistive technology such as screen readers, some AI platforms are more accessible than others. For more information, please see Accessibility of AI Interfaces by Langara Assistive Technologist, Luke McKnight. 

If you would like more recommendations for AI resources, or any other AI-related support, please contact EdTech or TCDC.


Generative AI Tools & Privacy

Generative AI applications generate new content, such as text, images, videos, music, and other forms of media, based on user inputs. These systems learn from vast datasets containing millions of examples to recognize patterns and structures, without needing explicit programming for each task. This learning enables them to produce new content that mirrors the style and characteristics of the data they trained on.

AI-powered chatbots like ChatGPT can replicate human conversation. Specifically, ChatGPT is a sophisticated language model that understands and generates language by identifying patterns of word usage. It predicts the next words in a sequence, which proves useful for tasks ranging from writing emails and blogs to creating essays and programming code. Its adaptability to different writing and coding styles makes it a powerful and versatile tool. Major tech companies, such as Microsoft, are integrating ChatGPT into applications like MS Teams, Word, and PowerPoint, indicating a trend that other companies are likely to follow.
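To make “predicting the next words in a sequence” concrete, here is a toy sketch in Python. This is not how ChatGPT actually works (it uses a large neural network trained on vast datasets), but it illustrates the underlying idea of learning which word tends to follow which:

```python
from collections import Counter, defaultdict

# Toy training text; a real model learns from billions of words.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' ('cat' follows 'the' twice; 'mat' and 'fish' once each)
```

A real language model makes the same kind of prediction over long stretches of context rather than single words, which is what lets it adapt to different writing and coding styles.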

Despite their utility, these generative AI tools come with privacy risks for students. As these tools learn from the data they process, any personal information included in student assignments could be retained and used indefinitely. This poses several privacy issues: students may lose control over their personal data, face exposure to data breaches, and have their information used in ways they did not anticipate, especially when data is transferred across countries with varying privacy protections. To maintain privacy, it is crucial to handle student data transparently and with clear consent.

Detection tools like Turnitin now include features to identify content generated by AI, but these tools also collect and potentially store personal data for extended periods. While Turnitin has undergone privacy and risk evaluations, other emerging tools have not been similarly vetted, leaving their privacy implications unclear.

The ethical landscape of generative AI is complex, encompassing data bias concerns that can result in discriminatory outputs, and intellectual property issues, as these models often train on content without the original creators’ consent. Labour practices also present concerns: for example, OpenAI has faced criticism for the conditions of the workers it employs to filter out harmful content from its training data. Furthermore, the significant environmental impact of running large AI models, due to the energy required for training and data storage, raises sustainability questions. Users must stay well-informed and critical of AI platform outputs to ensure responsible and ethical use.


This article is part of a collaborative Data Privacy series by Langara’s Privacy Office and EdTech. If you have data privacy questions or would like to suggest a topic for the series, contact Joanne Rajotte (jrajotte@langara.ca), Manager of Records Management and Privacy, or Briana Fraser, Learning Technologist & Department Chair of EdTech.


Peer Assessment and Privacy Risks

Instructors, have you considered how privacy, security, and confidentiality apply to teaching and learning, specifically the data you gather as part of assessment?

To support teaching and learning, you gather and analyze data about students all year and in many ways, including anecdotal notes, test results, grades, and observations. The tools we commonly use in teaching and learning, including Brightspace, gather information. The analytics collected and reports generated by teaching and learning tools are sophisticated and constantly changing. We should, therefore, carefully consider how we can better protect student data.  

When considering privacy, instructors should keep in mind that all student personal information belongs to the student and should be kept private. Students trust their instructors to keep their data confidential and share it carefully. Instructors are responsible for holding every student’s data in confidence.  This information includes things like assessment results, grades, student numbers, and demographic information. 

Although most students are digital natives, they aren’t necessarily digitally literate. Instructors can ensure students’ privacy by coaching them about what is appropriate to share and helping them understand the potential consequences of sharing personal information. 

Peer assessment is one area of teaching and learning where you may not have fully considered privacy or coached students to withhold personal information and respect confidentiality. Peer assessment, or peer review, provides a structured learning process for students to critique and provide feedback to each other on their work. It helps students develop lifelong skills in assessing and providing feedback to others, and it equips them to self-assess and improve their own work. However, in sharing their work, students may also be sharing personal identifying information, such as student numbers, or personal experiences. To help protect students’ personal information and support confidentiality, we recommend that you consider the following points.

Privacy Considerations for Peer Assessment 

  • If student work will be shared with peers, tell students not to disclose sensitive personal information. Sensitive personal information may include, for example, medical history, financial circumstances, traumatic life experiences, or their gender, race, religion, or ethnicity. 
  • Inform students of ways in which their work will be assessed by their peers. 
  • Consider having students evaluate anonymous assignments for more objective feedback.  
  • Coach students to exclude all identifiable information, including student number. 
  • If students’ work is to be posted online, consider associated risks, such as
    • another person posting the work somewhere else online without their consent; and
    • the content being accessed by Generative AI tools like ChatGPT that trawl the internet to craft responses to users’ queries.

This article is part of a collaborative Data Privacy series by Langara’s Privacy Office and EdTech. If you have data privacy questions or would like to suggest a topic for the series, contact Joanne Rajotte (jrajotte@langara.ca), Manager of Records Management and Privacy, or Briana Fraser, Learning Technologist & Department Chair of EdTech.

AI Classifiers — What’s the problem with detection tools?

AI classifiers don’t work!

Natural language processing AIs are meant to be convincing. They create content that “sounds plausible because it’s all derived from things that humans have said” (Marcus, 2023). The intent is to produce outputs that mimic human writing. The result: the world’s leading AI companies can’t reliably distinguish the products of their own machines from the work of humans.

In January, OpenAI released its own AI text classifier. According to OpenAI, “Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives).”
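To put those numbers in perspective, here is a quick back-of-the-envelope calculation. The 50/50 split between AI-written and human-written submissions is our assumption for illustration, not OpenAI’s:

```python
# Rates reported by OpenAI for its classifier.
true_positive_rate = 0.26   # AI-written text correctly flagged as "likely AI-written"
false_positive_rate = 0.09  # human-written text wrongly flagged as AI-written

# Assumed mix of submissions -- an illustration, not OpenAI data.
ai_share, human_share = 0.5, 0.5

flagged_ai = ai_share * true_positive_rate         # 0.130
flagged_human = human_share * false_positive_rate  # 0.045

print(f"Flags that are actually AI-written: {flagged_ai / (flagged_ai + flagged_human):.0%}")  # 74%
print(f"AI-written text that slips through: {1 - true_positive_rate:.0%}")                     # 74%
```

In other words, under these assumptions roughly a quarter of flagged papers would be innocent, and nearly three quarters of AI-written text would go undetected.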

A bit about how AI classifiers identify AI-generated content

GPTZero, a commonly used detection tool, identifies AI-created works based on two factors: perplexity and burstiness.

Perplexity measures the complexity of text. Classifiers identify text that is predictable and lacking complexity as AI-generated and highly complex text as human-generated.

Burstiness compares variation between sentences. It measures how predictable a piece of content is by the homogeneity of the length and structure of sentences throughout the text. Human writing tends to be variable, switching between long and complex sentences and short, simpler ones. AI sentences tend to be more uniform with less creative variability.

The lower the perplexity and burstiness scores, the more likely it is that the text is AI-generated.
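GPTZero’s exact implementation isn’t public, but the two ideas can be sketched in a few lines of Python. The unigram “model” below is a crude stand-in for the language models real detectors use, and the burstiness measure is simply the variation in sentence length:

```python
import math
import statistics

def toy_perplexity(text, word_probs, floor=1e-6):
    """How 'surprised' a toy unigram model is by the text.
    Real detectors use large language models, not single-word frequencies."""
    words = text.lower().split()
    log_prob = sum(math.log(word_probs.get(w, floor)) for w in words)
    return math.exp(-log_prob / len(words))  # higher = more perplexing = reads as more 'human'

def burstiness(text):
    """Stand-in for burstiness: variation in sentence length, in words.
    Uniform sentence lengths -> low score -> reads as more 'AI-like'."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
```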

Turnitin is a plagiarism-prevention tool that helps check the originality of student writing. On April 4th, Turnitin released an AI-detection feature.

According to Turnitin, its detection tool works a bit differently.

When a paper is submitted to Turnitin, the submission is first broken into segments of text that are roughly a few hundred words (about five to ten sentences). Those segments are then overlapped with each other to capture each sentence in context.

The segments are run against our AI detection model, and we give each sentence a score between 0 and 1 to determine whether it is written by a human or by AI. If our model determines that a sentence was not generated by AI, it will receive a score of 0. If it determines the entirety of the sentence was generated by AI, it will receive a score of 1.

Using the average scores of all the segments within the document, the model then generates an overall prediction of how much text (with 98% confidence based on data that was collected and verified in our AI innovation lab) in the submission we believe has been generated by AI. For example, when we say that 40% of the overall text has been AI-generated, we’re 98% confident that is the case.

Currently, Turnitin’s AI writing detection model is trained to detect content from the GPT-3 and GPT-3.5 language models, which includes ChatGPT. Because the writing characteristics of GPT-4 are consistent with earlier model versions, our detector is able to detect content from GPT-4 (ChatGPT Plus) most of the time. We are actively working on expanding our model to enable us to better detect content from other AI language models.
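Turnitin’s model itself is proprietary, but the segmenting-and-averaging step it describes is easy to illustrate. A rough sketch, where `score_sentence` stands in for a hypothetical per-sentence classifier:

```python
def score_document(sentences, score_sentence, window=7, step=3):
    """Sketch of the averaging step Turnitin describes.

    `sentences` is the submission split into sentences; `score_sentence` is a
    hypothetical classifier returning 0 (human) to 1 (AI) for one sentence.
    The window and step sizes are guesses at "five to ten sentences",
    overlapped so each sentence is scored in context more than once.
    """
    scores = []
    for start in range(0, max(len(sentences) - window, 0) + 1, step):
        segment = sentences[start:start + window]        # overlapping segments
        scores.extend(score_sentence(s) for s in segment)
    return sum(scores) / len(scores)  # e.g. 0.40 -> "40% of the text looks AI-generated"
```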

The Issues

AI detectors cannot prove conclusively if text is AI generated. With minimal editing, AI-generated content evades detection.

L2 writers tend to write with less “burstiness.” Concern about bias is one of the reasons UBC chose not to enable Turnitin’s AI-detection feature.

ChatGPT’s writing style may be less easy to spot than some think.

Privacy violations are a concern with both generators and detectors as both collect data.

Now what?

Langara’s EdTech, TCDC, and SCAI departments are working together to offer workshops on four potential approaches: Embrace it, Neutralize it, Ban it, Ignore it. Interested in a bespoke workshop for your department? Complete the request form.


References
Marcus, G. (2023, January 6). Ezra Klein interviews Gary Marcus [Audio podcast episode]. In The Ezra Klein Show. https://www.nytimes.com/2023/01/06/podcasts/transcript-ezra-klein-interviews-gary-marcus.html

Fowler, G.A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-Turnitin/

AI Detection Tool Testing — Initial Results

We’ve limited our testing to Turnitin’s AI detection tool. Why? Turnitin has undergone privacy and risk reviews and is a college-approved technology. Other detection tools haven’t been reviewed and may not meet recommended data privacy standards.

What We’ve Learned So Far

  • Unedited AI-generated content often receives a 100% AI-generated score, although more complex writing by ChatGPT-4 can score far less than 100%.
  • Adding typos and grammar mistakes, or prompting the AI generator to include errors throughout a document, can change the AI-generated score from 100% to 0%.
  • Adding I-statements throughout a document dramatically lowers the AI score.
  • Interrupting the flow of text by replacing one word every couple of sentences with a less likely word increases the perplexity of the wording and lowers the AI-generated percentage. AI text generators act like text predictors, creating text by adding the most likely next word. If the detector is perplexed by a word because it is not the most likely choice, the text is determined to be human-written (see the sketch after this list).
  • Unlike human-generated writing, AI sentences tend to be uniform. Changing the length of sentences throughout a document, making some sentences shorter and others longer and more complex, alters the burstiness and lowers the generated-by-AI score. 
  • By replacing one or two words per paragraph and modifying the length of sentences here and there throughout a chunk of text — i.e. by doing minor tweaks of both perplexity and burstiness — the AI-generated score changes from 100% to 0%. 
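As a toy illustration of the perplexity effect described in the list above (the word probabilities here are invented for the example, not taken from any real detector):

```python
import math

# Invented word probabilities, for illustration only.
word_probs = {"the": 0.07, "cat": 0.01, "sat": 0.008,
              "on": 0.05, "mat": 0.004, "reposed": 0.00001}

def perplexity(words):
    """Toy perplexity under the invented unigram probabilities above."""
    log_prob = sum(math.log(word_probs.get(w, 1e-6)) for w in words)
    return math.exp(-log_prob / len(words))

likely = "the cat sat on the mat".split()
tweaked = "the cat reposed on the mat".split()  # one less likely word swapped in

print(f"{perplexity(likely):.0f} -> {perplexity(tweaked):.0f}")  # roughly 48 -> 147
```

A single unlikely word roughly triples the toy score, which mirrors why small manual edits can flip a detector’s verdict.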

To learn more about how AI detection tools work, read AI Classifiers — What’s the problem with detection tools?

AI tools & privacy

ChatGPT is underpinned by a large language model that requires massive amounts of data to function and improve. The more data the model is trained on, the better it gets at detecting patterns, anticipating what will come next and generating plausible text.

Uri Gal notes the following privacy concerns in The Conversation:

  • None of us were asked whether OpenAI could use our data. This is a clear violation of privacy, especially when data are sensitive and can be used to identify us, our family members, or our location.
  • Even when data are publicly available their use can breach what we call contextual integrity. This is a fundamental principle in legal discussions of privacy. It requires that individuals’ information is not revealed outside of the context in which it was originally produced.
  • OpenAI offers no procedures for individuals to check whether the company stores their personal information, or to request it be deleted. This is a guaranteed right in accordance with the European General Data Protection Regulation (GDPR) – although it’s still under debate whether ChatGPT is compliant with GDPR requirements.
  • This “right to be forgotten” is particularly important in cases where the information is inaccurate or misleading, which seems to be a regular occurrence with ChatGPT.
  • Moreover, the scraped data ChatGPT was trained on can be proprietary or copyrighted.

When we use AI tools, including detection tools, we are feeding data into these systems. It is important that we understand our obligations and risks.

When an assignment is submitted to Turnitin, the student’s work is saved as part of Turnitin’s database of more than 1 billion student papers. This raises privacy concerns that include:

  • Students’ inability to remove their work from the database
  • The indefinite length of time that papers are stored
  • Access to the content of the papers, especially personal data or sensitive content, including potential security breaches of the server

AI detection tools, including Turnitin, should not be used without students’ knowledge and consent. While Turnitin is a college-approved tool, using it without students’ consent poses a copyright risk (Strawczynski, 2004).  Other AI detection tools have not undergone privacy and risk assessments by our institution and present potential data privacy and copyright risks.

For more information, see our Guidelines for Using Turnitin.

‘De-clutter your Kaltura media’ competition winners!

Krista Kieswetter receiving her prize of a book and gift card.

Back in October 2022 we launched a competition to see who could delete the most content from their Kaltura My Media, in order to help us save on storage and bandwidth costs. We were delighted with the response, so a big thank-you to everyone who took on the challenge and deleted unwanted content to help us out. We are happy to announce that Krista Kieswetter from Continuing Studies (pictured) was the winner of the competition, for which she received Marie Kondo’s book Joy at Work and an Amazon gift card. Runners-up were Yue-Ching Chen (Recreation Studies) and Katrina Erdos (Geography), who both received gift cards.

While on the subject of deleting Kaltura media, we would like to direct you to our Kaltura media retention policy, which we recently formulated based on good practice from other institutions and in consultation with our Records Management and Privacy Manager. As well as continuing to encourage you to archive and delete any unwanted or unplayed media, we will be carrying out periodic deletions of unplayed media at the end of the summer and fall semesters.

As ever, if you have any questions about our Kaltura media retention policy (or other Kaltura issues) please email edtech@langara.ca.

Using Peer Assessment for Collaborative Learning

Peer Assessment

There are several benefits to using peer assessment within your course, one of which is to provide students with a more engaging experience. Opportunities to assess other learners’ work will help students learn to give constructive feedback and gain different perspectives through viewing their peers’ work. There is evidence to show that including students in the assessment process improves their performance. (1) (2) (3)

Research also shows that students can improve their linguistic and communicative skills through peer review. (4) The exposure to a variety of feedback can help students improve their work and can even enhance their understanding of the subject matter. Furthermore, learning to give effective feedback helps develop self-regulated learning, as ‘assessment for learning [shifts] to assessment as learning’ in that it is ‘an active process of cognitive restructuring that occurs when individuals interact with new ideas’ (5).

In addition to the benefits to students, peer assessment can also provide instructors with an efficient way of engaging with a formative assessment framework where the student is given the chance to learn from their initial submission of an assignment.

Options for Peer Assessment within Brightspace:

Within Brightspace, there are several ways that instructors can set up peer assessment activities depending on the nature of the assignment and the needs of the instructor. Here we highlight several use cases.

Peer Assessment Example #1:

The instructor wants to have students assess each other’s group contributions for an assignment within Brightspace.

Using a fillable PDF, which gives the students a rubric-like experience, a student can rate their peers based on different criteria that have been built into the assessment by the instructor. Students can provide feedback on a rating scale but can also provide more in-depth feedback if needed.

The advantage of using a fillable PDF is that the student can easily download the file and fill in the blanks. The student can reflect on the built-in criteria and the entire process should be quick and easy. The scores are calculated, and the instructor can interpret the results once the student has uploaded the PDF into the assignment folder.

A few disadvantages of this method are that the instructor has to download each fillable PDF and manually enter a grade if marks are captured for peer assessment. The other issue is the level of student digital literacy: directing students to download the fillable PDF to their desktop rather than opening it in the browser is a key step for this process to work, as not all students are aware that fillable PDFs cannot be used successfully in-browser.

Peer Assessment Example #2:

Students are working towards a final paper that is worth 15% of their overall mark. Before they submit the final version to the instructor, they will have the opportunity to evaluate another student’s draft and their own work using a rubric. If time is limited for this activity, learners can be invited to submit just the first paragraph of the paper, rather than the whole draft.

Through peer assessment, learners can often receive feedback more quickly than if they had to wait for the marker or instructor to review the class’s work.

Students upload their work to Brightspace Assignments, where they are given a link to Aropä, a free third-party tool which pairs students so they can assess each other’s work using a built-in rubric. Assessment can be anonymous, and the instructor can restrict feedback to students who have already submitted one review. Self-assessment can be required.

The advantages of Aropä are that it is free and that it gives instructors the ability to modify rubrics to suit their objectives. The disadvantage is that it requires more time to set up, and its rubrics provide only basic options: radio buttons or comment boxes. Instructors should be aware of privacy issues with Aropä: upload only students’ first names and avoid uploading student numbers.

Peer Assessment Activity #3:

Students complete group presentations after which the class assesses each group’s performance, including their own group’s presentation, using a predetermined marking scheme.

Assessing presentations encourages engagement with the work rather than passive observation: because students are required to give feedback, they engage more deeply with the material and retain more.

The advantages of using an H5P Documentation Tool are that the content can be created directly within Brightspace and that it is polished and versatile. The disadvantage is that learners have to export their feedback and then upload it into Brightspace; this two-step process requires some digital literacy skills.

Sample H5P Documentation Tool

Peer Assessment Activity #4:

This peer assessment activity is more about checking completion: the instructor needs to ensure accountability in group work.

Students are given an MS Form with some basic criteria by which to rate themselves and their peers in terms of attendance at meetings, work on the final product or assignment, and collaboration. Students use a point rating scale and need to justify their evaluation by providing a concrete example.

As in Example #1, students can complete a form using a fillable PDF or another tool such as Jotform or MS Forms to reflect on and assess their own work as well as the work of their teammates. Jotform allows for more complex form building and will calculate totals for each student, while MS Forms will not calculate totals but will give you a sense of how students are doing overall with a basic rating on each criterion (a focus on qualitative assessment). A sketch of how exported ratings can be tallied follows the sample links below.

Sample MS Form

Sample Jotform
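Both MS Forms and Jotform let you export responses as a spreadsheet. As a rough sketch of how totals and averages could be tallied from such an export (the file name and column names are hypothetical):

```python
import csv
from collections import defaultdict

# Hypothetical CSV export with columns: rater, teammate, criterion, score
scores_by_student = defaultdict(list)
with open("peer_ratings.csv", newline="") as f:
    for row in csv.DictReader(f):
        scores_by_student[row["teammate"]].append(int(row["score"]))

for student, scores in sorted(scores_by_student.items()):
    print(f"{student}: total={sum(scores)}, average={sum(scores) / len(scores):.1f}")
```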

A Note on Third Party Peer Review Software:

There are many different software options available for peer assessment. EdTech is currently testing several and hopes to pilot them in the spring or summer semester. Currently, the only one that we are recommending (because it’s free) is Aropä. Aropä does a great job of providing several options for peer assessment, including self-assessment, privacy options for students, anonymous assessment, etc. It does not integrate completely with Brightspace, which is one disadvantage compared to some of the paid peer assessment programs currently available. Programs such as peerScholar, FeedbackFruits, and Peerceptiv can integrate with the Gradebook, making it very easy to provide marks for the feedback that your students provide for one another.

For more information on any of the above tools, please contact edtech@langara.ca.

References

  1. Wu, W., et al. (2022). Evaluating peer feedback as a reliable and valid complementary aid to teacher feedback in EFL writing classrooms: A feedback giver perspective. Studies in Educational Evaluation, 73. https://doi.org/10.1016/j.stueduc.2022.101140
  2. Double, K. S., et al. (2020). The impact of peer assessment on academic performance: A meta-analysis of control group studies. Educational Psychology Review, 32(2), 481–509. https://doi.org/10.1007/s10648-019-09510-3
  3. Planas-Lladó, A., et al. (2018). Using peer assessment to evaluate teamwork from a multidisciplinary perspective. Assessment & Evaluation in Higher Education, 43(1), 14–30.
  4. de Brusa, M. F. P., & Harutyunyan, L. (2019). Peer review: A tool to enhance the quality of academic written productions. English Language Teaching, 12(5), 30–39.
  5. Western and Northern Canadian Protocol for Collaboration in Education. (2006). p. 41.

Turnitin and Student Privacy

Turnitin is a text matching tool that compares students’ written work with a database of student papers, web pages, and academic publications. The two main uses for Turnitin are: 1) for formative or low-stakes assessment of paraphrasing or citation; and 2) for prevention and identification of plagiarism.

Privacy Concerns

When an assignment is submitted to Turnitin for a text matching report, the student’s work is saved as part of Turnitin’s database of more than 1 billion student papers. This raises privacy concerns that include:

  • Students’ inability to remove their work from the database
  • The indefinite length of time that papers are stored
  • Access to the content of the papers, especially personal data or sensitive content, including potential security breaches of the server

Copyright Concerns

In addition, saving a student’s work on Turnitin’s database without their consent may put an institution at risk for legal action based on Canadian copyright law (Strawczynski, 2004). 

Guidelines for Using Turnitin

To mitigate these concerns, we recommend the following guidelines for all instructors using Turnitin:

  1. Be clear and transparent that you will be using Turnitin. Even if a course outline includes a statement indicating that Turnitin will be used in a course, we recommend not relying on that statement alone. Ideally, instructors should also explain to students that their papers will be stored on the company’s database and ask for their consent. If they don’t provide consent, have an alternate plan (see below).
  2. Decide whether or not students’ work needs to be saved on Turnitin’s database. The default is for all papers to be saved, but this can be changed. Not saving papers to the database means that those papers can’t be used to generate future similarity reports, but it does remove the privacy and copyright concerns.
  3. Coach students to remove identifying details. If the students’ submissions will be added to Turnitin’s database, make sure they remove any personal information from their assignment, including their name, student number, address, etc. Embedded metadata should also be removed (e.g., in tracked changes or file properties). If you’re having them submit to an assignment folder on Brightspace, their name will accompany their submission, so it shouldn’t be a problem if it’s not on the paper itself.
  4. Don’t run a similarity report for an individual student without their knowledge. Ethical use of Turnitin occurs when it is transparently and equally used for all students. Running a report only on a specific student’s work without their knowledge or consent is not transparent or equal.
  5. Consider whether or not the assignment is appropriate for Turnitin. If the students need to include personal or sensitive information in the assignment, Turnitin should probably not be used. If you do decide to use it, the students’ papers should not be stored in the database.
  6. If contacted by another institution, be cautious about revealing student information. If at some point in the future there is a match to one of your student’s papers in Turnitin’s database, Turnitin does not give the other institution access to the text of the paper but will provide the instructor at the other institution with your email. If you are contacted about a match, consider carefully before forwarding the paper or any identifying details about the student to the other institution. If you do want to forward the paper, you should obtain the student’s consent.

Alternatives to Confirm Authorship When Turnitin is Not Used

If a student objects to having their paper submitted to Turnitin, or if the assignment is not appropriate for submission to Turnitin because it includes personal or sensitive content, you can increase confidence that the students are doing their own work in other ways. For example, an instructor can require any or all of the following:

  • submission of multiple drafts
  • annotation of reference lists
  • oral defence of their work

Requiring students to complete any or all of these will increase the student’s workload, which means that students who opt out of Turnitin aren’t at an advantage over students who opt in.

Helping Students Make Turnitin Work for Them

If you’re using Turnitin, it’s highly recommended that you adjust the settings to allow the students to see their similarity reports. You may need to teach students how to interpret the reports if they haven’t learned how to do so from a previous course. Turnitin’s website has resources if you need them (https://help.turnitin.com/feedback-studio/turnitin-website/student/student-category.htm#TheSimilarityReport) and you can also point your students to the Turnitin link on Langara’s Help with Student Learning Tools iweb (https://iweb.langara.ca/lts/brightspace/turnitin/). Finally, remember that these reports won’t be helpful to a student if they’re not given the chance to revise and resubmit after they see the report. In Brightspace, we recommend that instructors set up two separate assignment folders with Turnitin enabled: one for their draft and one for the final submission.

Have questions?

If you need support with Turnitin, please contact edtech@langara.ca.

References

Strawczynski, J. (2004). When students won’t Turnitin: An examination of the use of plagiarism prevention services in Canada. Education & Law Journal 14(2), 167-190. 

Vanacker, B. (2011). Returning students’ right to access, choice and notice: a proposed code of ethics for instructors using Turnitin. Ethics & Information Technology, 13(4), 327-338.

Zaza, C., & McKenzie, A. (2018). Turnitin® Use at a Canadian University. Canadian Journal for the Scholarship of Teaching and Learning, 9(2). https://doi.org/10.5206/cjsotl-rcacea.2018.2.4

Zimmerman, T.A. (2018). Twenty years of Turnitin: In an age of big data, even bigger questions remain. The 2017 CCCC Intellectual Property Annual. Retrieved from https://prod-ncte-cdn.azureedge.net/nctefiles/groups/cccc/committees/ip/2017/zimmerman2017.pdf