Strategies, Structure, and Considerations for Implementing Artificial Intelligence into Education Delivery

The rapid growth of Artificial Intelligence (AI) is creating both excitement and angst about where and how to begin safe, effective integration. While there are many ways AI can be applied, these suggestions focus on education delivery. For institutions that have begun deploying and engaging student-facing AI, continued growth requires structured, ongoing development of institution-level AI governance to maintain safe and ethical use. Governance structures should provide for continuous refinement of policies, guidelines, and directives for use; approval processes; and staged training for faculty, staff, and students that keeps pace with the speed of change in the market.

Missing or vague policies, structures, or training approaches can leave administrators, faculty, and students without the guidance needed to reap the benefits of AI. Absent governance structures or overly broad policies may unintentionally promote unsafe or unethical use of AI by students, faculty, staff, and instructional designers who lack proper guidance for integration. This can lead to compromised course design and program content, AI hallucinations, misinformation, and privacy breaches involving proprietary university content, harming both the institution and the quality of education.

In my national work with AI, the majority of university faculty and administrators who have been part of this conversation are either unaware of the university policies that govern AI, report that no policies are yet in place, or describe policies so broad that they are left without the training or knowledge to implement AI safely or engage it effectively. Because popular attention centers on open-ended generative AI tools, governance policies may focus narrowly on academic integrity and less on safe, high-quality education delivery and use. This lack of knowledge and governance tends to result in avoidance of AI, negative attitudes, potentially harmful uses, siloed applications, weak accountability, and ineffective use of the technology. Ultimately, it leaves the university reactive rather than proactive.

Proactive approaches may include hiring or promoting key leaders who are attuned to the market and involved in national conversations with other leaders and organizations, staying abreast of changes as part of an ongoing effort of staged adjustment. These expert leaders can dedicate their time to structuring and maintaining the governance and training needed to keep pace with AI's growth, providing relevance, support, and accountability.

To build support and stage change in response to rapid AI evolution, planning should also include surveying the stakeholders involved in education delivery, including staff, instructional designers, deans, directors, faculty, and adjuncts, about their familiarity, experience, types of AI use, and attitudes toward AI. This can help ensure staged change that protects culture and motivates appropriate use while guiding which governance structures would be most helpful.

Surveys should also aim to capture how each program's specific skills are assessed and competencies are measured at both the course and program levels, in order to identify which centralized technologies could be purchased and integrated across the university. This can help focus centralized training efforts, integrate standardized AI ethics into programs, customize competency-based delivery to relevant departments, reduce cost, and enhance strategic integration of technology. Training plans should cover exposure to different forms of AI, their uses, risks, and benefits, security and privacy, specific tool integration, compliance, ethics, and accountability, in conjunction with university governance and guidance, to support appropriate and effective integration.

Governance structures should include a framework, processes, and guidelines for approved AI uses, along with a clear process for assessing and approving AI use in education delivery based on compliance, safety, data security, and privacy. Criteria should be established to support ethical and safe use of AI by students; by faculty who use generative tools to provide analysis and feedback; by instructional designers developing course content; and by administrators who may use AI for program and content development, policy analysis, and written proposals. Guidance should also specify which institutional content is and is not permitted for upload into AI platforms. Once information is uploaded into most generative platforms, the content may become permanently public and be reused by others. Without this guidance, training, structure, and accountability, employees and students may unknowingly upload client information, institutional information, and content to generate an outcome, not recognizing the security and privacy concerns and exposing the institution to various forms of risk.

The combined effort of training and governance can not only better support and guide faculty and administrators but also foster students' development of critical thinking, preparing them for the changes occurring in the marketplace. It can enhance competency-based education by allowing more robust feedback for students, supporting faculty in teaching needed skill sets, and improving academic skills. Institutional training and governance structures lay a foundation for the university to pivot more effectively and efficiently as AI's growth continues.

Jamie Sundvall, PhD, PsyD, LICSW
Director of Online Education, Online MSW Program Director, Touro University 



