With the creation of the new college, launched in fall 2025, and the uproar over AI usage across the country, many students and faculty have been wondering when Drew University will release rules and policies about using AI on campus.
When the new college was introduced, it left many students with questions: What is it? How will it work? How will learning be balanced with AI?
Many students have heard of the new college, but because it is still a prototype, few know what it is or what its goal is. The impact the college will have on students is unknown so far, and should be addressed in the policies being created by the AI Sense Making group.
The AI Sense Making group is made up of administrators, faculty, staff and a few graduate students who have been meeting since September with the goal of creating policies that benefit the university and everyone in it.
Chief Academic Officer Steve Johnson, a member of the Sense Making group, explained in an interview that sense making begins with the phrase, “we don’t know [what will happen], but we need to act.”
It is important to note that there is no central push to bring AI to the university. The policy work being done follows the lead of other universities: creating boundaries where they are needed to ensure integrity at the school.
Technology is forever changing and advancing, and with a subject as complex as AI, policy won't be made in a day. Johnson explained, "we have to grapple with what the real triggers are. When we started this work six months ago, we weren't thinking about Agentic AI."
Working with ever-evolving technology, the Sense Making group has been meeting regularly and has already published a draft of the AI policy. However, the group continues to rework it and to create new boundaries for topics like Agentic AI.
When it comes to the future of AI at Drew, Johnson stated, "we will have to address it because it will continue… it's unavoidable, and yet, there's great risks."
Like all university policies, the Sense Making group has to assess every risk that comes with introducing something new.
As Johnson put it, "cavemen created spears to hunt in new ways, which helped our brains develop, but at the same time, spears also murder people." The metaphor captures the ethical dilemmas the Sense Making group will have to face when introducing AI.
A key element the group is trying to include in the policy work is AI literacy for both students and staff. The goal is to educate everyone on the advantages and disadvantages of having AI at school, as well as to help Drew make decisions as a community.
The policy work is still underway and changes will continue to be made; however, the base draft will "provide a framework for the responsible, transparent, safe and equitable use of AI by all members of the Drew community."
According to an email from Johnson, a group of faculty from the College of Arts and Sciences and the Theological School is planning to create academic guidelines that will apply to classroom time and research use, and will also be integrated into the academic integrity guidelines. The administration plans to release these guidelines before the start of the fall 2026 semester.
If you have any questions, please feel free to reach out to us at the Acorn at theacorn@drew.edu, and we will forward them to the appropriate personnel.
Allison Cannon is a sophomore majoring in psychology and minoring in Spanish and law, justice and society.