GIA Conference AI Policy

While Large Language Models (LLMs), commonly referred to as Generative AI, can assist in a variety of ways, it is our responsibility to acknowledge and understand the harm they bring by "[the deliberate] targeting of predominantly BIPOC and low-income communities for polluting industries and the placement of toxic waste" (Feminist Majority Foundation). As a result, GIA has outlined how our organization intends to engage with Generative AI, including, but not limited to, ChatGPT (OpenAI), Gemini (Google), DALL-E, and GitHub Copilot.

GIA believes it is acceptable for session organizers to use:

  • LLMs to generate ideas, create outlines, and refine existing concepts

  • AI tools to identify trends or summarize large datasets

  • AI translation tools

  • AI transcription tools

  • AI tools embedded in platforms such as Slack, Google, and Zoom that are used for AI-generated captioning, note-taking, and summaries


GIA does not consider it acceptable for session organizers to use:

  • AI-generated content as final work without human review

  • AI tools to generate initial drafts of reports or summaries

  • AI tools to modify copyrighted materials


GIA encourages its conference presenters to evaluate and understand their responsibilities and relationships regarding Generative AI. Those who engage with AI tools should recognize that auto-generated work may contain inaccuracies, reflect bias, and undermine fairness and inclusion. They are encouraged to review, fact-check, and ensure the accuracy of their work, using AI to assist, not replace.

We encourage GIA members to explore our programming around this topic: