Those governing enterprise AI use may find it challenging to keep pace with the current breakneck pace of innovation within the field. A few pragmatic tactics can aid professionals not directly engaged in the current AI arms race in maintaining functional expertise without being overwhelmed.
In a time when AI hucksters abound, AI governance requires that claims of productivity gains or cost savings be viewed with an appropriate degree of skepticism, which ought to fall within a Goldilocks sweet spot between credulous and dismissive. However, the “just right” mixture of acquiescence and mistrust requires knowledge of the state of AI. Being sufficiently versed in the current landscape of AI technology enables governance professionals to ask the right questions and ensure measured decision-making rather than reactionary acceptance or dismissal.
To avoid paralysis while keeping pace with trends, consider keeping a personal summary, cultivating AI enthusiasts, and conducting informal experiments.
Episodically Update a One-Page “Cheat Sheet”
Maintaining a living document with key takeaways can serve as an on-demand knowledge refresher or reference. A short set of personal notes, accessible from a phone or tablet, can be particularly useful before attending an AI vendor presentation or engaging in discussions with executives or teams that wish to expand or shift AI use.
A personal AI knowledge recap might include:
- Terminology: Keep a few notes on terminology: “generative” means creating new content; “GPT” stands for general-purpose transformer; “agents” as automation tools; “overfitting”; etc.
- Key characteristics: Know common tasks and the model learning types to which they are best suited (e.g., supervised learning for prediction or classification, unsupervised learning for identifying patterns or for grouping similar data, and reinforcement learning for decision-making) and different forms of deep learning networks (convolutional neural networks for spotting a specific pattern, recurrent neural networks for processing sequential data).
- Model types and notable products: Keep abreast of model types and products—for example, general-purpose transformers (GPTs) and/or LLMs such as ChatGPT, Claude Sonnet, Gemini, Llama, etc., and special-purpose models such as Dall-E 3, Stable Diffusion, or Midjourney for image creation, Soundverse for music generation, ElevenLabs for voice cloning, etc. Intermittently peruse an AI tools directory to keep up to date.
By default, AI governance practitioners must necessarily stay informed on directly applicable laws and regulations like the EU AI Act, the NIST AI Risk Management Framework, and the FTC guidance on AI-driven decision-making, as well as implicated regulations such as the EU NIS2 Directive and GDPR, NIST CSF 2.0, and the CCPA and related state data privacy laws. A condensed, structured AI technology reference reduces the cognitive load required to stay current while ensuring governance decisions remain well-grounded.
Find and Engage AI Enthusiast Colleagues
No single person can track the full scope of AI’s evolution, but governance professionals can crowd-source the problem, tapping into collective expertise by identifying and networking with AI-savvy colleagues and acquaintances.
To leverage relationships with AI users or enthusiasts for governance insights, consider:
- Cross-functional conversations: Engage with everyone about AI, not just data scientists, software engineers, or the like. You could be surprised to learn that a salesperson in your workout group uses the audio interface for ChatGPT to rehearse responses to customer rejection. Understanding the reasons for AI product preferences in different areas can help with AI governance decisions.
- Internal AI roundtable discussions: Allocate 15-20 minutes during a monthly or quarterly meetup to discuss notable AI advancements and how they may impact operations could provide insight into how governance policies may need to change.
- Chats or channels: Establish a dedicated AI chat or Slack/Microsoft Teams channel where employees can share relevant articles, product updates, model evaluations, or discoveries.
By leveraging AI champions, governance professionals can absorb key insights efficiently without needing to track every advance independently.
Road Test AI Products (When Possible)
Governance decisions are strongest when based on direct experience rather than abstract discussions. Whenever feasible, professionals responsible for AI governance should test AI products firsthand to gain a greater understanding of strengths, limitations, and risks.
Ways to road test AI products include:
- Use open platforms: Try ChatGPT, Claude, or any open Hugging Face model to understand AI behavior and potential pitfalls firsthand.
- Experiment with enterprise AI tools: When licenses are available for company-deployed, AI-driven automation or analytics tools or chatbots, undertake the company-provided training and then test the tools to evaluate accuracy, transparency, and compliance risks.
- Engage in adversarial prompting: Challenge AI systems with edge-case scenarios (e.g., biased queries, misleading inputs) or efforts to subvert guardrails to see where weaknesses emerge.
Hands-on testing equips governance professionals with practical insights into how AI operates in real-world settings, making it easier to anticipate compliance risks, regulatory challenges, and ethical dilemmas.
Conclusion
AI governance professionals must learn fast enough to make informed decisions—but without being consumed by the flood of information in AI’s evolving landscape. By maintaining a curated cheat sheet, collaborating with AI-savvy colleagues, and personally road-testing AI products, governance leaders can cut through the hype, challenge questionable claims, and effectively regulate AI use in their organizations.