Should we use generative AI to write intelligence analysis?
Why a common professional code of ethics would be helpful right about now.
I did not intend for this to be my first post.
In fact, I have a few half-written drafts lingering on the sidelines, all of which seem interesting and promising in their own way. But this one just kept pestering me.
Why? Because it seems as though every other headline I read is about generative AI and how it’s changing (or about to change) everything. Cue the doomsday soundtrack.
And while I promised this wouldn’t be a place where I gave analytical hot takes on the headlines, I’m breaking my own rule because this particular headline involves the “what” and “how” of intelligence analysis. Let me explain.
We’ve seen evidence of how generative AI tools can write convincing academic papers, prompting college honor committees to debate the implications, teachers to overhaul how they assign homework, and conferences to ban AI-generated submissions.
I can’t help but think about how this might impact my own profession.
Intelligence analysis brings together disparate pieces of information to produce a cohesive written product. All of which can now be outsourced to a robot and passed off as a bona fide analytic product. Don’t believe me? Check out this sample Pakistan security risk analysis I found on LinkedIn from Matt Kish. Or the slides in this post from Varun Kareparambil, who asked ChatGPT five intelligence questions and got pretty spot-on analytic responses.
So can generative AI produce intelligence analysis? Yes.
But should analysts use it to produce assessments?
I’m not sure. And neither is the broader community of intelligence analysis practitioners.
Why? Because there is no unified code of analytic ethics we all agree to uphold when we arrive at our desks each day. Individual teams and organizations may have them, sure, but there isn’t any consensus explicitly outlining guardrails for the intelligence profession as a whole. Leaving individual analysts and teams wondering what, exactly, to do about the new AI tool that is suddenly everywhere.
Do we use it?
Yes. Analysts can leverage it as a supercharged productivity tool to quickly draft content, aggregate sources, identify trends and themes, and jot down an outline. It could even help produce a quick infographic or visual to augment your analytic bottom line, or write talking points for an upcoming brief (a rough sketch of what that might look like follows below). All of which would need a human analyst to rigorously review and revise. But in the private sector, where time = money, having a tool to speed things along will always be enticing.
No. Without knowing where the tool is pulling its sources from, its output is suspect from the start. It could be outdated. It could be biased. It could be completely wrong, because a tool like ChatGPT can’t tell fact from fiction. Even if an analyst used it to produce a quick draft with the intention of rigorously editing it, they would still be unintentionally introducing bias by turning over that first “connect the dots” opportunity to a black box we don’t fully understand and maybe never will.
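For the curious, here is roughly what the “yes” scenario could look like in code: a minimal sketch, assuming the OpenAI Python client, a placeholder model name, and made-up source snippets, where the tool only drafts an outline and the human review step stays firmly in the loop.

```python
# Minimal sketch (not anyone's actual workflow): ask a generative AI model to
# draft an outline from analyst-vetted snippets, then hand it back to a human.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical source snippets an analyst has already collected and vetted.
sources = [
    "Report A: protest activity increased in the capital over the past month.",
    "Report B: two major supply routes were closed following flooding.",
    "Report C: local media report a rise in opportunistic crime near port facilities.",
]

prompt = (
    "You are assisting an intelligence analyst. Using ONLY the source snippets "
    "below, draft a short bullet-point outline for a security assessment, and "
    "flag any judgment that goes beyond what the sources support.\n\n"
    + "\n".join(sources)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

draft_outline = response.choices[0].message.content

# The output is a starting point only: a human analyst still has to verify every
# claim against the original sources, check for bias, and own the final judgment.
print(draft_outline)
```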
What do you think?
And in case anyone is curious where the robot overlords land on this debate, here is ChatGPT’s opinion…