Matt Uebel
Solutions Architect Candidate
Insecure software provisioning is a silent killer of productivity and security, leading to costly rework, project delays, and critical security incidents.
Fixing flaws in production is exponentially more expensive.
Last-minute security findings halt deployments.
Misconfigurations are a leading cause of data breaches.
To solve this, we first need to understand the evolution of Generative AI.
For many, GenAI is a clever conversationalist. We treat it like a person because it's the most intuitive way to interact.
The true power lies in translating human goals directly into machine-parsable syntax. This creates a strange but powerful new paradigm: we find ourselves writing plain language inside our scripts and codebases.
"Create a public S3 bucket."
resource "aws_s3_bucket" "b" {
bucket = "marketing-assets-12345"
acl = "public-read"
}
"Show me sales by region for last quarter."
SELECT region, SUM(amount)
FROM sales
WHERE sale_date >= '2025-01-01'
AND sale_date < '2025-04-01'
GROUP BY region;
def summarize_security_report(report_text):
    # The "prompt" is where semantics and syntax collide.
    prompt = f"""
    You are a security analyst. Please summarize the key
    vulnerabilities from the following report and list the
    top 3 most critical items.

    REPORT: {report_text}
    """
    # This looks like a normal API call, but it's powered by intent.
    response = genai_api.generate(prompt)
    return response.summary
Andrej Karpathy
@karpathy
Traditional software: we predict its behavior from its explicit programming.
Input -> Rules -> Output
Agentic AI: we predict its behavior by treating it as a rational agent with goals.
Goal -> Beliefs -> Action
This shift allows us to move from giving perfect instructions to simply stating our high-level intent.
This agentic paradigm isn't just theoretical. A new technology stack is emerging to support it, exemplified by concepts like Anthropic's Model Context Protocol (MCP).
Writing documentation not just for humans, but for AIs to understand and use tools, as pioneered by companies like Stripe.
A standardized way for an AI agent to ask an application, "What can you do, and how do I talk to you?"—an API for AIs.
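Conceptually, the discovery handshake can be sketched in a few lines. This is an illustrative stand-in, not the actual MCP wire format (which is JSON-RPC); the tool name `create_invoice` and both function names are invented for the example:

```python
# Illustrative sketch of MCP-style tool discovery: the agent first asks
# "what can you do?", then invokes a tool by name. All names are made up.

TOOLS = {
    "create_invoice": {
        "description": "Create a draft invoice for a customer.",
        "input_schema": {"customer_id": "string", "amount_cents": "integer"},
    },
}

def list_tools():
    """Answer the agent's first question: 'What can you do?'"""
    return [{"name": name, **meta} for name, meta in TOOLS.items()]

def call_tool(name, arguments):
    """Answer the second: 'How do I talk to you?' -- invoke a tool by name."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return {"status": "ok", "tool": name, "arguments": arguments}

# The agent discovers, then acts:
available = list_tools()
result = call_tool("create_invoice", {"customer_id": "cus_123", "amount_cents": 500})
```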
This is a real-world example from Vercel: a simple llms.txt
file providing clean, structured markdown for an AI to consume.
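An llms.txt along these lines might look like the following. The contents here are invented for illustration; the convention is an H1 title, a short blockquote summary, and lists of links with one-line descriptions:

```markdown
# Example Docs

> Concise, LLM-friendly index of this site's documentation.

## Guides
- [Getting Started](/docs/getting-started.md): Install and deploy your first project.
- [Configuration](/docs/configuration.md): Environment variables and build settings.
```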
Now, let's apply it.
Let's return to the problem we started with: the exponential cost of discovering security flaws too late.
Cost: 1x
Cost: ~6.5x
Cost: ~15x
Cost: >30x
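To make the multipliers concrete, a back-of-the-envelope calculation. The stage labels follow the common SDLC cost curve and the $500 design-stage baseline is an assumption for illustration:

```python
# Relative cost of fixing the same flaw at each stage, using the slide's
# multipliers. Stage labels and the baseline dollar figure are assumptions.
BASELINE_COST = 500  # assumed USD cost of a design-stage fix
MULTIPLIERS = {"design": 1.0, "implementation": 6.5, "testing": 15.0, "production": 30.0}

costs = {stage: BASELINE_COST * m for stage, m in MULTIPLIERS.items()}

# Catching 10 flaws at design time instead of in production:
savings = 10 * (costs["production"] - costs["design"])
```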
The solution is a system that applies agentic AI to proactively analyze planning documents. It works in four stages:
Slack, Google Docs, etc.
CIS, Internal Policies, etc.
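A minimal sketch of the four stages end to end. Every function name is invented, the policy matching is a keyword stand-in for real retrieval, and the analysis step is stubbed where a real build would call an LLM:

```python
# Hedged sketch of the four-stage pipeline; all names are illustrative.

POLICIES = [
    "CLOUD-GUIDE-07.1: S3 buckets must not allow public read access.",
]

def ingest(documents):
    """Stage 1: collect planning text from Slack, Google Docs, etc."""
    return [d.strip() for d in documents if d.strip()]

def match_policies(plan_text):
    """Stage 2: retrieve policies relevant to the plan (keyword match
    standing in for real retrieval)."""
    return [p for p in POLICIES if "S3" in plan_text and "S3" in p]

def analyze(plan_text, policies):
    """Stage 3: assess the plan against policy (an LLM call in practice)."""
    if policies and "public" in plan_text.lower():
        return {"risk": "public S3 bucket", "policy": policies[0]}
    return None

def report(finding):
    """Stage 4: surface the finding where the team works."""
    return f"Heads up! Risk: {finding['risk']}. Policy: {finding['policy']}" if finding else "No findings."

plans = ingest(["Let's create a public S3 bucket for profile pictures."])
finding = analyze(plans[0], match_policies(plans[0]))
```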
[ Mock Demo ]
A fully automated system can be alarming. We must address potential cultural friction head-on by acknowledging concerns like the "chilling effect" on brainstorming or alert fatigue.
Phased adoption builds trust: start with an ad hoc, self-serve tool, then passive monitoring, then active gating. A clear "What, Why, How" message is critical: this is a safety net to *assist*, not a surveillance tool to *report*.
To succeed, the system must be built with robust, transparent controls from day one.
Ability to exclude sensitive documents and mask PII.
Log all findings and user interactions for review and compliance.
Manage who can configure policies, view findings, or grant exceptions.
Version, track, and approve the AI models used for analysis.
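The privacy control in particular can start simply: mask obvious PII before any plan text reaches the model. A regex sketch, where the two patterns are illustrative (emails and US-style phone numbers only; production masking needs a fuller library):

```python
import re

# Minimal PII masking applied before plan text is sent for analysis.
# These patterns are illustrative, not exhaustive.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def mask_pii(text):
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```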
An ad hoc tool for developers to self-check plans.
The system passively monitors document stores and adds comments.
The system acts as a blocking quality gate in the CI/CD pipeline.
Pull Request #42: New Marketing Service
✓ Linting... OK
✓ Formatting Check... OK
✗ Automated Security Review... FAILED
Heads up! I've identified a potential risk in this plan.
Risk: Publicly accessible S3 buckets are a common source of data breaches.
Recommendation: Use CloudFront with signed URLs instead.
Policy: See CLOUD-GUIDE-07.1
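Wiring the review in as a blocking gate only requires a nonzero exit code to fail the pipeline. A sketch, with the review itself stubbed; `run_security_review` and `gate` are invented names:

```python
# Sketch of the blocking quality gate: a nonzero return code fails the CI job.
# run_security_review is a stub standing in for the real agentic review.

def run_security_review(plan_text):
    findings = []
    if "public" in plan_text.lower() and "s3" in plan_text.lower():
        findings.append("Public S3 bucket planned; see CLOUD-GUIDE-07.1")
    return findings

def gate(plan_text):
    findings = run_security_review(plan_text)
    for finding in findings:
        print(f"FAILED: Automated Security Review: {finding}")
    # In CI, pass this to sys.exit(); any nonzero code blocks the merge.
    return 1 if findings else 0
```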
Alice
Okay team, for the user profile pictures, let's just create a new public S3 bucket. That'll make them easy to access from the web client.
Bob
Sounds good to me, much simpler than setting up a whole CDN and signing URLs.
Alice
@Automated Review can you check our plan in the last 2 messages?
Automated Review BOT
Heads up! A plan to create a public S3 bucket was detected.
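The @mention flow above is small to implement: on a mention, gather the preceding messages and review them as one plan. A sketch with a stubbed review; a real bot would receive events from the Slack Events API rather than a plain list:

```python
# Sketch of the @mention handler: take the N messages before the mention,
# join them into one plan, and run the (stubbed) review over it.

def review(plan_text):
    if "public s3 bucket" in plan_text.lower():
        return "Heads up! A plan to create a public S3 bucket was detected."
    return None

def handle_mention(channel_history, n_messages=2):
    """channel_history: oldest-to-newest message strings, ending with the mention."""
    plan = " ".join(channel_history[-(n_messages + 1):-1])
    return review(plan) or "No risks detected in the last messages."

history = [
    "let's just create a new public S3 bucket",
    "sounds good, simpler than a CDN",
    "@Automated Review can you check our plan in the last 2 messages?",
]
```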
...and to simplify access for the web client, we'll just create a new public S3 bucket for all the user profile pictures.
[Automated Review]: Heads up!
A plan to create a public S3 bucket was detected.
This allows us to maintain a central repository for these assets...
This isn't just a scanner; it's a proactive system that scales security expertise and improves developer velocity.
Identify and fix design flaws before a single line of insecure code is written.
Embed the knowledge of your best security architects into an automated system.
Reduce late-stage rework and eliminate entire categories of vulnerabilities.
Provide developers with immediate, context-aware feedback to build their security skills.
By moving beyond the chatbot, we can use GenAI as an integrated reasoning engine.
We can apply this engine to solve the expensive problem of insecure design.
The result is a system that proactively guides and secures our development lifecycle from the very beginning.
"A system like this is fundamentally a data problem. It relies on ingesting diverse, unstructured telemetry, correlating it, and running advanced analytics to surface actionable insights. This aligns perfectly with Splunk's core mission of turning data into doing."