Security musings

Catégories

Tags

🔍 Licence d'Utilisation 🔍

Sauf mention contraire, le contenu de ce blog est sous licence CC BY-NC-ND 4.0.

© 2025 à 2042 Sébastien Gioria. Tous droits réservés.

For years, software supply chain security has boiled down to a well-established routine: scanning dependencies. We’ve all integrated npm audit, renovate, or Dependabot into our processes. It’s necessary, it’s basic hygiene. But in 2025, focusing solely on dependencies is like locking the front door while leaving the keys in the ignition of the car parked in the driveway.

The threat has mutated. Attackers, faced with increasingly hardened applications, have decided to swim upstream. They’re no longer just targeting the final code, but the factory that builds the code (your CI/CD) and the assistants that help you design it (your local AI).

The Paradigm Shift: From SolarWinds to AI

The SolarWinds attack was a global wake-up call. The attackers didn’t break the software at customers’ sites; they compromised the publisher’s build system to insert a backdoor before the software was signed. Shortly after, the Codecov incident showed how a simple Bash script used in CI could be modified to exfiltrate thousands of environment variables (AWS keys, GitHub tokens) to a third-party server.

Today, a new layer of complexity is added: Agentic Artificial Intelligence. Developers now invite language models (LLMs) directly into their terminals to execute commands. This is an unprecedented attack surface that blends social engineering with technical exploitation.

In this article series, we’ll explore these attack vectors, based on several OWASP standards and offensive security research.

This analysis is divided into three distinct technical parts, designed to be read sequentially or independently:

Part 1: AI Agents in the Terminal

Tools like GitHub Copilot CLI, Claude Code, or Gemini CLI are not simple “chatbots”. They have execution rights on your machine.

  • The risk: Indirect prompt injection. How a cloned Git repo can “hack” your AI to execute commands without your knowledge.
  • The focus: Analysis of the “Gemini CLI Prompt Injection” research.

Part 2: The MCP Protocol (Model Context Protocol)

MCP is the new standard for connecting AI to your data (local files, Google Drive, Slack). It’s powerful, but misconfigured, it’s an open door to your most intimate secrets.

  • The risk: Data exfiltration and privilege escalation via poorly audited connectors.
  • The focus: STRIDE analysis applied to MCP architecture.

Part 3: CI/CD Pipelines (GitHub Actions / GitLab CI)

The heart of the software factory. The OWASP Top 10 CI/CD shows us that our pipelines are often sieves.

  • The risk: Remote code execution (RCE) via Pull Requests and secret theft (PPE - Poisoned Pipeline Execution).
  • The focus: How to sign your artifacts with Sigstore/Cosign and lock down your workflows.

This series aims to be pragmatic: for each domain, we’ll go beyond theory to provide commands, configurations, and concrete remediation strategies.