G360 Technologies

Operation Bizarre Bazaar: The Resale Market for Stolen AI Access

A Timeline (Hypothetical, Based on Reported Patterns)

Hour 0: An engineering team deploys a self-hosted LLM endpoint for internal testing. Default port. No authentication. Public IP.

Hour 3: The endpoint appears in Shodan search results.

Hour 5: First automated probe arrives. Source: unknown scanning infrastructure.

Hour 6: A different operator tests placeholder API keys: sk-test, dev-key. Enumerates available models. Queries logging configuration.

Hour 8: Access is validated and listed for resale.

Day 4: Finance flags an unexplained $14,000 spike in inference costs. The endpoint appears to be functioning normally.

Day 7: The team discovers their infrastructure has been advertised on a Discord channel as part of a “unified LLM API gateway” offering 50% discounts.

The reported scale: more than 35,000 attack sessions over 40 days, with exploitation attempts beginning within 2 to 8 hours of discovery. Researchers describe Operation Bizarre Bazaar as the first publicly attributed, large-scale LLMjacking campaign built around a commercial marketplace for reselling unauthorized access to LLM infrastructure.

The campaign marks a shift in AI infrastructure threats: from isolated API misuse to an organized pipeline that discovers, validates, and monetizes access at scale.

The campaign targeted exposed Large Language Model endpoints and Model Context Protocol servers, focusing on common deployment mistakes: unauthenticated services, default ports, and development or staging environments with public IP addresses.

Separately, GreyNoise Intelligence observed a concurrent reconnaissance campaign focused specifically on MCP endpoints, generating tens of thousands of sessions over a short period.

How the Mechanism Works

Operation Bizarre Bazaar operates as a three-layer supply chain with clear separation of roles.

Layer 1: Reconnaissance and discovery

Automated scanning infrastructure continuously searches for exposed LLM and MCP endpoints. Targets are harvested from public indexing services such as Shodan and Censys. Exploitation attempts reportedly begin within hours of endpoints appearing in these services, suggesting continuous monitoring of scan results.

Primary targets include Ollama instances on port 11434, OpenAI-compatible APIs on port 8000, MCP servers reachable from the internet, production chatbots without authentication or rate limiting, and development environments with public exposure.
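A quick way to see what these scanners see is to probe your own hosts for the same unauthenticated responses. The sketch below is a minimal self-audit, assuming the Python requests library; the hostnames are placeholders, and the paths (/api/tags for Ollama, /v1/models for OpenAI-compatible servers) are the standard model-listing routes that discovery tooling typically checks.

```python
# Minimal self-audit: check whether your own hosts answer the same
# unauthenticated requests that discovery scanners use.
# Hostnames below are placeholders; replace with your own.
import requests

HOSTS = ["llm-dev.example.internal", "203.0.113.10"]

# (port, path, description) for the two most commonly targeted services
PROBES = [
    (11434, "/api/tags", "Ollama model list"),
    (8000, "/v1/models", "OpenAI-compatible model list"),
]

for host in HOSTS:
    for port, path, desc in PROBES:
        url = f"http://{host}:{port}{path}"
        try:
            resp = requests.get(url, timeout=3)
        except requests.RequestException:
            continue  # closed or filtered: not reachable the way a scanner needs
        if resp.status_code == 200:
            # A 200 with no credentials is exactly what Layer 1 harvests.
            print(f"EXPOSED: {desc} at {url}")
        elif resp.status_code in (401, 403):
            print(f"auth required (good): {url}")
```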

Layer 2: Validation and capability checks

A second layer confirms whether discovered endpoints are usable and valuable. Operators test placeholder API keys, enumerate available models, run response quality checks, and probe logging configuration to assess detection risk.
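These validation behaviors are themselves detectable. The sketch below is a hypothetical log-triage heuristic, assuming access-log records that carry the presented API key, request path, and client IP; the placeholder-key list and thresholds are illustrative, not drawn from the research.

```python
# Hypothetical heuristic for spotting Layer 2 validation activity in access logs.
# Field names, placeholder keys, and thresholds are illustrative assumptions.
from collections import Counter

PLACEHOLDER_KEYS = {"sk-test", "dev-key", "test", "changeme"}
ENUMERATION_PATHS = {"/v1/models", "/api/tags"}

def flag_validation_activity(records, enum_threshold=5):
    """Return client IPs showing placeholder-key use or repeated model enumeration."""
    placeholder_hits = Counter()
    enumeration_hits = Counter()
    for r in records:
        if r.get("api_key", "").lower() in PLACEHOLDER_KEYS:
            placeholder_hits[r["client_ip"]] += 1
        if r.get("path") in ENUMERATION_PATHS:
            enumeration_hits[r["client_ip"]] += 1
    suspects = set(placeholder_hits)
    suspects |= {ip for ip, n in enumeration_hits.items() if n >= enum_threshold}
    return sorted(suspects)

# Example with synthetic records:
logs = [
    {"client_ip": "198.51.100.7", "api_key": "sk-test", "path": "/v1/models"},
    {"client_ip": "198.51.100.7", "api_key": "sk-test", "path": "/v1/chat/completions"},
]
print(flag_validation_activity(logs))  # ['198.51.100.7']
```

As noted under Risks & Open Questions, these indicators overlap with legitimate testing, so a heuristic like this is better suited to triage than to automated blocking.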

Layer 3: Monetization through resale

Validated access is packaged and resold through a marketplace operating under silver.inc and the NeXeonAI brand, advertised via Discord and Telegram.

Attacker Economics

Element | Detail
Resale pricing | 40-60% below legitimate provider rates
Advertised inventory | Access to 30+ LLM providers
Payment methods | Cryptocurrency, PayPal
Distribution channels | Discord, Telegram
Marketing positioning | “Unified LLM API gateway”

The separation between scanning, validation, and resale allows each layer to operate independently. Discovery teams face minimal risk. Resellers maintain plausible distance from the initial compromise. The model scales.

What’s Actually at Risk

Compute theft is the obvious outcome: someone else runs inference on your infrastructure, and you pay the bill. But the attack surface extends further depending on what’s exposed.

LLM endpoints may leak proprietary system prompts, fine-tuning data, or conversation logs if not properly isolated.

MCP servers are designed to connect models to external systems. Depending on configuration, a compromised MCP endpoint could provide access to file systems, databases, cloud APIs, internal tools, or orchestration platforms. Reconnaissance today may become lateral movement tomorrow.

Credential exposure is possible if API keys, tokens, or secrets are passed through compromised endpoints or logged in accessible locations.

The published research describes both compute theft and potential data exposure, but does not quantify how often each outcome occurred.

Why This Matters Now

Two factors compress defender response timelines.

First, the 2 to 8 hour window between public indexing and exploitation attempts means periodic security reviews are insufficient. Exposure becomes actionable almost immediately.

Second, the resale marketplace changes attacker incentives. Operators no longer need to abuse access directly. They can monetize discovery and validation at scale, sustaining continuous targeting even when individual victims remediate quickly.
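One practical response to the first point is to watch the same indexing services the attackers do. The sketch below assumes the official shodan Python client and an API key; the network range, ports, and query are illustrative placeholders for an organization's own address space.

```python
# Continuous exposure check against Shodan, one of the indexes the campaign
# reportedly monitors. Assumes the official `shodan` Python package; the API
# key and network range below are placeholders.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder
OWNED_RANGE = "203.0.113.0/24"    # placeholder: your public address space

api = shodan.Shodan(API_KEY)

# Look for your own addresses answering on the ports this campaign targets.
for port in (11434, 8000):
    try:
        results = api.search(f"net:{OWNED_RANGE} port:{port}")
    except shodan.APIError as err:
        print(f"query failed for port {port}: {err}")
        continue
    for match in results["matches"]:
        # Anything printed here is already visible to scanners; given the
        # reported 2 to 8 hour window, treat it as urgent.
        print(f"{match['ip_str']}:{match['port']} indexed at {match.get('timestamp')}")
```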

Implications for Enterprises

Operational

AI endpoints should be treated as internet-facing production services, even when intended for internal or experimental use. Unexpected inference cost spikes should be treated as potential security signals, not only budget anomalies. Reduced staffing periods may increase exposure if monitoring and response are delayed.
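A lightweight way to act on the cost-spike point is a daily check that compares inference spend against a rolling baseline. The sketch below is hypothetical: it assumes you can export a daily cost figure per endpoint from your billing system, and the 7-day baseline and 3x multiplier are arbitrary starting points to tune.

```python
# Hypothetical daily cost-spike check. Assumes a list of recent daily
# inference costs (most recent last) pulled from a billing export;
# the baseline window and multiplier are arbitrary starting points.
from statistics import mean

def cost_spike(daily_costs, baseline_days=7, multiplier=3.0):
    """Return True if the latest day's cost exceeds multiplier x the trailing baseline."""
    if len(daily_costs) < baseline_days + 1:
        return False  # not enough history to form a baseline
    baseline = mean(daily_costs[-(baseline_days + 1):-1])
    return daily_costs[-1] > multiplier * max(baseline, 1e-9)

# A spike like the timeline's Day 4 figure against a modest dev-endpoint
# baseline trips the check immediately.
history = [120, 140, 95, 130, 110, 125, 135, 14000]
print(cost_spike(history))  # True
```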

Technical

Authentication and network isolation are foundational controls for all LLM and MCP endpoints. Rate limiting and request pattern monitoring are necessary to detect high-volume validation and enumeration activity. MCP servers require particular scrutiny given their potential connectivity to internal systems.
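As a concrete illustration of the first two controls, the sketch below places a shared-secret check and a naive per-key rate limit in front of a locally bound model server. It assumes FastAPI and httpx; the key store, rate limit, and upstream address are placeholders, and a production deployment would use a proper secrets store and a dedicated gateway or reverse proxy rather than an in-process dictionary.

```python
# Minimal sketch of authentication plus rate limiting in front of an internal
# LLM endpoint. Assumes FastAPI and httpx; keys, limits, and upstream URL are
# placeholders, not production values.
import time
import httpx
from fastapi import FastAPI, HTTPException, Request

app = FastAPI()
UPSTREAM = "http://127.0.0.1:11434"          # e.g. an Ollama instance bound to localhost
VALID_KEYS = {"replace-with-a-real-secret"}  # placeholder key store
RATE_LIMIT = 60                              # requests per key per minute
_windows: dict[str, list[float]] = {}

def within_rate(key: str) -> bool:
    # Naive sliding one-minute window per key, kept in process memory.
    now = time.time()
    window = [t for t in _windows.get(key, []) if now - t < 60]
    window.append(now)
    _windows[key] = window
    return len(window) <= RATE_LIMIT

@app.api_route("/{path:path}", methods=["GET", "POST"])
async def proxy(path: str, request: Request):
    key = request.headers.get("authorization", "").removeprefix("Bearer ").strip()
    if key not in VALID_KEYS:
        raise HTTPException(status_code=401, detail="missing or invalid API key")
    if not within_rate(key):
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    # Forward the authenticated, rate-limited request to the local model server.
    async with httpx.AsyncClient() as client:
        upstream = await client.request(
            request.method,
            f"{UPSTREAM}/{path}",
            content=await request.body(),
            headers={"content-type": request.headers.get("content-type", "application/json")},
        )
    return upstream.json()  # sketch assumes JSON responses from the upstream
```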

Risks & Open Questions

Attribution confidence: Research links the campaign to specific aliases and infrastructure patterns, but the confidence level cannot be independently assessed.

MCP exploitation depth: Large-scale reconnaissance is described, but the extent to which probing progressed to confirmed lateral movement is not established.

Detection reliability: Behavioral indicators such as placeholder key usage and model enumeration may overlap with legitimate testing, raising questions about false positive rates.

Further Reading

  • Pillar Security
  • GreyNoise Intelligence
  • BleepingComputer
  • TechZine
  • InfoWorld
  • CtrlAltNod
  • Sysdig
  • Entro Security
  • SC Media