Following Gamaredon's Infrastructure Rotations using Kraken (1/7)


by Robin Dost

Part 1 of 7 of building the Malwarebox Ecosystem
Official Website: https://kraken.malwarebox.eu
Whitepaper

Tracking Gamaredon infrastructure is frustrating.

Domains rotate.
IPs disappear.
Dead-drops change.
And your IOC list is outdated before your report is even finished.

This is exactly where traditional tracking breaks.

Infrastructure is not a list.
It’s a system.

So instead of chasing indicators, I built something else: Kraken.

An actor-centric platform that tracks infrastructure as a continuously evolving graph.

Over the past months I used Kraken to follow Gamaredon's infrastructure rotations, automatically expanding clusters via passive DNS, dead-drop resolution and additional enrichment, and keeping visibility even as things change.

Kraken is still in evaluation, but it already shows why this approach works.

Gamaredon is basically the perfect test case: fast rotations, simple patterns, constant change.

Annoying if you rely on IOC lists.
Interesting if you actually track the system behind it.


The Problem

The core problem is simple:

Most threat intelligence workflows are still indicator-centric (IOC-centric).

Indicators are collected, stored and shared as lists.
But infrastructure operated by threat actors is not a list.

It is a system.

Domains resolve to IPs
IPs host multiple domains
Dead-drops reference infrastructure
Passive DNS reveals historical relationships

Once you model these relationships as a graph and not as a list, entirely new analysis possibilities show up.


Methodology

I no longer collect indicators and throw them into yet another IOC list, as I often did before. The approach used here focuses on tracking infrastructure as a system.

Gamaredon infrastructure changes constantly. Domains rotate, IPs disappear, new dead-drops appear and old ones quietly vanish again. If you try to track this using static indicator lists you quickly run into a simple problem:
your data ages faster than your report.

So instead of treating indicators as the final result, they are treated as entry points.

Each domain, IP address or dead-drop reference becomes a starting node from which additional infrastructure can be discovered through relationships.

Actor-Centric Tracking

The key idea behind this is fairly simple.

I do not just track indicators; the whole tracking process focuses on the actor and the infrastructure ecosystem they operate.
Indicators are therefore not stored as isolated artifacts but as nodes within a larger infrastructure graph.

Some examples:

  • Domains resolve to IP addresses
  • IP addresses host multiple domains
  • Dead-drop channels reference infrastructure
  • Passive DNS exposes historical relationships between these elements

When you follow these relationships consistently, infrastructure clusters start to emerge on their own as Kraken collects data.

Continuous Collection

Manual lookups are fine for small investigations, but they do not scale well when tracking infrastructure that changes constantly.
To deal with this problem, infrastructure collection is automated through small collection pipelines which continuously process new data as it arrives.

These pipelines typically follow a very simple structure:

Source > Extract > Normalize > Enrich

A source may be a Telegram channel used as a dead-drop, a blog platform or any location where infrastructure information can be found.

Once infrastructure artifacts such as domains or IP addresses are extracted, they are normalized and passed to enrichment stages which attempt to expand the infrastructure footprint.
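To make the pipeline stages concrete, here is a minimal Python sketch of the Extract and Normalize steps. The regexes, refang rules and sample input are illustrative assumptions, not Kraken's actual implementation:

```python
import re

# Illustrative patterns -- real extraction needs stricter validation.
DOMAIN_RE = re.compile(r"\b([a-z0-9-]+(?:\.[a-z0-9-]+)+)\b", re.I)
IPV4_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")

def normalize(text: str) -> str:
    """Normalize stage: refang common defanging styles and lowercase."""
    return (text.strip()
                .lower()
                .replace("[.]", ".")
                .replace("(.)", ".")
                .replace("hxxp", "http"))

def extract_artifacts(raw: str) -> dict:
    """Extract stage: pull candidate domains and IPs from raw source content."""
    text = normalize(raw)
    ips = set(IPV4_RE.findall(text))
    # The domain regex also matches dotted IPs, so filter those out.
    domains = {d for d in DOMAIN_RE.findall(text) if d not in ips}
    return {"domains": sorted(domains), "ips": sorted(ips)}

artifacts = extract_artifacts("Update at hxxp://example[.]com/drop, fallback 198.51.100.7")
```

Real-world extraction needs stricter validation (TLD checks, proper URL parsing), but the shape of the stage is the same: raw source content in, normalized artifacts out, ready for enrichment.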

Infrastructure Graph

All observed artifacts and relationships are stored within an intelligence graph.
Nodes represent infrastructure elements, while edges in the graph represent observed relationships between them.

This model makes it possible to pivot through infrastructure in multiple directions and observe how infrastructure clusters evolve over time.

In practice this basically turns infrastructure tracking into a continuous mapping process (rather than a one-time indicator collection exercise).
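As a rough sketch of this model (not Kraken's actual data structures), an infrastructure graph only needs typed nodes, labeled edges and a pivot operation:

```python
from collections import defaultdict

class InfraGraph:
    """Toy infrastructure graph: nodes are artifacts, edges are observed relationships."""

    def __init__(self):
        self.nodes = {}                  # artifact value -> type ("domain", "ip", ...)
        self.edges = defaultdict(set)    # artifact value -> {(relation, neighbor), ...}

    def add_node(self, value, node_type):
        self.nodes[value] = node_type

    def add_edge(self, src, relation, dst):
        # Record both directions so pivots work from either endpoint.
        self.edges[src].add((relation, dst))
        self.edges[dst].add((relation, src))

    def pivot(self, value):
        """Return all artifacts directly related to the given one."""
        return sorted(neighbor for _, neighbor in self.edges[value])

g = InfraGraph()
g.add_node("example-drop.com", "domain")
g.add_node("198.51.100.7", "ip")
g.add_node("second-stage.net", "domain")
g.add_edge("example-drop.com", "resolves_to", "198.51.100.7")
g.add_edge("second-stage.net", "resolved_to", "198.51.100.7")   # historical relation
related = g.pivot("198.51.100.7")
```

Pivoting on the shared IP immediately surfaces both domains, which is exactly the multi-directional traversal a flat IOC list cannot express.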


Data Collection

A common pattern observed in Gamaredon operations is the use of publicly accessible locations to distribute infrastructure references.

These locations often act as so called “dead-drops”: pages or channels that contain references to infrastructure which can later be used by infected systems or operators.

We no longer have to monitor these sources manually; the collection process is fully automated. A small collection pipeline periodically retrieves the content of known dead-drop locations and attempts to extract infrastructure artifacts such as domains, (worker) URLs or IP addresses.

Once a reference is identified, the artifact is normalized and passed into the intelligence pipeline where it can be processed further.

Figure 1 shows an example of such a collection run.
The pipeline processes a known dead-drop location and extracts a URL which is later used as a pivot point for additional infrastructure discovery.

After the automatic extraction, the discovered artifact is converted into a structured entity within the intelligence model.

The artifact becomes a first-class infrastructure entity, which allows the system (and analyst) to attach metadata, track historical observations and establish relationships with other intelligence objects.

In this case, the extracted URL is automatically linked to the Gamaredon threat actor, making it possible to track the infrastructure within the context of the actor's operational ecosystem.

Figure 2 shows the extracted URL represented as an infrastructure entity.
The system records the relationship between the artifact and the threat actor, allowing future pivots across related infrastructure elements.

Once infrastructure artifacts are linked to an active actor, they become part of the actor's evolving infrastructure graph.

This helps analysts observe how infrastructure elements connect over time and makes it possible to pivot between entities and communication channels associated with the actor.

I usually do not investigate isolated indicators anymore, because I can observe the structure of the infrastructure ecosystem operated by the threat actor.

Figure 3 shows the Gamaredon actor profile with linked infrastructure entities that were discovered through automated collection pipelines.


Automated Tracking Pipeline

Collecting infrastructure once is rarely useful when dealing with actors like Gamaredon.

Infrastructure appears, disappears and reappears somewhere else. Domains rotate, IP addresses change and new dead-drops appear regularly.
A single snapshot of indicators therefore provides very limited value.

To deal with this, infrastructure tracking is performed within automated tracking pipelines.

A tracking pipeline is essentially a small workflow which periodically collects infrastructure artifacts, processes the results and finally feeds newly discovered artifacts back into the intelligence graph.

Instead of performing manual enrichment during an investigation, the pipeline continuously performs these steps in the background.

Pipeline Structure

Each tracking pipeline follows a simple structure.

A tracking definition specifies what should be monitored and how the resulting artifacts should be processed.
Once triggered, the pipeline executes the series of collection and enrichment modules selected in the tracking definition.

These modules are responsible for extracting infrastructure artifacts, normalizing them and expanding the infrastructure footprint through additional data sources.

Scheduling and Execution

Tracking pipelines run on a scheduled basis and automatically process new data as it appears.

Each execution produces a structured result set which is evaluated by the processing stage of the pipeline. Newly discovered artifacts are converted into infrastructure entities and linked to the relevant threat actor.

Over time this allows the intelligence graph to grow organically as new infrastructure elements are discovered and related artifacts are connected through historical observations.
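A minimal sketch of such a pipeline, with a hypothetical tracking definition and stub callables standing in for real collection and enrichment modules (none of these names come from Kraken itself):

```python
from dataclasses import dataclass

@dataclass
class TrackingDefinition:
    """What to monitor and how to process it (illustrative, not Kraken's schema)."""
    actor: str
    sources: list            # callables returning raw artifacts
    modules: list            # ordered processing/enrichment callables
    interval_hours: int = 6  # scheduling interval

def run_pipeline(definition: TrackingDefinition, graph: dict) -> list:
    """One scheduled execution: collect, process, feed new artifacts into the graph."""
    discovered = []
    for source in definition.sources:
        artifacts = source()                        # collection stage
        for module in definition.modules:           # processing/enrichment stages
            artifacts = module(artifacts)
        for artifact in artifacts:
            if artifact not in graph:               # only genuinely new artifacts
                graph[artifact] = definition.actor  # link the node to the actor
                discovered.append(artifact)
    return discovered

# Stubs standing in for a real dead-drop source and an enrichment module.
fake_dead_drop = lambda: ["example-drop.com", "example-drop.com"]
dedupe = lambda artifacts: sorted(set(artifacts))

graph = {}
definition = TrackingDefinition(actor="Gamaredon",
                                sources=[fake_dead_drop],
                                modules=[dedupe])
new = run_pipeline(definition, graph)
```

Each scheduled run returns only the newly discovered artifacts, already attributed to the actor, so repeated executions grow the graph without duplicating nodes.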

Result

In practice this means that infrastructure tracking no longer depends on manual analyst activity.

Once a tracking definition has been configured, the pipeline continuously monitors the relevant sources and expands the actor's infrastructure graph as new artifacts appear.


Gamaredon Infrastructure Tracking

To demonstrate how the tracking pipeline operates in practice, I configured a small Gamaredon tracking definition to monitor known dead-drop locations used by the actor.

These locations often contain URLs or domains which currently appear, or will later appear, in malicious campaigns.

The tracking pipeline periodically retrieves the content and extracts infrastructure artifacts which can then be used as starting points for further analysis.

Extraction

During one such collection run, the pipeline processed a known dead-drop location and extracted a URL which had not previously been observed within the intelligence dataset.

While a single URL may not appear particularly interesting on its own, it serves as an entry point into the actor's infrastructure ecosystem.

Once the artifact enters the tracking pipeline it becomes a pivot point for further enrichment.

Infrastructure Cluster

Repeated enrichment and pivoting gradually expands the visible infrastructure associated with the actor.

New artifacts discovered through passive DNS or other enrichment sources are automatically linked to the existing actor profile, allowing the infrastructure graph to grow organically as additional relationships are observed.

This process transforms a single infrastructure artifact into a broader cluster of related assets which can be monitored continuously by the tracking pipeline.


Infrastructure Expansion

A single infrastructure artifact rarely provides much information on its own.

A domain or IP address might appear in a dead-drop location, but without additional context it is difficult to determine whether the artifact is actually part of a larger operational infrastructure or simply unrelated noise.

For this reason each newly discovered artifact is treated as a pivot point for further enrichment.

Once an artifact enters the tracking pipeline it is automatically processed by enrichment modules which attempt to expand the observable infrastructure cluster around that artifact.

Passive DNS data is particularly useful for this step.

By examining historical DNS resolutions it becomes possible to identify additional domains that have previously resolved to the same IP address, as well as IP infrastructure that hosted related domains in the past.

While not every discovered artifact will belong to the same actor, this process often reveals clusters of infrastructure which would not be visible when looking at individual indicators in isolation.

As new artifacts are discovered they are automatically added to the intelligence graph and linked to the relevant threat actor when sufficient context is available.

Over time this process gradually expands the visible infrastructure associated with the actor and allows analysts to follow infrastructure rotations across domains, IP addresses and hosting environments.


Findings

Applying the automated tracking pipeline to Gamaredon-related dead-drop locations quickly revealed several patterns in the actor's infrastructure usage.

While the dataset used in this analysis is relatively small, a number of observations could already be made regarding infrastructure rotation and clustering behaviour.

Infrastructure Clusters

Passive DNS expansion frequently revealed clusters of domains associated with the same hosting infrastructure.

In several cases multiple domains discovered through enrichment stages resolved to the same IP address or appeared historically connected through shared DNS infrastructure.

These clusters provide additional nodes which may lead to previously unobserved infrastructure related to the actor.

Dead-Drop Usage

Dead-drop locations appear to play an important role in distributing infrastructure references.

Public platforms such as blogs or messaging channels can be updated quickly and allow operators to rotate infrastructure without modifying malware samples directly.

Monitoring these locations therefore provides an effective entry point for continuous infrastructure discovery.

Value of Continuous Tracking

The observations above highlight the value of continuous infrastructure tracking.

While individual indicators may appear and disappear quickly, the relationships between infrastructure artifacts often persist long enough to reveal broader infrastructure clusters.

By automatically collecting and enriching infrastructure artifacts over time, it becomes possible to map parts of the actor's infrastructure ecosystem.


Limitations

While the approach described above proved useful for discovering and tracking infrastructure artifacts, limitations should be taken into account when interpreting the results.

First, infrastructure enrichment based on passive DNS data is inherently incomplete. Passive DNS datasets depend on external collection sources and may not contain the full historical resolution history of a domain or IP address. As a result, certain infrastructure relationships may remain invisible to the analysis.

Second, infrastructure expansion through DNS relationships can produce noise. Shared hosting environments, cloud infrastructure and content delivery networks frequently host unrelated domains on the same IP addresses. Without additional information these relationships can lead to false associations within the infrastructure graph.

Another limitation is that dead-drop monitoring only provides visibility into infrastructure that is publicly referenced by the actor. Infrastructure used exclusively within malware samples or internal command-and-control channels may not appear in these sources and therefore remains outside the scope of this analysis (though I am working on a solution for this).

Finally, Kraken itself is currently in an evaluation phase. While the platform already supports automated tracking pipelines and infrastructure modeling, additional modules and enrichment sources are still being developed.
Future iterations will improve infrastructure expansion and reduce noise introduced by shared hosting environments.


About Kraken

Kraken is a modular cyber threat intelligence orchestration platform designed for continuous infrastructure tracking and actor-centric intelligence modeling.

It models infrastructure as a relationship graph between domains, IP addresses, communication channels and other infrastructure artifacts. Automated tracking pipelines collect and enrich infrastructure data and continuously extend the intelligence graph when new artifacts appear.

The platform is currently in an evaluation phase (version 0.9.1-eval) and actively developed. Additional collection and enrichment modules are being added to improve infrastructure discovery and analysis capabilities.

A more detailed description of the platform architecture and intelligence pipeline is available in the Kraken technical whitepaper.

To gather early feedback from practitioners in the threat intelligence community, a small number of early evaluation access slots will be made available during 2026. The initial evaluation phase will be limited to ten vetted participants. Interested researchers or organizations can already request consideration for this early access program. Due to the limited number of evaluation slots, requests will go through a strict vetting process before access is granted.

Request access here: https://kraken.malwarebox.eu


Conclusion

Tracking infrastructure operated by threat actors such as Gamaredon requires more than static lists of indicators. Infrastructure changes quickly and isolated artifacts often provide little context on their own.

By combining automated collection pipelines with relationship-based infrastructure modeling, it becomes possible to gradually map portions of an actor's infrastructure ecosystem and observe how it evolves over time.

While my approach described in this article represents only a small subset of possible tracking techniques, it demonstrates how automated infrastructure collection and enrichment can support continuous threat intelligence workflows.

Further development of the Kraken platform will focus on expanding collection capabilities and improving infrastructure correlation across multiple data sources.

Actor-Centric Defensive Prioritization – ACDP (2/7)

by Robin Dost
Part 2 of 7 of building the Malwarebox Ecosystem
Official Website: https://acdp.malwarebox.eu
GitHub: https://github.com/MalwareboxEU/ACDP

Introduction

Much of my previous work has focused on understanding threat actors not through individual malware samples alone, but through infrastructure, operational behavior, and strategic intent. Over time, a recurring problem became increasingly difficult to ignore:

While our visibility into attacker tradecraft continues to improve, our ability to decide what to defend first often does not.

Most security frameworks excel at describing what exists (techniques, controls, and best practices) but remain largely silent on what should matter most when resources are limited or when a specific threat actor becomes relevant.
Instead of introducing another generic framework, this article applies an actor-centric prioritization methodology focused on attacker intent and impact.
This approach emerged from a practical need I repeatedly encountered: organizations often had technically sound controls in place, yet still failed to prevent strategic loss, because their defensive investments were misaligned with the attacker's actual objectives.

Why ACDP Was Developed

Actor-Centric Defensive Prioritization (ACDP) was developed to address a persistent gap between threat intelligence and defensive decision-making.

Threat intelligence typically answers questions such as:

  • Who is active?
  • What techniques are being used?

Defensive planning, however, must answer different questions:

  • What failure would be unacceptable?
  • Which controls meaningfully disrupt this actor's strategy?

In real-world environments, especially those facing state-aligned or destructive actors, it is rarely feasible to implement every recommended control simultaneously.
Prioritization becomes inevitable.

ACDP exists to make that prioritization explicit, defensible, and actor-aware.

What Makes This Approach Different

ACDP does not attempt to replace established frameworks such as MITRE ATT&CK or NIST.
Instead, it operates one level above them, using their outputs as inputs.

Its defining characteristic is a deliberate shift in perspective:

  • from technique coverage to strategic disruption
  • from static maturity models to adaptive decision-making
  • from generic best practices to context-driven trade-offs

The result is a reasoned ordering of defensive actions, based on how much they interfere with an adversary's intent and how much damage they prevent if that interference fails.

Advantages of an Actor-Centric Approach

The strengths of ACDP are primarily practical:

  • ACDP enables rapid reprioritization when threat actor relevance changes
  • It aligns defensive investment with attacker objectives instead of abstract risk scores
  • It acknowledges budget and operational constraints instead of assuming ideal conditions
  • It helps explain why certain controls take priority, a critical factor in both technical and executive discussions

Most importantly: ACDP encourages organizations to think in terms of losing conditions, not only detection coverage.

Limitations and Trade-Offs

ACDP is not without drawbacks.

Because it is actor-centric, its effectiveness depends on the quality of the underlying threat intelligence.
Incorrect assumptions about attacker intent can lead to misplaced priorities.
Additionally, ACDP does not provide prescriptive control definitions or implementation guidance. It assumes foundational security hygiene already exists and focuses solely on ordering, not designing, defensive measures.
Finally, ACDP deliberately avoids the comfort of exhaustive coverage. It forces uncomfortable decisions, choosing what not to prioritize, which may challenge existing security narratives.

A Practical Motivation

The motivation behind ACDP is operational necessity.

In environments where time, budget, and certainty are constrained, security decisions must be made before all information is available. ACDP provides a structured way to make those decisions without pretending that all risks are equal.

If defensive priorities do not change when attacker intent changes, prioritization is no longer strategic.

ACDP in Practice: Quantified Prioritization Example

This section demonstrates how ACDP produces concrete priorities, using a scoring example against Sandworm.

Step 0: Define the Objective Profile

Based on historical and current operations, Sandworm is modeled as a:

Destructive, state-aligned actor prioritizing strategic impact over access longevity

This assumption directly influences weighting.

Step 1: Define the Scoring Axes

Each defensive control is evaluated across four axes.

Axis A: Actor Disruption Value (ADV)

How strongly does this control interfere with the actor's campaign strategy?

Score  Meaning
5      Breaks or invalidates the strategy
4      Forces significant tradecraft change
3      Causes friction
2      Minor inconvenience
1      Negligible effect

Axis B: Impact Risk Reduction (IRR)

How much real damage does this control prevent if the actor succeeds elsewhere?

Score  Meaning
5      Prevents catastrophic / irreversible impact
4      Strongly limits blast radius
3      Partial damage reduction
2      Minimal reduction
1      Cosmetic

Axis C: Cost & Operational Complexity (CC)

How realistic is implementation under time and budget pressure?

Score  Meaning
5      Low cost, low friction
4      Moderate effort
3      Manageable complexity
2      High effort
1      Very expensive / disruptive

Axis D: Detection-to-Decision Time (DDT)

Does this control provide usable signal early enough to change outcomes?

Score  Meaning
5      Immediate / pre-impact
4      Early warning
3      Delayed but actionable
2      Mostly reactive
1      Post-incident only

Step 2: Actor-Specific Weighting

For a destructive actor, impact prevention and strategic disruption dominate:

  • ADV = 35%
  • IRR = 35%
  • CC = 15%
  • DDT = 15%

Step 3: Scoring Selected Controls

Candidate Controls

  • Immutable offline backups
  • VPN / edge device inventory & patch SLAs
  • Historical DNS & infrastructure analysis
  • PowerShell command-line logging
  • Scheduled task auditing
  • Security awareness training

Scoring Table

Control                    ADV  IRR  CC  DDT
Immutable offline backups    5    5   3    5
Edge inventory & patching    4    4   4    4
Historical DNS analysis      4    3   4    4
PowerShell logging           3    2   4    3
Scheduled task auditing      3    2   3    3
Awareness training           1    1   5    1

Step 4: Priority Index Calculation

Formula

PI = (ADV × 0.35) + (IRR × 0.35) + (CC × 0.15) + (DDT × 0.15)

Example Calculations

Immutable offline backups

(5×0.35) + (5×0.35) + (3×0.15) + (5×0.15) = 4.70

PowerShell logging

(3×0.35) + (2×0.35) + (4×0.15) + (3×0.15) = 2.80
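The full calculation is easy to reproduce. Here is a short Python sketch using the weights from Step 2 and the axis scores from the scoring table for the two controls worked through above:

```python
# Actor-specific weights for a destructive actor (from Step 2).
WEIGHTS = {"ADV": 0.35, "IRR": 0.35, "CC": 0.15, "DDT": 0.15}

# Axis scores from the scoring table.
SCORES = {
    "Immutable offline backups": {"ADV": 5, "IRR": 5, "CC": 3, "DDT": 5},
    "PowerShell logging":        {"ADV": 3, "IRR": 2, "CC": 4, "DDT": 3},
}

def priority_index(axis_scores: dict) -> float:
    """PI = weighted sum of the four axis scores, rounded to two decimals."""
    return round(sum(axis_scores[axis] * w for axis, w in WEIGHTS.items()), 2)

# Order controls by descending priority index.
ranked = sorted(SCORES, key=lambda name: priority_index(SCORES[name]), reverse=True)
```

Swapping in a different weight profile (e.g. for an espionage-focused actor that values access longevity) reorders the same controls, which is the point: the ranking follows attacker intent, not a fixed maturity model.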

Step 5: Final Prioritization

Control                    PI    Priority Tier
Immutable offline backups  4.70  Tier 1
Edge inventory & patching  4.00  Tier 1
Historical DNS analysis    3.65  Tier 2
PowerShell logging         2.80  Tier 2
Scheduled task auditing    2.65  Tier 3
Awareness training         1.60  Tier 4

Step 6: Interpretation

The results highlight several non-obvious truths:

  • Recovery capability outranks detection
  • Infrastructure hygiene outranks user behavior
  • Controls that look impressive may rank low
  • Controls that feel boring often dominate

This is basically the mathematical expression of attacker intent.

Why the Calculation Matters

Without explicit scoring:

  • prioritization becomes opinion
  • decisions become political
  • trade-offs remain implicit

ACDP forces those trade-offs into the open.

If you cannot explain your priorities numerically, you are not prioritizing: you are rationalizing.

Closing Thoughts

Oftentimes, security failures are caused not by missing controls, but by misaligned priorities.
ACDP was developed to address this misalignment by shifting the focus from technique coverage to attacker intent, and from abstract risk to tangible impact. Its value lies in deliberate selection.
Used correctly, ACDP does not replace existing frameworks, but sharpens them by forcing uncomfortable but necessary decisions.
This methodology is intentionally opinionated!


If you have any feedback or questions, please feel free to contact me by email.
I welcome criticism and feedback!