Following Gamaredon's Infrastructure Rotations Using Kraken


by Robin Dost

Tracking Gamaredon infrastructure is frustrating.

Domains rotate.
IPs disappear.
Dead-drops change.
And your IOC list is outdated before your report is even finished.

This is exactly where traditional tracking breaks.

Infrastructure is not a list.
It’s a system.

So instead of chasing indicators, I built something else: Kraken.

An actor-centric platform that tracks infrastructure as a continuously evolving graph.

Over the past months I used Kraken to follow Gamaredon's infrastructure rotations, automatically expand clusters via passive DNS, dead-drop resolution and additional enrichment, and keep visibility even as things change.

Kraken is still in evaluation, but it already shows why this approach works.

Gamaredon is basically the perfect test case: fast rotations, simple patterns, constant change.

Annoying if you rely on IOC lists.
Interesting if you actually track the system behind it.


The Problem

The core problem is simple:

Most threat intelligence workflows are still indicator-centric (IOC-centric).

Indicators are collected, stored and shared as lists.
But infrastructure operated by threat actors is not a list.

It is a system.

  • Domains resolve to IPs
  • IPs host multiple domains
  • Dead-drops reference infrastructure
  • Passive DNS reveals historical relationships

Once you model these relationships as a graph rather than a list, entirely new analysis possibilities open up.


Methodology

Rather than collecting indicators and throwing them into yet another IOC list, as I often did before, the approach used here focuses on tracking infrastructure as a system.

Gamaredon infrastructure changes constantly. Domains rotate, IPs disappear, new dead-drops appear and old ones quietly vanish again. If you try to track this using static indicator lists you quickly run into a simple problem:
your data ages faster than your report.

So instead of treating indicators as the final result, they are treated as entry points.

Each domain, IP address or dead-drop reference becomes a starting node from which additional infrastructure can be discovered through relationships.

Actor-Centric Tracking

The key idea behind this is fairly simple.

I do not just track indicators; the whole tracking process focuses on the actor and the infrastructure ecosystem they operate.
Indicators are therefore not stored as isolated artifacts but as nodes within a larger infrastructure graph.

Some examples:

  • Domains resolve to IP addresses
  • IP addresses host multiple domains
  • Dead-drop channels reference infrastructure
  • Passive DNS exposes historical relationships between these elements

When you follow these relationships consistently, infrastructure clusters start to emerge on their own, simply by letting Kraken collect data.

Continuous Collection

Manual lookups are fine for small investigations, but they do not scale well when tracking infrastructure that changes constantly.
To deal with this problem, infrastructure collection is automated through small collection pipelines which continuously process new data as it arrives.

These pipelines typically follow a very simple structure:

Source > Extract > Normalize > Enrich

A source may be a Telegram channel used as a dead-drop, a blog platform or any location where infrastructure information can be found.

Once infrastructure artifacts such as domains or IP addresses are extracted, they are normalized and passed to enrichment stages which attempt to expand the infrastructure footprint.
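As a rough sketch of these four stages, using my own illustrative function names and a simplistic domain regex rather than Kraken's actual code, the flow might look like:

```python
import re

# Illustrative only: a real extractor would handle IPs, URLs and edge cases.
DOMAIN_RE = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", re.IGNORECASE)

def refang(text):
    # undo common defanging so artifacts can be matched
    return text.replace("[.]", ".").replace("hxxp", "http")

def extract(text):
    # pull domain-like artifacts out of raw source content
    return DOMAIN_RE.findall(refang(text))

def normalize(artifact):
    return artifact.lower().rstrip(".")

def enrich(artifact):
    # placeholder: a real module would query passive DNS, WHOIS, etc.
    return {"artifact": artifact, "related": []}

def run_pipeline(raw_text):
    seen, results = set(), []
    for candidate in extract(raw_text):
        artifact = normalize(candidate)
        if artifact not in seen:
            seen.add(artifact)
            results.append(enrich(artifact))
    return results
```

Normalization matters here: defanged and differently-cased variants of the same domain collapse into a single artifact before enrichment runs.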

Infrastructure Graph

All observed artifacts and relationships are stored within an intelligence graph.
Nodes represent infrastructure elements, while edges in the graph represent observed relationships between them.

This model makes it possible to pivot through infrastructure in multiple directions and observe how infrastructure clusters evolve over time.

In practice this turns infrastructure tracking into a continuous mapping process rather than a one-time indicator collection exercise.
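A minimal version of such a graph, with typed relationships and a pivot helper, could look like the following. This is an illustration only; the node naming and API are my assumptions, not Kraken's:

```python
from collections import defaultdict

class InfraGraph:
    """Toy nodes-and-edges model; for brevity, edges are stored undirected."""
    def __init__(self):
        self.edges = defaultdict(set)  # node -> {(relation, neighbor), ...}

    def relate(self, a, relation, b):
        # record the relationship in both directions so pivots work either way
        self.edges[a].add((relation, b))
        self.edges[b].add((relation, a))

    def pivot(self, node):
        # all nodes directly related to the given node
        return {neighbor for _relation, neighbor in self.edges[node]}

g = InfraGraph()
g.relate("dead-drop:t.me/example", "references", "evil.example.com")
g.relate("evil.example.com", "resolves_to", "203.0.113.10")
g.relate("other.example.net", "resolves_to", "203.0.113.10")
```

Pivoting on the shared IP immediately surfaces both domains, which is exactly the kind of relationship a flat IOC list hides.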


Data Collection

A common pattern observed in Gamaredon operations is the use of publicly accessible locations to distribute infrastructure references.

These locations often act as so-called “dead-drops”: pages or channels that contain references to infrastructure which can later be used by infected systems or operators.

These sources no longer have to be monitored manually; the collection process is fully automated. A small collection pipeline periodically retrieves the content of known dead-drop locations and attempts to extract infrastructure artifacts such as domains, (worker) URLs or IP addresses.

Once a reference is identified, the artifact is normalized and passed into the intelligence pipeline where it can be processed further.

Figure 1 shows an example of such a collection run.
The pipeline processes a known dead-drop location and extracts a URL which is later used as a pivot point for additional infrastructure discovery.

After the automatic extraction, the discovered artifact is converted into a structured entity within the intelligence model.

The artifact becomes a first-class infrastructure entity, which allows the system (and analyst) to attach metadata, track historical observations and establish relationships with other intelligence objects.

In this case, the extracted URL is automatically linked to the Gamaredon threat actor, making it possible to track the infrastructure within the context of the actor's operational ecosystem.

Figure 2 shows the extracted URL represented as an infrastructure entity.
The system records the relationship between the artifact and the threat actor, allowing future pivots across related infrastructure elements.

Once infrastructure artifacts are linked to an active actor, they become part of the actor's evolving infrastructure graph.

This helps analysts to observe how infrastructure elements connect to each other over time and it makes it possible to pivot between entities and communication channels associated with the actor.

I usually no longer investigate isolated indicators, because I can observe the structure of the infrastructure ecosystem operated by the threat actor.

Figure 3 shows the Gamaredon actor profile with linked infrastructure entities that were discovered through automated collection pipelines.


Automated Tracking Pipeline

Collecting infrastructure once is rarely useful when dealing with actors like Gamaredon.

Infrastructure appears, disappears and reappears somewhere else. Domains rotate, IP addresses change and new dead-drops appear regularly.
A single snapshot of indicators therefore provides very limited value.

To deal with this, infrastructure tracking is performed within automated tracking pipelines.

A tracking pipeline is essentially a small workflow which periodically collects infrastructure artifacts, processes the results and finally feeds newly discovered artifacts back into the intelligence graph.

Instead of performing manual enrichment during an investigation, the pipeline continuously performs these steps in the background.

Pipeline Structure

Each tracking pipeline follows a simple structure.

A tracking definition specifies what should be monitored and how the resulting artifacts should be processed.
Once triggered, the pipeline executes the series of collection and enrichment modules selected in the tracking definition.

These modules are responsible for extracting infrastructure artifacts, normalizing them and expanding the infrastructure footprint through additional data sources.

Scheduling and Execution

Tracking pipelines run on a scheduled basis and automatically process new data as it appears.

Each execution produces a structured result set which is evaluated by the processing stage of the pipeline. Newly discovered artifacts are converted into infrastructure entities and linked to the relevant threat actor.

Over time this allows the intelligence graph to grow organically as new infrastructure elements are discovered and related artifacts are connected through historical observations.
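The diffing step of one scheduled execution can be sketched like this (hypothetical function and variable names, not Kraken internals):

```python
def process_run(known, collected, actor_links, actor):
    """One scheduled execution: keep only previously unseen artifacts and link them."""
    new = set(collected) - known     # artifacts not seen in any earlier run
    known |= new                     # remember them for future executions
    for artifact in new:
        actor_links.setdefault(actor, set()).add(artifact)
    return new

known, links = set(), {}
run1 = process_run(known, {"a.example.com", "b.example.com"}, links, "Gamaredon")
run2 = process_run(known, {"b.example.com", "c.example.com"}, links, "Gamaredon")
# run2 yields only c.example.com; b.example.com was already known
```

The set difference is what makes repeated executions cheap: only genuinely new artifacts trigger entity creation and enrichment.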

Result

In practice this means that infrastructure tracking no longer depends on manual analyst activity.

Once a tracking definition has been configured, the pipeline continuously monitors the relevant sources and expands the actor's infrastructure graph as new artifacts appear.


Gamaredon Infrastructure Tracking

To demonstrate how the tracking pipeline operates in practice, I configured a small Gamaredon tracking definition to monitor known dead-drop locations used by the actor.

These locations often contain URLs or domains which appear in current or subsequent malicious campaigns.

The tracking pipeline periodically retrieves the content and extracts infrastructure artifacts which can then be used as starting points for further analysis.

Extraction

During one such collection run, the pipeline processed a known dead-drop location and extracted a URL which had not previously been observed within the intelligence dataset.

While a single URL may not appear particularly interesting on its own, it serves as an entry point into the actor's infrastructure ecosystem.

Once the artifact enters the tracking pipeline it becomes a pivot point for further enrichment.

Infrastructure Cluster

Repeated enrichment and pivoting gradually expands the visible infrastructure associated with the actor.

New artifacts discovered through passive DNS or other enrichment sources are automatically linked to the existing actor profile, allowing the infrastructure graph to grow organically as additional relationships are observed.

This process transforms a single infrastructure artifact into a broader cluster of related assets which can be monitored continuously by the tracking pipeline.


Infrastructure Expansion

A single infrastructure artifact rarely provides much information on its own.

A domain or IP address might appear in a dead-drop location, but without additional context it is difficult to determine whether the artifact is actually part of a larger operational infrastructure or simply unrelated noise.

For this reason each newly discovered artifact is treated as a pivot point for further enrichment.

Once an artifact enters the tracking pipeline it is automatically processed by enrichment modules which attempt to expand the observable infrastructure cluster around that artifact.

Passive DNS data is particularly useful for this step.

By examining historical DNS resolutions it becomes possible to identify additional domains that have previously resolved to the same IP address, as well as IP infrastructure that hosted related domains in the past.

While not every discovered artifact will belong to the same actor, this process often reveals clusters of infrastructure which would not be visible when looking at individual indicators in isolation.
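The co-hosting pivot can be illustrated with a few toy passive DNS records; real pDNS data would also carry first-seen/last-seen timestamps and record types:

```python
# Toy passive DNS records as (domain, ip) pairs
PDNS = [
    ("evil.example.com", "203.0.113.10"),
    ("drop.example.net", "203.0.113.10"),
    ("unrelated.example.org", "198.51.100.7"),
]

def cohosted(domain, records):
    """Domains that historically resolved to the same IPs as the given domain."""
    ips = {ip for d, ip in records if d == domain}
    return {d for d, ip in records if ip in ips and d != domain}
```

Each domain returned by the pivot becomes a new candidate node, which is then vetted before being linked to the actor.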

As new artifacts are discovered they are automatically added to the intelligence graph and linked to the relevant threat actor when sufficient context is available.

Over time this process gradually expands the visible infrastructure associated with the actor and allows analysts to follow infrastructure rotations across domains, IP addresses and hosting environments.


Findings

Applying the automated tracking pipeline to Gamaredon-related dead-drop locations quickly revealed several patterns in the actor's infrastructure usage.

While the dataset used in this analysis is relatively small, a number of observations could already be made regarding infrastructure rotation and clustering behaviour.

Infrastructure Clusters

Passive DNS expansion frequently revealed clusters of domains associated with the same hosting infrastructure.

In several cases multiple domains discovered through enrichment stages resolved to the same IP address or appeared historically connected through shared DNS infrastructure.

These clusters provide additional nodes which may lead to previously unobserved infrastructure related to the actor.

Dead-Drop Usage

Dead-drop locations appear to play an important role in distributing infrastructure references.

Public platforms such as blogs or messaging channels can be updated quickly and allow operators to rotate infrastructure without modifying malware samples directly.

Monitoring these locations therefore provides an effective entry point for continuous infrastructure discovery.

Value of Continuous Tracking

The observations above highlight the value of continuous infrastructure tracking.

While individual indicators may appear and disappear quickly, the relationships between infrastructure artifacts often persist long enough to reveal broader infrastructure clusters.

By automatically collecting and enriching infrastructure artifacts over time, it becomes possible to map parts of the actor's infrastructure ecosystem.


Limitations

While the approach described above proved useful for discovering and tracking infrastructure artifacts, several limitations should be taken into account when interpreting the results.

First, infrastructure enrichment based on passive DNS data is inherently incomplete. Passive DNS datasets depend on external collection sources and may not contain the full historical resolution history of a domain or IP address. As a result, certain infrastructure relationships may remain invisible to the analysis.

Second, infrastructure expansion through DNS relationships can produce noise. Shared hosting environments, cloud infrastructure and content delivery networks frequently host unrelated domains on the same IP addresses. Without additional information these relationships can lead to false associations within the infrastructure graph.
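One simple mitigation, a rough heuristic of my own rather than Kraken's actual logic, is to flag high fan-out IPs before accepting new edges into the graph:

```python
from collections import Counter

def likely_shared(pdns_records, threshold=50):
    """IPs hosting at least `threshold` distinct domains; the cutoff is a guess
    and would need tuning against real shared-hosting/CDN data."""
    counts = Counter(ip for _domain, ip in set(pdns_records))
    return {ip for ip, n in counts.items() if n >= threshold}
```

Edges through flagged IPs can then be downweighted or held for manual review instead of automatically expanding the cluster.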

Another limitation is that dead-drop monitoring only provides visibility into infrastructure that is publicly referenced by the actor. Infrastructure used exclusively within malware samples or internal command-and-control channels may not appear in these sources and therefore remains outside the scope of this analysis (though I am working on a solution for this).

Finally, Kraken itself is currently in an evaluation phase. While the platform already supports automated tracking pipelines and infrastructure modeling, additional modules and enrichment sources are still being developed.
Future iterations will improve infrastructure expansion and reduce noise introduced by shared hosting environments.


About Kraken

Kraken is a modular cyber threat intelligence orchestration platform designed for continuous infrastructure tracking and actor-centric intelligence modeling.

It models infrastructure as a relationship graph between domains, IP addresses, communication channels and other infrastructure artifacts. Automated tracking pipelines collect and enrich infrastructure data and continuously extend the intelligence graph when new artifacts appear.

The platform is currently in an evaluation phase (version 0.9.1-eval) and actively developed. Additional collection and enrichment modules are being added to improve infrastructure discovery and analysis capabilities.

A more detailed description of the platform architecture and intelligence pipeline is available in the Kraken technical whitepaper.

To gather early feedback from practitioners in the threat intelligence community, a small number of early evaluation access slots will be made available during 2026. The initial evaluation phase will be limited to ten vetted participants. Interested researchers or organizations can already request consideration for this early access program. Due to the limited number of evaluation slots, requests will go through a strict vetting process before access is granted.

Request access here: https://kraken.malwarebox.eu


Conclusion

Tracking infrastructure operated by threat actors such as Gamaredon requires more than static lists of indicators. Infrastructure changes quickly and isolated artifacts often provide little context on their own.

By combining automated collection pipelines with relationship-based infrastructure modeling, it becomes possible to gradually map portions of an actor's infrastructure ecosystem and observe how it evolves over time.

While the approach described in this article represents only a small subset of possible tracking techniques, it demonstrates how automated infrastructure collection and enrichment can support continuous threat intelligence workflows.

Further development of the Kraken platform will focus on expanding collection capabilities and improving infrastructure correlation across multiple data sources.

Observed Telegram Bot Naming Patterns in Recent MuddyWater Malware Activity

by Robin Dost

I recently took a look at the wave of MuddyWater malware samples from 2026 and came across something interesting I wanted to share.

The sample I analyzed (SHA256: 81a6e6416eb7ab6ce6367c6102c031e2ae2730c3c50ab9ce0b8668fec3487848) turns out to be LampoRAT, as already described in Unit 42's article.
According to previous analysis, the malware was developed on a workstation named “Jacob”; more on that in my earlier post.

I won’t go into full detail about the RAT itself; that has already been covered extensively.
I’ll link a few relevant articles below if you’re interested.

Right now my focus is more on actor-centric detection, specifically identifying infrastructure early rather than chasing IOCs after the fact.


Quick overview

The malware uses the Telegram Bot API as a command-and-control channel.

After infection, the client connects to a hardcoded bot token and waits for commands from the operator.

Received commands are executed locally via the Windows shell, and the results are sent back to the attacker via Telegram.

Because all of this runs over legitimate HTTPS traffic to Telegram, it blends in much better than traditional C2 infrastructure.


The interesting part

The actor uses a bot with the username:

stager_51_bot

In offensive operations, a stager is typically a small initial payload that establishes a foothold and then pulls in additional components.

The “51” immediately suggests some form of sequential usage, and that’s where things get interesting.


Enumerating the pattern

I wrote a quick script to check which usernames of the form:

stager_X_bot   (1 ≤ X ≤ 100)

are actually registered.

We don’t even need a Telegram account for this.
Instead, we can abuse the way Telegram’s web interface behaves and completely avoid the API.

If a username exists -> it shows up
If not -> it doesn’t

Simple as that.

Since stager_51_bot is currently offline, here’s how it looks:

Username not taken:


If the user exists, the username is highlighted as the page title.

for x in {1..100}; do
    if curl -s "https://t.me/stager_${x}_bot" | grep -q "tgme_page_title"; then
        echo "Bot exists: stager_${x}_bot"
    fi
    sleep 3
done

(The sleep is just there to avoid rate limiting)

Results:

I then pulled the Telegram IDs for all identified bots and built a small table:

| Username      | Telegram ID | Display Name | Still Active |
|---------------|-------------|--------------|--------------|
| stager_51_bot | 8398566164  | Olalampo     | No           |
| stager_55_bot | 8468064242  | stager_55bot | Yes          |
| stager_56_bot | 8372926576  | foltinao     | Yes          |
| stager_58_bot | 8466129060  | Nikoro       | Yes          |
| stager_59_bot | 8331208203  | hayday       | Yes          |
| stager_60_bot | 8128190363  | clash        | Yes          |
| stager_61_bot | 8357834418  | Asus         | Yes          |
| stager_62_bot | 8405262043  | apple        | Yes          |
| stager_63_bot | 7824201354  | bot          | Yes          |
| stager_64_bot | 8236964013  | active       | Yes          |

Observations

At first glance, it looks like these bots are sequentially created starting at around stager_51_bot.

But once you look at the Telegram IDs, things don’t line up.

While Telegram IDs generally increase over time, they do not match the numeric order of the bot names.

Bots with higher numbers are not necessarily newer, and some appear to have been created earlier despite their naming.

This strongly suggests that the naming scheme is not tied to creation order but is instead controlled by the operator, most likely as part of internal tooling or campaign logic.
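Using the IDs from the table, the mismatch is easy to verify programmatically (a quick sanity check of my own, not part of any tooling):

```python
# Telegram IDs from the table, keyed by the numeric suffix in the bot name
bots = {
    51: 8398566164, 55: 8468064242, 56: 8372926576, 58: 8466129060,
    59: 8331208203, 60: 8128190363, 61: 8357834418, 62: 8405262043,
    63: 7824201354, 64: 8236964013,
}

ids_by_suffix = [bots[k] for k in sorted(bots)]
# If bots were registered in suffix order, IDs would increase monotonically
monotonic = all(a < b for a, b in zip(ids_by_suffix, ids_by_suffix[1:]))
print(monotonic)  # False: naming order does not match creation order
```

This relies on the assumption stated above that Telegram IDs generally increase over time; it rules out sequential creation but says nothing about who created them.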


Another interesting detail is the display names:

  • Random-looking words like Olalampo, Nikoro, foltinao
  • Game-related names like HayDay and Clash
  • Generic words like apple, bot, active

Nothing conclusive here just… weird enough to notice.

Also worth mentioning:

When putting Olalampo, Nikoro or foltinao into a translator, it consistently suggests the same language, despite there being no real translation ^^

No idea if that means anything. Probably nothing. Still interesting.


Attribution (or lack of it)

There is currently no definitive proof that all identified bots belong to the same campaign or actor.

The observed connections are based on naming patterns and timing and should be treated as a hypothesis, not a confirmed attribution.

And that’s important.


Why this matters

The interesting part here is not a single bot.

It’s the pattern.

Instead of looking at individual IOCs, we’re seeing a reusable naming and infrastructure scheme, something that can potentially be tracked and predicted.

Detection / Prevention

Looking at the Telegram requests generated by the RAT, we can already preemptively block known infrastructure.

Example request:

https://api.telegram.org/bot8398566164:AAEJbk6EOirZ_ybm4PJ-q8mOpr1RkZx1H7Q/GetWebhookInfo	

(Token is invalid, don’t bother)

Since we have multiple bot IDs, we can derive detection patterns like:

https://api.telegram.org/bot8468064242.*
https://api.telegram.org/bot8372926576.*
https://api.telegram.org/bot8466129060.*
https://api.telegram.org/bot8331208203.*
https://api.telegram.org/bot8128190363.*
https://api.telegram.org/bot8357834418.*
https://api.telegram.org/bot8405262043.*
https://api.telegram.org/bot7824201354.*
https://api.telegram.org/bot8236964013.*
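These patterns can also be generated and applied programmatically; a minimal sketch (the `is_known_bot_url` helper is my own, and the token in the test URL is a placeholder):

```python
import re

# Bot IDs observed above; \b anchors on the full ID to avoid prefix matches
BOT_IDS = [
    "8468064242", "8372926576", "8466129060", "8331208203", "8128190363",
    "8357834418", "8405262043", "7824201354", "8236964013",
]

PATTERNS = [re.compile(rf"https://api\.telegram\.org/bot{bid}\b") for bid in BOT_IDS]

def is_known_bot_url(url):
    """Return True if the URL targets one of the observed Telegram bot IDs."""
    return any(p.match(url) for p in PATTERNS)
```

The same list could feed proxy blocklists or network detection rules directly.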

But more importantly:

Instead of blocking static IOCs, we can move towards pattern-based detection, for example:

  • monitoring Telegram API usage
  • correlating with suspicious bot naming schemes
  • identifying unusual communication patterns

Long term, this is far more robust than chasing individual indicators.


Final thoughts

I’ve been experimenting with different tracking techniques to identify patterns like this earlier.

To make that easier, I built a platform that helps me automate exactly this kind of analysis.

More on that soon, releasing on Monday :3


Conclusion

The observed naming and infrastructure pattern shows that even simple components like Telegram bots can be used to build reusable and scalable C2 infrastructure.

Even without definitive attribution, analyzing these patterns allows early identification of potential infrastructure and enables proactive detection and blocking.

The real value here is the pattern behind it.


More References

https://unit42.paloaltonetworks.com/boggy-serpens-threat-assessment

https://www.group-ib.com/blog/muddywater-operation-olalampo