An unknown malware
Source code
Features
The recruitment process
Those famously unbiased, next‑gen XDR/NGAV solutions that never miss a thing
LLVM-EX howto
A kernel‑level network connection hiding malware
A great invasion of your privacy
Data breach

A previously unknown malware

Detection‑based security does not necessarily recognize, and therefore cannot protect against, unknown threats, including new malware. This limitation applies to virtually all vendors listed on VirusTotal.com as well as the majority of enterprise‑grade security providers, since they rely primarily on signature‑ or pattern‑based detection. Behavior‑based detection, for its part, tends to produce false positives easily. With only a few exceptions, based on the author's earlier testing with unpublished malware samples, unknown threats are typically discovered and addressed by these legacy detection‑based endpoint security solutions only after they have already caused damage to their customers.

Malware can map out most of a company's internal network simply by examining the process list and each process's network connections, all without the user or the organization noticing. And if you think about unauthorized actions performed by an endpoint protection solution, this would definitely qualify, yet at the same time it is part of the core functionality these systems rely on.

The question now is whether this information is being used for the wrong purposes or sold to third parties. War is also fought in cyberspace. So keep in mind that a user‑level, non‑admin program is capable of doing all of this.

You may be very mistaken if you claim that even Defender is enough. It's not that straightforward once you understand what's happening under the hood and grasp the full concept.

There is no solution on the market that achieves 100% detection. It is also important to note the misleading marketing claims found on various vendor websites, where long market presence is implied to equal superior protection. Market research shows that many, if not most, vendors sell multiple product versions under different names, even though the actual technical differences between them are often purely cosmetic. The screenshot below, captured during my unbiased testing, supports my claims.

Key insight: new frontier‑level models, such as Anthropic's Claude Mythos, are capable of autonomously discovering, and even exploiting, zero‑day vulnerabilities at scale. This represents an entirely new level of capability in both cyber offense and defense. This is precisely where kernel‑level allow‑listing becomes critical: new and unknown threats simply do not execute in the first place. As a result, hunting for new threats becomes dramatically simpler and faster, because the only things that need to be examined are programs that have never been used within that organization before.

A piece of malware running under standard user rights can inspect the process list to determine whether any process exposes a potential CVE‑????‑????‑type vulnerability that could be leveraged for privilege escalation.

Some of the most dangerous unknown threats are malware or ransomware that exploit previously undiscovered zero‑day vulnerabilities. Modern offensive AI models can now identify exploitable bugs far faster than any human researcher.

This is exactly why HEX DEREF ANTI‑MALWARE X includes a fully functional kernel‑level Authenticode verification mechanism: unknown or unsigned code simply never executes, not even for a fraction of a second, unless it is explicitly on the allowed list. The product runs alongside any endpoint protection suite and any kernel‑level anti‑cheat, and malware has no way of knowing it is an analysis tool.

Is there even any privacy left when using these solutions? Collected data:

  • Browsing history, including the Tor Browser
  • File history
  • Process list

What data do endpoint protection solutions collect (e.g., those listed on VirusTotal.com)? Browsing history is usually sent with no opt‑out option. File names, sizes, and their SHA‑256 hashes are collected as well, because without this information traditional antivirus products cannot determine whether an executable (or some other file) on the device is malicious. Nothing on a user's device should be transmitted to a vendor's servers without explicitly asking the user for permission; otherwise the cybersecurity product itself is performing unauthorized actions, which is effectively a data breach. At this point we are approaching GDPR violations, and this could even lead to the revocation of the vendor's EV code‑signing certificate. None of the above can be disabled through any antivirus solution's settings.

Files are usually named according to what they relate to, for example Supplier_shipments_03-2025.xlsx or Packaging_equipment_project_xyz.xlsx; it all depends on the company's naming standard. When a traditional antivirus solution performs a "Scan all files" operation (IoC), it could, in principle, enable industrial espionage without the user's awareness, based solely on file names. To my understanding, no endpoint protection product or kernel‑level anti‑cheat mechanism is permitted to transmit any files from a device without first obtaining explicit user consent; doing so would be functionally comparable to information‑stealing malware. Yet in many products, automatic sample submission is enabled by default.

As a result, personal data may be sent from your device. At some point you have to trust something. The test malware mentioned in this article does not send anything out from the device, and even if it did, the data would not be used for any malicious purpose. With the vendors, by contrast, there are no guarantees about how the data is used or who can access it, despite the polished privacy statements. And if we consider features like protected folders, that alone would already point to a very clear place to start…

Data breach

A single breach can compromise weapons development, military readiness, or national security — and in many cases, one data breach is enough to expose a company’s most critical trade secrets. For a software company, that means the source code; for a pharmaceutical company, it means proprietary formulas and manufacturing processes; and for other industries, it may be the core intellectual property that defines their entire competitive advantage.

"Let's start by signing you in and bringing over your passwords, browsing history and more from Microsoft Cloud." If this data cannot be accessed for one reason or another, then in practice the instance, or the entire company, is unable to operate; even payroll would become impossible to process.

A data breach already occurs at least partially the moment your sensitive data is transferred into a third‑party cloud service. All of that data should be encrypted or otherwise protected before being uploaded to any of these services.

What puzzled the author even more was that in none of the data‑breach investigation reports or ransomware incident analyses was there ever any mention of what cybersecurity solution the affected organization had been using — or whether they had any at all besides Defender.

One of these cyber incident response (CIR) services charged a 5,000 USD upfront fee and then 300 USD per hour. That's far from inexpensive, especially when the site provides no description of the service. And a reinstallation won't do much if the attacker gained access through a zero‑day vulnerability. In any case, breaches like these shouldn't be happening very often, if at all, at the device level when using a solution like HEX DEREF ANTI‑MALWARE X, assuming it's operating in a zero‑trust mode.

If the root cause was a zero‑day vulnerability, the attacker is likely to return unless the ransom was paid. The author even contacted several of these CIR providers and quickly realized that many of them are essentially cash‑grabs or outright scams.

Determining the root cause of a data breach isn't quite that straightforward. Malware may be deployed through remote access using credentials exposed in an earlier breach, launched by an insider, delivered via an undisclosed zero‑day vulnerability, or triggered accidentally by an employee with no malicious intent. If you rule out all other possibilities, the only thing left is unauthorized actions carried out directly on the endpoint itself. In other words, the system may end up using the very data it collected to protect the organization against that same organization.

LLVM‑EX virtualized, sandbox‑aware malware

Using a tool like llvm-msvc-ex to virtualize your malware can significantly increase the complexity of reverse engineering or modifying the code. Virtualization techniques such as control flow flattening, bogus control flow, and VM‑based obfuscation make the malware far harder to analyze and understand, and they can deter attackers or researchers from tampering with it. With the right LLVM obfuscation passes, a C/C++ malware sample automatically becomes a unique build that evades pretty much all malware analysis sandboxes. Reversing an LLVM‑virtualized piece of malware, especially its C2 communication, requires significant expertise, time, and money; it demands a highly skilled professional team. As a loader‑type malware, each version is automatically unique (string encryption and so on). Static detection therefore lags far behind, and based on my initial tests this got through about 95% of all enterprise solutions, even those that proudly advertise themselves as so‑called leading solutions. It likely renders even a sophisticated sandbox largely ineffective, ultimately forcing a dynamic analysis.

Note that this LLVM‑EX version has been modified so that it can virtualize only the functions you choose; if we're talking about a P2C, you obviously can't obfuscate the entire executable, for reasons that should be quite self‑evident. A sandbox's static analysis may interpret this as polymorphic malware. Only 1–3 of 72 security vendors flagged this file as malicious, so it defeated almost every static analysis sandbox. So much for 100% detection. I tested them all with an unknown threat: a sophisticated, modularized info‑stealing malware written in C/C++.

Those famously unbiased, next‑gen XDR/NGAV solutions that never miss a thing

How can you do threat hunting for new, unknown threats with such limited or non‑existent process and command‑line data, not to mention the missing process network connections? You can't easily determine the root cause of a breach at the device or process level if the device owner or SOC doesn't have unrestricted access to this crucial information. Or can you? If you've read even a single analysis of a somewhat sophisticated ransomware strain, you'll know that even today's basic ransomware variants wipe all logs and make forensic analysis on the disk extremely difficult, if not nearly impossible.

At some point the malware has to report back to its operator that it has spread to a new device, and that C2 channel is the most critical detection vector for malware.


An endpoint protection solution that doesn't even inform the user at a basic level about possible malware persistence has received full scores and awards in these tests, and the site refers to it as an award‑winning solution. My intention was also to highlight the misleading product marketing used on these sites. This is highly questionable behavior and amounts to distorting fair competition: customers end up purchasing a product that does not match the description advertised on the website. A malware analyst from the same company, or another one, writes an article about a malware's persistence mechanism, but why doesn't their own solution even notify the user when a process adds persistence? Or maybe they sacrificed features for the sake of stability, leaving the product with mostly static detection.

The purpose of the test is to highlight possible shortcomings or biases in the tests conducted by "AV-TEST/AV-COMPARATIVES/AVLAB" and similar certification organizations.

Makes me wonder if there’s some bias in these tests, since none of them answered my question about whether they have ever tested with previously unpublished malware that has no static detection. This was the main reason I published this article and the entire project.

The detections are primarily classic false positives triggered by the virtualization/obfuscation.

Reading the /r/Malware subreddit, it becomes clear that new malware and new variants emerge daily. An anti‑virus solution that relies solely on static signature‑based detection is a thing of the past. And honestly, if an endpoint protection product can't even alert the user when a program exhibits behavior typical of malware, that should raise serious concerns. What other approach besides kernel‑level allow‑listing–based endpoint protection can realistically stay ahead of these threats? Solutions like HEX DEREF ANTI‑MALWARE X demonstrate why deeper, kernel‑integrated defenses matter.

An LLVM‑virtualized piece of malware can bypass virtually all static detection mechanisms and turn analysis into an excessively time‑consuming process (as shown above). Although competing vendors may publish articles about their malware‑analysis capabilities, my preliminary testing suggests that many of the issues highlighted in those write‑ups remain unaddressed in the actual product implementations. It reminds me of a situation where a professional team was hired to analyze malware but chose not to consult the developers responsible for the kernel‑level sensor — a rather puzzling decision.

When you look at the industry's marketing, certifications, and the flawless scores handed out by testing labs, it's easy to believe that today's security products are fully prepared for real‑world threats. But the reality is far less reassuring. When an actual, modern cyberattack strikes, all those polished numbers and promises evaporate in seconds. That's the moment when the thinness of the protective layer becomes painfully obvious — and it happens only after the damage is already done. Organizations discover too late that the tests never reflected real adversaries, and the marketing never told the whole story. A cyberattack doesn't wait for anyone to update reports or adjust scoring criteria. It exposes what works and what doesn't, and it does so only after the consequences are already irreversible.

You've positioned yourselves as some sort of authoritative entity — so isn't this exactly the kind of thing that borders on knowingly misleading investors? Not to mention undermining fair competition.

The consequences of an impossible recruitment process...

When I applied for these positions (for example, Windows kernel sensor developer, a role that already requires highly specialized expertise), I attached a clear proof‑of‑concept video to my application, which in most cases was exactly what the job posting was asking for. I got the impression that these recruiters don't even conceptually understand how malware or endpoint protection works, and therefore have no real idea who they should be hiring.

In most cases I didn't receive even a basic reply. It made me wonder whether they're just using fake job postings to collect as much information for free as possible.

The skill set presented in this article doesn't seem to meet the expectations of these recruiters. The videos attached to the article were produced with the dedication of a single person over a long period of time, yet they still weren’t good enough for those who only crave shallow, trivial content.

The recruitment process somehow seems to support the idea that sensitive data is being collected without permission under the pretext of various features, unless the entire solution has been deliberately designed for spying on the user. In either case, the cybersecurity solution ends up turning against the very company it was originally meant to protect.

Hiding a network connection at the process level (Windows 10 22H2 - Windows 11 25H2)

Hiding a process's network activity from other applications, for example the connections belonging to a specific browser or VPN process. This is a unique C/C++ user‑mode (UM) and kernel‑mode (KM) Windows driver implementation, the kernel‑level equivalent of netstat -ano, and it even works in a manually mapped driver. It also allows you to limit which user processes may request network connections. This isn't about hiding from the user; it's about hiding from malware. Even malware running with only standard user privileges can map internal network addresses and the processes that connect to them, and since the kernel‑level attack surface has been neglected for years, or even decades, the most severe ransomware attacks can still succeed. With the same capability it is also possible to hide the network activity of a RAT‑type malware threat. The feature lets you instantly view all user‑mode processes that are making network connection requests at the NSI layer.

Isn't it strange that something like this still slips past these products, and yet the testers keep handing out perfect scores in every single test?

Your product development simply can't keep up with the competition when you refuse to hire the right talent. This kind of thing has been running around undetected on both personal and corporate devices for, what, about a decade now, without users having the slightest clue.

The source code of the UNKNOWN‑123 project

SUPPORT

I published this project for several reasons. That is why the test project's (UNKNOWN‑123) source code is available for purchase as a software work, for educational and informational purposes only, so you can literally see for yourselves that you have been sold nothing more than pseudo‑cybersecurity for years, perhaps even decades. One of the reasons was to bring attention to deliberate or conscious NIS2 violations, not to mention these biased tests. Because NIS2, DORA, and GDPR require organizations to regularly test their cybersecurity posture, the author published this article using an advanced Red Team test malware that simulates real‑world cyberattacks. Whenever someone releases something for free, it undermines the competition; and when you then try to build something commercial around the same idea, the first thing people ask is what makes yours better and why anyone should pay for it, especially when Windows Defender is free. With investors, even garbage can turn into a scam. The source code is sold as software development for APT simulation, which mirrors a real cyberattack. None of the endpoint security vendors have any sample of this; that's why the project is fittingly named UNKNOWN‑123.

The project price starts at 4,999 USD in BTC, which must be paid upfront once you have accepted the software work agreement. Please also include your Telegram in the email. Thank you.

Features

Coded in C/C++, it operates and performs different actions depending on whether it is running as a regular user or with administrator privileges. The settings are configured in the Config.h file, meaning it cannot be identified based on command‑line parameters. LLVM‑EX virtualization makes every build unique, so static detection does not work against it.

  • Anti-VM
  • Anti-Debug
  • Anti-Inject
  • Anti‑Static analysis
  • IAT Hiding
  • Clipboard DATA
  • Disk serials (NVMe etc.)

Anti-VM: Sandbox‑aware malware can render even a sophisticated sandbox largely ineffective, ultimately forcing a manual analysis. If the malware detects that it is running in a virtual machine, it may refrain from executing any malicious actions.

All of the listed features are required in any commercial software if you want even a basic level of copy protection. For example, using the disk's serial number allows you to bind the license to a specific device. And for obvious reasons, any function related to copy protection must of course be virtualized.