Malware Alerts

Online headquarters of Kaspersky Lab security experts.

Unraveling the Lamberts Toolkit

Tue, 04/11/2017 - 05:59

Yesterday, our colleagues from Symantec published their analysis of Longhorn, an advanced threat actor that can be easily compared with Regin, ProjectSauron, Equation or Duqu2 in terms of its complexity.

Longhorn, which we internally refer to as “The Lamberts”, first came to the attention of the ITSec community in 2014, when our colleagues from FireEye discovered an attack using a zero day vulnerability (CVE-2014-4148). The attack leveraged malware we called ‘BlackLambert’, which was used to target a high profile organization in Europe.

Since at least 2008, The Lamberts have used multiple sophisticated attack tools against high-profile victims. Their arsenal includes network-driven backdoors, several generations of modular backdoors, harvesting tools, and wipers. Versions for both Windows and OSX are known at this time, with the latest samples created in 2016.

Although the operational security displayed by actors using the Lamberts toolkit is very good, one sample includes a PDB path that points to a project named “Archan~1” (perhaps ‘Archangel’). The root folder on the PDB path is named “Hudson”. This is one of the very few mistakes we’ve seen with this threat actor.

While in most cases the infection vector remains unknown, the high profile attack from 2014 used a very complex Windows TTF zero-day exploit (CVE-2014-4148).

Kaspersky Lab products successfully detect and eradicate all the known malware from the Lamberts family. For more information, please contact: intelreports@kaspersky.com

An Overview of the Lamberts

Figure 1. Lamberts discovery timeline

The first time the Lambert family malware was uncovered publicly was in October 2014, when FireEye posted a blog about a zero day exploit (CVE-2014-4148) used in the wild. The vulnerability was patched by Microsoft at the same time. We named the malware involved ‘Black Lambert’ and described it thoroughly in a private report, available to Kaspersky APT Intel Reports subscribers.

The authors of Black Lambert included a couple of very interesting details in the sample, which read as follows: toolType=wl, build=132914, versionName = 2.0.0. Looking for similar samples, we were able to identify another generation of related tools which we called White Lambert. While Black Lambert connects directly to its C&C for instructions, White Lambert is a fully passive, network-driven backdoor.

              Black Lambert   White Lambert
Implant type  Active          Passive
toolType      wl              aa ("ArchAngel")
build         132914          113140
versionName   2.0.0           5.0.2

Internal configuration similarities in Black and White Lambert

White Lambert runs in kernel mode and intercepts network traffic on infected machines. It decrypts packets crafted in a special format to extract instructions. We named these passive backdoors ‘White Lambert’ to contrast with the active “Black Lambert” implants.
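To illustrate what "passive, network-driven" means in practice, here is a minimal user-mode sketch of the concept: the implant never beacons out to a C&C, it simply watches inbound traffic for a specially formatted packet and extracts an instruction from it. The marker, key, and packet layout below are invented for the example and have nothing to do with the actual White Lambert implementation, which operates in kernel mode.

# Illustrative sketch only, not Lambert code. Shows the general idea of a
# passive, network-driven implant: no outbound C&C connection, just a
# listener that reacts to specially crafted packets.
from scapy.all import sniff, Raw   # pip install scapy; needs root to sniff

MAGIC = b"\xde\xad\xbe\xef"        # hypothetical marker identifying an operator packet
XOR_KEY = 0x5A                     # hypothetical single-byte key

def handle(pkt):
    if not pkt.haslayer(Raw):
        return
    payload = bytes(pkt[Raw].load)
    if not payload.startswith(MAGIC):
        return                     # ignore ordinary traffic
    command = bytes(b ^ XOR_KEY for b in payload[len(MAGIC):])
    print("operator instruction:", command)

# A passive implant never beacons out; it only listens.
sniff(filter="tcp", prn=handle, store=False)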

Looking further for any other malware related to White Lambert and Black Lambert, we came across another generation of malware that we called Blue Lambert.

One of the Blue Lambert samples is interesting because it appears to have been used as second stage malware in a high profile attack, which involved the Black Lambert malware.

Looking further for malware similar to Blue Lambert, we came across another family of malware we called Green Lambert. Green Lambert is a lighter, more reliable, but older version of Blue Lambert. Interestingly, while most Blue Lambert variants have version numbers in the 2.x range, Green Lambert versions are mostly in the 3.x range. This contradicts the data gathered from export timestamps and C&C domain activity, which points to Green Lambert being considerably older than the Blue variant. Perhaps both Blue and Green Lamberts were developed in parallel by two different teams working under the same umbrella, as normal software version iterations, with one seeing earlier deployment than the other.

Signatures created for Green Lambert (Windows) have also triggered on an OS X variant of Green Lambert with a very low version number: 1.2.0. This was uploaded to a multiscanner service in September 2014. The OS X variant of Green Lambert is in many regards functionally identical to the Windows version; however, it lacks certain functionality, such as running plugins directly in memory.

Kaspersky Lab detections for Blue, Black, and Green Lamberts have been triggered by a relatively small set of victims from around the world. While investigating one of these infections involving White Lambert (network-driven implant) and Blue Lambert (active implant), we found yet another family of tools that appear to be related. We called this new family Pink Lambert.

The Pink Lambert toolset includes a beaconing implant, a USB-harvesting module and a multi-platform orchestrator framework which can be used to create OS-independent malware. Versions of this particular orchestrator were found on other victims, together with White Lambert samples, indicating a close relationship between the White and Pink Lambert malware families.

By looking further for other undetected malware on victims of White Lambert, we found yet another apparently related family. The new family, which we called Gray Lambert, is the latest iteration of the passive network tools in the Lamberts’ arsenal. The coding style of Gray Lambert is similar to that of the Pink Lambert USB-harvesting module; however, the functionality mirrors that of White Lambert. Unlike White Lambert, Gray Lambert runs in user mode, without the need to exploit a vulnerable signed driver to load arbitrary code on 64-bit Windows.

Connecting all these different families by shared code, data formats, C&C servers, and victims, we have arrived at the following overarching picture:

Figure 2. An overview of connections between the Lambert families

The Lamberts in Brief – from Black to Gray

Below, we provide a small summary of all the Lamberts. A full description of all variants is available to subscribers of Kaspersky APT Reports. Contact intelreports@kaspersky.com

Black Lambert

The only known sample of Black Lambert was dropped by a TTF zero-day exploit (CVE-2014-4148). Its internal configuration included a proxy server, which suggests the malware was created to work in a very specific network configuration inside the victim’s network.

An internal description of Black Lambert indicates what appears to be a set of markers used by the attackers to denote this particular branch: toolType=wl, build=132914, versionName = 2.0.0.

Hash                               Description
683afdef710bf3c96d42e6d9e7275130   generic loader (hdmsvc.exe)
79e263f78e69110c09642bbb30f09ace   winlib.dll, final payload (toolType=wl)

Blue Lambert

The Blue Lambert implants contain what appear to be version numbers in the 2.x range, together with project/operation codename sets, which may also indicate codenames for the victims or campaigns.

Figure 4. Blue Lambert configuration in decrypted form, highlighting internal codenames

Known codenames include TRUE CRIME (2.2.0.2), CERVELO YARDBIRD (2.6.1.1), GAI SHU (2.2.0.5), DOUBLESIDED SCOOBYSNACK (2.3.0.2), FUNNELCAKE CARNIVAL (2.5.0.2), PROSPER SPOCK (2.0.0.2), RINGTOSS CARNIVAL (2.4.2.2), COD FISH (2.2.0.0), and INVERTED SHOT (2.6.2.3).

Green Lambert

Green Lambert is a family of tools deeply related to Blue Lambert. The functionality is very similar: both Blue and Green are active implants. The configuration data shares the same style of codenames for victims, operations, or projects.

Figure 5. Green Lambert configuration block (decrypted) highlighting internal codenames

The Green Lambert family is the only one where non-Windows variants have been found. An old version of Green Lambert, compiled for OS X, was uploaded from Russia to a multiscanner service in 2014. Its internal codename is HO BO (1.2.0).

The Windows versions of Green Lambert have the following code names: BEARD BLUE (2.7.1), GORDON FLASH (3.0), APE ESCAPE (3.0.2), SPOCK LOGICAL (3.0.2), PIZZA ASSAULT (3.0.5), and SNOW BLOWER (3.0.5).

Interestingly, one of the droppers of Green Lambert abused an ICS software package named “Subway Environmental Simulation Program” or “SES”, which has been available on certain forums visited by engineers working with industrial software. Similar techniques have been observed in the past from other threat groups, for instance, trojanized Oracle installers by the Equation group.

White Lambert

White Lambert is a family of tools that share the same internal description as Black Lambert. Known tool types, builds, and version names include:

  • ToolType “aa”, protocol 3, version 7, versionName 5.0.2, build 113140
  • ToolType “aa”, protocol 3, version 7, versionName 5.0.0, build 113140
  • ToolType “aa”, protocol 3, version 6, versionName 4.2.0, build 110836M
  • ToolType “aa”, protocol 3, version 5, versionName 3.2.0

One of the White Lambert samples is interesting because it has a forgotten PDB path inside, which points to “Archan~1” and “Hudson”. Hudson could be a project name, if the authors name their projects after rivers in the US, or it could be the developer’s first name. The truncated (8.3) path “archan~1” most likely means “Archangel”. The tool type “aa” could also suggest “ArchAngel”. By comparison, the Black Lambert tool type “wl” has no known meaning.

White Lambert samples run in kernel mode and sniff network traffic looking for special packets containing instructions to execute. To run unsigned code in kernel mode on 64-bit Windows, White Lambert uses an exploit against a signed, legitimate SiSoftware Sandra driver. The same method was used before by Turla, ProjectSauron, and Equation’s Grayfish, with other known, legitimate drivers.

Pink Lambert

Pink Lambert is a suite of tools initially discovered on a White Lambert victim. It includes a beaconing implant, partially based on publicly available source code. The source code on top of which Pink Lambert’s beaconing implant was created is “A Fully Featured Windows HTTP Wrapper in C++”.

Figure 6. “A Fully Featured Windows HTTP Wrapper” by shicheng

Other tools in the Pink Lambert suite include USB stealer modules and a very complex multi-platform orchestrator.

In a second incident, a Pink Lambert orchestrator was found on another White Lambert victim, substantiating the connection between the Pink and White Lamberts.

Gray Lambert

Gray Lambert is the most recent tool in the Lamberts’ arsenal. It is a network-driven backdoor, similar in functionality to White Lambert. Unlike White Lambert, which runs in kernel mode, Gray Lambert is a user-mode implant. The compilation and coding style of Gray Lambert is similar to the Pink Lambert USB stealers. Gray Lambert initially appeared on the computers of victims infected by White Lambert, which could suggest the authors were upgrading White Lambert infections to Gray. This migration activity was last observed in October 2016.

Some of the known filenames for Gray Lambert are mwapi32.dll and poolstr.dll – it should be noted, though, that the filenames used by the Lamberts are generally unique and have never been used twice.

Timeline

Most of the Blue and Green Lambert samples have two C&C servers hardcoded in their configuration block: a hostname and an IP address. Using our own pDNS as well as DomainTools IP history, we plotted the times when the C&C servers were active and pointing to the same IP address as the one from the configuration block.

Unfortunately, this method doesn’t work for all samples, since some of them don’t have a domain for C&C. Additionally, in some cases we couldn’t find any pDNS information for the hostname configured in the malware.

Luckily, the attackers made a few mistakes, which allow us to identify the activity times for most of the other samples. For instance, in cases where no pDNS information was available for a subdomain of the main C&C domain, the domain registration dates were sufficient to indicate when the activity began. Additionally, in some cases the top domain pointed to the same IP address as the one in the configuration block, allowing us to identify the activity times.
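Conceptually, the cross-referencing boils down to a simple lookup: for each sample, take the C&C hostname and IP address from the configuration block and keep only the pDNS records in which that hostname resolved to that exact IP. The short sketch below illustrates the idea; the record format and the sample data are hypothetical and stand in for a real pDNS export.

# Illustrative sketch of the activity-window reconstruction described above.
# The record layout and domain are invented; real pDNS exports differ.
from datetime import date

pdns = [  # (hostname, ip, first_seen, last_seen)
    ("updates.example-cnc.com", "203.0.113.7", date(2012, 3, 1), date(2013, 9, 14)),
    ("updates.example-cnc.com", "198.51.100.2", date(2013, 10, 1), date(2014, 2, 2)),
]

def activity_windows(config_host, config_ip, records):
    # Keep only the periods when the configured hostname pointed at the
    # configured IP, i.e. when the implant's C&C was actually live.
    return [(first, last) for host, ip, first, last in records
            if host == config_host and ip == config_ip]

print(activity_windows("updates.example-cnc.com", "203.0.113.7", pdns))
# -> [(datetime.date(2012, 3, 1), datetime.date(2013, 9, 14))]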

Another worthwhile analysis method focuses on the set of Blue Lambert samples that have exports. Although most compilation timestamps in the PE header appear to have been tampered with (to reflect a 2003-2004 range), the authors forgot to alter the timestamps in the export section. This allowed us to identify not just the activity / compilation timestamps, but also the method used for faking the compilation timestamps in the PE header.

It seems the algorithm used to tamper with the samples was the following: subtract 0x10 from the highest byte of the timestamp (which amounts to about eight and a half years) and then randomize the lowest three bytes. From this we conclude that the original compilation time of the Blue Lambert samples was in the 2012-2015 range.
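To make the arithmetic concrete, the following sketch (our own illustration, assuming the tampering works exactly as described) recovers the approximate original build date from a tampered PE timestamp. Since the lowest three bytes were randomized, only a window of roughly 194 days (0x01000000 seconds) can be recovered.

# Sketch of undoing the header-timestamp tampering described above.
from datetime import datetime, timezone

def approx_original_window(fake_timestamp: int):
    # Add 0x10 back to the high byte; the randomized low three bytes leave
    # a ~194-day uncertainty window.
    high_byte = (fake_timestamp >> 24) & 0xFF
    start = (high_byte + 0x10) << 24
    end = start | 0x00FFFFFF
    return (datetime.fromtimestamp(start, tz=timezone.utc),
            datetime.fromtimestamp(end, tz=timezone.utc))

# A hypothetical tampered PE timestamp that looks like early 2004:
print(approx_original_window(0x40A1B2C3))   # -> a window from mid-2012 to early 2013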

Putting together all the various families, with recovered activity times, we come to the following picture:

Figure 8. A timeline of activity for known Lamberts

As can be seen from the chart above, Green Lambert is the oldest and longest-running member of the family, while Gray is the newest. White, Blue, and Pink overlap in deployment to some extent, with Blue replacing Green Lambert. Black Lambert was seen only briefly, and we assume it was “retired” from the arsenal after being discovered by FireEye in 2014.

Codenames and Popular Culture Referenced in Lamberts

The threat group(s) behind the Lambert toolkits have used a large number of codenames extensively throughout their projects. Some of these codenames are references to old computer games, Star Trek, and cartoons, which is very unusual for high profile APT groups. We really enjoyed going through the backstories of these codenames and wanted to provide them below for others to enjoy as well.

For instance, one of the Green Lambert versions has the internal codename “GORDON FLASH”, which can also be read as “FLASH GORDON”. Flash Gordon is the hero of a space opera adventure comic strip created and originally drawn by Alex Raymond. It was first published in 1934 and subsequently turned into a popular film in 1980.

Flash Gordon poster

A ‘Funnel cake’ is a regional food popular in North America at carnivals, fairs, sporting events, and seaside resorts. This explains the codename “FUNNELCAKE CARNIVAL”:

Figure 9. A typical funnel cake

Spock and Prosper obviously refer to Star Trek, the well-known science fiction television series created by Gene Roddenberry. Cdr. Spock is a half-Vulcan, half-human character, portrayed by Leonard Nimoy. “Live long and prosper” is the traditional Vulcan greeting in the series.

Leonard Nimoy as “Spock” displaying the traditional Vulcan greeting “Live long and prosper”

Ringtoss is a game that is very popular at carnivals in North America.

DOUBLESIDED SCOOBYSNACK is likely a reference to an NFL Lip Reading video featuring Adrian Peterson that went viral in mid-2013. According to the urban dictionary, it is also used to denote a sexual game in which the participants are dressed as Scooby-Doo and his master.

Ape Escape (also known as Saru Get You (サルゲッチュ Saru Getchu) in Japan) is a series of video games made by SCE Japan Studio, starting with Ape Escape for PlayStation in 1999. The series often incorporates ape-related humor, unique gameplay, and a wide variety of pop culture references; it is also notable for being the first game to make the DualShock or Dual Analog controller mandatory.

Ape Escape

INVERTED SHOT is likely a reference to a mixed martial arts move also known as an ‘Imanari roll takedown’, named after Masakazu Imanari who popularized the grappling technique. It consists of a modified Brazilian jiu-jitsu granby roll that places the fighter in inverted guard position while taking the opponent down to the mat.

GAI and SHU (as used in Green Lambert OS X) are characters from the Guilty Crown anime series. Gai Tsutsugami (恙神 涯 Tsutsugami Gai) is the 17-year-old resourceful and charismatic leader of the “Funeral Parlor” resistance group, while Shu Ouma (桜満 集 Ōma Shū) is the 17-year-old main protagonist of Guilty Crown.

Figure 10. Main characters of Guilty Crown with Shu Ouma in the middle.

Conclusions

The Lamberts toolkit spans several years, with most activity occurring in 2013 and 2014. Overall, the toolkit includes highly sophisticated malware, which relies on high-level techniques to sniff network traffic, run plugins in memory without touching the disk, and leverage exploits against signed drivers to run unsigned code on 64-bit Windows.

To further exemplify the proficiency of the attackers leveraging the Lamberts toolkit, the deployment of Black Lambert included a rather sophisticated TTF zero-day exploit, CVE-2014-4148. Taking that into account, we place the Lamberts at the same level of complexity as Regin, ProjectSauron, Equation, and Duqu2, which makes them one of the most sophisticated cyberespionage toolkits we have ever analyzed.

Considering the complexity of these projects and the existence of an implant for OS X, we consider it highly possible that other Lamberts exist for other platforms, such as Linux. The fact that in the vast majority of cases the infection method is unknown probably means there are still a lot of unknown details about these attacks and the group(s) leveraging them.

As usual, defense against attacks such as those from the Lamberts/Longhorn should include a multi-layered approach. Kaspersky products include special mitigation strategies against the malware used by this group, as well as the many other APT groups we track. If you are interested in reading more about effective mitigation strategies in general, we recommend the following articles:

We will continue tracking the Lamberts and sharing new findings with our intel report subscribers, as well as with the general public. If you would like to be the first to hear our news, we suggest you subscribe to our intel reports.

Kaspersky Lab products successfully detect and eradicate all the known malware from the Lamberts family.

For more information about the Lamberts, please contact: intelreports@kaspersky.com

Ransomware in targeted attacks

Tue, 04/04/2017 - 12:08

Ransomware’s popularity has attracted the attention of cybercriminal gangs; they use these malicious programs in targeted attacks on large organizations in order to steal money. In late 2016, we detected an increase in the number of attacks whose main goal was to launch an encryptor on an organization’s network nodes and servers. This is because organizing such attacks is simple, while their profitability is high:

  • The cost of developing a ransomware program is significantly lower than that of other types of malicious software.
  • These programs come with a clear monetization model.
  • There is a wide range of potential victims.

Today, an attacker (or a group) can easily create their own encryptor without any special effort. A vivid example is the Mamba encryptor, based on DiskCryptor, an open-source utility. Some cybercriminal groups do not even take the trouble of involving programmers; instead, they use this legitimate utility “out of the box.”

DiskCryptor utility

The model of attack looks like this:

  1. Search for an organization that has an unprotected server with RDP access.
  2. Guess the password (or buy access on the black market).
  3. Encrypt a node or server manually.

Notification about encrypting the organization’s server

The cost of organizing such an attack is minimal, while the profit could reach thousands of dollars. Some affiliates of well-known encryptors resort to the same scheme; the only difference is that, in order to encrypt the files, they use a version of a ransomware program purchased from the group’s developer.

However, true professionals are also active on the playing field. They carefully select targets (major companies with a large number of network nodes), and organize attacks that can last weeks and go through several stages:

  1. Searching for a victim
  2. Studying the possibility of penetration
  3. Penetrating the organization’s network by using exploits for popular software or Trojans on the infected network nodes
  4. Gaining a foothold on the network and researching its topology
  5. Acquiring the necessary rights to install the encryptor on all the organization’s nodes/servers
  6. Installing the encryptor

Recently, we have written about one of these types of ransomware, PetrWrap, on our blog.

The screen of a machine infected with PetrWrap

Of special note is the software arsenal that a few groups use to penetrate and gain a foothold in an organization’s network. For example, one of the groups used open-source exploits for the server software running on the victim organization’s server. Once the attackers had exploited this vulnerability, they installed an open-source RAT called Pupy on the system.

Pupy RAT description

Once they had gained a foothold in the victim’s network, the attackers used the Mimikatz tool to acquire the necessary access rights, and then installed the encryptor across the network using PsExec.

Considering the above, we can conclude that the scenario of ransomware infection in a targeted attack differs significantly from the usual infection scenario (malicious email attachments, drive-by attacks, etc.). To ensure the comprehensive security of an organization’s network, it is necessary to audit the software installed on all nodes and servers of the network. If any outdated software is discovered, it should be updated immediately. Additionally, network administrators should ensure all types of remote access are reliably protected.

Of special note is the fact that, in most cases, the targets of these attacks are an organization’s servers, which means they should be safeguarded with dedicated security measures. In addition, regular backups are imperative; they will help bring the company’s IT infrastructure back to an operational state quickly and with minimal financial loss.

ATMitch: remote administration of ATMs

Tue, 04/04/2017 - 04:59

In February 2017, we published research on fileless attacks against enterprise networks. We described the data collected during incident response in several financial institutions around the world, exploring how attackers moved through enterprise networks leaving no traces on the hard drives. The goal of these attackers was money, and the best way to cash out and leave no record of transactions is through the remote administration of ATMs. This second paper is about the methods and techniques that were used by the attackers in the second stage of their attacks against financial organizations – basically enabling remote administration of ATMs.

In June 2016, Kaspersky Lab received a report from a Russian bank that had been the victim of a targeted attack. During the heist, the criminals were able to gain control of the ATMs and upload malware to them. After cashing out, the malware was removed. The bank’s forensics specialists were unable to recover the malicious executables because of the fragmentation of a hard drive after the attack, but they were able to restore the malware’s logs and some file names.

After careful forensic analysis of the ATM’s hard drive, the bank’s forensic team was able to recover the following files containing logs:

  • C:\Windows\Temp\kl.txt
  • C:\logfile.txt

In addition, they were able to find the names of two deleted executables. Unfortunately, they were not able to recover any of the contents:

  • C:\ATM\!A.EXE
  • C:\ATM\IJ.EXE

Within the log files, the following pieces of plain text were found:

[Date – Time]
[%d %m %Y – %H : %M : %S] > Entering process dispense.
[%d %m %Y – %H : %M : %S] > Items from parameters converted successfully. 4 40
[%d %m %Y – %H : %M : %S] > Unlocking dispenser, result is 0
[%d %m %Y – %H : %M : %S] > Catch some money, bitch! 4000000
[%d %m %Y – %H : %M : %S] > Dispense success, code is 0

As mentioned in the previous paper, based on the information from the log file we created a YARA rule to hunt for the corresponding sample, and we found one: MD5 cef6c2aa78ff69d894903e41a3308452. This sample had been uploaded twice (from Kazakhstan and Russia) as “tv.dll”.
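As an illustration of the approach (not the actual rule we used), a hunting rule built from the recovered log strings can be as simple as matching a couple of the distinctive plain-text messages. The sketch below compiles and runs such a rule with the yara-python bindings.

# Illustrative only: a simple string-based hunting rule in the spirit of the
# one described above, built from the plain-text log messages recovered from
# the ATM. This is not the actual Kaspersky rule.
import yara  # pip install yara-python

RULE = r'''
rule atmitch_log_strings
{
    strings:
        $s1 = "Catch some money, bitch!" ascii wide
        $s2 = "Entering process dispense." ascii wide
        $s3 = "Unlocking dispenser, result is" ascii wide
    condition:
        2 of ($s*)
}
'''

rules = yara.compile(source=RULE)

# Demo buffer standing in for a file or memory dump under analysis:
demo = (b"[13 04 2017 - 12 : 00 : 00] > Entering process dispense.\x00"
        b"[13 04 2017 - 12 : 00 : 05] > Catch some money, bitch! 4000000\x00")
print(rules.match(data=demo))   # -> [atmitch_log_strings]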

The malware, which we have dubbed ATMitch, is fairly straightforward. Once remotely installed and executed via Remote Desktop Connection (RDP) access to the ATM from within the bank, the malware looks for a “command.txt” file that should be located in the same directory as the malware and is created by the attacker. If found, the malware reads the single-character content of the file and executes the corresponding command:

  • ‘O’ – Open dispenser
  • ‘D’ – Dispense
  • ‘I’ – Init XFS
  • ‘U’ – Unlock XFS
  • ‘S’ – Setup
  • ‘E’ – Exit
  • ‘G’ – Get Dispenser id
  • ‘L’ – Set Dispenser id
  • ‘C’ – Cancel

After execution, ATMitch writes the results of this command to the log file and removes “command.txt” from the ATM’s hard drive.
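The control flow described above can be summarized in a few lines. The sketch below is our own benign re-creation for clarity, not recovered ATMitch code; the real malware drives the cash dispenser through the XFS API, which is stubbed out here as log messages.

# Benign sketch of the command-dispatch flow described above.
import os
from datetime import datetime

CMD_FILE = "command.txt"   # dropped by the operator in the malware's directory
LOG_FILE = "logfile.txt"

COMMANDS = {
    "O": "Open dispenser",    "D": "Dispense",          "I": "Init XFS",
    "U": "Unlock XFS",        "S": "Setup",             "E": "Exit",
    "G": "Get Dispenser id",  "L": "Set Dispenser id",  "C": "Cancel",
}

def log(msg: str) -> None:
    stamp = datetime.now().strftime("%d %m %Y - %H : %M : %S")
    with open(LOG_FILE, "a") as f:
        f.write(f"[{stamp}] > {msg}\n")

if os.path.exists(CMD_FILE):
    cmd = open(CMD_FILE).read(1)                  # single-character command
    log(COMMANDS.get(cmd, "Unknown command"))     # write the result to the log...
    os.remove(CMD_FILE)                           # ...and delete command.txt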

The sample “tv.dll” successfully retrieved in this case does not try to conceal itself within the system.

The malware’s command parser

The malware uses the standard XFS library to control the ATM. It should be noted that it works on every ATM that supports the XFS library (which is the vast majority).

Unfortunately, we were unable to retrieve the executables (!A.exe and IJ.exe, located in C:\ATM) from the ATM; only the file names were found as artefacts during the forensic analysis. We assume that these are the installer and uninstaller of the malware. It should also be noted that “tv.dll” contained one Russian-language resource.

Kaspersky Lab continues to monitor and track these kinds of threats and reiterates the need for whitelisting in ATMs as well as the use of anti-APT solutions in banking networks.

Lazarus Under The Hood

Mon, 04/03/2017 - 13:57

 Download full report (PDF)

In February 2017, an article in the Polish media broke the silence on a long-running story about attacks on banks, allegedly related to the notorious Lazarus Group. While the original article didn’t mention the Lazarus Group, it was quickly picked up by security researchers. Today we’d like to share some of our findings and add something new to what’s currently common knowledge about Lazarus Group activities, and their connection to the much-talked-about February 2016 incident, when an unknown attacker attempted to steal up to $851 million from the Bangladesh Central Bank.

Since the Bangladesh incident there have been just a few articles explaining the connection between the Lazarus Group and the Bangladesh bank heist. One such publication was made available by BAE Systems in May 2016; however, it only included analysis of the wiper code. This was followed by another blog post by Anomali Labs confirming the same wiping-code similarity. This similarity satisfied many readers, but at Kaspersky Lab we were looking for a stronger connection.

Other claims that Lazarus was the group behind attacks on the Polish financial sector came from Symantec in 2017, which noticed string reuse in malware found at one of its Polish customers. Symantec also confirmed seeing the Lazarus wiper tool in Poland at one of its customers. However, from this it’s only clear that Lazarus might have attacked Polish banks.

While all these facts are fascinating, the connection between Lazarus attacks on banks and the group’s role in attacks on banks’ own systems was still loose. The only case where specific malware targeting a bank’s infrastructure, used to connect to the SWIFT messaging server, had been discovered is the Bangladesh Central Bank case. However, while almost everybody in the security industry has heard about the attack, few technical details from the on-site investigation at the attacked company have been revealed to the public. Considering that post-hack publications in the media mentioned that the investigation stumbled upon three different attackers, it was not obvious whether Lazarus was the one responsible for the fraudulent SWIFT transactions, or whether Lazarus had in fact developed its own malware to attack banks’ systems.

We would like to add some strong facts linking certain attacks on banks to Lazarus, share some of our own findings, and shed some light on the recent TTPs used by the attacker, including some as-yet-unpublished details from the 2017 attack in Europe.

This is the first time we announce some Lazarus Group operations that have thus far gone unreported to the public. We have had the privilege of investigating these attacks and helping with incident response at a number of financial institutions in South East Asia and Europe. With cooperation and support from our research partners, we have managed to address many important questions about the mystery of Lazarus attacks, such as their infiltration method, their relation to attacks on SWIFT software and, most importantly, shed some light on attribution.

Lazarus attacks are not a local problem and clearly the group’s operations span the whole world. We have seen the detection of their infiltration tools in multiple countries in the past year. Lazarus was previously known for cyberespionage and cybersabotage activities, such as the attack on Sony Pictures Entertainment, in which volumes of internal data were leaked and many of the company’s hard drives were wiped. Their interest in financial gain is relatively new, considering the age of the group, and it seems that a different set of people is working on the problems of invisible money theft or the generation of illegal profit. We believe that Lazarus Group is very large and works mainly on infiltration and espionage operations, while a substantially smaller unit within the group, which we have dubbed Bluenoroff, is responsible for financial profit.

The watering hole attack on Polish banks was very well covered by the media; however, not everyone knows that it was one of many. Lazarus managed to inject malicious code into many other locations. We believe they started this watering hole campaign at the end of 2016, after another of their operations was interrupted in South East Asia. Lazarus/Bluenoroff regrouped and rushed into new countries, selecting mostly poorer and less developed locations, hitting smaller banks because they are, apparently, easy prey.

To date, we’ve seen Bluenoroff attack four main types of targets:

  • Financial institutions
  • Casinos
  • Companies involved in the development of financial trade software
  • Crypto-currency businesses

Here is the full list of countries where we have seen Bluenoroff watering hole attacks:

  • Mexico
  • Australia
  • Uruguay
  • Russian Federation
  • Norway
  • India
  • Nigeria
  • Peru
  • Poland

Of course, not all attacks were as successful as the Polish case, mainly because in Poland they managed to compromise a government website that was frequently accessed by many financial institutions, making it a very powerful attack vector. Nevertheless, this wave of attacks resulted in multiple infections across the world, adding new hits to the map we’ve been building.

One of the most interesting discoveries about Lazarus/Bluenoroff came from one of our research partners who completed a forensic analysis of a C2 server in Europe used by the group. Based on the forensic analysis report, the attacker connected to the server via Terminal Services and manually installed an Apache Tomcat server using a local browser, configured it with Java Server Pages and uploaded the JSP script for C2. Once the server was ready, the attacker started testing it. First with a browser, then by running test instances of their backdoor. The operator used multiple IPs: from France to Korea, connecting via proxies and VPN servers. However, one short connection was made from a very unusual IP range, which originates in North Korea.

In addition, the operator installed off-the-shelf cryptocurrency mining software intended to generate Monero coins. The software consumed system resources so intensely that the system became unresponsive and froze. This could be the reason why it was not properly cleaned, and the server logs were preserved.

This is the first time we have seen a direct link between Bluenoroff and North Korea. Their activity spans from backdoors to watering hole attacks, and attacks on SWIFT servers in banks in South East Asia and at the Bangladesh Central Bank. Now, is North Korea behind all the Bluenoroff attacks after all? As researchers, we prefer to provide facts rather than speculation. Still, seeing a North Korean IP address in the C2 log does make North Korea a key part of the Lazarus/Bluenoroff equation.

Conclusions

Lazarus is not just another APT actor. The scale of the Lazarus operations is shocking. The group has been ramping up its activity since 2011, and it didn’t disappear after Novetta published the results of its Operation Blockbuster research, in which we also participated. All the hundreds of samples that were collected give the impression that Lazarus is operating a factory of malware, producing new samples via multiple independent conveyor belts.

We have seen them using various code obfuscation techniques, rewriting their own algorithms, applying commercial software protectors, and using their own and underground packers. Lazarus knows the value of quality code, which is why we normally see rudimentary backdoors being pushed during the first stage of infection. Burning those doesn’t impact the group too much. However, if the first stage backdoor reports an interesting infection they start deploying more advanced code, carefully protecting it from accidental detection on disk. The code is wrapped into a DLL loader or stored in an encrypted container, or maybe hidden in a binary encrypted registry value. It usually comes with an installer that only attackers can use, because they password protect it. It guarantees that automated systems – be it a public sandbox or a researcher’s environment – will never see the real payload.

Most of the tools are designed to be disposable material that will be replaced with a new generation as soon as it is burnt. And then there will be newer, and newer, and newer versions. Lazarus avoids reusing the same tools, the same code, and the same algorithms. “Keep morphing!” seems to be their internal motto. Those rare cases when they are caught with the same tools are operational mistakes, because the group seems to be so large that one part doesn’t always know what the other is doing.

This level of sophistication is something that is not generally found in the cybercriminal world. It’s something that requires strict organisation and control at all stages of operation. That’s why we think that Lazarus is not just another APT actor.

Of course such processes require a lot of money to keep running, which is why the appearance of the Bluenoroff subgroup within Lazarus was logical.

Bluenoroff, being a subgroup of Lazarus, focuses on financial attacks only. This subgroup has reverse-engineering skills: they spend time tearing apart legitimate software and implementing patches for SWIFT Alliance software in attempts to find ways to steal big money. Their malware is different, and they aren’t exactly soldiers that hit and run. Instead, they prefer to leave an execution trace so they can reconstruct and quickly debug any problem. They are field engineers who come in when the ground has already been cleared after the conquest of new lands.

One of Bluenoroff’s favorite strategies is to silently integrate into running processes without breaking them. From the code we’ve seen, it looks as if they are not exactly looking for a hit-and-run solution when it comes to money theft. Their solutions are aimed at invisible theft without leaving a trace. Of course, attempts to move millions of USD around can hardly remain unnoticed, but we believe that their malware may already be secretly deployed in many other places without triggering any serious alarms, because it is much quieter.

We would like to note that in all of the observed attacks against banks that we have analyzed, the SWIFT software solutions running on the banks’ servers have not demonstrated or exposed any specific vulnerability. The attacks were focused on banking infrastructure and staff, exploiting vulnerabilities in commonly used software or websites, bruteforcing passwords, using keyloggers, and elevating privileges. However, the way banks use servers with SWIFT software installed requires personnel responsible for their administration and operation. Sooner or later, the attackers find these personnel, gain the necessary privileges, and access the server connected to the SWIFT messaging platform. With administrative access to the platform, they can manipulate the software running on the system as they wish. There is not much that can stop them, because from a technical perspective their activities may not differ from what an authorized and qualified engineer would do: starting and stopping services, patching software, modifying the database. Therefore, in all the breaches we have analyzed, SWIFT as an organization has not been directly at fault. More than that, we have witnessed SWIFT trying to protect its customers by implementing the detection of database and software integrity issues. We believe that this is a step in the right direction, and these activities should be extended with full support. Making these integrity checks harder to tamper with may pose a serious threat to the success of future operations run by Lazarus/Bluenoroff against banks worldwide.

To date, the Lazarus/Bluenoroff group has been one of the most successful in launching large scale operations against the financial industry. We believe that they will remain one of the biggest threats to the banking sector, finance and trading companies, as well as casinos for the next few years. We would like to note that none of the financial institutions we helped with incident response and investigation reported any financial loss.

As usual, defense against attacks such as those from Lazarus/Bluenoroff should include a multi-layered approach. Kaspersky products include special mitigation strategies against this group, as well as the many other APT groups we track. If you are interested in reading more about effective mitigation strategies in general, we recommend the following articles:

We will continue tracking the Lazarus/Bluenoroff actor and share new findings with our intel report subscribers, as well as with the general public. If you would like to be the first to hear our news, we suggest you subscribe to our intel reports.

For more information, contact: intelreports@kaspersky.com.

Download full report (PDF)

Penquin’s Moonlit Maze

Mon, 04/03/2017 - 11:36

 Download full report (PDF)

 Download Appendix B (PDF)

Download YARA rules

Back to the Future – SAS 2016

As Thomas Rid left the SAS 2016 stage, he left us with a claim that turned the heads of the elite researchers who filled the detective-themed Tenerife conference hall. His investigation had turned up multiple sources involved in the original investigation into the historic Moonlight Maze cyberespionage campaign who claimed that the threat actor had evolved into the modern day Turla. What would this all mean?

The Titans of Old

Moonlight Maze is the stuff of cyberespionage legend. In 1996, in the infancy of the Internet, someone was rummaging through military, research, and university networks primarily in the United States, stealing sensitive information on a massive scale. Victims included the Pentagon, NASA, and the Department of Energy, to name a very limited few. The scale of the theft was literally monumental, as investigators claimed that a printout of the stolen materials would stand three times taller than the Washington Monument.

To say that this historic threat actor is directly related to the modern day Turla would elevate an already formidable modern day attacker to another league altogether. Turla is a prolific Russian-speaking group known for its covert exfiltration tactics, such as the use of hijacked satellite connections, waterholing of government websites, covert channel backdoors, rootkits, and deception tactics. Its presumed origins trace back to the famous Agent.BTZ, a campaign that spread through military networks via USB keys and took formidable cooperation to purge (in the form of an interagency operation codenamed Buckshot Yankee in 2008). Though mitigating the threat got the most attention at the time, further research down the line saw this toolkit connecting directly to the modern Turla.

Further confirmation came through our own Kurt Baumgartner’s research for Virus Bulletin 2014 when he discovered Agent.BTZ samples that contacted a hijacked satellite IP jumping point, the same that was used by Turla later on. This advanced exfiltration technique is classic Turla and cemented the belief that the Agent.BTZ actor and Turla were one and the same. This would place Turla back as early as 2006-2007. But that’s still a decade ahead of the Moonlight Maze attack.

By 2016 the Internet was overcrowded with well-resourced cyberespionage crews, but twenty years ago there were few players in this game, and few paid attention to cyberespionage. In retrospect, we know that the Equation Group was probably active at this time; a command-and-control registration places Equation in the mid-1990s. That makes Equation the longest-running cyberespionage group/toolkit in history. To then claim that Turla, in one form or another, was active for nearly as long places them in a greater league than their prehistoric counterpart in pioneering state-sponsored cyberespionage.

A Working Hypothesis

By the time of the SAS 2016 presentation, we had already discussed at length how one might go about proving this link. The revelation that the Moonlight Maze attacks were dependent on a Solaris/*NIX toolkit, and not a Windows one as is the case with most of Turla, actually revived our hopes. We would not have to look for older Windows samples, of which so far there were none, but could instead focus on another discovery. In 2014, Kaspersky announced the discovery of Penquin Turla, a Linux backdoor leveraged by Turla in specific attacks. We turned our attention once again to the rare Penquin samples and noticed something interesting: the code was compiled for Linux Kernel versions 2.2.0 and 2.2.5, released in 1999. Moreover, the statically linked libpcap and OpenSSL binaries corresponded to versions released in the early 2000s. Finally, despite the original assessment incorrectly surmising that Penquin Turla was based on cd00r (an open-source backdoor by fx), it was actually based on LOKI2, another open-source backdoor for covert exfiltration, written by Alhambra and daemon9 and released in Phrack in the late 1990s. This all added up to an extremely unusual set of circumstances for malware that was leveraged in attacks from 2011 to 2016, with the latest Penquin sample, discovered just a month ago, submitted from a system in Germany.

Kurt Baumgartner’s prescient observation upon the discovery of the first Penquin Turla samples

Our working hypothesis became this: “The Turla developers decided to dust down old code and recompile it for current Linux victims in the hope of getting a stealthier beachhead on systems that are less likely to be monitored.” Were that to be the case, Penquin Turla could be the modern link that tied Turla to Moonlight Maze. But in order to prove our hypothesis and this historic evolution, we’d need a glimpse of the original artefacts, something we had no access to.

The Cupboard Samples

Our last hope was that someone somewhere had kept a set of backups collecting dust in a cupboard that they might be willing to share. Thomas took to the road to follow up his sources and eventually stumbled upon something remarkable. The Moonlight Maze operators were early adopters of a certain degree of operational security, using a series of hacked servers as relays to mask their original location. During the later stages of their campaign, they hacked a Solaris box in the U.K. to use as a relay. Unbeknown to them, the system administrator—in cooperation with the Metropolitan Police in London and the FBI—turned the server against the malicious operators. The machine known as ‘HRTest’ would proceed to log everything the attackers did keystroke-by-keystroke and save each and every binary and archive that transited through it. This was a huge win for the original investigators and provided something close to a six-month window of visibility before the attackers ditched this relay site (curiously, as a result of the campaign’s first publicity in early March 1999). Finding these samples was hard and fortuitous—due to a redaction error in an FBI FOIA release, we were able to ultimately track down David Hedges after about a year of sleuthing. “I hear you’re looking for HRTest,” David said when he finally called Thomas for the first time. Then, the now-retired administrator kicked a machine under his desk, chuckling as he said “well it’s sitting right here, and it’s still working.”

Thomas Rid, David Hedges, Daniel Moore, and Juan Andres Guerrero-Saade at King’s College London

Paydirt but not the Motherlode

What we had in our hands allowed us to recreate a portion of the constellation of attacks that constitutes Moonlight Maze. The samples and logs became our obsession for months. While Juan Andres and Costin at GReAT reversed the binaries (most compiled for SPARC on Solaris and MIPS on IRIX, ancient assembly languages), Daniel Moore went so far as to create an entire UI to parse and load the logs into, so as to be able to visualize the extent of the networks and nodes under attack. We set out to profile our attackers and understand their methods. Among these, some salient features emerged:

Moore’s Rapyd Graph Data Analyzer tracking the victims of Moonlight Maze linked to HRTest

  1. The attackers were prolific Unix users. They used their skills to script their attack phases, which allowed a sort of old-school automation. Rather than have the malware communicate with command-and-control servers and carry out functions and exfiltration of its own accord, the attackers would manually log in to victim nodes and leverage scripts and tasking files (usually located in the /var/tmp/ directory) to instruct all of these nodes on what they should do, what information to collect, and finally where to send it. This allowed them to orchestrate large swaths of infected machines despite running an ‘operator-at-keyboard’ style of attack.
  2. The operators were learning as they went. Our analysis of the binaries shows a trial-and-error approach to malware development. Many binaries were simply open-source exploits leveraged as needed. Others were open-source backdoors and sniffers. However, despite not having exact compilation timestamps (as would be the case with Windows executables), it’s possible to trace a binary evolution of sorts. The developers would test out new capabilities, then recompile binaries to fix issues and expand functionality as needed. This allowed us to graph a sort of binary tree of development and see how the attack functionality developed throughout this campaign.
  3. Despite their early interest in OpSec, and their use of tools specifically designed for this purpose, the operators made a huge mistake. It was their standard behavior to use infected machines to look for further victims on the same network or to relay onto other networks altogether. In more than a dozen cases, the attackers had infected a machine with a sniffer that collected any activity on the victim machine, and then proceeded to use these machines to connect to other victims. That meant the attackers actually created near-complete logs of everything they themselves did on these systems—and once they did their routine exfiltration, those self-logs were saved on the HRTest node for posterity. The attackers created their own digital footprint for perpetuity.

So what’s the verdict?

A complete analysis of the attack artefacts is provided in the whitepaper, for those interested in a look under the hood of a portion of the Moonlight Maze attacks. For those who would like to jump straight to the conclusion: our parallel investigation into the connection between Moonlight Maze and Turla yielded a more nuanced answer predicated upon the limitations in our visibility.

An objective view of the investigation would have to admit that a conclusion is simply premature. The unprecedented public visibility into the Moonlight Maze attack provided by David Hedges is fascinating, but far from complete. It spans a window between 1998-1999 as well as samples apparently compiled as far back as late 1996. On the other hand, the Penquin Turla codebase appears to have been primarily developed from 1999-2004 before being leveraged in more modern attacks. What we are left with is a circumstantial argument that takes into account the binary evolution witnessed from 1998-1999 as well as the functionality and tools leveraged at that time, both of which point us to a development trend that could lead directly to what is now known as Penquin Turla. This includes the use of tasking files, LOKI2 for covert channel communications, and promiscuous sniffers – all of which made it into the modern Penquin Turla variants.

The next step in our ongoing parallel investigation would have to focus on a little known operation codenamed ‘Storm Cloud’. This codename represents the evolved toolkit leveraged by the same Moonlight Maze operators once the initial intrusions became public in 1999. In 2003, the story of Storm Cloud leaked with little fanfare, but a few prescient details led us to believe a more definitive answer may be found in this intrusion set:

Storm Cloud reference in a 2003 Wall Street Journal Article mentions further use of LOKI2

Just as the SAS 2016 talk enabled us to find David and his time capsule of Moonlight Maze artefacts, we hope this glimpse into our ongoing research will bring another dedicated sysadmin out of the woodwork who may still have access to Storm Cloud artefacts, allowing us to settle this question once and for all. Beyond the historical value of this understanding, it would afford greater perspective into a tool being leveraged in cyberespionage attacks to this day.

The epic Moonlight Maze hunt continues…

If you have information or artefacts you’d like to share with the researchers, please contact penquin[at]kaspersky.com

 Download full report (PDF)

 Download Appendix B (PDF)

Download YARA rules