In February 2017, an article in the Polish media broke the silence on a long-running story about attacks on banks, allegedly tied to the notorious Lazarus Group. While the original article didn’t mention Lazarus Group, it was quickly picked up by security researchers. Today we’d like to share some of our findings, and add something new to what’s currently common knowledge about Lazarus Group activities and their connection to the much-discussed February 2016 incident, when an unknown attacker attempted to steal up to $851M from Bangladesh Central Bank.
Since the Bangladesh incident there have been only a few articles explaining the connection between Lazarus Group and the Bangladesh bank heist. One such publication was made available by BAE Systems in May 2016, but it covered only an analysis of the wiper code. This was followed by another blogpost, by Anomali Labs, confirming the same wiping-code similarity. That similarity satisfied many readers, but at Kaspersky Lab we were looking for a stronger connection.
Further claims that Lazarus was the group behind attacks on the Polish financial sector came from Symantec in 2017, which noticed string reuse in malware found at one of its Polish customers. Symantec also confirmed seeing the Lazarus wiper tool in Poland at one of its customers. However, from this it’s only clear that Lazarus might have attacked Polish banks.
While all these facts are fascinating, the connection between Lazarus attacks on banks and the group’s role in fraudulent transactions was still loose. The Bangladesh Central Bank case remains the only one in which specific malware was discovered targeting the bank’s infrastructure used to connect to the SWIFT messaging server. However, while almost everybody in the security industry has heard about the attack, few technical details from the investigation that took place on site have been revealed to the public. Considering that post-hack media reports mentioned the investigation stumbling upon three different attackers, it was not obvious whether Lazarus was responsible for the fraudulent SWIFT transactions, or whether Lazarus had in fact developed its own malware to attack banks’ systems.
We would like to add some strong facts that link some attacks on banks to Lazarus, and share some of our own findings as well as shed some light on the recent TTPs used by the attacker, including some yet unpublished details from the attack in Europe in 2017.
This is the first time we announce some Lazarus Group operations that have thus far gone unreported to the public. We have had the privilege of investigating these attacks and helping with incident response at a number of financial institutions in South East Asia and Europe. With cooperation and support from our research partners, we have managed to address many important questions about the mystery of Lazarus attacks, such as their infiltration method, their relation to attacks on SWIFT software and, most importantly, shed some light on attribution.
Lazarus attacks are not a local problem; the group’s operations clearly span the whole world. We have seen detections of their infiltration tools in multiple countries over the past year. Lazarus was previously known for cyberespionage and cybersabotage activities, such as the attack on Sony Pictures Entertainment, in which volumes of internal data were leaked and many of the company’s hard drives were wiped. Their interest in financial gain is relatively new, considering the age of the group, and it seems they have a different set of people working on the problems of invisible money theft or the generation of illegal profit. We believe that Lazarus Group is very large and works mainly on infiltration and espionage operations, while a substantially smaller unit within the group, which we have dubbed Bluenoroff, is responsible for financial profit.
The watering hole attack on Polish banks was well covered by the media, but not everyone knows that it was one of many. Lazarus managed to inject malicious code into many other locations. We believe they started this watering hole campaign at the end of 2016, after another of their operations was interrupted in South East Asia. Lazarus/Bluenoroff regrouped and rushed into new countries, selecting mostly poorer and less developed locations and hitting smaller banks because they are, apparently, easy prey.
To date, we’ve seen Bluenoroff attack four main types of targets:
- Financial institutions
- Casinos
- Companies involved in the development of financial trade software
- Crypto-currency businesses
Here is the full list of countries where we have seen Bluenoroff watering hole attacks:
- Russian Federation
Of course, not all attacks were as successful as the one in Poland, mainly because there the attackers managed to compromise a government website that was frequently accessed by many financial institutions, making it a very powerful attack vector. Nevertheless, this wave of attacks resulted in multiple infections across the world, adding new hits to the map we’ve been building.
One of the most interesting discoveries about Lazarus/Bluenoroff came from one of our research partners who completed a forensic analysis of a C2 server in Europe used by the group. Based on the forensic analysis report, the attacker connected to the server via Terminal Services and manually installed an Apache Tomcat server using a local browser, configured it with Java Server Pages and uploaded the JSP script for C2. Once the server was ready, the attacker started testing it. First with a browser, then by running test instances of their backdoor. The operator used multiple IPs: from France to Korea, connecting via proxies and VPN servers. However, one short connection was made from a very unusual IP range, which originates in North Korea.
In addition, the operator installed an off-the-shelf cryptocurrency mining software that should generate Monero cryptocoins. The software so intensely consumed system resources that the system became unresponsive and froze. This could be the reason why it was not properly cleaned, and the server logs were preserved.
This is the first time we have seen a direct link between Bluenoroff and North Korea. Their activity spans from backdoors to watering hole attacks, and attacks on SWIFT servers in banks of South East Asia and the Bangladesh Central Bank. Now, is North Korea behind all the Bluenoroff attacks after all? As researchers, we prefer to provide facts rather than speculation. Still, seeing a North Korean IP address in the C2 log does make North Korea a key part of the Lazarus/Bluenoroff equation.

Conclusions
Lazarus is not just another APT actor. The scale of its operations is shocking. The group has been operating at a high tempo since 2011, and its activity didn’t stop after Novetta published the results of its Operation Blockbuster research, in which we also participated. All the hundreds of samples that were collected give the impression that Lazarus is operating a factory of malware, producing new samples via multiple independent conveyor belts.
We have seen them using various code obfuscation techniques, rewriting their own algorithms, applying commercial software protectors, and using their own and underground packers. Lazarus knows the value of quality code, which is why we normally see only rudimentary backdoors being pushed during the first stage of infection. Burning those doesn’t impact the group too much. However, if the first-stage backdoor reports an interesting infection, they start deploying more advanced code, carefully protecting it from accidental detection on disk. The code is wrapped into a DLL loader, stored in an encrypted container, or hidden in an encrypted registry value. It usually comes with an installer that only the attackers can use, because it is password-protected. This guarantees that automated systems – be it a public sandbox or a researcher’s environment – will never see the real payload.
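The effect of password-gating a payload can be illustrated with a minimal conceptual sketch. This is strictly our own illustration, not the group’s actual code: the key-derivation scheme and every name below are invented for demonstration. The point is that, without the operator’s password, the second stage never exists in decrypted form:

```python
import hashlib

def keystream(password: bytes, salt: bytes, length: int) -> bytes:
    """Derive `length` keystream bytes from a password via chained SHA-256.
    Illustrative only -- not any scheme attributed to Lazarus."""
    out = b""
    block = hashlib.sha256(salt + password).digest()
    while len(out) < length:
        out += block
        block = hashlib.sha256(block + password).digest()
    return out[:length]

def xor_container(data: bytes, password: bytes, salt: bytes) -> bytes:
    """Symmetric XOR transform: the same call encrypts and decrypts."""
    ks = keystream(password, salt, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

payload = b"second-stage payload"
container = xor_container(payload, b"operator-password", b"salt")
assert container != payload  # opaque without the password
assert xor_container(container, b"operator-password", b"salt") == payload
assert xor_container(container, b"wrong-guess", b"salt") != payload
```

A sandbox that detonates the installer without the correct password only ever observes the opaque container, which is one reason such second stages rarely surface in automated collections.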
Most of the tools are designed as disposable material to be replaced with a new generation as soon as they are burnt. And then there will be newer, and newer, and newer versions. Lazarus avoids reusing the same tools, the same code, and the same algorithms. “Keep morphing!” seems to be their internal motto. The rare cases when they are caught with the same tools are operational mistakes, because the group seems to be so large that one part doesn’t always know what the other is doing.
This level of sophistication is something that is not generally found in the cybercriminal world. It’s something that requires strict organisation and control at all stages of operation. That’s why we think that Lazarus is not just another APT actor.
Of course such processes require a lot of money to keep running, which is why the appearance of the Bluenoroff subgroup within Lazarus was logical.
Bluenoroff, being a subgroup of Lazarus, focuses on financial attacks only. This subgroup has reverse engineering skills: its members spend time tearing apart legitimate software and implementing patches for SWIFT Alliance software in attempts to find ways to steal big money. Their malware is different, and they aren’t exactly soldiers who hit and run. Instead, they prefer to make an execution trace so they can reconstruct and quickly debug problems. They are field engineers who come in once the ground has been cleared after the conquest of new lands.
One of Bluenoroff’s favorite strategies is to silently integrate into running processes without breaking them. From the code we’ve seen, it looks as if they are not exactly looking for a hit-and-run solution when it comes to money theft. Their solutions are aimed at invisible theft without leaving a trace. Of course, attempts to move millions of USD around can hardly remain unnoticed, but we believe their malware might be secretly deployed in many other places right now, without triggering any serious alarms, because it is so much quieter.
We would like to note that in all of the observed attacks against banks that we have analyzed, the SWIFT software solutions running on the banks’ servers haven’t demonstrated or exposed any specific vulnerability. The attacks focused on banking infrastructure and staff: exploiting vulnerabilities in commonly used software or websites, bruteforcing passwords, using keyloggers and elevating privileges. However, the way banks use servers with SWIFT software installed requires personnel responsible for their administration and operation. Sooner or later, the attackers find these personnel, gain the necessary privileges, and access the server connected to the SWIFT messaging platform. With administrative access to the platform they can manipulate the software running on the system as they wish. There is not much that can stop them, because from a technical perspective their activities may not differ from what an authorized and qualified engineer would do: starting and stopping services, patching software, modifying the database. Therefore, in all the breaches we have analyzed, SWIFT as an organization has not been at direct fault. More than that, we have witnessed SWIFT trying to protect its customers by implementing detection of database and software integrity issues. We believe this is a step in the right direction, and these activities should be extended with full support. Making such integrity checks harder to subvert could pose a serious threat to the success of future operations run by Lazarus/Bluenoroff against banks worldwide.
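The kind of integrity checking mentioned above can be approximated by a simple baseline-and-verify scheme. The sketch below is our own generic illustration (file names and function names are hypothetical, and this is in no way SWIFT’s implementation): record digests of critical files, then flag any file whose digest later changes:

```python
import hashlib
import os
import tempfile

def sha256_file(path: str) -> str:
    """Hash a file's contents in chunks to keep memory use flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(paths):
    """Record a baseline digest for every monitored file."""
    return {p: sha256_file(p) for p in paths}

def verify(baseline):
    """Return the monitored files whose contents changed since the baseline."""
    return [p for p, digest in baseline.items() if sha256_file(p) != digest]

# Demo: baseline a file, 'patch' it, and detect the change.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "alliance.bin")   # hypothetical monitored file
    with open(target, "wb") as f:
        f.write(b"original software image")
    baseline = snapshot([target])
    with open(target, "wb") as f:
        f.write(b"patched software image")     # simulated tampering
    assert verify(baseline) == [target]
```

The hard part in practice is not the hashing but protecting the baseline itself, since an attacker with administrative access could otherwise re-baseline after patching.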
To date, the Lazarus/Bluenoroff group has been one of the most successful in launching large scale operations against the financial industry. We believe that they will remain one of the biggest threats to the banking sector, finance and trading companies, as well as casinos for the next few years. We would like to note that none of the financial institutions we helped with incident response and investigation reported any financial loss.
As usual, defense against attacks such as those from Lazarus/Bluenoroff should include a multi-layered approach. Kaspersky products include special mitigation strategies against this group, as well as the many other APT groups we track. If you are interested in reading more about effective mitigation strategies in general, we recommend the following articles:
We will continue tracking the Lazarus/Bluenoroff actor and share new findings with our intel report subscribers, as well as with the general public. If you would like to be the first to hear our news, we suggest you subscribe to our intel reports.
For more information, contact: email@example.com.
Back to the Future – SAS 2016
As Thomas Rid left the SAS 2016 stage, he left us with a claim that turned the heads of the elite researchers who filled the detective-themed Tenerife conference hall. His investigation had turned up multiple sources involved in the original investigation into the historic Moonlight Maze cyberespionage campaign who claimed that the threat actor had evolved into the modern-day Turla. What would this all mean?

The Titans of Old
Moonlight Maze is the stuff of cyberespionage legend. In 1996, in the infancy of the Internet, someone was rummaging through military, research, and university networks primarily in the United States, stealing sensitive information on a massive scale. Victims included the Pentagon, NASA, and the Department of Energy, to name a very limited few. The scale of the theft was literally monumental, as investigators claimed that a printout of the stolen materials would stand three times taller than the Washington Monument.
To say that this historic threat actor is directly related to the modern-day Turla would elevate an already formidable attacker to another league altogether. Turla is a prolific Russian-speaking group known for its covert exfiltration tactics, such as the use of hijacked satellite connections, waterholing of government websites, covert-channel backdoors, rootkits, and deception. Its presumed origins track back to the famous Agent.BTZ, a campaign that spread through military networks via USB keys and took formidable cooperation to purge (in the form of an interagency operation codenamed Buckshot Yankee in 2008). Though mitigating the threat got the most attention at the time, later research connected this toolkit directly to the modern Turla.
Further confirmation came through our own Kurt Baumgartner’s research for Virus Bulletin 2014, when he discovered Agent.BTZ samples that contacted a hijacked satellite IP jumping point – the same one used by Turla later on. This advanced exfiltration technique is classic Turla and cemented the belief that the Agent.BTZ actor and Turla were one and the same. This would place Turla back as early as 2006-2007. But that’s still a decade after the Moonlight Maze attacks.
By 2016 the Internet was overcrowded with well-resourced cyberespionage crews. But twenty years ago there were few players in this game, and few paid attention to cyberespionage. In retrospect, we know that the Equation Group was probably active at this time; a command-and-control registration places Equation in the mid-1990s. That makes Equation the longest-running cyberespionage group/toolkit in history. To then claim that Turla, in one form or another, was active for nearly as long places them in a greater league than their prehistoric counterpart in pioneering state-sponsored cyberespionage.

A Working Hypothesis
By the time of the SAS 2016 presentation, we had already discussed at length how one might go about proving this link. The revelation that the Moonlight Maze attacks depended on a Solaris/*NIX toolkit, and not a Windows one as is the case with most of Turla, actually revived our hopes. We would not have to look for older Windows samples, of which so far there were none, but could instead focus on another discovery. In 2014, Kaspersky announced the discovery of Penquin Turla, a Linux backdoor leveraged by Turla in specific attacks. We turned our attention once again to the rare Penquin samples and noticed something interesting: the code was compiled for Linux kernel versions 2.2.0 and 2.2.5, released in 1999. Moreover, the statically linked libpcap and OpenSSL binaries corresponded to versions released in the early 2000s. Finally, despite the original assessment incorrectly surmising that Penquin Turla was based on cd00r (an open-source backdoor by fx), it was actually based on LOKI2, another open-source backdoor for covert exfiltration, written by Alhambra and daemon9 and published in Phrack in the late 1990s. This all added up to an extremely unusual set of circumstances for malware leveraged in attacks from 2011 to 2016, with the latest Penquin sample, discovered just a month ago, submitted from a system in Germany.
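Dating a statically linked binary by the library version strings embedded in it, as described above, can be sketched in a few lines. This is a simplified stand-in for tools like `strings` plus manual review; the blob below is synthetic, not an actual Penquin sample:

```python
import re

def version_strings(blob: bytes, min_len: int = 6):
    """Pull printable ASCII runs out of a binary blob and keep those
    that look like library or kernel version markers."""
    runs = re.findall(rb"[ -~]{%d,}" % min_len, blob)
    marker = re.compile(rb"(OpenSSL|libpcap|Linux)[ /]?[0-9]+\.[0-9]")
    return [r.decode() for r in runs if marker.search(r)]

# Synthetic blob standing in for a statically linked binary:
blob = (b"\x7fELF\x00junk\x00"
        + b"OpenSSL 0.9.6b 9 Jul 2001\x00"
        + b"more junk")
print(version_strings(blob))   # → ['OpenSSL 0.9.6b 9 Jul 2001']
```

Because statically linked libraries carry their release banners into the final executable, the recovered version dates bound how old the toolchain that produced the binary must be, even when no compilation timestamp exists.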
Kurt Baumgartner’s prescient observation upon the discovery of the first Penquin Turla samples
Our working hypothesis became this: “The Turla developers decided to dust off old code and recompile it for current Linux victims in the hope of getting a stealthier beachhead on systems that are less likely to be monitored.” Were that the case, Penquin Turla could be the modern link that tied Turla to Moonlight Maze. But in order to prove our hypothesis and this historic evolution, we’d need a glimpse at the original artefacts, something we had no access to.

The Cupboard Samples
Our last hope was that someone somewhere had kept a set of backups collecting dust in a cupboard that they might be willing to share. Thomas took to the road to follow up his sources and eventually stumbled upon something remarkable. The Moonlight Maze operators were early adopters of a certain degree of operational security, using a series of hacked servers as relays to mask their original location. During the later stages of their campaign, they hacked a Solaris box in the U.K. to use as a relay. Unbeknown to them, the system administrator—in cooperation with the Metropolitan Police in London and the FBI—turned the server against the malicious operators. The machine known as ‘HRTest’ would proceed to log everything the attackers did keystroke-by-keystroke and save each and every binary and archive that transited through it. This was a huge win for the original investigators and provided something close to a six-month window of visibility before the attackers ditched this relay site (curiously, as a result of the campaign’s first publicity in early March 1999). Finding these samples was hard and fortuitous—due to a redaction error in an FBI FOIA release, we were able to ultimately track down David Hedges after about a year of sleuthing. “I hear you’re looking for HRTest,” David said when he finally called Thomas for the first time. Then, the now-retired administrator kicked a machine under his desk, chuckling as he said “well it’s sitting right here, and it’s still working.”
Thomas Rid, David Hedges, Daniel Moore, and Juan Andres Guerrero-Saade at King’s College London

Paydirt but not the Motherlode
What we had in our hands allowed us to recreate a portion of the constellation of attacks that constitutes Moonlight Maze. The samples and logs became our obsession for months. While Juan Andres and Costin at GReAT reversed the binaries (most compiled for SPARC on Solaris and MIPS on IRIX, ancient architectures), Daniel Moore went so far as to create an entire UI into which to parse and load the logs, so as to visualize the extent of the networks and nodes under attack. We set out to profile our attackers and understand their methods. Among these, some salient features emerged:
Moore’s Rapyd Graph Data Analyzer tracking the victims of Moonlight Maze linked to HRTest
- The attackers were prolific Unix users. They used their skills to script their attack phases, which allowed a sort of old-school automation. Rather than have the malware communicate with command-and-control servers and carry out functions and exfiltration of its own accord, the attackers would manually log in to victim nodes and leverage scripts and tasking files (usually located in the /var/tmp/ directory) to instruct all of these nodes on what to do, what information to collect, and where to send it. This allowed them to orchestrate large swaths of infected machines despite running an ‘operator-at-keyboard’ style of attack.
- The operators were learning as they went. Our analysis of the binaries shows a trial-and-error approach to malware development. Many binaries were simply open-source exploits leveraged as needed. Others were open-source backdoors and sniffers. However, despite the absence of exact compilation timestamps (as one would find in Windows executables), it’s possible to trace a binary evolution of sorts. The developers would test out new capabilities, then recompile binaries to fix issues and expand functionality as needed. This allowed us to graph a sort of binary development tree and see how the attack functionality developed throughout this campaign.
- Despite their early interest in OpSec, and their use of tools specifically designed for it, the operators made a huge mistake. It was their standard behavior to use infected machines to look for further victims on the same network or to relay onto other networks altogether. In more than a dozen cases, the attackers had infected a machine with a sniffer that collected any activity on the victim machine, and then proceeded to use these machines to connect to other victims. That meant the attackers created near-complete logs of everything they themselves did on these systems – and once they performed their routine exfiltration, those self-logs were saved on the HRTest node for posterity. The attackers preserved their own digital footprint in perpetuity.
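The binary-evolution analysis described in the bullets above rests on measuring how much two samples share. A minimal sketch of one such measure (our own illustration, not the actual tooling used in the investigation): extract each binary's printable strings and compare the sets with Jaccard similarity, so a recompiled descendant scores close to its parent while an unrelated tool scores near zero:

```python
import re

def string_set(blob: bytes, min_len: int = 4) -> set:
    """The set of printable ASCII runs (length >= min_len) in a binary."""
    return {m.decode() for m in re.findall(rb"[ -~]{%d,}" % min_len, blob)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Synthetic 'binaries': v2 is v1 recompiled with one added capability,
# while 'other' is an unrelated tool. All strings here are invented.
v1 = string_set(b"open_socket\x00send_file\x00/var/tmp/task")
v2 = string_set(b"open_socket\x00send_file\x00/var/tmp/task\x00sniff_iface")
other = string_set(b"unrelated_tool\x00parse_config")

assert jaccard(v1, v2) > 0.7      # close relatives share most strings
assert jaccard(v1, other) == 0.0  # unrelated tools share none
```

Computing pairwise scores over a whole sample set and linking each binary to its nearest older neighbor yields exactly the kind of development tree described above.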
A complete analysis of the attack artefacts is provided in the whitepaper, for those interested in a look under the hood of a portion of the Moonlight Maze attacks. For those who would like to jump straight to the conclusion: our parallel investigation into the connection between Moonlight Maze and Turla yielded a more nuanced answer predicated upon the limitations in our visibility.
An objective view of the investigation has to admit that a conclusion is simply premature. The unprecedented public visibility into the Moonlight Maze attack provided by David Hedges is fascinating, but far from complete. It spans a window from 1998 to 1999, with samples apparently compiled as far back as late 1996. On the other hand, the Penquin Turla codebase appears to have been primarily developed from 1999 to 2004 before being leveraged in more modern attacks. What we are left with is a circumstantial argument that takes into account the binary evolution witnessed from 1998-1999, as well as the functionality and tools leveraged at that time, both of which point to a development trend that could lead directly to what is now known as Penquin Turla. This includes the use of tasking files, LOKI2 for covert-channel communications, and promiscuous sniffers – all of which made it into the modern Penquin Turla variants.
The next step in our ongoing parallel investigation would have to focus on a little known operation codenamed ‘Storm Cloud’. This codename represents the evolved toolkit leveraged by the same Moonlight Maze operators once the initial intrusions became public in 1999. In 2003, the story of Storm Cloud leaked with little fanfare, but a few prescient details led us to believe a more definitive answer may be found in this intrusion set:
Storm Cloud reference in a 2003 Wall Street Journal Article mentions further use of LOKI2
Just as the SAS 2016 talk enabled us to find David and his time capsule of Moonlight Maze artefacts, we hope this glimpse into our ongoing research will bring another dedicated sysadmin out of the woodwork who may still have access to Storm Cloud artefacts, allowing us to settle this question once and for all. Beyond the historical value of this understanding, it would afford greater perspective into a tool being leveraged in cyberespionage attacks to this day.
The epic Moonlight Maze hunt continues…
If you have information or artefacts you’d like to share with the researchers, please contact penquin[at]kaspersky.com