Daily Tech Digest - November 25, 2019

Avoiding the pitfalls of operating a honeypot

Honeypot operators are sometimes tempted to trick an attacker into downloading phone-home or similar tracking technology in order to identify the attacker or better follow their movements. Understand that downloading software onto someone else’s systems, or attempting to access their systems without their knowledge or consent, almost certainly violates state and federal anti-hacking laws – even when done in the name of cyber security. Penalties for these activities can be substantial and harsh. Never engage in them without the involvement and direction of law enforcement. ... Except for interactions with law enforcement, use of personally identifiable information should be strictly avoided. Only aggregated or de-identified information should be used, particularly in any published reports or statistics about the operation of the honeypot. ... The law regarding entrapment is complicated, but if someone creates a situation intended solely to snare a wrongdoer, there is a potential argument that this constitutes entrapment. In such a case, law enforcement may decline to act on information gained from the honeypot.


Exploit code published for dangerous Apache Solr remote code execution flaw

At the time it was reported, the Apache Solr team didn't see the issue as a big deal; developers thought an attacker could only access (useless) Solr monitoring data, and nothing else. Things turned out to be much worse when, on October 30, a user published proof-of-concept code on GitHub showing how an attacker could abuse the very same issue for "remote code execution" (RCE) attacks. The proof-of-concept code used the exposed port 8983 to enable support for Apache Velocity templates on the Solr server and then used this second feature to upload and run malicious code. A second, more refined proof of concept was published online two days later, making attacks even easier to execute. It was only after the publication of this code that the Solr team realized how dangerous the bug really was. On November 15, they issued an updated security advisory. In its updated alert, the Solr team recommended that Solr admins set the ENABLE_REMOTE_JMX_OPTS option in the solr.in.sh config file to "false" on every Solr node and then restart Solr.
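
The recommended mitigation amounts to a one-line configuration change. As a sketch, the relevant setting in solr.in.sh would look like the following on each node (exact file contents vary by install; restart Solr afterwards):

    # solr.in.sh – disable the remote JMX option the updated advisory warns about
    ENABLE_REMOTE_JMX_OPTS="false"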



Stateful Serverless: Long-Running Workflows with Durable Functions

There are a few reasons the workload doesn’t appear to be a good fit for Azure Functions at first glance. It runs relatively long (the example was just part of the game; an entire game may take hours or days). In addition, it requires state to keep track of the game in progress. Azure Functions by nature are stateless. They are designed to be quickly run self-contained transactions. Any concept of state must be managed using cache, storage, or database. If only the function could be suspended while waiting for asynchronous actions to complete and maintain its state when resumed. The Durable Task Framework is an open source library that was written to manage state and control flow for long-running workflows. Durable Functions build on the framework to provide the same support for serverless functions. In addition to facilitating potential cost savings for longer running workflows, it opens a new set of patterns and possibilities for serverless applications. To illustrate these patterns, I created the Durable Dungeon. This article is based on a presentation I first gave at NDC Oslo.
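
To make the "suspend and resume with state" idea concrete, here is a minimal orchestrator sketch. It uses the azure-functions-durable Python library rather than the C# the article's Durable Dungeon sample is written in, and the activity and event names are hypothetical:

    import azure.durable_functions as df
    from datetime import timedelta

    def orchestrator_function(context: df.DurableOrchestrationContext):
        # Set up the game; state lives in the orchestration's replayed history,
        # not in the function instance itself.
        yield context.call_activity("CreateDungeon", context.instance_id)

        # Wait, possibly for hours or days, for the player to act or time out.
        # The function is unloaded while it waits and resumed with its state intact.
        deadline = context.current_utc_datetime + timedelta(days=1)
        timeout_task = context.create_timer(deadline)
        player_move = context.wait_for_external_event("PlayerMove")
        winner = yield context.task_any([player_move, timeout_task])

        if winner == player_move:
            timeout_task.cancel()
            result = yield context.call_activity("ResolveMove", player_move.result)
        else:
            result = "Game abandoned"
        return result

    main = df.Orchestrator.create(orchestrator_function)

The orchestrator reads like ordinary sequential code, but each yield is a checkpoint: the framework persists progress to storage and replays it when the external "PlayerMove" event or the timer fires.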


The Edge of Test Automation: DevTestOps and DevSecOps

DevTestOps allows developers, testers, and operations engineers to work together in a shared environment. Apart from running test cases, DevTestOps also involves writing test scripts and performing automated, manual, and exploratory testing. In the past few years, DevOps and test automation strategies have received a lot of appreciation because they let teams develop and deliver products in the minimum time possible. But many organizations soon realized that without continuous testing, DevOps delivers incomplete software that might be full of bugs and issues. That is why DevTestOps was introduced. Now, DevTestOps is growing in popularity because it improves the relationship between the team members involved in the software development process. It not only helps deliver products faster but also produces higher-quality software. And when the software is released, the automated test cases are already in place for future releases.


Q&A with Tyler Treat on Microservice Observability

A common misstep I see is companies chasing tooling in hopes that it will solve all of their problems. "If we get just one more tool, things will get better." Similarly, seeking a "single pane of glass" is usually a fool’s errand. In reality, what the tools do is provide different lenses through which to view things. The composite of these is what matters, and there isn’t a single tool that solves all problems. But while tools are valuable, they aren’t the end of the story. As with most things, it starts with culture. You have to promote a culture of observability. If teams aren’t treating instrumentation as a first-class concern in their systems, no amount of tooling will help. Worse yet, if teams aren’t actually on-call for the systems they ship to production, there is no incentive for them to instrument at all. This leads to another common mistake, which is organizations simply renaming an Operations team to an Observability team. This is akin to renaming your Ops engineers to DevOps engineers thinking it will flip some switch. 
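
As a concrete (and entirely illustrative) example of what treating instrumentation as a first-class concern can look like, here is a small Python sketch using the prometheus_client library; the interview doesn't name any particular tool, and the metric names are hypothetical:

    import time
    from prometheus_client import Counter, Histogram, start_http_server

    # Hypothetical metrics for an order-handling service.
    REQUESTS = Counter("orders_requests_total", "Orders handled", ["status"])
    LATENCY = Histogram("orders_request_seconds", "Time spent handling an order")

    def handle_order(order):
        start = time.perf_counter()
        try:
            ...  # business logic goes here
            REQUESTS.labels(status="ok").inc()
        except Exception:
            REQUESTS.labels(status="error").inc()
            raise
        finally:
            LATENCY.observe(time.perf_counter() - start)

    if __name__ == "__main__":
        start_http_server(8000)  # expose /metrics for scraping
        while True:
            handle_order({"id": 1})
            time.sleep(1)

The point of the sketch is that the metrics live next to the business logic and ship with the service, rather than being bolted on later by a separate team.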


8 ways to prepare your data center for AI’s power draw

Existing data centers might be able to handle AI computational workloads, but in a reduced fashion, says Steve Conway, senior research vice president for Hyperion Research. Many, if not most, workloads can be run at half or quarter precision rather than 64-bit double precision. “For some problems, half precision is fine,” Conway says. “Run it at lower resolution, with less data. Or with less science in it.” Double-precision floating point calculations are primarily needed in scientific research, which is often done at the molecular level. Double precision is not typically used in AI training or inference on deep learning models because it is not needed. Even Nvidia advocates the use of single- and half-precision calculations in deep neural networks. AI will be a part of your business, but not all of it, and that should be reflected in your data center. “The new facilities that are being built are contemplating allocating some portion of their facilities to higher power usage,” says Doug Hollidge, a partner with Five 9s Digital, which builds and operates data centers. “You’re not going to put all of your facilities to higher density because there are other apps that have lower draw.”
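
The precision trade-off is easy to see in code. This small NumPy sketch (an illustration, not from the article) shows how the same 100-million-element array shrinks as precision drops; compute and power follow a similar pattern on hardware that supports the lower precisions:

    import numpy as np

    # Same tensor at double, single, and half precision.
    n = 100_000_000
    for dtype in (np.float64, np.float32, np.float16):
        x = np.ones(n, dtype=dtype)
        print(f"{np.dtype(dtype).name}: {x.nbytes / 1e9:.1f} GB")

    # Prints roughly: float64: 0.8 GB, float32: 0.4 GB, float16: 0.2 GB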


Kubernetes meets the real world

Kubernetes is enabling enterprises of all sizes to improve their developer velocity, nimbly deploy and scale applications, and modernize their technology stacks. For example, the online retailer Ocado, which has been delivering fresh groceries to UK households since 2000, has built its own technology platform to manage logistics and warehouses. In 2017, the company decided to start migrating its Docker containers to Kubernetes, taking its first application into production on its own private cloud that summer. The big benefits of this shift for Ocado and others have been much quicker time-to-market and more efficient use of computing resources. At the same time, Kubernetes adopters also tend to cite the same drawback: The learning curve is steep, and although the technology makes life easier for developers in the long run, it doesn’t make life less complex. Here are some examples of large global companies running Kubernetes in production, how they got there, and what they have learned along the way.


HP to Xerox: We don't need you, you're a mess


The HP Board of Directors has reviewed and considered your November 21 letter, which has provided no new information beyond your November 5 letter. We reiterate that we reject Xerox's proposal as it significantly undervalues HP. Additionally, it is highly conditional and uncertain. In particular, there continues to be uncertainty regarding Xerox's ability to raise the cash portion of the proposed consideration and concerns regarding the prudence of the resulting outsized debt burden on the value of the combined company's stock even if the financing were obtained. Consequently, your proposal does not constitute a basis for due diligence or negotiation. We believe it is important to emphasize that we are not dependent on a Xerox combination. We have great confidence in our strategy and the numerous opportunities available to HP to drive sustainable long-term value, including the deployment of our strong balance sheet for increased share repurchases of our significantly undervalued stock and for value-creating M&A.


A new era of cyber warfare: Russia’s Sandworm shows “we are all Ukraine” on the internet

This was “the kind of destructive act on the power grid we've never seen before, but we've always dreaded.” Even more concerning, “what happens in Ukraine we'll assume will happen to the rest of us too because Russia is using it as a test lab for cyberwar. That cyberwar will sooner or later spill out to the West,” Greenberg said. “When you make predictions like this, you don't really want them to come true.” Sandworm’s attacks did spill out to the West with its next big strike, the NotPetya malware, which swept across continents in June 2017, causing untold damage in Europe and the United States but most of all in Ukraine. NotPetya took down “300 Ukrainian companies and 22 banks, four hospitals that I'm aware of, multiple airports, pretty much every government agency. It was a kind of a carpet bombing of the Ukrainian internet, but it did immediately spread to the rest of the world fulfilling [my] prediction far more quickly than I would have ever wanted it to,” Greenberg said. The enormous financial costs of NotPetya are still unknown, but for companies that have put a price tag on the attack, the figures are staggering.


Lessons Learned in Performance Testing


As a reminder, throughput counts the number of operations completed per unit of time (a typical example is operations per second). Latency, also known as response time, is the time from the start of an operation's execution to receiving the answer. These two basic metrics of system performance are usually connected to each other. In a non-parallel system, latency is effectively the inverse of throughput and vice versa. This is very intuitive: if I do 10 operations per second, one operation takes (on average) 1/10 of a second. If I do more operations in one second, each operation has to take less time. However, this intuition can easily break in a parallel system. As an example, just consider adding another request-handling thread to the web server. You’re not shortening the time of a single operation, so latency stays (at best) the same; however, you double the throughput. From the example above, it’s clear that throughput and latency are essentially two different metrics of a system. Thus, we have to test them separately.
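
A minimal Python sketch (illustrative only, not from the article) makes the same point: adding workers raises throughput while per-operation latency stays roughly constant, with time.sleep standing in for a fixed-cost operation:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def operation(_):
        time.sleep(0.1)  # every operation costs ~100 ms, regardless of parallelism

    OPS = 40
    for workers in (1, 2, 4):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(operation, range(OPS)))
        elapsed = time.perf_counter() - start
        print(f"{workers} worker(s): latency ~100 ms/op, "
              f"throughput ~{OPS / elapsed:.0f} ops/s")

With one worker the run shows roughly 10 ops/s, with two roughly 20 ops/s, yet per-operation latency never drops below the 100 ms the operation itself costs — which is exactly why the two metrics have to be tested separately.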



Quote for the day:


"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis

