Daily Tech Digest - October 09, 2021

Looking ahead to the API economy

While established companies invest in new APIs to support digital transformation projects, early startups build on top of the latest technology stacks. This trend is turning the Internet into a growing fabric of interconnected technologies the likes of which we've never seen. As the number of new technologies peaks, the underlying fabric — otherwise known as the API economy — is fueling a wave of technology consolidation, with acquisitions at a historic high. There are two interesting consequences of this trend. The first is that all of this drives the need for better, faster, and easier-to-understand APIs. Many Integration-Platform-as-a-Service (iPaaS) vendors understand this quite well. Established iPaaS solutions, such as those from Microsoft, MuleSoft, and Oracle, are continually improved with new tools, while new entrants, like Zapier and Workato, continue to emerge. All invest in simplifying the integration experience on top of APIs, essentially speeding the time-to-integration. Some call these experiences "connectors" while others call them "templates."
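
A connector is essentially a reusable wrapper that hides the repetitive parts of an API integration. As a rough illustration only (the CRM endpoint, token, and field names below are hypothetical, not any particular vendor's API), a minimal connector in Python might look like this:

```python
# Hypothetical CRM connector: the endpoint, token, and field names are made up.
import requests


class CrmConnector:
    """Illustrative connector for a fictional CRM's REST API."""

    def __init__(self, base_url: str, api_token: str):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_token}"

    def list_contacts(self):
        """Yield every contact, following pagination so the caller never sees it."""
        url = f"{self.base_url}/contacts"
        while url:
            resp = self.session.get(url, timeout=10)
            resp.raise_for_status()
            payload = resp.json()
            yield from payload["items"]
            url = payload.get("next_page")  # None when there are no more pages


# Usage: integration code deals with contacts, not headers, auth, or paging.
# connector = CrmConnector("https://api.example-crm.com/v1", "TOKEN")
# for contact in connector.list_contacts():
#     print(contact["email"])
```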


Is Artificial Intelligence Taking over DevOps?

As a consequence of the utility of AI tools, they have been widely and rapidly adopted by all but the most stubborn DevOps teams. Indeed, for teams now running several different clouds (and that’s all teams, pretty much) AI interfaces have become almost a necessity as they evolve and scale their DevOps program. The most obvious and tangible outcome of this shift has been in the data and systems that developers spend their time looking at. It used to be that a major part of the role of the operations team, for instance, was to build and maintain a dashboard that all staff members could consult, and which contained all of the apposite data on a piece of software. Today, that central task has become largely obsolete. As software has grown more complex, the idea of a single dashboard containing all the relevant information on a particular piece of software has begun to sound absurd. Instead, most DevOps teams now make use of AI tools that “automatically” monitor the software they are working on, and only present data when it is clear that something has gone wrong.
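
At its simplest, "present data only when something has gone wrong" means comparing each incoming metric against a recent baseline and surfacing only the outliers. The sketch below is a deliberately crude stand-in for the models real AIOps tools use; the rolling z-score check, window size, and threshold are assumptions for illustration:

```python
from collections import deque
from statistics import mean, stdev


def detect_anomalies(samples, window=60, threshold=3.0):
    """Yield (index, value) for samples far outside the rolling baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= 10:  # wait for a minimal baseline before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value  # only this is surfaced to a human
        history.append(value)


# Steady latency with a single spike: only the spike is reported.
latencies = [100 + (i % 5) for i in range(200)]
latencies[150] = 900
print(list(detect_anomalies(latencies)))  # -> [(150, 900)]
```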


Five Functions That Benefit From Cybersecurity Automation

Defending against cybersecurity threats is very expensive, said Michael Rogers, operating partner at venture capital firm Team8 and former director of the U.S. National Security Agency. But the costs for attackers are low, he told Data Center Knowledge. "Prioritizing cybersecurity solutions that provide smart, cost-effective ways to reduce, mitigate or even prevent cyberattacks is key," he said. "Inevitably, as we move to an increasingly digital world, these options are game-changers in safeguarding our society and digital future." Some areas where cybersecurity automation is making a particular difference include incident response, data management, attack simulation, API and certificate management, and application security. ... "A lot of machine learning is being thrown at huge data sets," he said. "The analytics are getting better. And what do you do with that analysis? You want to do threat detection and response, you want to bring the environment back to a safer operating state. Now, these new tools are able to do a lot of this automatically."
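
To make one of those areas concrete, here is a hedged sketch of automated certificate management: it checks how many days remain on a host's TLS certificate and flags anything near expiry. The host names are placeholders, and real tooling would feed results into renewal workflows or ticketing rather than printing them:

```python
import socket
import ssl
from datetime import datetime, timezone


def days_until_expiry(host: str, port: int = 443) -> int:
    """Return the number of days before the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days


for host in ["example.com", "internal-api.example.net"]:  # placeholder inventory
    try:
        remaining = days_until_expiry(host)
        status = "RENEW SOON" if remaining < 30 else "ok"
        print(f"{host}: {remaining} days left ({status})")
    except (OSError, ssl.SSLError) as exc:
        print(f"{host}: check failed ({exc})")
```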


Minimizing Design Time Coupling in a Microservice Architecture

To deliver software rapidly, frequently, and reliably, you need what I call the success triangle. You need a combination of three things: process, organization, and architecture. The process, which is DevOps, embraces concepts like continuous delivery and deployment, and delivers a stream of small changes frequently to production. You must structure your organization as a network of autonomous, empowered, loosely coupled, long-lived product teams. You need an architecture that is loosely coupled and modular. Once again, loose coupling is playing a role. If you have a large team developing a large, complex application, you must typically use microservices. That's because the microservice architecture gives you the testability and deployability that you need in order to do DevOps, and it gives you the loose coupling that enables your teams to be loosely coupled. I've talked a lot about loose coupling, but what is that exactly? Coupling between services is their degree of connectedness, and operations that span multiple services are what create it.
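
As a toy illustration of that idea (not the speaker's own example): in a tightly coupled design, an Order service calls the Customer service's API directly and must know that API at design time; in a loosely coupled design it only publishes an event that any interested service can subscribe to. The in-process "broker" below is a stand-in for real messaging infrastructure:

```python
from collections import defaultdict

subscribers = defaultdict(list)  # event type -> list of handler callables


def publish(event_type: str, payload: dict):
    for handler in subscribers[event_type]:
        handler(payload)


def subscribe(event_type: str, handler):
    subscribers[event_type].append(handler)


# Customer service: reacts to the event; it is never called directly by Order.
def on_order_created(payload):
    print(f"customer-service: reserving credit for customer {payload['customer_id']}")


subscribe("OrderCreated", on_order_created)


# Order service: knows only the event contract, not the Customer service's API.
def create_order(order_id: str, customer_id: str):
    print(f"order-service: created order {order_id}")
    publish("OrderCreated", {"order_id": order_id, "customer_id": customer_id})


create_order("o-42", "c-7")
```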


D-Wave took its own path in quantum computing. Now it’s joining the crowd.

Although D-Wave was the first company to build a working quantum computer, it has struggled to gain commercial traction. Some researchers, most notably computer scientist Scott Aaronson at the University of Texas at Austin, faulted the company for over-hyping what its machines were capable of. (For a long time, Aaronson cast doubt on whether D-Wave's annealer was harnessing any quantum effects at all in making its calculations, although he later conceded that the company's machine was a quantum device.) In the past few years, the company has also had trouble exciting investors: in March, it secured a $40 million grant from the Canadian government. But that came after The Globe & Mail newspaper reported that a financing round in 2020 had valued the company at just $170 million, less than half of its previous $450 million valuation. The company's decision to add gate-model quantum computers to its lineup may be an acknowledgment that commercial momentum seems to be far greater for those machines than for the annealers that D-Wave has specialized in.


What SREs Can Learn From Facebook’s Largest Outage

Facebook was clearly prepared to respond to this incident quickly and efficiently. If it wasn’t, it would no doubt have taken days to restore service following a failure of this magnitude rather than just hours. Nonetheless, Facebook has reported that troubleshooting and resolving the network connectivity issues between data centers proved challenging for three main reasons. First and most obviously, engineers struggled to connect to data centers remotely without a working network. That’s not surprising: as an SRE, you’re likely to run into an issue like this sooner or later. Ideally, you’ll have some kind of secondary remote-access solution, but that’s hard to implement within the context of infrastructure like this. The second challenge is more interesting. Because Facebook’s data centers “are designed with high levels of physical and system security in mind,” according to the company, it proved especially difficult for engineers to restore networking even after they went on-site at the data centers.


Becoming a new chief information security officer today: The steps for success

As a new CISO, you should evaluate existing policies, including cyber insurance, representation from legal teams, and connections with incident response (IR) -- and also who is handling the firm's PR. Insurance providers may list recommended or approved IR and legal responders, so CISOs need to make sure the organization's teams are either on the permissible list or added to it. What is included in cyber insurance policies should also be explored. For example, does the policy cover ransomware infections or data theft and extortion, and if so, what is the limit of potential claims? You should also find out whether you are covered for liability should you become part of a lawsuit due to a cybersecurity incident -- and whether or not the same applies to your team. ... Asking the right questions at leadership meetings will give new security officers a fighting chance to perform well in their roles: what cybersecurity budget is available, is it separate from or part of the general IT budget, and has it increased year-over-year?


How Do You Choose the Best Test Cases to Automate?

While automation frees up the tester's time, organizations and individuals often overlook a crucial aspect of testing - the cost and time required to maintain the automated tests. If there are significant changes to the backend of your application, writing and rewriting the code for automated tests can be just as cumbersome as manual testing. One interesting way to tackle this is for test engineers to automate just enough to understand which part of the program is failing. You can do this by automating the broader application tests so that if something does break, you know exactly where to look. Smart test execution, one of the top trends in the test automation space, does exactly this by identifying the specific tests that need to be executed. ... How complex is the test suite you're trying to automate? If the test results need to be rechecked with a human eye or the tests require actual user interaction, automating them probably won't help a lot. For example, user experience tests are best left unautomated because testing software can never mimic human emotion when using a product.
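
A bare-bones sketch of the smart-test-execution idea, assuming a hand-written mapping from source modules to test files (commercial tools infer this from coverage data and history; the file names below are made up):

```python
import subprocess

# Illustrative mapping from changed source files to the tests that cover them.
TEST_MAP = {
    "app/billing.py": ["tests/test_billing.py"],
    "app/auth.py": ["tests/test_auth.py", "tests/test_sessions.py"],
}


def changed_files(base: str = "main") -> list[str]:
    """List files that differ from the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def select_tests(changes: list[str]) -> list[str]:
    """Pick the tests covering the changed files; fall back to the full suite."""
    if any(path not in TEST_MAP for path in changes):
        return ["tests/"]  # unknown change: run everything rather than skip tests
    return sorted({test for path in changes for test in TEST_MAP.get(path, [])})


if __name__ == "__main__":
    tests = select_tests(changed_files())
    if tests:
        subprocess.run(["pytest", *tests], check=False)
```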


The Cyber Insurance Market in Flux

Early cyber insurance policies only required filling out surveys on existing protocols. Now, insurers are moving toward active verification. “We need to be able to have a little more substantive evidence that you've done what you're saying you're going to do,” says Soo. “This dynamic is causing a much-needed maturation in how the insurance industry is thinking about cybersecurity risks,” McNerny argues. “They are now thinking a lot harder about the kinds of controls they’d like to see in place.” Multi-factor authentication is among the primary cyber hygiene practices emerging as an industry standard. Reduction of attack surface, protection of credentials, and network segmentation will likely become necessary to secure coverage as well. And not all of these factors will be the responsibility of a given organization's cybersecurity team. According to McNerny, implementation will require a cultural shift. All employees need to be educated on how to prevent these attacks. “We often think in terms of technology,” he says.
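
For a sense of what the MFA control looks like in practice, here is a minimal sketch of a time-based one-time password (TOTP) flow using the third-party pyotp library; the account name, issuer, and secret handling are illustrative only:

```python
import pyotp

# Enrollment: generate a per-user secret and share it via a QR code / authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                 issuer_name="ExampleCorp"))

# Login: the password alone is not enough; the current six-digit code must also match.
submitted_code = totp.now()  # in practice this comes from the user's device
print("second factor accepted:", totp.verify(submitted_code))
```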


Researchers discover ransomware that encrypts virtual machines hosted on an ESXi hypervisor

The investigation revealed that the attack began at 12:30 a.m. on a Sunday, when the ransomware operators broke into a TeamViewer account running on a computer that belonged to a user who also had domain administrator credentials. According to the investigators, 10 minutes later the attackers used the Advanced IP Scanner tool to look for targets on the network. The investigators believe the ESXi server on the network was vulnerable because it had an active ESXi Shell, a command-line interface that IT teams use for commands and updates. This allowed the attackers to install a secure network communications tool called Bitvise on the machine belonging to the domain administrator, which gave them remote access to the ESXi system, including the virtual disk files used by the virtual machines. At around 3:40 a.m., the attackers deployed the ransomware and encrypted these virtual hard drives hosted on the ESXi server. “Administrators who operate ESXi or other hypervisors on their networks should follow security best practices. ...” said Brandt.
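
One hardening step that follows from this account is auditing hosts for an ESXi Shell (or SSH) service left running. A sketch using the third-party pyvmomi SDK is below; the vCenter address and credentials are placeholders, and stopping the service automatically is left as a commented-out step:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False        # lab convenience only;
ctx.verify_mode = ssl.CERT_NONE   # verify certificates in production

si = SmartConnect(host="vcenter.example.local", user="audit@vsphere.local",
                  pwd="CHANGE_ME", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # "TSM" is the ESXi Shell service; "TSM-SSH" is SSH.
        for svc in host.configManager.serviceSystem.serviceInfo.service:
            if svc.key in ("TSM", "TSM-SSH") and svc.running:
                print(f"{host.name}: {svc.label} is running -- consider stopping it")
                # host.configManager.serviceSystem.StopService(id=svc.key)
finally:
    Disconnect(si)
```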



Quote for the day:

"It is our choices that show what we truly are, far more than our abilities." - J.K. Rowling
