Looking ahead to the API economy

Is Artificial Intelligence Taking over DevOps?
Because of their utility, AI tools have been widely and rapidly adopted by all but the most stubborn DevOps teams. Indeed, for teams now running several different clouds (and that’s pretty much all teams), AI interfaces have become almost a necessity as they evolve and scale their DevOps programs. The most obvious and tangible outcome of this shift has been in the data and systems that developers spend their time looking at. A major part of the operations team’s role, for instance, used to be building and maintaining a dashboard that all staff members could consult and that contained all of the pertinent data on a piece of software. Today, that central task has become largely obsolete. As software has grown more complex, the idea of a single dashboard containing all the relevant information on a particular piece of software has begun to sound absurd. Instead, most DevOps teams now make use of AI tools that “automatically” monitor the software they are working on and only present data when it is clear that something has gone wrong.
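To make the idea concrete, here is a minimal sketch in Python of that “stay quiet until something looks wrong” style of monitoring. It uses a simple rolling z-score check; the window size, threshold, and latency numbers are illustrative assumptions, not any particular vendor’s defaults.

```python
from collections import deque
import statistics

class AnomalyMonitor:
    """Stay quiet until a metric drifts too far from its recent history."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent samples only
        self.threshold = threshold           # z-score cutoff

    def observe(self, value: float) -> bool:
        """Return True (surface the data point) only when it looks anomalous."""
        if len(self.history) >= 2:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            if abs(value - mean) / stdev > self.threshold:
                self.history.append(value)
                return True  # worth a human's attention
        self.history.append(value)
        return False  # normal; no dashboard required

monitor = AnomalyMonitor(window=30, threshold=3.0)
for latency_ms in [102, 98, 105, 99, 101, 103, 97, 100, 480]:
    if monitor.observe(latency_ms):
        print(f"alert: latency {latency_ms} ms deviates from the recent baseline")
```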
Five Functions That Benefit From Cybersecurity Automation

Minimizing Design Time Coupling in a Microservice Architecture
To deliver software rapidly, frequently, and reliably, you need what I call the success triangle. You need a combination of three things: process, organization, and architecture. The process, which is DevOps, embraces concepts like continuous delivery and deployment, and delivers a stream of small changes frequently to production. You must structure your organization as a network of autonomous, empowered, loosely coupled, long-lived product teams. And you need an architecture that is loosely coupled and modular. Once again, loose coupling is playing a role. If you have a large team developing a large, complex application, you typically must use microservices. That's because the microservice architecture gives you the testability and deployability that you need in order to do DevOps, and it gives you the loose coupling that enables your teams to be loosely coupled. I've talked a lot about loose coupling, but what is that exactly? Coupling between services is the degree of connectedness between them; operations that span multiple services create coupling.
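As a small illustration of that definition, here is a hypothetical Python sketch of one common way to keep design-time coupling low: two services share only a small, stable event contract instead of reaching into each other’s internals. The in-process broker, the OrderPlaced event, and the service functions are all illustrative stand-ins, not code from the article.

```python
from dataclasses import dataclass
from typing import Callable

# A tiny in-process stand-in for a message broker; in a real system this
# role is played by Kafka, RabbitMQ, or similar.
_subscribers: dict[str, list[Callable]] = {}

def subscribe(event_type: str, handler: Callable) -> None:
    _subscribers.setdefault(event_type, []).append(handler)

def publish(event_type: str, event) -> None:
    for handler in _subscribers.get(event_type, []):
        handler(event)

# The event is the contract: a small, stable message both services agree on.
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    address: str

# Order service: knows nothing about the shipping service's classes,
# tables, or deployment schedule.
def place_order(order_id: str, address: str) -> None:
    # ... persist the order in the order service's own database ...
    publish("OrderPlaced", OrderPlaced(order_id, address))

# Shipping service: depends only on the OrderPlaced contract.
def on_order_placed(event: OrderPlaced) -> None:
    print(f"scheduling shipment for order {event.order_id} to {event.address}")

subscribe("OrderPlaced", on_order_placed)
place_order("o-42", "221B Baker St")
```

Because the shipping side depends only on the OrderPlaced contract, the order service can change its internals or deployment schedule without forcing a coordinated change on the shipping team.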
D-Wave took its own path in quantum computing. Now it’s joining the crowd.

What SREs Can Learn From Facebook’s Largest Outage
Facebook was clearly prepared to respond to this incident quickly and efficiently. Had it not been, it would no doubt have taken days, rather than just hours, to restore service after a failure of this magnitude. Nonetheless, Facebook has reported that troubleshooting and resolving the network connectivity issues between data centers proved challenging for three main reasons. First and most obviously, engineers struggled to connect to the data centers remotely without a working network. That’s not surprising: as an SRE, you’re likely to run into an issue like this sooner or later. Ideally, you’ll have some kind of secondary remote-access solution, but that’s hard to implement for infrastructure on this scale. The second challenge is more interesting. Because Facebook’s data centers “are designed with high levels of physical and system security in mind,” according to the company, it proved especially difficult for engineers to restore networking even after they went on-site at the data centers.
Becoming a new chief information security officer today: The steps for success

How Do You Choose the Best Test Cases to Automate?
While automation frees up the tester’s time, organizations and individuals often overlook a crucial aspect of testing: the cost and time required to maintain the automated tests. If there are significant changes to the backend of your application, writing and rewriting the code for automated tests is often just as cumbersome as manual testing. One interesting way to tackle this is for test engineers to automate just enough to understand which part of the program is failing. You can do this by automating the broader application tests so that if something does break, you know exactly where to look. Smart test execution, one of the top trends in the test automation space, does exactly this by identifying the specific tests that need to be executed. ... How complex is the test suite you’re trying to automate? If the test results need to be rechecked by a human eye or require actual user interaction, automating them probably won’t help a lot. For example, user experience tests are best left unautomated, because testing software can never mimic human emotion while using a product.
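Here is a minimal Python sketch of the smart-test-execution idea. The module-to-test mapping is a hand-written placeholder (real tools derive it from coverage data or dependency analysis), and the file paths are hypothetical.

```python
# Run only the tests that a code change could plausibly affect.
import subprocess

# Hypothetical mapping from source modules to the tests that exercise them.
TEST_MAP = {
    "app/billing.py":  ["tests/test_billing.py"],
    "app/checkout.py": ["tests/test_checkout.py", "tests/test_billing.py"],
    "app/search.py":   ["tests/test_search.py"],
}

def changed_files() -> list[str]:
    """Ask git which tracked files differ from the main branch."""
    result = subprocess.run(["git", "diff", "--name-only", "main"],
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

def tests_to_run() -> list[str]:
    selected: list[str] = []
    for path in changed_files():
        for test in TEST_MAP.get(path, []):
            if test not in selected:
                selected.append(test)
    return selected

if __name__ == "__main__":
    tests = tests_to_run()
    if tests:
        subprocess.run(["pytest", *tests])
    else:
        print("No mapped tests are affected by this change.")
```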
The Cyber Insurance Market in Flux

Researchers discover ransomware that encrypts virtual machines hosted on an ESXi hypervisor
The investigation revealed that the attack began at 12:30 a.m. on a Sunday, when the ransomware operators broke into a TeamViewer account running on a computer belonging to a user who also had domain administrator credentials. According to the investigators, 10 minutes later the attackers used the Advanced IP Scanner tool to look for targets on the network. The investigators believe the ESXi server on the network was vulnerable because it had an active ESXi Shell, a command-line interface that IT teams use to issue commands and apply updates. This allowed the attackers to install a secure network communications tool called Bitvise on the machine belonging to the domain administrator, which gave them remote access to the ESXi system, including the virtual disk files used by the virtual machines. At around 3:40 a.m., the attackers deployed the ransomware and encrypted these virtual hard drives hosted on the ESXi server. “Administrators who operate ESXi or other hypervisors on their networks should follow security best practices. ...” said Brandt.
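As one concrete example of such a best practice, here is a hedged Python sketch that audits whether the ESXi Shell (service key “TSM”) or SSH (“TSM-SSH”) service has been left running on hosts managed by a vCenter server, using the pyVmomi library. The vCenter address and credentials are placeholders, and this is an illustrative audit, not guidance quoted from the investigators.

```python
# Illustrative audit: flag ESXi hosts where the ESXi Shell ("TSM") or
# SSH ("TSM-SSH") service is running. Host, user, and password below
# are placeholders you must supply.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="audit@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk every HostSystem under the root folder.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for svc in host.configManager.serviceSystem.serviceInfo.service:
            if svc.key in ("TSM", "TSM-SSH") and svc.running:
                print(f"{host.name}: {svc.label} is running; "
                      "disable it unless it is actively needed")
                # To remediate, something like:
                # host.configManager.serviceSystem.StopService(id=svc.key)
finally:
    Disconnect(si)
```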
"It is our choices that show what we truly are, far more than our abilities." - J.K. Rowling