Daily Tech Digest - February 24, 2024

Business Continuity vs Disaster Recovery: 10 Key Differences

A key part of the BCP is identifying Recovery Strategies. These strategies outline how the business will continue critical operations after an incident. These strategies might involve alternative methods or locations for conducting business. The BCP also outlines the Incident Management Plan. It sets the roles, duties, and steps for managing an incident. This includes plans to talk to stakeholders and emergency services. The Development of Recovery Plans for key business areas such as IT systems, data, and customer service is also integral. These plans provide specific instructions for returning to normal operations after the disruption. ... A disaster recovery plan is intended to reduce data loss and downtime while facilitating the quick restoration of vital business operations following an unfavorable incident. The plan comprises actions to lessen the impact of a calamity so that the company may swiftly resume mission-critical operations or carry on with business as usual. A DRP typically includes an investigation of the demands for continuity and business processes. An organization often conducts a risk analysis (RA) and business impact analysis (BIA) to set recovery targets before creating a comprehensive strategy.


Test Outlines: A Novel Approach to Software Testing

Test Outlines reimagine the traditional test case by introducing a narrative with the cohesiveness and context of test scenarios. Combining the two methodologies lays the foundation for a testing approach that improves on its predecessors. Rather than treating each step of a test case in isolation, the narrative structure of Test Outlines draws those steps into a coherent storyline of a user's journey through the software. This narrative lens not only simplifies the overall testing documentation but also reflects, holistically, how end users will interact with the software in real settings. That depth broadens understanding of the testing process, moving it from a simple step checklist to a dynamic heuristic centered on the user experience. A narrative approach also shifts attention from isolated functionality toward the interrelationships between features, building the capability to identify critical dependencies, potential integration issues, and overall system behavior across the user's journey.
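To make the contrast concrete, here is a minimal sketch of a Test Outline written as one user journey rather than isolated steps. The `Cart` class and its methods are hypothetical stand-ins for the application under test, not part of any real framework.

```python
# A minimal sketch of a Test Outline: the assertions follow one user
# journey (browse -> add to cart -> checkout) instead of checking
# isolated functions in separate test cases.

class Cart:
    """Hypothetical application code under test."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

    def checkout(self):
        if not self.items:
            raise ValueError("cannot check out an empty cart")
        return {"status": "confirmed", "charged": self.total()}


def test_outline_first_purchase():
    """Narrative: a new user buys two items in a single session."""
    cart = Cart()                  # the user arrives with an empty cart
    cart.add("notebook", 4.50)     # ...adds a first item while browsing
    cart.add("pen", 1.25)          # ...then a second one
    assert cart.total() == 5.75    # the running total reflects both items
    order = cart.checkout()        # the user completes the purchase
    assert order["status"] == "confirmed"
    assert order["charged"] == 5.75


test_outline_first_purchase()
```

Because each assertion advances the same storyline, a failure pinpoints where the journey breaks, not just which function misbehaved.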


Alarm Over GenAI Risk Fuels Security Spending in Middle East & Africa

Concerns over the business impact of generative AI are certainly not limited to the Middle East and Africa. Microsoft and OpenAI warned last week that the two companies had detected nation-state attackers from China, Iran, North Korea, and Russia using the companies' GenAI services to improve attacks by automating reconnaissance, answering queries about targeted systems, and improving the messages and lures used in social engineering attacks, among other tactics. And in the workplace, three-quarters of cybersecurity and IT professionals believe that GenAI is being used by workers, with or without authorization. The obvious security risks are not dampening enthusiasm for GenAI and LLMs. Nearly a third of organizations worldwide already have a pilot program in place to explore the use of GenAI in their business, with 22% already using the tools and 17% implementing them. "With a bit of upfront technical effort, this risk can be minimized by thinking through specific use cases for enabling access to generative AI applications while looking at the risk based on where data flows," Teresa Tung, cloud-first chief technologist at Accenture, stated in a 2023 analysis of the top generative AI threats.


What’s the difference between a software engineer and software developer?

One way to think of the main difference between software engineers and developers is the scope of their work. Software engineers tend to focus more on the larger picture of a project—working more closely with the infrastructure, security, and quality. Software developers, on the other hand, are more laser-focused on a specific coding task. In other words, software developers focus on ensuring software functionality whereas engineers ensure the software aligns with customer requirements, says Rostami. “One way to think about it: If you double your software developer team, you’ll double your code. But if you double your software engineering team, you’ll double the customer impact,” she tells Fortune. But it is also important to note that because the two titles are so often used interchangeably, the exact differences between a software engineer and a software developer role may differ slightly from company to company. Engineers may also have a greater grasp of broader computer system ecosystems as well as stronger soft skills. ... When it comes to total pay, engineers bring home nearly $30,000 more on average, which could, in part, be due to project completion bonuses or other circumstances.


Simplified Data Management and Analytics Strategies for AI Environments

Leveraging automation tools such as Apache Airflow or Microsoft Power Automate offers significant advantages in streamlining and optimizing the entire data management lifecycle. These tools can play a crucial role in automating not only data collection, storage, and analysis but also in orchestrating complex workflows and data pipelines, thereby reducing manual intervention and accelerating data processing. For instance, these automation tools can be harnessed to schedule and automate the extraction of data from diverse sources, such as databases, APIs, and cloud services. By automating these processes, organizations can ensure timely and efficient data collection without the need for manual intervention, reducing the risk of human errors and enhancing the overall reliability of the data. Moreover, once the data is extracted, these automation tools can seamlessly transform the data into standardized formats, ensuring consistency and compatibility across different data sources. This standardized process not only simplifies the integration of heterogeneous data but also paves the way for efficient data analysis and reporting.
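The extract-and-standardize step described above can be sketched in plain Python. This is a library-free illustration, not an Apache Airflow or Power Automate workflow; the two "sources" are hypothetical stand-ins for a database and an API that return records in different shapes, and all field names are invented for the example.

```python
# Minimal sketch: pull records from heterogeneous sources, then map
# them onto one common schema so downstream analysis sees consistent
# data. A real pipeline would wrap these steps in a scheduler such as
# Apache Airflow rather than calling them directly.

def extract_from_db():
    # Stand-in for rows from a relational database (amounts in cents).
    return [{"user_id": 1, "amount_cents": 1250}]

def extract_from_api():
    # Stand-in for JSON from a REST API, with different field names
    # and units (amounts in dollars).
    return [{"uid": "2", "amount_usd": 7.0}]

def standardize(record):
    """Map a record from either source onto the common schema."""
    if "user_id" in record:
        return {"user": str(record["user_id"]),
                "amount": record["amount_cents"] / 100}
    return {"user": record["uid"], "amount": float(record["amount_usd"])}

def run_pipeline():
    raw = extract_from_db() + extract_from_api()
    return [standardize(r) for r in raw]

print(run_pipeline())
# [{'user': '1', 'amount': 12.5}, {'user': '2', 'amount': 7.0}]
```

The payoff of the standardization step is visible in the output: both records share the same keys and units, so a single analysis query can serve every source.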


Low-code doesn’t mean low quality

Granted, no-code platforms make it easy to get the stack up and running to support back-office workflows, but what about supporting those outside the workflow? Does low-code offer the functionality and flexibility to support applications that fall outside the box? The truth is that low-code programming architectures are gaining popularity precisely because of their versatility. Rather than compromising on quality programming, low-code frees developers to make applications more creative and more productive. ... Modern low-code platforms include customization, configuration, and extensibility options. Every drag-and-drop widget is pretested to deliver flawless functionality and make it easier to build applications faster. However, those widgets also have multiple options to handle business logic in different ways at various events. Low-code widgets allow developers to focus on integration and functional testing rather than component testing. ... The productivity gains low-code gives developers come primarily from the ability to reuse abstractions at the component or module level; the ability to reuse code reduces the time needed to develop customized solutions. 


ConnectWise ScreenConnect attacks deliver malware

The vulnerabilities involve authentication bypass and path traversal issues within the server software itself, not the client software installed on end-user devices. Attackers have found that they can deploy malware to servers or to workstations with the client software installed, and Sophos has evidence that attacks against both servers and client machines are currently underway. Patching the server will not remove any malware or webshells attackers managed to deploy prior to patching, so any compromised environments need to be investigated. Cloud-hosted implementations of ScreenConnect, including screenconnect.com and hostedrmm.com, received mitigations within hours of validation to address these vulnerabilities. Self-hosted (on-premises) instances remain at risk until they are manually upgraded, and it is our recommendation to patch to ScreenConnect version 23.9.8 immediately. ...  If you are no longer under maintenance, ConnectWise is allowing you to install version 22.4 at no additional cost, which will fix CVE-2024-1709, the critical vulnerability. However, this should be treated as an interim step. 
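For triage, a simple version comparison against the patched 23.9.8 release can flag self-hosted installs that still need the upgrade. This is a hedged sketch: the inventory dictionary is hypothetical, and real version data would come from your asset management or RMM tooling.

```python
# Sketch: flag self-hosted ScreenConnect installs below the patched
# 23.9.8 release. Versions are compared as integer tuples so that,
# e.g., 23.9.10 would correctly sort above 23.9.8.

PATCHED = (23, 9, 8)

def parse_version(version):
    return tuple(int(part) for part in version.split("."))

def needs_patch(version):
    return parse_version(version) < PATCHED

# Hypothetical inventory of self-hosted servers and their versions.
inventory = {
    "rmm-server-01": "23.9.7",
    "rmm-server-02": "23.9.8",
}

for host, version in inventory.items():
    if needs_patch(version):
        print(f"{host}: version {version} is vulnerable -- upgrade now")
```

Note that the interim 22.4 release mentioned above would still be flagged by this check, consistent with ConnectWise's guidance to treat it only as a stopgap.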


Microservices Modernization Missteps: Four Anti-Patterns of Rebuilding Apps

A common misstep when rearchitecting legacy services as microservices is to make a functional, one-to-one replica of the legacy services. You simply look at what the existing services do, and you make sure the new bundle of microservices does that. The problem here is that your business has likely evolved its operations since the legacy services were built. That means you likely don't need all the same functionality in the legacy services. And if you do need that functionality, you might need to do it differently, which is exactly the reason you are modernizing in the first place: The legacy services are no longer helping the business function as desired. Often, organizations will treat modernization as purely technical work and exclude business stakeholders from the process. This means developers won't have enough input from business stakeholders when picking which parts of the legacy services to replicate, which to drop, and which to improve. In this situation, developers will just replicate the legacy services. When business stakeholders and users are not involved in microservice identification, you risk misalignment on new requirements and introducing potential new problems or rework in the future.


Entering the Age of Explainable AI

Having access to good, clean data is always a crucial first step for businesses thinking about AI transformation because it ensures the accuracy of the predictions made by AI models. If the data being fed into the models is flawed or contains errors, the output will also be unreliable and is subject to bias. Investing in a self-service data analytics platform that includes sophisticated data cleansing and prep tools, along with data governance, provides business users with the trust and confidence they need to move forward with their AI initiatives. These tools also help with accountability and -- consequently -- data quality. When a code-based model is created, it can take time to track who made changes and why, leading to problems later when someone else needs to take over the project or when there is a bug in the code. ... Equally important to the technology is ensuring that data analytics methodologies are both accessible and scalable, which can be accomplished through training. Data scientists are hard to come by and you need people who understand the business problems, whether or not they can code. No-code/low-code data analytics platforms make it possible for people with limited programming experience to build and deploy data science models. 


End-To-End Test Automation for Boosting Software Effectiveness

To check the entire application flow, QA automation engineers must implement robust automated scripts based on test cases that follow real-life user scenarios. It’s vital to make sure the scripts are maintainable and can be easily understood by every team member. It’s also important to pay special attention to tests that verify the UI in order to prevent flakiness, i.e., tests that pass or fail inconsistently when run under the same conditions and without any code changes. This may happen because of the complicated nature of the tests or some outer conditions, such as problems with the network. ... To expedite software testing activities and obtain valuable feedback faster, it's good practice to run several automated scripts at the same time on diverse equipment or environments. While doing so, companies can use either cloud infrastructure, such as virtual machines, or on-premises infrastructure, depending on the client’s technical ecosystem. In addition, in the case of the former option, QA automation engineers can ramp up cloud infrastructure to support important releases, which allows more tests to run at the same time and avoids long-term investment in local infrastructure.
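The parallel execution idea above can be sketched with Python's standard library. Here `ThreadPoolExecutor` stands in for distributing suites across machines or cloud workers, and the check functions are hypothetical placeholders for real end-to-end scripts.

```python
# Minimal sketch of running several automated checks concurrently.
# In practice each "suite" would drive a browser or API session on a
# separate VM or container; here each just returns its result tuple.

from concurrent.futures import ThreadPoolExecutor

def check_login():
    return ("login", "passed")

def check_checkout():
    return ("checkout", "passed")

def check_search():
    return ("search", "passed")

suites = [check_login, check_checkout, check_search]

# map() preserves suite order in the results even though the suites
# run concurrently, which keeps reports deterministic.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda suite: suite(), suites))

print(results)
# [('login', 'passed'), ('checkout', 'passed'), ('search', 'passed')]
```

Scaling `max_workers` (or the number of cloud workers behind it) is the knob the article alludes to when it mentions ramping up infrastructure for important releases.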



Quote for the day:

"Effective Leaders know that resources are never the problem; it's always a matter of resourcefulness." -- Tony Robbins
