Daily Tech Digest - September 06, 2023

Open Source Needs Maintainers. But How Can They Get Paid?

The data show that not only are open source maintainers usually unaware of current security tools and standards, like software bills of materials (SBOMs) and supply-chain levels for software artifacts (SLSA), but they are largely unpaid and, to a frightening degree, on their own. A study released in May by Tidelift found that 60% of open source maintainers would describe themselves as “unpaid hobbyists.” And 44% of all maintainers said they are the only person maintaining a project. “Even more concerning than the sole maintainer projects are the zero maintainer projects, of which there are a considerable number as well that are widely used,” Donald Fischer, CEO and co-founder of Tidelift, told The New Stack. “So many organizations are just unaware because they don’t even have telemetry, they have no data or visibility into that.” ... An even bigger threat to continuity in open source project maintenance is the “boss factor,” according to Fischer. The boss factor, he said, emerges when “somebody gets a new job, and so they don’t have as much time to devote to their open source projects anymore, and they kind of let them fall by the wayside.”

Your data is critical – do you have the right strategy in place for resilience?

Recovering multi-master databases requires specialist skills and understanding to prevent problems around concurrency. In effect, this means having one agreed list of transactions rather than multiple conflicting lists that might contradict each other. Similarly, you have to ensure that any recovery brings back the right data, rather than any corrupted records. Planning this process ahead of time makes it much easier, but it also requires skills and experience to ensure that DR processes will work effectively. Alongside this, any DR plan will have to be tested to prove that it will work, and work consistently when it is most needed. Any plan around data has to take three areas into account – availability, restoration and cost. Availability planning covers how much work the organisation is willing to do to keep services up and running, while restoration covers how quickly systems and how much data must be recovered in the event of a disaster. Lastly, cost covers the amount of budget available to cover these two areas, and how much has to be spent in order to meet those requirements.
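The three planning areas above can be sketched as a simple feasibility check. This is a minimal illustration, not any product's model: the tier names, RTO/RPO figures and costs below are hypothetical examples.

```python
# Weighing the three DR planning areas the article names:
# availability/restoration targets (RTO/RPO) against available budget.
# All tier names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class DrPlan:
    name: str
    rto_hours: float      # restoration: maximum tolerable downtime
    rpo_hours: float      # restoration: maximum tolerable data loss
    annual_cost: float    # cost: budget consumed by this tier

def meets_targets(plan: DrPlan, target_rto: float, target_rpo: float,
                  budget: float) -> bool:
    """A plan is viable only if it restores fast enough, loses little
    enough data, and fits the available budget."""
    return (plan.rto_hours <= target_rto
            and plan.rpo_hours <= target_rpo
            and plan.annual_cost <= budget)

plans = [
    DrPlan("nightly backups",     rto_hours=24,  rpo_hours=24, annual_cost=10_000),
    DrPlan("async replication",   rto_hours=4,   rpo_hours=1,  annual_cost=60_000),
    DrPlan("multi-master active", rto_hours=0.5, rpo_hours=0,  annual_cost=250_000),
]

viable = [p.name for p in plans
          if meets_targets(p, target_rto=6, target_rpo=2, budget=100_000)]
print(viable)  # only the tiers that satisfy all three constraints
```

The point of writing it down this way is that tightening any one constraint (a stricter RPO, a smaller budget) immediately shows which recovery tiers fall out of scope.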

7 tough IT security discussions every IT leader must have

Cybercriminals never sleep; they’re always conniving and corrupting. “When it comes to IT security strategy, a very direct conversation must be held about the new nature of cyber threats,” suggests Griffin Ashkin, a senior manager at business management advisory firm MorganFranklin Consulting. Recent experience has demonstrated that cybercriminals are now moving beyond ransomware and into cyberextortion, Ashkin warns. “They’re threatening the release of personally identifiable information (PII) of organization employees to the outside world, putting employees at significant risk for identity theft.” ... The meetings and conversations should lead to the development or update of an incident response plan, he suggests. The discussions should also review mission-critical assets and priorities, assess an attack’s likely impact, and identify the most probable attack threats. By changing the enterprise’s risk management approach from matrix-based measurement (high, medium, or low) to quantitative risk reduction, you base decisions on actual potential impact, modeled with as many variables as needed, Folk says.
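One common way to make the matrix-to-quantitative shift concrete is annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence), a standard risk formula. The sketch below is illustrative only; the scenario names and dollar figures are hypothetical, not data from the article.

```python
# Illustrative: ranking threat scenarios by annualized loss expectancy
# (ALE = SLE * ARO) instead of a high/medium/low matrix label.
# All scenario names and figures below are hypothetical.

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """Expected yearly loss for one threat scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

scenarios = [
    # (scenario, SLE in dollars, expected occurrences per year)
    ("ransomware on file servers", 500_000, 0.2),
    ("PII leak via phishing",      250_000, 0.5),
    ("laptop theft",                10_000, 2.0),
]

# Rank by expected annual loss; budget discussions can now compare
# the cost of a control directly against the loss it reduces.
ranked = sorted(
    ((name, annualized_loss_expectancy(sle, aro))
     for name, sle, aro in scenarios),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, ale in ranked:
    print(f"{name}: ${ale:,.0f}/year")
```

Unlike a matrix cell, an ALE figure can absorb as many variables as the model needs (per-record cost, detection probability, downtime) and still produce a number a CFO can weigh against a control's price.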

Emerging threat: AI-powered social engineering

As malicious actors gain the upper hand, we could potentially find ourselves stepping into a new era of espionage, where the most resourceful and innovative threat actors thrive. The introduction of AI brings about a new level of creativity in various fields, including criminal activities. The crucial question remains: How far will malicious actors push the boundaries? We must not overlook the fact that cybercrime is a highly profitable industry with billions at stake. Certain criminal organizations operate similarly to legal corporations, having their own infrastructure of employees and resources. It is only a matter of time before they delve into developing their own deepfake generators (if they haven’t already done so). With their substantial financial resources, it’s not a matter of whether it is feasible but rather whether it will be deemed worthwhile. And in this case, it likely will be. What preventative measures are currently on offer? Various scanning tools have emerged, asserting their ability to detect deepfakes.

Scrum is Not Agile Enough

Scrum thrives in scenarios where the project’s requirements might evolve or where customer feedback is crucial because of its short sprints. It works well when a team can commit to the roles, ceremonies, and iterative nature of the framework. When there is a need for clear accountability and communication among team members, stakeholders, and customers, Scrum works better than Kanban, which uses a less rigid task allocation. The problem is the scale at which Scrum is used. While there is some consensus on the strengths of the methodology, it is not applicable to all projects. One common situation engineers face: in teams that build multiple applications, individuals can’t start a new story until all the ongoing stories are complete. Team members who have finished their stories sit idle until everyone else is done, which is inefficient. Long meetings are another pain point: there’s a substantial investment in planning and ceremonies, with significant time allocated to discussing stories that sometimes require only 30 minutes to complete.

Technology Leaders Can Turbocharge Their Company’s Growth In Five Ways

Some growth will be powered by new technologies; CIOs and other technology leaders can demonstrate how emerging technologies create specific growth opportunities. Instead of pitching random acts of metaverse or blockchain, which require radical changes in life or trade to matter, technology leaders can iterate on new technologies and infuse ideas from these into their own products. ... Outcomes of all kinds can always be improved — AI is just the newest tool in the improvement toolkit, joining analytics, automation and software. Personalization at scale is a good example of amplifying growth. Technology leaders should collaborate with marketing colleagues and mine databases to find better purchase signals that improve offers and outreach. They can also automate processes to streamline onboarding and improve revenue recognition. ... No technology leader and no company will do this alone. They will work with technology and service providers to build and operate the new capabilities, including those powered by generative AI.

Proposed SEC Cybersecurity Rule Will Put Unnecessary Strain on CISOs

In its current form, the proposed rule leaves a lot of room for interpretation, and it's impractical in some areas. For one, the tight disclosure window will put massive amounts of pressure on chief information security officers (CISOs) to disclose material incidents before they have all the details. Incidents can take weeks and sometimes months to understand and fully remediate. It is impossible to know the impact of a new vulnerability until ample resources are dedicated to remediation. CISOs may also end up having to disclose vulnerabilities that, with more time, end up being less of an issue and therefore not material. ... Another issue is the proposal's requirement to disclose circumstances in which a security incident was not material on its own but has become so "in aggregate." How does this work in practice? Is an unpatched vulnerability from six months ago now in scope for disclosure (given that the company didn't patch it) if it's used to extend the scope of a subsequent incident? We already conflate threats, vulnerabilities, and business impact.

Contending with Artificially Intelligent Ransomware

Deploying a malicious payload onto a targeted computer is a very complex task. It’s not a static executable that can be easily detected based on signatures. AI could generate a customized payload for each victim, progressively advancing within compromised systems with patience and precision. The key for successful malware lies in emulating normal, expected behavior to avoid triggering any defensive measures, even from vigilant users themselves. We’re witnessing genuinely authentic-looking software emerging in various distributions, ostensibly offering specific functionalities while harboring ulterior motives to earn users’ trust, eventually acting with a malicious intent. In this context, AI is entirely capable of streamlining the process, crafting software with dormant malicious capabilities primed for activation at a later point, possibly during the next update.

3 types of incremental forever backup

The first type of incremental forever backup is a file-level incremental forever backup product. This approach has actually been around for quite some time, with early versions of it available in the ‘90s. The reason it is called a file-level incremental is that the decision to back up an item happens at the file level. If anything within a file changes, its modification date will change, and the entire file will be backed up. ... Another incremental forever backup approach is block-level incremental forever. This method is similar to the previous method in that it will perform one full backup and a series of incremental backups – and will never again perform a full backup. In a block-level incremental backup approach, the decision to back up something happens at the bit or block level. ... The final type of incremental forever backup is called source deduplication backup software, which performs the deduplication process at the very beginning of the backup. It makes the decision at the backup client as to whether or not to transfer a new chunk of data to the backup system.
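Two of those decision points can be sketched in a few lines: the file-level test (back up any file whose modification time is newer than the last run) and the source-side deduplication test (only transfer chunks whose hash the backup system has not seen). This is a simplified illustration, not any vendor's implementation; real products typically use variable-size, content-defined chunking rather than the fixed 4 KB chunks assumed here.

```python
# Sketch of two incremental-forever decision points:
# 1) file-level: the whole file is backed up if its mtime changed;
# 2) source dedup: only chunks with unseen hashes are transferred.
# Chunk size and paths are illustrative assumptions.

import hashlib

def files_to_back_up(mtimes: dict, last_run: float) -> list:
    """File-level incremental: any file modified since the last run
    is backed up in its entirety."""
    return [path for path, mtime in mtimes.items() if mtime > last_run]

def new_chunks(data: bytes, seen_hashes: set, chunk_size: int = 4096):
    """Source dedup: hash each fixed-size chunk at the client and
    yield only chunks the backup system does not already hold."""
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            yield digest, chunk
```

The contrast is visible in what crosses the network: the file-level test resends a multi-gigabyte file after a one-byte change, while the chunk-level test resends only the chunks that actually differ.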

The Future of Work is Remote: How to Prepare for the Security Challenges

When embracing hybrid or remote work, the lack of in-person contact among staff may have a less-than-ideal effect on corporate culture. For those “forced back” to the office, disgruntlement will breed resentment. In both cases, disengagement between staff and their employer will have an adverse effect on their attitudes toward the company and, consequently, heighten the risk of insider threats, whether by accident, judgment errors or malicious intent. ... New security technology can streamline and bolster defenses but often falls short. Without human interaction and experience, these systems lack the context to make accurate decisions. As a result, they may generate false positives or miss real threats. Security technology is often designed to work with little or no human input, which can lead to problems when the system encounters something it doesn’t understand; for example, a new type of malware or a sophisticated attack. Security systems need to be regularly updated; otherwise, they’re at risk of becoming obsolete.

Quote for the day:

"Never say anything about yourself you do not want to come true." -- Brian Tracy
