Daily Tech Digest - June 11, 2024

4 reasons existing anti-bot solutions fail to protect mobile APIs

Existing anti-bot solutions attempt to bend their products to address mobile-based threats. For example, some require embedding an SDK in the mobile app, because that is the only way the app can respond to the main methods WAFs use to distinguish bots from humans. Such solutions also typically require separate servers deployed behind the WAF to evaluate connection requests and discern legitimate connections from malicious ones. These workarounds introduce single points of failure, performance bottlenecks, and latency, and often come with unacceptable capacity limitations. On top of that, WAF mobile SDKs have limited development-framework support and can require developers to rewrite the app's network stack to achieve compatibility with the WAF. Such workarounds create more work and more cost. Worse, because most anti-bot solutions on the market are not sufficiently hardened against clones, spoofing, malware, or tampering, hackers can easily compromise, bypass, or disable an anti-bot solution implemented inside a mobile app that lacks protection against reverse engineering and other attacks.
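
To make the "separate evaluation server" pattern concrete, here is a minimal Python sketch of the extra hop it adds. Every name here is hypothetical (the service URL, the token format, the verdict field); it is an illustration of the pattern the passage criticizes, not any vendor's actual API:

```python
import json
import urllib.request

VERIFIER_URL = "http://bot-verifier.internal/evaluate"  # hypothetical service

def is_legitimate(sdk_token: str, timeout: float = 0.25) -> bool:
    """Forward an SDK-issued token to the out-of-band evaluation
    service behind the WAF and return its verdict."""
    request = urllib.request.Request(
        VERIFIER_URL,
        data=json.dumps({"token": sdk_token}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return json.load(response).get("verdict") == "human"
    except OSError:
        # If the evaluation service is slow or down, every request must
        # be dropped or waved through: the single point of failure.
        return False
```

Every request pays the round trip to the verifier, and the except branch is where the availability and capacity problems described above surface.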


Advancing interoperability in Africa: Overcoming challenges for digital integration

From a legal perspective, Mihret Woodmatas, senior ICT expert, department of infrastructure and energy, African Union Commission (AUC), points out that differing levels of development across countries pose a challenge. A significant issue is the lack of robust legal frameworks for data protection and privacy. ... Hopkins underscores the importance of sharing data to benefit those it is collected for, particularly refugees. While sharing data comes with risks, particularly concerning security and privacy, these can be managed with proper risk treatments. The goal is to avoid siloed data systems and instead foster coordination and cooperation among different entities. Discussing digital transformation across states and international agencies, Hopkins emphasizes the need for effective data sharing: good practices enable various entities to provide coordinated services, significantly benefiting refugees by facilitating their access to education, healthcare, and employment. Interoperability also supports local communities economically and ensures a unique and continuous identity for refugees, even if they remain displaced for years or decades.


Cloud migration expands the CISO role yet again

CISOs must now ensure they can report to the SEC within four business days of determining an incident's materiality, describing its nature, scope, and potential impact. They must also communicate risk management strategies and incident response plans so the board stays well informed about the organization's cybersecurity posture. These changes require a more structured and proactive approach: CISOs must be aware of compliance status in near real time, not only to provide all cybersecurity incident data and context to the board, compliance teams, and finance teams, but to determine quickly whether an incident has a material impact and therefore must be reported to the SEC. CISOs who miss a timely disclosure or have the wrong security and compliance strategy in place can expect to be fined, even if the incident never turns into a catastrophic cybersecurity event. Boards must be able to trust that CISOs can answer any question related to compliance and security quickly and accurately, and board members themselves must be familiar with cybersecurity concepts, able to understand the risks and ask the right questions.
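
As a back-of-the-envelope illustration of the four-business-day window, here is a minimal Python sketch. It counts weekdays only; an actual filing calendar would also account for market holidays and the precise rule text:

```python
from datetime import date, timedelta

def disclosure_deadline(materiality_determined: date) -> date:
    """Four business days (Mon-Fri) after the day materiality is
    determined; market holidays are ignored in this sketch."""
    deadline, remaining = materiality_determined, 4
    while remaining:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return deadline

# Materiality determined on Tuesday 2024-06-11 -> file by Monday 2024-06-17.
print(disclosure_deadline(date(2024, 6, 11)))
```

The point of the exercise: the clock starts at the materiality determination, not at discovery of the incident, which is why near-real-time compliance awareness matters.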


Generative AI Is Not Going To Build Your Engineering Team For You

People act like writing code is the hard part of software. It is not. It never has been; it never will be. Writing code is the easiest part of software engineering, and it's getting easier by the day. The hard parts are what you do with that code: operating it, understanding it, extending it, and governing it over its entire lifecycle. You begin as a junior engineer by learning how to write and debug lines, functions, and snippets of code; as you practice and progress toward being a senior engineer, you learn to compose systems out of software and guide those systems through waves of change and transformation. Sociotechnical systems consist of software, tools, and people; understanding them requires familiarity with the interplay between software, users, production, infrastructure, and continuous change over time. These systems are fantastically complex and subject to chaos, nondeterminism, and emergent behaviors. If anyone claims to understand the system they are developing and operating, the system is either exceptionally small or (more likely) they don't know enough to know what they don't know. Code is easy, in other words, but systems are hard.


Is Oracle Finally Killing MySQL?

Things have changed in recent years, though, with the introduction of MySQL HeatWave, Oracle's MySQL cloud database. HeatWave includes a number of features that are not available in MySQL Community or MySQL Enterprise, such as accelerated analytical queries and ML functionality. The gap is particularly problematic for analytical queries because MySQL does not even have parallel query execution. CPUs with hundreds of cores are coming to market, but individual cores are not getting significantly faster, so single-threaded execution increasingly limits performance. This applies not just to queries from analytical applications but also to the simple GROUP BY queries common in operational applications. Note: MySQL 8 does have some parallelization support for DDL statements, but not for queries. Could this have something to do with giving people more reason to embrace MySQL HeatWave? Or, rather, to move to PostgreSQL or adopt ClickHouse? Vector search is another area where open source MySQL lags. Every other major open source database has added support for vector search functionality, and MariaDB is working on it; having it as a cloud-only MySQL HeatWave feature in the MySQL ecosystem is unfortunate, to say the least.
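
To make the single-thread ceiling concrete, here is a toy Python sketch (synthetic data, not MySQL internals) of the partial-aggregate-and-merge pattern that parallel GROUP BY execution relies on, and that a one-thread-per-query model forgoes:

```python
# Toy illustration of parallel GROUP BY: each worker builds a partial
# aggregate over its share of the rows, and the partials are merged.
# This is the pattern a parallel query executor exploits and a
# single-threaded executor cannot, no matter how many cores exist.
from collections import Counter
from concurrent.futures import ProcessPoolExecutor
import random

def partial_group_by(rows):
    # One worker's pass: effectively COUNT(*) ... GROUP BY key, per chunk.
    counts = Counter()
    for key in rows:
        counts[key] += 1
    return counts

if __name__ == "__main__":
    rows = [random.randrange(100) for _ in range(2_000_000)]

    serial = partial_group_by(rows)          # the single-threaded plan

    chunks = [rows[i::4] for i in range(4)]  # split rows across 4 workers
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel = Counter()
        for part in pool.map(partial_group_by, chunks):
            parallel.update(part)            # merge the partial aggregates

    assert serial == parallel                # same answer, parallel work
```

Because counting commutes across chunks, the merge step reproduces the serial result exactly; databases with parallel execution apply the same idea per core, which is why the extra cores Oracle's community builds leave idle actually matter.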


Giant legacies

Thought leadership in general demands we stand on the shoulders of innovators who have gone before, and thinking in HR is no exception. The essence of this debt was captured in the Hippocratic Oath this column had proposed for HR professionals: "I shall not forget the debt and respect I owe to those who have taught me and freely pass on the best of my learnings to those who work with me as well as through professional bodies, educational institutes or other means of dissemination." ... Thinking up brilliant new concepts, or applying those that have taken root in one field to another, is necessary but not sufficient for creating a LOG. There are two other tests. If the concept, strategy or process proves its worth, it should be lasting. It need not become an unchangeable sacrament, but further developments should emanate from it rather than demand a reversal of the flow. While we can sympathize with radical ideas (or greedy cats) that are brought to a dead end by 'malignant fate', we cannot honour them as LOGs. Apart from durability over time, the other test is transmission across organisational boundaries, which establishes the generalizability of the innovation.


Solving the data quality problem in generative AI

One of the biggest misconceptions surrounding synthetic data concerns model collapse. Yet model collapse stems from research that isn't really about synthetic data at all; it is about feedback loops in AI and machine learning systems, and the need for better data governance. For instance, the main issue raised in the paper "The Curse of Recursion: Training on Generated Data Makes Models Forget" is that future generations of large language models may be defective because their training data contains data created by older generations of LLMs. The most important takeaway from this research is that to remain performant and sustainable, models need a steady flow of high-quality, task-specific training data. For most high-value AI applications, this means fresh, real-time data grounded in the reality these models must operate in. Because this often includes sensitive data, it also requires infrastructure to anonymize, generate, and evaluate vast amounts of data, with humans involved in the feedback loop. Without the ability to leverage sensitive data in a secure, timely, and ongoing manner, AI developers will continue to struggle with model hallucinations and model collapse.
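
The feedback loop the paper describes can be seen in miniature with a toy Python sketch. It uses made-up "facts" rather than anything LLM-specific: each generation trains only on samples of the previous generation's output, and rare values are progressively forgotten:

```python
# Toy feedback loop: each generation "trains" only on data sampled from
# the previous generation's output. Rare values disappear first, and the
# surviving set of values can only shrink -- a miniature model collapse.
import random

random.seed(7)
data = list(range(1000))  # 1,000 distinct "facts" in the real data

for generation in range(1, 11):
    # Resample with replacement: the next generation never sees anything
    # the previous generation failed to reproduce.
    data = [random.choice(data) for _ in range(len(data))]
    print(f"generation {generation:2d}: "
          f"{len(set(data)):4d} distinct values survive")
```

Roughly a third of the distinct values vanish in the first generation alone, which is the intuition behind the article's prescription: keep a steady flow of fresh, real-world data entering the loop.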


DevSecOps Made Simple: 6 Strategies

Collective Responsibility describes the common practices shared by organizations that have taken a program-level approach to security culture development. Broken into three key areas (executive support and engagement; program design and implementation; program sustainment and measurement), the paper suggests how best to garner and keep executive support and engagement while building an inclusive cultural program based on cumulative experience. ... Collaboration and Integration addresses the importance of integrating DevSecOps into organizational processes and stresses the key role that fostering a sense of collaboration plays in successful implementation. ... Pragmatic Implementation outlines the practices, processes, and technologies organizations should consider when building out any DevSecOps program, and how to implement DevSecOps pragmatically. ... Bridging Compliance and Development is broken into three parts, offering 1) an approach to compartmentalization and assessment with an eye to minimizing operating impact, 2) best practices for designing and implementing compliance into applications, and 3) a look at the different security tooling practices that can provide assurance that compliance requirements are met.


Change Management Skills for Data Leaders

Strategic planning and decision-making are pivotal aspects of successful organizational transformation, requiring nuanced change management skills. Developing a strategy for organizational change in Data Management is a critical task that requires an understanding of both the current state of affairs and the desired future state. For data leaders, this involves conducting a thorough assessment to identify gaps between these two states. ... Developing effective communication and collaboration strategies is paramount in navigating the complexities of change management. A key component of this process involves crafting clear, concise, and transparent messaging that resonates with all stakeholders involved. This ensures that everyone, from team members to top-level management, understands not only the nature of the change but also its purpose and the benefits it promises to bring. ... Resilience is not just about enduring change but also about emerging stronger from it. Data leaders are often at the forefront of navigating through uncharted territories, be it technological advancements or market shifts, which requires an inherent ability to withstand pressure and bounce back from setbacks. 


Sanity Testing vs. Regression Testing: Key Differences

Sanity testing evaluates specific application functionality after deployment of new features, modifications, or bug fixes. In simple terms, it is a quick test to check whether the changes behave as specified in the Software Requirement Specification (SRS). It is generally performed after minor code adjustments to ensure seamless integration with existing functionality. If a sanity test fails, it's a red flag that something is wrong and the software may not be ready for further testing; catching such problems early saves time and effort down the road. ... Regression testing is the process of re-running tests on an existing application to verify that new changes or additions haven't broken anything. It is a crucial step performed after every code alteration, big or small, to catch regressions: the re-emergence of old bugs due to new changes. By re-executing the test scenarios that were originally scripted when known issues were resolved, you can ensure that recent alterations haven't caused a regression or compromised previously functioning components.
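
As a concrete illustration, here is a minimal pytest sketch of one way to keep the quick sanity pass separate from the regression suite. The function under test and the bug ID are hypothetical, and the custom markers would be registered in pytest.ini to silence warnings:

```python
# Minimal pytest layout separating a quick sanity pass from the
# regression suite; apply_discount and bug #1234 are hypothetical.
import pytest

def apply_discount(total: float, percent: float) -> float:
    return round(total * (1 - percent / 100), 2)

@pytest.mark.sanity
def test_discount_matches_srs():
    # Quick post-change check: the new behavior matches the spec.
    assert apply_discount(100.0, 10) == 90.0

@pytest.mark.regression
def test_zero_discount_bug_1234_stays_fixed():
    # Scenario scripted when an old bug was fixed: a 0% discount
    # must leave the total unchanged.
    assert apply_discount(59.99, 0) == 59.99
```

With that split, `pytest -m sanity` gives a fast go/no-go signal right after a small change, while `pytest -m regression` (or the full suite) confirms that old fixes still hold before release.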



Quote for the day:

"The two most important days in your life are the day you are born and the day you find out why." --Mark Twain
