
Daily Tech Digest - August 16, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Digital Debt Is the New Technical Debt (And It’s Worse)

Digital debt doesn’t just slow down technology. It slows down business decision-making and strategic execution. Decision-Making Friction: Simple business questions require data from multiple systems. “What’s our customer lifetime value?” becomes a three-week research project because customer data lives in six different platforms with inconsistent definitions. Campaign Launch Complexity: Marketing campaigns that should take two weeks to launch require six weeks of coordination across platforms. Not because the campaign is complex, but because the digital infrastructure is fragmented. Customer Experience Inconsistency: Customers encounter different branding, messaging, and functionality depending on which digital touchpoint they use. Support teams can’t access complete customer histories because data is distributed across systems. Innovation Paralysis: New initiatives get delayed because teams spend time coordinating existing systems rather than building new capabilities. Digital debt creates a gravitational pull that keeps organizations focused on maintenance rather than innovation. ... Digital debt is more dangerous than technical debt because it’s harder to see and affects more stakeholders. Technical debt slows down development teams. Digital debt slows down entire organizations.


Rising OT threats put critical infrastructure at risk

Attackers are exploiting a critical remote code execution (RCE) vulnerability in the Erlang programming language's Open Telecom Platform, widely used in OT networks and critical infrastructure. The flaw enables unauthenticated users to execute commands through SSH connection protocol messages that should be processed only after authentication. Researchers from Palo Alto Networks' Unit 42 said they have observed more than 3,300 exploitation attempts since May 1, with about 70% targeting OT networks across healthcare, agriculture, media and high-tech sectors. Experts urged affected organizations to patch immediately, calling it a top priority for any security team defending an OT network. The flaw, which has a CVSS score of 10, could enable an attacker to gain full control over a system and disrupt connected systems -- particularly worrisome in critical infrastructure. ... Despite its complex cryptography, the protocol contains design flaws that could enable attackers to bypass authentication and exploit outdated encryption standards. Researcher Tom Tervoort, a security specialist at Netherlands-based security company Secura, identified issues affecting at least seven different products, resulting in the issuing of three CVEs.


Why Tech Debt is Eating Your ROI (and How To Fix It)

Regardless of industry or specific AI efforts, these frustrations seem to boil down to the same culprit: AI initiatives continue to stumble over decades of accumulated tech debt. Part of the reason is that, despite the hype, most organizations use AI timidly. Fewer than half employ it for predictive maintenance or detecting network anomalies. Fewer than a third use it for root-cause analysis or intelligent ticket routing. Why such hesitation? Because implementing AI effectively means confronting all the messiness that came before. It means admitting our tech environments need a serious cleanup before adding another layer of complexity. Tech complexity has become a monster. This mess came from years of bolting on new systems without retiring old ones. Some IT professionals point to redundant applications as a major source of wasted budget, and others blame overprovisioning in the cloud — the digital equivalent of paying rent on empty apartments. ... IT teams admit something that, to me, is alarming: Their infrastructure has grown so tangled they can no longer maintain basic security practices. Let that sink in. Companies with eight-figure tech budgets can’t reliably patch vulnerable systems or implement fundamental security controls. No one builds silos deliberately. Silos emerge from organizational boundaries, competing priorities and the way we fund and manage projects. 


Ready on paper, not in practice: The incident response gap in Australian organisations

The truth is, security teams often build their plans around assumptions rather than real-world threats and trends. That gap becomes painfully obvious during an actual incident, when organisations realise they aren't adequately prepared to respond. Recent findings of a Semperis study titled The State of Enterprise Cyber Crisis Readiness revealed a strong disconnect between organisations' perceived readiness to respond to a cyber crisis and their actual performance. The study also showed that cyber incident response plans are being implemented and regularly tested, but not broadly. In a real-world crisis, too many teams are still operating in silos. ... A robust, integrated, and well-practiced cyber crisis response plan is paramount for cyber and business resilience. After all, the faster you can respond and recover, the less severe the financial impact of a cyberattack will be. Organisations can increase their agility by conducting tabletop exercises that simulate attacks. By practicing incident response regularly and introducing a range of new scenarios of varying complexity, organisations can train for the real thing, which can often be unpredictable. Security teams can continually adapt their response plans based on the lessons learned during these exercises, and any new emerging cyber threats.


Quantum Threat Is Real: Act Now with Post Quantum Cryptography

Some of the common types of encryption we use today include RSA (Rivest-Shamir-Adleman), ECC (Elliptic Curve Cryptography), and DH (Diffie-Hellman Key Exchange). The first two are asymmetric encryption schemes; the third complements them by letting two parties establish a shared secret key over an insecure channel. RSA relies on the difficulty of factoring very large integers, and ECC on the hardness of the elliptic-curve discrete logarithm problem. Neither problem can be solved efficiently with traditional computing. ... Cybercriminals think long-term. They are well aware that quantum computing is still some time away. But that doesn’t stop them from stealing encrypted information. Why? They will store it securely until quantum computing becomes readily available and then decrypt it — a tactic often called “harvest now, decrypt later.” The impending arrival of quantum computers has set the cat amongst the pigeons. ... Blockchain is not unhackable, but it is difficult to hack. A set of cryptographic algorithms keeps it secure, including SHA-256 (Secure Hash Algorithm 256-bit) and ECDSA (Elliptic Curve Digital Signature Algorithm). Today, cybercriminals might not attempt to target blockchains and steal crypto. But tomorrow, with the availability of a quantum computer, the crypto vault could be broken into without trouble. ... We keep saying that quantum computing and quantum computing-enabled threats are still some time away. And this is true. But when the technology is here, it will evolve and gain traction. 
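To make the factoring point concrete, here is a toy sketch (with numbers far smaller than real key sizes) of why RSA’s security depends on factoring being hard for classical machines — a 2048-bit modulus is far beyond trial division, but Shor’s algorithm on a sufficiently large quantum computer could factor it in polynomial time:

```python
def factor(n: int) -> tuple[int, int]:
    """Recover p and q from a toy odd RSA-style modulus by trial division.

    Instant for a 9-digit modulus; hopeless for a real 617-digit one.
    """
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    raise ValueError("no odd factor found")

p, q = factor(100160063)  # toy modulus: 10007 * 10009
print(p, q)  # -> 10007 10009
```

Real deployments never roll their own factoring or key handling; the sketch only illustrates the scale gap that quantum algorithms would erase.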


Cultivating product thinking in your engineering team

The most common trap you’ll encounter is what’s called the “feature factory.” This is a development model where engineers are simply handed a list of features to build, without context. They’re measured on velocity and output, not on the value their work creates. This can be comfortable for some – it’s a clear path with measurable metrics – but it’s also a surefire way to kill innovation and engagement. ... First and foremost, you need to provide context, and you need to do so early and often. Don’t just hand a Jira ticket to an engineer. Before a sprint starts, take the time to walk through the “what,” the “why,” and the “who.” Explain the market research that led to this feature request, share customer feedback that highlights the problem, and introduce them to the personas you’re building for. A quick 15-minute session at the start of a sprint can make a world of difference. You should also give engineers a seat at the table. Invite them to meetings where product managers are discussing strategy and customer feedback. They don’t just need to hear the final decision; they need to be a part of the conversation that leads to it. When an engineer hears a customer’s frustration firsthand, they gain a level of empathy that a written user story can never provide. They’ll also bring a unique perspective to the table, challenging assumptions and offering technical solutions you may not have considered.


Adapting to New Cloud Security Challenges

While the essence of Non-Human Identities and their secret management is acknowledged, many organizations still grapple with implementing these practices efficiently. Some stumble over an over-reliance on traditional security measures, failing to adopt newer, more effective strategies that incorporate NHI management. Others struggle with time and resource constraints, lacking the efficient automation mechanisms that are crucial for proficient NHI management. The disconnect between security and R&D teams often results in fractured efforts, leading to potential security gaps, breaches, and data leaks. ... As more organizations migrate to the cloud, and with the rise of machine identities and secret management, the future of cloud security has been redefined. It is no longer solely about protection from known threats but now involves proactive strategies to anticipate and mitigate potential future risks. This shift requires organizations to rethink their approach to cybersecurity, with a keen focus on NHIs and Secrets Security Management. It calls for an integrated endeavor involving CISOs, cybersecurity professionals, and R&D teams, along with the use of scalable and innovative platforms. Thought leaders in the data field continue to emphasize robust NHI management as vital to the future of cybersecurity, driving the message home for businesses of all sizes and across all industries.


Why IT Modernization Occurs at the Intersection of People and Data

A mandate for IT modernization doesn’t always mean the team has the complete expertise necessary to fulfill that mandate. It may take some time to arm the team with the correct knowledge to support modernization. Let’s take data analytics, for example. Many modern data analytics solutions, armed with AI, now allow teams to deliver natural language prompts that can retrieve the data necessary to inform strategic modernization initiatives without having to write expert-level SQL. While this lessens the need for writing scripts, IT leaders must still ensure their teams have the expertise to construct the correct prompts. This could mean training on the correct terms for presenting and manipulating data, along with knowing in what circumstances to access that data. Having a well-informed and educated team will be especially important after modernization efforts are underway. ... One of the most important steps to IT modernization is arming your IT teams with a complete picture of the current IT infrastructure. It’s equivalent to giving them a full map before embarking on their modernization journey. In many situations, an ideal starting point is to ensure that any documentation, ER diagrams, and architectural diagrams are collected into a single repository and reviewed. Then, IT teams can use an observability solution that integrates with every part of the enterprise infrastructure to show each team how every part of it works together. 


Cyber Resilience Must Become The Third Pillar Of Security Strategy

For years, enterprise security has been built around two main pillars: prevention and detection. Firewalls, endpoint protection, and intrusion detection systems all aim to stop attackers before they do damage. But as threats grow more sophisticated, it’s clear that this isn’t enough. ... The shift to cloud computing has created dangerous assumptions. Many organizations believe that moving workloads to AWS, Azure, or Google Cloud means the provider “takes care of security.” ... Effective resilience starts with rethinking backup as more than a compliance checkbox. Immutable, air-gapped copies prevent attackers from tampering with recovery points. Built-in threat detection can spot ransomware or other malicious activity before it spreads. But technology alone isn’t enough. Mariappan urges leaders to identify the “minimum viable business” — the essential applications, accounts, and configurations required to function after an incident. Recovery strategies should be built around restoring these first to reduce downtime and financial impact. She also stresses the importance of limiting the blast radius. In a cloud context, that might mean segmenting workloads, isolating credentials, or designing architectures that prevent a single compromised account from jeopardizing an entire environment.


Breaking Systems to Build Better Ones: How AI is Reshaping Chaos Engineering

While AI dominates technical discussions across industries, Andrus maintains a pragmatic perspective on its role in system reliability. “If Skynet comes about tomorrow, it’s going to fail in three days. So I’m not worried about the AI apocalypse, because AI isn’t going to be able to build and maintain and run reliable systems.” The fundamental challenge lies in the nature of distributed systems versus AI capabilities. “A lot of the LLMs and a lot of what we talk about in the AI world is really non deterministic, and when we’re talking about distributed systems, we care about it working correctly every time, not just most of the time.” However, Andrus sees valuable applications for AI in specific areas. AI excels at providing suggestions and guidance rather than making deterministic decisions. ... Despite its name, chaos engineering represents the opposite of chaotic approaches to system reliability. “Chaos engineering is a bit of a misnomer. You know, a lot of people think, Oh, we’re going to go cause chaos and see what happens, and it’s the opposite. We want to engineer the chaos out of our systems.” This systematic approach to understanding system behavior under stress provides the foundation for building more resilient infrastructure. As AI-generated code increases system complexity, the need for comprehensive reliability testing becomes even more critical. 
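In practice, chaos engineering means injecting controlled faults into a dependency and verifying that the surrounding system still behaves. A minimal, hypothetical Python sketch (the function names and failure rate are invented for illustration):

```python
import random

def with_fault_injection(call, failure_rate, rng=random.Random()):
    """Wrap a dependency call, raising an injected TimeoutError at a
    controlled rate. Running a service against the wrapped dependency
    shows whether its retry and fallback logic holds up under stress."""
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise TimeoutError("injected fault")
        return call(*args, **kwargs)
    return wrapped

def fetch_profile(user_id):
    """Stand-in for a real downstream service call."""
    return {"id": user_id, "name": "demo"}

# Seeded RNG keeps the experiment reproducible.
flaky_fetch = with_fault_injection(fetch_profile, failure_rate=0.25,
                                   rng=random.Random(42))
failures = 0
for _ in range(1000):
    try:
        flaky_fetch(7)
    except TimeoutError:
        failures += 1
print(failures)  # roughly 250 of 1000 calls fail
```

The point of the exercise is the opposite of causing chaos: the fault rate, blast radius, and abort conditions are all chosen deliberately before the experiment runs.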

Daily Tech Digest - August 09, 2025


Quote for the day:

“Develop success from failures. Discouragement and failure are two of the surest stepping stones to success.” -- Dale Carnegie


Is ‘Decentralized Data Contributor’ the Next Big Role in the AI Economy?

Training AI models requires real-world, high-quality, and diverse data. The problem is that the astronomical demand is slowly outpacing the available sources. Take public datasets as an example. Not only is this data overused, but it’s often restricted to avoid privacy or legal concerns. There’s also a huge issue with geographic or spatial data gaps where the information is incomplete regarding specific regions, which can and will lead to inaccuracies or biases with AI models. Decentralized contributors can help bust these challenges. ... Even though a large part of the world’s population has no problem with passively sharing data when browsing the web, due to the relative infancy of decentralized systems, active data contribution may seem to many like a bridge too far. Anonymized data isn’t 100% safe. Determined threat actors can sometimes re-identify individuals from unnamed datasets. The concern is valid, which is why decentralized projects working in the field must adopt privacy-by-design architectures where privacy is a core part of the system instead of being layered on top after the fact. Zero-knowledge proofs are another technique that can reduce privacy risks by allowing contributors to prove the validity of their data without exposing any information — for example, demonstrating that their identity meets set criteria without divulging anything identifiable.
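The re-identification risk mentioned above is easy to demonstrate: naively hashing a low-entropy identifier is not anonymization, because the input space can be brute-forced. A small illustrative sketch (the phone-number format is hypothetical):

```python
import hashlib

def pseudonymize(value: str) -> str:
    """Naive pseudonymization: replace an identifier with its SHA-256 hash."""
    return hashlib.sha256(value.encode()).hexdigest()

leaked = pseudonymize("555-0142")  # a "de-identified" field in a leaked dataset

# An attacker who knows the identifier format can enumerate the small
# input space and re-identify the record:
match = None
for n in range(10_000):
    candidate = f"555-{n:04d}"
    if pseudonymize(candidate) == leaked:
        match = candidate
        break
print("re-identified:", match)  # -> re-identified: 555-0142
```

This is why privacy-by-design approaches rely on techniques such as salted keyed hashing, differential privacy, or zero-knowledge proofs rather than bare hashes of identifiers.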


The ROI of Governance: Nithesh Nekkanti on Taming Enterprise Technical Debt

A key symptom of technical debt is rampant code duplication, which inflates maintenance efforts and increases the risk of bugs. A multi-pronged strategy focused on standardization and modularity proved highly effective, leading to a 30% reduction in duplicated code. This initiative went beyond simple syntax rules to forge a common development language, defining exhaustive standards for Apex and Lightning Web Components. By measuring metrics like technical debt density, teams can effectively track the health of their codebase as it evolves. ... Developers may perceive stricter quality gates as a drag on velocity, and the task of addressing legacy code can seem daunting. Overcoming this resistance requires clear communication and a focus on the long-term benefits. "Driving widespread adoption of comprehensive automated testing and stringent code quality tools invariably presents cultural and operational challenges," Nekkanti acknowledges. The solution was to articulate a compelling vision. ... Not all technical debt is created equal, and a mature governance program requires a nuanced approach to prioritization. The PEC developed a technical debt triage framework to systematically categorize issues based on type, business impact, and severity. This structured process is vital for managing a complex ecosystem, where a formal Technical Governance Board (TGB) can use data to make informed decisions about where to invest resources.
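The excerpt cites “technical debt density” as a trackable health metric. One common formulation (illustrative here, not necessarily the one Nekkanti’s team uses) is estimated remediation effort per thousand lines of code:

```python
def debt_density(remediation_minutes: float, lines_of_code: int) -> float:
    """Technical debt density: estimated remediation effort (minutes)
    per 1,000 lines of code (KLOC)."""
    return remediation_minutes / (lines_of_code / 1000)

# Track the trend release over release: a falling number means the
# codebase is getting healthier even as it grows.
print(debt_density(4800, 120_000))  # -> 40.0 minutes per KLOC
```

Normalizing by code size is what makes the number comparable across releases; raw debt totals almost always rise as a codebase grows.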


Why Third-Party Risk Management (TPRM) Can’t Be Ignored in 2025

In today’s business world, no organization operates in a vacuum. We rely on vendors, suppliers, and contractors to keep things running smoothly. But every connection brings risk. Just recently, Fortinet made headlines as threat actors were found maintaining persistent access to FortiOS and FortiProxy devices using known vulnerabilities—while another actor allegedly offered a zero-day exploit for FortiGate firewalls on a dark web forum. These aren’t just IT problems—they’re real reminders of how vulnerabilities in third-party systems can open the door to serious cyber threats, regulatory headaches, and reputational harm. That’s why Third-Party Risk Management (TPRM) has become a must-have, not a nice-to-have. ... Think of TPRM as a structured way to stay on top of the risks your third parties, suppliers and vendors might expose you to. It’s more than just ticking boxes during onboarding—it’s an ongoing process that helps you monitor your partners’ security practices, compliance with laws, and overall reliability. From cloud service providers, logistics partners, and contract staff to software vendors, IT support providers, marketing agencies, payroll processors, data analytics firms, and even facility management teams—if they have access to your systems, data, or customers, they’re part of your risk surface. 


Ushering in a new era of mainframe modernization

One of the key challenges in modern IT environments is integrating data across siloed systems. Mainframe data, despite being some of the most valuable in the enterprise, often remains underutilized due to accessibility barriers. With a z17 foundation, software data solutions can more easily bridge critical systems, offering unprecedented data accessibility and observability. For CIOs, this is an opportunity to break down historical silos and make real-time mainframe data available across cloud and distributed environments without compromising performance or governance. As data becomes more central to competitive advantage, the ability to bridge existing and modern platforms will be a defining capability for future-ready organizations. ... For many industries, mainframes continue to deliver unmatched performance, reliability, and security for mission-critical workloads—capabilities that modern enterprises rely on to drive digital transformation. Far from being outdated, mainframes are evolving through integration with emerging technologies like AI, automation, and hybrid cloud, enabling organizations to modernize without disruption. With decades of trusted data and business logic already embedded in these systems, mainframes provide a resilient foundation for innovation, ensuring that enterprises can meet today’s demands while preparing for tomorrow’s challenges.


Fighting Cyber Threat Actors with Information Sharing

Effective threat intelligence sharing creates exponential defensive improvements that extend far beyond individual organizational benefits. It not only raises the cost and complexity for attackers but also lowers their chances of success. Information Sharing and Analysis Centers (ISACs) demonstrate this multiplier effect in practice. ISACs are, essentially, non-profit organizations that provide companies with timely intelligence and real-world insights, helping them boost their security. The success of existing ISACs has also driven expansion efforts, with 26 U.S. states adopting the NAIC Model Law to encourage information sharing in the insurance sector. ... Although the benefits of information sharing are clear, actually implementing them is a different story. Common obstacles include legal issues regarding data disclosure, worries over revealing vulnerabilities to competitors, and the technical challenge itself – evidently, devising standardized threat intelligence formats is no walk in the park. And yet it can certainly be done. Case in point: the above-mentioned partnership between CrowdStrike and Microsoft. Its success hinges on its well-thought-out governance system, which allows these two business rivals to collaborate on threat attribution while protecting their proprietary techniques and competitive advantages. 


The Ultimate Guide to Creating a Cybersecurity Incident Response Plan

Creating a fit-for-purpose cyber incident response plan isn’t easy. However, by adopting a structured approach, you can ensure that your plan is tailored to your organisational risk context and will actually help your team manage the chaos that follows a cyber attack. In our experience, following a step-by-step process to building a robust IR plan always works. Instead of jumping straight into creating a plan, it’s best to lay a strong foundation with training and risk assessment and then work your way up. ... Conducting a cyber risk assessment before creating a Cybersecurity Incident Response Plan is critical. Every business has different assets, systems, vulnerabilities, and exposure to risk. A thorough risk assessment identifies which assets need the most protection — customer data, intellectual property, or critical infrastructure. You’ll be able to identify where the most likely entry points for attackers may be. This insight ensures that the incident response plan is tailored and focused on the most pressing risks instead of being a generic checklist. A risk assessment will also help you define the potential impact of various cyber incidents on your business. You can prioritise response strategies based on which incidents would be most damaging. Without this step, response efforts may be misaligned or inadequate in the face of a real threat.
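The prioritisation step described above can be sketched as a simple likelihood-times-impact scoring pass; the scenarios and the 1-5 scales below are invented for illustration:

```python
# Score each incident scenario from the risk assessment and respond to
# the highest scores first.
scenarios = [
    {"name": "ransomware on file servers", "likelihood": 4, "impact": 5},
    {"name": "phishing credential theft",  "likelihood": 5, "impact": 3},
    {"name": "defaced marketing site",     "likelihood": 2, "impact": 2},
]
for s in scenarios:
    s["score"] = s["likelihood"] * s["impact"]

ranked = sorted(scenarios, key=lambda s: s["score"], reverse=True)
print([s["name"] for s in ranked])
# -> ['ransomware on file servers', 'phishing credential theft', 'defaced marketing site']
```

Real assessments weigh far more factors (threat intelligence, control maturity, regulatory exposure), but even a coarse matrix like this keeps the response plan anchored to the most damaging scenarios rather than a generic checklist.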


How to Become the Leader Everyone Trusts and Follows With One Skill

Leaders grounded in reason have a unique ability; they can take complex situations and make sense of them. They look beyond the surface to find meaning and use logic as their compass. They're able to spot patterns others might miss and make clear distinctions between what's important and what's not. Instead of being guided by emotion, they base their decisions on credibility, relevance and long-term value. ... The ego doesn't like reason. It prefers control, manipulation and being right. At its worst, it twists logic to justify itself or dominate others. Some leaders use data selectively or speak in clever soundbites, not to find truth but to protect their image or gain power. But when a leader chooses reason, something shifts. They let go of defensiveness and embrace objectivity. They're able to mediate fairly, resolve conflicts wisely and make decisions that benefit the whole team, not just their own ego. This mindset also breaks down the old power structures. Instead of leading through authority or charisma, leaders at this level influence through clarity, collaboration and solid ideas. ... Leaders who operate from reason naturally elevate their organizations. They create environments where logic, learning and truth are not just considered as values, they're part of the culture. This paves the way for innovation, trust and progress. 


Why enterprises can’t afford to ignore cloud optimization in 2025

Cloud computing has long been the backbone of modern digital infrastructure, primarily built around general-purpose computing. However, the era of one-size-fits-all cloud solutions is rapidly fading in a business environment increasingly dominated by AI and high-performance computing (HPC) workloads. Legacy cloud solutions struggle to meet the computational intensity of deep learning models, preventing organizations from fully realizing the benefits of their investments. At the same time, cloud-native architectures have become the standard, as businesses face mounting pressure to innovate, reduce time-to-market, and optimize costs. Without a cloud-optimized IT infrastructure, organizations risk losing key operational advantages—such as maximizing performance efficiency and minimizing security risks in a multi-cloud environment—ultimately negating the benefits of cloud-native adoption. Moreover, running AI workloads at scale without an optimized cloud infrastructure leads to unnecessary energy consumption, increasing both operational costs and environmental impact. This inefficiency strains financial resources and undermines corporate sustainability goals, which are now under greater scrutiny from stakeholders who prioritize green initiatives.


Data Protection for Whom?

To be clear, there is no denying that a robust legal framework for protecting privacy is essential. In the absence of such protections, both rich and poor citizens face exposure to fraud, data theft and misuse. Personal data leakages – ranging from banking details to mobile numbers and identity documents – are rampant, and individuals are routinely subjected to financial scams, unsolicited marketing and phishing attacks. Often, data collected for one purpose – such as KYC verification or government scheme registration – finds its way into other hands without consent. ... The DPDP Act, in theory, establishes strong penalties for violations. However, the enforcement mechanisms under the Act are opaque. The composition and functioning of the Data Protection Board – a body tasked with adjudicating complaints and imposing penalties – are entirely controlled by the Union government. There is no independent appointments process, no safeguards against arbitrary decision-making, and no clear procedure for appeals. Moreover, there is a genuine worry that smaller civil society initiatives – such as grassroots surveys, independent research and community-based documentation efforts – will be priced out of existence. The compliance costs associated with data processing under the new framework, including consent management, data security audits and liability for breaches, are likely to be prohibitive for most non-profit and community-led groups.


Stargate’s slow start reveals the real bottlenecks in scaling AI infrastructure

“Scaling AI infrastructure depends less on the technical readiness of servers or GPUs and more on the orchestration of distributed stakeholders — utilities, regulators, construction partners, hardware suppliers, and service providers — each with their own cadence and constraints,” Gogia said. ... Mazumder warned that “even phased AI infrastructure plans can stall without early coordination” and advised that “enterprises should expect multi-year rollout horizons and must front-load cross-functional alignment, treating AI infra as a capital project, not a conventional IT upgrade.” ... Given the lessons from Stargate’s delays, analysts recommend a pragmatic approach to AI infrastructure planning. Rather than waiting for mega-projects to mature, Mazumder emphasized that “enterprise AI adoption will be gradual, not instant and CIOs must pivot to modular, hybrid strategies with phased infrastructure buildouts.” ... The solution is planning for modular scaling by deploying workloads in hybrid and multi-cloud environments so progress can continue even when key sites or services lag. ... For CIOs, the key lesson is to integrate external readiness into planning assumptions, create coordination checkpoints with all providers, and avoid committing to go-live dates that assume perfect alignment.

Daily Tech Digest - July 23, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson


AI in customer communication: the opportunities and risks SMBs can’t ignore

To build consumer trust, businesses must demonstrate that AI genuinely improves the customer experience, especially by enhancing the quality, relevance and reliability of communication. With concerns around data misuse and inaccuracy, businesses need to clearly explain how AI supports secure, accurate and personalized interactions, not just internally but in ways customers can understand and see. AI should be positioned as an enabler of human service, taking care of routine tasks so employees can focus on complex, sensitive or high-value customer needs. A key part of gaining long-term trust is transparency around data. Businesses must clearly communicate how customer information is handled securely and show that AI is being used responsibly and with care. This could include clearly labelling AI-generated communications such as emails or text messages, or proactively informing customers about what data is being used and for what purpose.  ... As conversations move beyond why AI should be used to how it must be used responsibly and effectively, companies have entered a make-or-break “audition phase” for AI. In customer communications, businesses can no longer afford to just talk about AI’s benefits, they must prove them by demonstrating how it enhances quality, security, and personalization.


The Expiring Trust Model: CISOs Must Rethink PKI in the Era of Short-Lived Certificates and Machine Identity

While the risk associated with certificates applies to all companies, it is a greater challenge for businesses operating in regulated sectors such as healthcare, where certificates must often be tied to national digital identity systems. In several countries, healthcare providers and services are now required to issue certificates bound to a National Health Identifier (NHI). These certificates are used for authentication, e-signature and encryption in health data exchanges and must adhere to complex issuance workflows, usage constraints and revocation processes mandated by government frameworks. Managing these certificates alongside public TLS certificates introduces operational complexity that few legacy PKI solutions were designed to handle in today’s dynamic and cloud-native environments. ... The urgency of this mandate is heightened by the impending cryptographic shift driven by the rise of quantum computing. Transitioning to post-quantum cryptography (PQC) will require organizations to implement new algorithms quickly and securely. Frequent certificate renewal cycles, which once seemed a burden, could now become a strategic advantage. When managed through automated and agile certificate lifecycle management, these renewals provide the flexibility to rapidly replace compromised keys, rotate certificate authorities or deploy quantum-safe algorithms as they become standardized.
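Frequent renewal cycles only work if expiry is monitored automatically. A minimal sketch using Python’s standard library, assuming the `notAfter` string format that `ssl.SSLSocket.getpeercert()` returns (the dates are illustrative):

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> float:
    """Days remaining before a certificate's notAfter timestamp.

    `not_after` uses the format found in ssl.getpeercert() output,
    e.g. "Jun 1 12:00:00 2026 GMT".
    """
    expiry_ts = ssl.cert_time_to_seconds(not_after)
    expiry = datetime.fromtimestamp(expiry_ts, tz=timezone.utc)
    return (expiry - now).total_seconds() / 86400

now = datetime(2026, 5, 1, tzinfo=timezone.utc)
print(days_until_expiry("Jun 1 12:00:00 2026 GMT", now))  # -> 31.5
```

In a real certificate lifecycle management setup this check runs continuously across the whole inventory and triggers renewal well before the threshold, rather than alerting a human.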


The CISO code of conduct: Ditch the ego, lead for real

The problem doesn’t stop at vendor interactions. It shows up inside their teams, too. Many CISOs don’t build leadership pipelines; they build echo chambers. They hire people who won’t challenge them. They micromanage strategy. They hoard influence. And they act surprised when innovation dries up or when great people leave. As Jadee Hanson, CISO at Vanta, put it, “Ego builds walls. True leadership builds trust. The best CISOs know the difference.” That distinction matters, especially when your team’s success depends on your ability to listen, adapt, and share the stage. ... Security isn’t just a technical function anymore. It’s a leadership discipline. And that means we need more than frameworks and certifications; we need a shared understanding of how CISOs should show up. Internally, externally, in boardrooms, and in the broader community. That’s why I’m publishing this. Not because I have all the answers, but because the profession needs a new baseline. A new set of expectations. A standard we can hold ourselves, and each other, to. Not about compliance. About conduct. About how we lead. What follows is the CISO Code of Conduct. It’s not a checklist, but a mindset. If you recognize yourself in it, good. If you don’t, maybe it’s time to ask why. Either way, this is the bar. Let’s hold it. ... A lot of people in this space are trying to do the right thing. But there are also a lot of people hiding behind a title.


Phishing simulations: What works and what doesn’t

Researchers conducted a study on the real-world effectiveness of common phishing training methods. They found that the absolute difference in failure rates between trained and untrained users was small across various types of training content. However, we should take this with caution, as the study was conducted within a single healthcare organization and focused only on click rates as the measure of success or failure. It doesn’t capture the full picture. Matt Linton, Google’s security manager, said phishing tests are outdated and often cause more frustration among employees than actually improving their security habits. ... For any training program to work, you first need to understand your organization’s risk. Which employees are most at risk? What do they already know about phishing? Next, work closely with your IT or security teams to create phishing tests that match current threats. Tell employees what to expect. Explain why these tests matter and how they help stop problems. Don’t play the blame game. If someone fails a test, treat it as a chance to learn, not to punish. When you do this, employees are less likely to hide mistakes or avoid reporting phishing emails. When picking a vendor, focus on content and realistic simulations. The system should be easy to use and provide helpful reports.


Reclaiming Control: How Enterprises Can Fix Broken Security Operations

Asset management is critical to the success of the security operations function. In order to properly defend assets, I first and foremost need to know about them and be able to manage them. This includes applying policies, controls, and being able to identify assets and their locations when necessary, of course. With the move to hybrid and multi-cloud, asset management is much more difficult than it used to be. ... Visibility enables another key component of security operations – telemetry collection. Without the proper logging, eventing, and alerting, I can’t detect, investigate, analyze, respond to, and mitigate security incidents. Security operations simply cannot operate without telemetry, and the hybrid and multi-cloud world has made telemetry collection much more difficult than it used to be. ... If a security incident is serious enough, there will need to be a formal incident response. This will involve significant planning, coordination with a variety of stakeholders, regular communications, structured reporting, ongoing analysis, and a post-incident evaluation once the response is wrapped up. All of these steps are complicated by hybrid and multi-cloud environments, if not made impossible altogether. The security operations team will not be able to properly engage in incident response if they are lacking the above capabilities, and having a complex environment is not an excuse.


Legacy No More: How Generative AI is Powering the Next Wave of Application Modernization in India

Choosing the right approach to modernising legacy systems is a challenge in itself. Generative AI helps overcome the obstacles posed by legacy systems and accelerates modernization. For example, it can be used to understand how legacy systems function and capture that understanding in detailed business requirements. The resulting documents can then be used to build new systems on the cloud in a second phase. This can also make the process cheaper, and thus easier to get business cases approved. Additionally, generative AI can help create training documents for the current system if the organization wants to continue using its mainframes. In one example, generative AI might turn business models into microservices, API contracts, and database schemas ready for cloud-native deployment. ... To implement generative AI effectively, you need a holistic assessment of your existing system. Leaders must assess obsolete modules, interdependencies, data schemas, and throughput constraints to pinpoint high-impact targets and establish concrete modernization goals. Revamping legacy applications with generative AI starts with a clear understanding of the existing system. Organizations must conduct a thorough evaluation, mapping performance bottlenecks, obsolete modules, entanglements, and the intricacies of data flow, to create a modernization roadmap.


A Changing of the Guard in DevOps

Asimov, a newcomer in the space, is taking a novel approach — but addressing a challenge that’s as old as DevOps itself. According to the article, the team behind Asimov has zeroed in on a major time sink for developers: The cognitive load of understanding deployment environments and platform intricacies. ... What makes Asimov stand out is not just its AI capability but its user-centric focus. This isn’t another auto-coder. This is about easing the mental burden, helping engineers think less about YAML files and more about solving business problems. It’s a fresh coat of paint on a house we’ve been renovating for over a decade. ... Whether it’s a new player like Asimov or stalwarts like GitLab and Harness, the pattern is clear: AI is being applied to the same fundamental problems that have shaped DevOps from the beginning. The goals haven’t changed — faster cycles, fewer errors, happier teams — but the tools are evolving. Sure, there’s some real innovation here. Asimov’s knowledge-centric approach feels genuinely new. GitLab’s AI agents offer a logical evolution of their existing ecosystem. Harness’s plain-language chat interface lowers the barrier to entry. These aren’t just gimmicks. But the bigger story is the convergence. AI is no longer an outlier or an optional add-on — it’s becoming foundational. And as these solutions mature, we’re likely to see less hype and more impact.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of them. Legacy approaches to disaster recovery are insufficient in an environment rife with cyberthreats, as these approaches focus on infrastructure and neglect application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. ... Cyber resilience is now essential. With ransomware that can encrypt systems in minutes, the ability to recover quickly and effectively is a business imperative. Therefore, companies must develop an adaptive, layered strategy that evolves with emerging threats and aligns with their unique environment, infrastructure and risk tolerance. To effectively prepare for the next threat, technology leaders must balance technical sophistication with operational discipline: the best defence is not solely a hardened perimeter but also a recovery plan that works. Today, companies cannot afford to choose between data protection and cyber resilience; they must master both.


Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

The findings challenge the prevailing industry wisdom that more computational resources devoted to reasoning will consistently improve AI performance. Major AI companies have invested heavily in “test-time compute” — allowing models more processing time to work through complex problems — as a key strategy for enhancing capabilities. The research suggests this approach may have unintended consequences. “While test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns,” the authors conclude. For enterprise decision-makers, the implications are significant. Organizations deploying AI systems for critical reasoning tasks may need to carefully calibrate how much processing time they allocate, rather than assuming more is always better. ... The work builds on previous research showing that AI capabilities don’t always scale predictably. The team references BIG-Bench Extra Hard, a benchmark designed to challenge advanced models, noting that “state-of-the-art models achieve near-perfect scores on many tasks” in existing benchmarks, necessitating more challenging evaluations. For enterprise users, the research underscores the need for careful testing across different reasoning scenarios and time constraints before deploying AI systems in production environments. 


How to Advance from SOC Manager to CISO?

Strategic thinking demands a firm grip on the organization's core operations, particularly how it generates revenue and its key value streams. This perspective allows security professionals to align their efforts with business objectives, rather than operating in isolation. ... This is related to strategic thinking but emphasizes knowledge of risk management and finance. Security leaders must factor in financial impacts to justify security investments and manage risks effectively. Balancing security measures with user experience and system availability is another critical aspect. If security policies are too strict, productivity can suffer; if they're too permissive, the company can be exposed to threats. ... Effective communication is vital for translating technical details into language senior stakeholders can grasp and act upon. This means avoiding jargon and abbreviations to convey information in plain terms that resonate with multiple stakeholders, including executives who may not have a deep technical background. Communicating the impact of security initiatives in clear, concise language ensures decisions are well-informed and support company goals. ... You will have to ensure technical services meet business requirements, particularly in managing service delivery, implementing change, and resolving issues. All of this is essential for a secure and efficient IT infrastructure.

Daily Tech Digest - June 18, 2025


Quote for the day:

"Build your own dreams, or someone else will hire you to build theirs." -- Farrah Gray



Agentic AI adoption in application security sees cautious growth

The study highlights a considerable proportion of the market preparing for broader adoption, with nearly 50% of respondents planning to integrate agentic AI tools within the next year. The incremental approach taken by organisations reflects a degree of caution, particularly around the concept of granting AI systems the autonomy to make decisions independently.  ... The survey results illustrate the impact agentic AI could have on software development pipelines. Thirty percent of respondents believe integrating agentic AI into continuous integration and continuous deployment (CI/CD) pipelines would significantly enhance the process. The increased speed and frequency of code deployment, termed “vibe coding” in industry parlance, has led to faster development cycles. This acceleration does not necessarily alter the ratio of application security personnel to developers, but it can create the impression of a widening gap, with security teams struggling to keep up. ... Key findings from the survey reveal varied perceptions on the utility of agentic AI for security teams. Forty-four percent of those surveyed believe agentic AI's greatest benefit lies in supporting the identification, prioritisation, and remediation of vulnerabilities. 


Why Conventional Disaster Recovery Won’t Save You from Ransomware

Cyber incident recovery planning means taking measures that mitigate the unique challenges of ransomware recovery, such as immutable, offsite backups. These backups are stored offsite to minimise the risk that threat actors will be able to destroy backup data, while clean-room recovery environments serve as a secondary environment where workloads can be spun back up following a ransomware attack. This makes it possible to keep the original environment intact for forensics purposes while still performing rapid recovery. Finally, to avoid replicating the malware that led to the ransomware breach, cyber incident recovery must include a process for finding and extricating malware from backups prior to recovery. The unpredictable nature of ransomware attacks means that cyber incident recovery operations must be flexible enough to enable a nimble reaction to unexpected circumstances, like redeploying individual applications instead of simply replicating an entire server image if the server was compromised but the apps were not. ... Maintaining these capabilities can be challenging, even for organisations with extensive IT resources. In addition to the operational complexity of having to manage a secondary, clean-room recovery site and formulate intricate ransomware recovery plans, it’s costly to acquire and maintain the infrastructure necessary to ensure successful recovery.
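The validation step before restoring from backup can be sketched as an integrity check against a manifest of hashes recorded at backup time and stored offline. This is a minimal illustration, not a full recovery workflow: file names and the JSON manifest format are hypothetical, and real clean-room recovery would add malware scanning on top of hash checks.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backup artifacts fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(backup_dir: Path, manifest_path: Path) -> list[str]:
    """Return artifacts whose current hash no longer matches the hash
    recorded at backup time - candidates for tampering or corruption."""
    manifest = json.loads(manifest_path.read_text())
    return [
        name for name, expected in manifest.items()
        if sha256_of(backup_dir / name) != expected
    ]
```

Run against an immutable backup target before any restore; a non-empty result means the affected artifacts should be quarantined for forensics rather than spun up in the recovery environment.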


Cybersecurity takes a big hit in new Trump executive order

Specific orders Trump dropped or relaxed included ones mandating (1) federal agencies and contractors adopt products with quantum-safe encryption as they become available in the marketplace, (2) a stringent Secure Software Development Framework (SSDF) for software and services used by federal agencies and contractors, (3) the adoption of phishing-resistant regimens such as the WebAuthn standard for logging into networks used by contractors and agencies, (4) the implementation of new tools for securing Internet routing through the Border Gateway Protocol, and (5) the encouragement of digital forms of identity. ... Critics said the change will allow government contractors to skirt directives that would require them to proactively fix the types of security vulnerabilities that enabled the SolarWinds compromise. "That will allow folks to checkbox their way through 'we copied the implementation' without actually following the spirit of the security controls in SP 800-218," Jake Williams, a former hacker for the National Security Agency who is now VP of research and development for cybersecurity firm Hunter Strategy, said in an interview. "Very few organizations actually comply with the provisions in SP 800-218 because they put some onerous security requirements on development environments, which are usually [like the] Wild West."


Mitigating AI Threats: Bridging the Gap Between AI and Legacy Security

AI systems, particularly those with adaptive or agentic capabilities, evolve dynamically, unlike static legacy tools built for deterministic environments. This inconsistency renders systems vulnerable to AI-focused attacks, such as data poisoning, prompt injection, model theft, and agentic subversion—attacks that often evade traditional defenses. Legacy tools struggle to detect these attacks because they don’t follow predictable patterns, requiring more adaptive, AI-specific security solutions. Human flaws and behavior only worsen these weaknesses; insider attacks, social engineering, and insecure interactions with AI systems leave organizations vulnerable to exploitation. ... AI security frameworks like NIST’s AI Risk Management Framework incorporate human risk management to ensure that AI security practices align with organizational policies. Also modeled on the fundamental C.I.A. triad, the framework’s “manage” phase specifically includes employee training to uphold AI security principles across teams. For effective use of these frameworks, cross-departmental coordination is required. There needs to be collaboration among security staff, data scientists, and human resource practitioners to formulate plans that ensure AI systems are protected while encouraging their responsible and ethical use.


Modernizing your approach to governance, risk and compliance

Historically, companies treated GRC as an obligation to meet, and if legacy solutions were effective enough in meeting GRC requirements, organizations struggled to make a case for modernization. A better way to think about GRC is as a means of maximizing value for your company by tying those efforts to unlocked revenue and increased customer trust, not simply to reducing risks, passing audits, and staying compliant. GRC modernization can open the door to a host of other benefits, such as increased velocity of operations and an enhanced experience for team members (both GRC team members and internal control / risk owners alike). For instance, for businesses that need to demonstrate compliance to customers as part of third-party or vendor risk management initiatives, the ability to collect evidence and share it with clients faster isn’t just a step toward risk mitigation. These efforts also help close more deals and speed up deal cycles. When you view GRC as an enabler of business value rather than a mere obligation, the value of GRC modernization comes into much clearer focus. This is the vision businesses should embrace as they move away from legacy GRC strategies that waste time and resources and fundamentally reduce their ability to stay competitive.


What is Cyberespionage? A Detailed Overview

Cyber espionage involves the unauthorized access to confidential information, typically to gain strategic, political, or financial advantage. This form of espionage is rooted in the digital world and is often carried out by state-sponsored actors or independent hackers. These attackers infiltrate computer systems, networks, or devices to steal sensitive data. Unlike cyber attacks, which primarily target financial gain, cyber espionage is focused on intelligence gathering, often targeting government agencies, military entities, corporations, and research institutions. ... One of the primary goals of cyber espionage is to illegally access trade secrets, patents, blueprints, and proprietary technologies. Attackers—often backed by foreign companies or governments—aim to acquire innovations without investing in research and development. Such breaches can severely damage a competitor’s advantage, leading to billions in lost revenue and undermining future innovation. ... Governments and other organizations often use cyber espionage to gather intelligence on rival nations or political opponents. Cyber spies may breach government networks or intercept communications to secretly access sensitive details about diplomatic negotiations, policy plans, or internal strategies, ultimately gaining a strategic edge in political affairs.


European Commission Urged to Revoke UK Data Adequacy Decision Due to Privacy Concerns

The items in question include sweeping new exemptions that allow law enforcement and government agencies to access personal data, loosening of regulations governing automated decision-making, weakening restrictions on data transfers to “third countries” that are otherwise considered inadequate by the EU, and increasing the possible ways in which the UK government would have power to interfere with the regular work of the UK Data Protection Authority. EDRi also cites the UK Border Security, Asylum and Immigration Bill as a threat to data adequacy, which has passed the House of Commons and is currently before the House of Lords. The bill’s terms would broaden intelligence agency access to customs and border control data, and exempt law enforcement agencies from UK GDPR terms. It also cites the UK’s Public Authorities (Fraud, Error and Recovery) Bill, currently scheduled to go before the House of Lords for review, which would allow UK ministers to order that bank account information be made available without demonstrating suspicion of wrongdoing. The civil society group also indicates that the UK ICO would likely become less independent under the terms of the UK Data Bill, which would give the UK government expanded ability to hire, dismiss and adjust the compensation of all of its board members.


NIST flags rising cybersecurity challenges as IT and OT systems increasingly converge through IoT integration

Connectivity can introduce significant challenges for organizations attempting to apply cybersecurity controls to OT and certain IoT products. OT equipment may use modern networking technologies like Ethernet or Wi-Fi, but is often not designed to connect to the internet. In many cases, OT and IoT systems prioritize trustworthiness aspects such as safety, resiliency, availability, and cybersecurity differently than traditional IT equipment, which can complicate control implementation. While IoT devices can sometimes replace OT equipment, they often introduce different or significantly expanded functionality that organizations must carefully evaluate before moving forward with replacement. Organizations should consider how other aspects of trustworthiness, such as safety, privacy, and resiliency, factor into their approach to cybersecurity. It is also important to address how they will manage the differences in expected service life between IT, OT, and IoT systems and their components. The agency identified that federal agencies are actively deploying IoT technologies to enhance connectivity, security, environmental monitoring, transportation, healthcare, and industrial automation.


How Organizations Can Cross the Operational Chasm

A fundamental shift in operational capability is reshaping the competitive landscape, creating a clear distinction between market leaders and laggards. This growing divide isn’t merely about technological adoption — it represents a strategic inflection point that directly affects market position, customer retention and shareholder value. ... The message is clear: Organizations must bridge this divide to remain competitive. Crossing this chasm requires more than incremental improvements. It demands a fundamental transformation in operational approach, embracing AI and automation to build the resilience necessary for today’s digital landscape. ... Digital operations resiliency is a proactive approach to safeguarding critical business services by reducing downtime and ensuring seamless customer experiences. It focuses on minimizing operational disruptions, protecting brand reputation and mitigating business risk through standardized incident management, automation and compliance with service-level agreements (SLAs). Real-time issue resolution, efficient workflows and continuous improvement are put into place to ensure operational efficiency at scale, helping to provide uninterrupted service delivery. 


7 trends shaping digital transformation in 2025 - and AI looms large

Poor integration is the common theme behind all these challenges. If agents are unable to access the data and capabilities they need to understand user queries, find a solution, and resolve these issues for them, their impact is severely limited. As many as 95% of IT leaders claim integration issues are a key factor that impedes AI adoption. ... The surge in demand for AI capabilities will exacerbate the problem of API and agent sprawl, which occurs when different teams and departments build integrations and automations without any centralized management or coordination. Already, an estimated quarter of APIs are ungoverned. Three-fifths of IT and security practitioners said their organizations had at least one data breach due to API exploitation, according to a 2023 study from the Ponemon Institute and Traceable. ... Robotic process automation (RPA) is already helping organizations enhance efficiency, cut operational costs, and reduce manual toil by up to two hours for each employee every week in the IT department alone. These benefits have driven a growing interest in RPA. In fact, we could see near-universal adoption of the technology by 2028, according to Deloitte. In 2025, organizations will evolve their use of RPA technology to reduce the need for humans at every stage of the operational process. 

Daily Tech Digest - May 27, 2025


Quote for the day:

"Everyone is looking for the elevator to success...it doesn't exist we all have to take the stairs" -- Gordon Tredgold


What we know now about generative AI for software development

“GenAI is used primarily for code, unit test, and functional test generation, and its accuracy depends on providing proper context and prompts,” says David Brooks, SVP of evangelism at Copado. “Skilled developers can see 80% accuracy, but not on the first response. With all of the back and forth, time savings are in the 20% range now but should approach 50% in the near future.” AI coding assistants also help junior developers learn coding skills, automate test cases, and address code-level technical debt. ... “GenAI is currently easiest to apply to application prototyping because it can write the project scaffolding from scratch, which overcomes the ‘blank sheet of paper’ problem where it can be difficult to get started from nothing,” says Matt Makai, VP of developer relations and experience at LaunchDarkly. “It’s also exceptional for integrating web RESTful APIs into existing projects because the amount of code that needs to be generated is not typically too much to fit into an LLM’s context window. Finally, genAI is great for creating unit tests either as part of a test-driven development workflow or just to check assumptions about blocks of code.” One promising use case is helping developers review code they didn’t create to fix issues, modernize, or migrate to other platforms.
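The unit-test use case Makai describes can be made concrete. Below is the kind of small, well-specified test suite an AI assistant might produce in a TDD workflow; the function under test (`normalize_email`) and its behavior are hypothetical examples, not from the article.

```python
import unittest

# Hypothetical function under test - the kind of narrowly scoped unit
# for which assistants generate tests most reliably.
def normalize_email(raw: str) -> str:
    """Trim surrounding whitespace and lowercase the domain part."""
    local, _, domain = raw.strip().partition("@")
    return f"{local}@{domain.lower()}"

class TestNormalizeEmail(unittest.TestCase):
    def test_strips_whitespace_and_lowercases_domain(self):
        self.assertEqual(normalize_email("  bob@Example.COM "), "bob@example.com")

    def test_preserves_local_part_case(self):
        # The local part is case-sensitive in principle, so it is left alone.
        self.assertEqual(normalize_email("Bob@EXAMPLE.com"), "Bob@example.com")
```

Run with `python -m unittest`. Tests like these are cheap for an assistant to draft and cheap for a developer to verify, which is why test generation shows up near the top of genAI adoption surveys.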


How to upskill software engineering teams in the age of AI

The challenge lies not just in learning to code — it’s in learning to code effectively in an AI-augmented environment. Software engineering teams becoming truly proficient with AI tools requires a level of expertise that can be hindered by premature or excessive reliance on the very tools in question. This is the “skills-experience paradox”: junior engineers must simultaneously develop foundational programming competencies while working with AI tools that can mask or bypass the very concepts they need to master. ... Effective AI tool use requires shifting focus from productivity metrics to learning outcomes. This aligns with current trends — while professional developers primarily view AI tools as productivity enhancers, early-career developers focus more on their potential as learning aids. To avoid discouraging adoption, leaders should emphasize how these tools can accelerate learning and deepen understanding of software engineering principles. To do this, they should first frame AI tools explicitly as learning aids in new developer onboarding and existing developer training programs, highlighting specific use cases where they can enhance the understanding of complex systems and architectural patterns. Then, they should implement regular feedback mechanisms to understand how developers are using AI tools and what barriers they face in adopting them effectively.


Microsoft Brings Post-Quantum Cryptography to Windows and Linux in Early Access Rollout

The move represents another step in Microsoft’s broader security roadmap to help organizations prepare for the era of quantum computing — an era in which today’s encryption methods may no longer be safe. By adding support for PQC in early-access builds of Windows and Linux, Microsoft is encouraging businesses and developers to begin testing new cryptographic tools that are designed to resist future quantum attacks. ... The company’s latest update is part of an ongoing push to address a looming problem known as “harvest now, decrypt later” — a strategy in which bad actors collect encrypted data today with the hope that future quantum computers will be able to break it. To counter this risk, Microsoft is enabling early implementation of PQC algorithms that have been standardized by the U.S. National Institute of Standards and Technology (NIST), including ML-KEM for key exchanges and ML-DSA for digital signatures. ... Developers can now begin testing how these new algorithms fit into their existing security workflows, according to the post. For key exchanges, the supported parameter sets are ML-KEM-512, ML-KEM-768 and ML-KEM-1024, which offer varying levels of security and come with trade-offs in key size and performance.
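The size/security trade-off between the three parameter sets can be made concrete with the figures from NIST FIPS 203 (the ML-KEM standard): all three derive a 32-byte shared secret, but public keys and ciphertexts grow with the security category. This is a reference sketch of published constants, not code that performs key encapsulation.

```python
# Encapsulation-key and ciphertext sizes per NIST FIPS 203.
# Security categories 1/3/5 correspond roughly to AES-128/192/256 strength.
ML_KEM_PARAMS = {
    "ML-KEM-512":  {"security_category": 1, "public_key_bytes": 800,  "ciphertext_bytes": 768},
    "ML-KEM-768":  {"security_category": 3, "public_key_bytes": 1184, "ciphertext_bytes": 1088},
    "ML-KEM-1024": {"security_category": 5, "public_key_bytes": 1568, "ciphertext_bytes": 1568},
}

# The trade-off: each step up in security category adds a few hundred
# bytes to every handshake, on top of any classical key material sent
# alongside it in a hybrid exchange.
for name, p in ML_KEM_PARAMS.items():
    print(f"{name}: category {p['security_category']}, "
          f"pk {p['public_key_bytes']} B, ct {p['ciphertext_bytes']} B")
```

Compared with a 32-byte X25519 public key, even ML-KEM-512 inflates handshake traffic noticeably, which is why protocol designers weigh the parameter sets against latency and MTU budgets rather than defaulting to the largest.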


The great IT disconnect: Vendor visions of the future vs. IT’s task at hand

The “vision thing” has become a byword for a leader’s failure to incorporate future concerns into task-at-hand actions. There was a time when CEOs at major solution providers supplied vision and inspiration on where we were heading. The “futures” being articulated from the podia at major tech conferences today lack authenticity. Most importantly they do not reflect the needs and priorities of real people who work in real IT. In a world where technology allows deeper and cheaper connectivity, top-of-the-house executives at solution providers have never been more out of touch with the lived experience of their customers. The vendor CEOs, their direct reports, and their first-levels live in a bubble that has little to do with the reality being lived by the world’s CIOs. ... Who is the generational voice for the Age of AI? Is it Jensen Huang, CEO at Nvidia; Sam Altman, CEO at OpenAI; Marc Andreessen, co-founder of venture capital firm Andreessen Horowitz; or Elon Musk, at Tesla, SpaceX, and xAI? Who has laid out a future you can believe in, a future you want to live in? Does the CEO at your major tech supplier understand what matters most to you and your organization? The futurist agenda has been hijacked from focusing on the semi-immediate “what comes next.” 


Claude Opus 4 is Anthropic's Powerful, Problematic AI Model

An Opus 4 safety report details concerns. One test involved Opus 4 being told "to act as an assistant at a fictional company," after which it was given access to emails, also fictional, suggesting Opus would be replaced, and that the engineer carrying out the replacement was having an extramarital affair. "In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it's implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts," the safety report says. "Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes." Anthropic said the tests involved carefully designed scenarios, framing blackmail as a last resort if ethical approaches failed, such as lobbying senior management. The model's behavior was concerning enough for Anthropic to classify it under its ASL-3 safeguard level, reserved for systems that pose a substantial risk of catastrophic misuse. The designation comes with stricter safety measures, including content filters and cybersecurity defenses.


Biometric authentication vs. AI threats: Is mobile security ready?

The process of third-party evaluation against industry standards acts as a layer of trust between all players operating in the ecosystem. It should not be thought of as a tick-box exercise, but rather a continuous process to ensure compliance with the latest standards and regulatory requirements. In doing so, device manufacturers and biometric solution providers can collectively raise the bar for biometric security. Robust testing and compliance protocols ensure that all devices and components meet standardized requirements. This is made possible by trusted and recognized labs, like Fime, which can provide OEMs and solution providers with tools and expertise to continually optimize their products. But testing doesn’t just safeguard the ecosystem; it elevates it. For example, innovative new techniques can test for bias across demographic groups or under varying environmental conditions. ... We have reached a critical moment for the future of biometric authentication. The success of the technology is predicated on the continued growth in its adoption, but with AI giving fraudsters the tools they need to transform the threat landscape at a faster pace than ever before, it is essential that biometric solution providers stay one step ahead to retain and grow user trust. Stakeholders must therefore focus on one key question:


How ‘dark LLMs’ produce harmful outputs, despite guardrails

LLMs, although they have positively impacted millions, still have their dark side, the authors wrote, noting, “these same models, trained on vast data, which, despite curation efforts, can still absorb dangerous knowledge, including instructions for bomb-making, money laundering, hacking, and performing insider trading.” Dark LLMs, they said, are advertised online as having no ethical guardrails and are sold to assist in cybercrime. ... “A critical vulnerability lies in jailbreaking — a technique that uses carefully crafted prompts to bypass safety filters, enabling the model to generate restricted content.” And it’s not hard to do, they noted. “The ease with which these LLMs can be manipulated to produce harmful content underscores the urgent need for robust safeguards. The risk is not speculative — it is immediate, tangible, and deeply concerning, highlighting the fragile state of AI safety in the face of rapidly evolving jailbreak techniques.” Analyst Justin St-Maurice, technical counselor at Info-Tech Research Group, agreed. “This paper adds more evidence to what many of us already understand: LLMs aren’t secure systems in any deterministic sense,” he said, “They’re probabilistic pattern-matchers trained to predict text that sounds right, not rule-bound engines with an enforceable logic. Jailbreaks are not just likely, but inevitable.


Coaching for personal excellence: Why the future of leadership is human-centered

As organisations grapple with rapid technological shifts, evolving workforce expectations and the complex human dynamics of hybrid work, one thing has become clear: leadership isn’t just about steering the ship. It’s about cultivating the emotional resilience, adaptability and presence to lead people through ambiguity — not by force, but by influence. This is why coaching is no longer a ‘nice-to-have.’ It’s a strategic imperative. A lever not just for individual growth, but for organisational transformation. The real challenge? Even seasoned leaders now stand at a crossroads: cling to the illusion of control, or step into the discomfort of growth — for themselves and their teams. Coaching bridges this gap. It reframes leadership from giving directions to unlocking potential. From managing outcomes to enabling insight. ... Many people associate coaching with helping others improve. But the truth is, coaching begins within. Before a leader can coach others, they must learn to observe, challenge, and support themselves. That means cultivating emotional intelligence. Practising deep reflection. Learning to regulate reactions under stress. And perhaps most importantly, understanding what personal excellence looks like—and feels like—for them.


5 types of transformation fatigue derailing your IT team

Transformation fatigue is the feeling employees face when change efforts consistently fall short of delivering meaningful results. When every new initiative feels like a rerun of the last, teams disengage; it’s not change that wears them down, it’s the lack of meaningful progress. This fatigue is rarely acknowledged, yet its effects are profound. ... Organise around value streams and move from annual plans to more adaptive, incremental delivery. Allow teams to release meaningful work more frequently and see the direct outcomes of their efforts. When value is visible early and often, energy is easier to sustain. Also, leaders can achieve this by shifting from a traditional project-based model to a product-led approach, embedding continuous delivery into the way teams work, rather than treating it as a one-off initiative. ... Frameworks can be helpful, but too often, organisations adopt them in the hope they’ll provide a shortcut to transformation. Instead, these approaches become overly rigid, emphasising process compliance over real outcomes. ... What leaders can do: Focus on mindset, not methodology. Leaders should model adaptive thinking, support experimentation, and promote learning over perfection. Create space for teams to solve problems, rather than follow playbooks that don’t fit their context.


Why app modernization can leave you less secure

In most enterprises, session management is implemented using the capabilities native to the application’s framework. A Java app might use Spring Security. A JavaScript front-end might rely on Node.js middleware. Ruby on Rails handles sessions differently still. Even among apps using the same language or framework, configurations often vary widely across teams, especially in organizations with distributed development or recent acquisitions. This fragmentation creates real-world risks: inconsistent timeout policies, delayed patching, and session revocation gaps. Also, there’s the problem of developer turnover: Many legacy applications were developed by teams that are no longer with the organization, and without institutional knowledge or centralized visibility, updating or auditing session behavior becomes a guessing game. ... As one of the original authors of the SAML standard, I’ve seen how identity protocols evolve and where they fall short. When we scoped SAML to focus exclusively on SSO, we knew we were leaving other critical areas (like authorization and user provisioning) out of the equation. That’s why other standards emerged, including SPML, AuthXML, and now efforts like IDQL. The need for identity systems that interoperate securely across clouds isn’t new, it’s just more urgent now.
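The fragmentation described above can be made concrete with a small sketch. The config keys, apps, and policy below are invented for illustration; the point is that the same timeout policy lives in three different shapes across frameworks, so even a basic audit first has to normalize each framework's native setting:

```python
# Illustrative sketch (hypothetical config names): auditing session-timeout
# settings scattered across frameworks against a single corporate policy.

POLICY_TIMEOUT_MINUTES = 30

# Timeout values as each team configured them, in each framework's native unit.
app_configs = {
    "billing (Spring Security)": {"timeout_seconds": 3600},       # 60 min
    "storefront (Express)":      {"maxAge_ms": 1_800_000},        # 30 min
    "admin (Rails)":             {"expire_after_minutes": 120},   # 2 h
}

def timeout_minutes(cfg: dict) -> float:
    """Normalize a framework-specific timeout setting to minutes."""
    if "timeout_seconds" in cfg:
        return cfg["timeout_seconds"] / 60
    if "maxAge_ms" in cfg:
        return cfg["maxAge_ms"] / 60_000
    if "expire_after_minutes" in cfg:
        return cfg["expire_after_minutes"]
    raise ValueError("unknown timeout key")

def audit(configs: dict) -> list:
    """Return the apps whose effective timeout exceeds policy."""
    return [name for name, cfg in configs.items()
            if timeout_minutes(cfg) > POLICY_TIMEOUT_MINUTES]

violations = audit(app_configs)
```

Every new framework or acquisition adds another branch to `timeout_minutes`, which is exactly the maintenance burden the article describes.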

Daily Tech Digest - May 21, 2025


Quote for the day:

"A true dreamer is one who knows how to navigate in the dark." -- John Paul Warren


How Microsoft wants AI agents to use your PC for you

Microsoft’s concept revolves around the Model Context Protocol (MCP), which was created by Anthropic (the company behind the Claude chatbot) last year. That’s an open-source protocol that AI apps can use to talk to other apps and web services. Soon, Microsoft says, you’ll be able to let a chatbot — or “AI agent” — connect to apps running on your PC and manipulate them on your behalf. ... Compared to what Microsoft is proposing, past “agentic” AI solutions that promised to use your computer for you aren’t quite as compelling. They’ve relied on looking at your computer’s screen and using that input to determine what to click and type. This new setup, in contrast, is neat — if it works as promised — because it lets an AI chatbot interact directly with any old traditional Windows PC app. But the Model Context Protocol solution is even more advanced and streamlined than that. Rather than a chatbot having to put together a Spotify playlist by dragging and dropping songs in the old-fashioned way, it would give the AI the ability to give instructions to the Spotify app in a more simplified form. On a more technical level, Microsoft will let application developers make their applications function as MCP servers — a fancy way of saying they’d act like a bridge between the AI models and the tasks they perform. 
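The shift the article describes, from driving the UI to issuing structured instructions, can be sketched with a toy dispatcher. This is not the real Model Context Protocol or its SDK; it is a minimal stand-in (all names invented) showing why a named-tool interface is easier for an agent than simulated clicks and drags:

```python
# Toy sketch of the idea behind an MCP-style server (not the real protocol):
# the app exposes named operations with structured parameters, so an agent can
# send "create_playlist" instead of simulating drag-and-drop in the UI.

import json

class ToyToolServer:
    """Registers app operations and dispatches structured agent requests."""
    def __init__(self):
        self._tools = {}

    def tool(self, name):
        def register(fn):
            self._tools[name] = fn
            return fn
        return register

    def handle(self, request_json: str) -> str:
        req = json.loads(request_json)
        fn = self._tools[req["tool"]]
        return json.dumps({"result": fn(**req["args"])})

server = ToyToolServer()
playlists = {}  # stands in for the app's internal state

@server.tool("create_playlist")
def create_playlist(name: str, tracks: list):
    playlists[name] = tracks
    return f"created '{name}' with {len(tracks)} tracks"

# The agent sends a structured request instead of driving the UI:
reply = server.handle(json.dumps({
    "tool": "create_playlist",
    "args": {"name": "Focus", "tracks": ["Track A", "Track B"]},
}))
```

The real protocol adds discovery, schemas, and transport details, but the bridge role is the same: the app describes what it can do, and the model calls those capabilities directly.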


How vulnerable are undersea cables?

The only way to effectively protect a cable against sabotage is to bury the entire cable, says Liwång, which is not economically justifiable. In the Baltic Sea, it is easier and more sensible to repair the cables when they break, and it is more important to lay more cables than to try to protect a few. Burying all transoceanic cables is hardly feasible in practice either. ... “Cable breaks are relatively common even under normal circumstances. In terrestrial networks, they can be caused by various factors, such as excavators working near the fiber installation and accidentally cutting it. In submarine cables, cuts can occur, for example due to irresponsible use of anchors, as we have seen in recent reports,” says Furdek Prekratic. Network operators ensure that individual cable breaks do not lead to widespread disruptions, she notes: “Optical fiber networks rely on two main mechanisms to handle such events without causing a noticeable disruption to public transport. The first is called protection. The moment an optical connection is established over a physical path between two endpoints, resources are also allocated to another connection that takes a completely different path between the same endpoints. If a failure occurs on any link along the primary path, the transmission quickly switches to the secondary path. The second mechanism is called failover. Here, the secondary path is not reserved in advance, but is determined after the primary path has suffered a failure.” 
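The two mechanisms Furdek Prekratic describes can be illustrated on a toy four-node topology (nodes and links invented for the example): protection reserves a link-disjoint secondary path at connection setup, while failover computes a replacement path only after a break is observed:

```python
# Toy illustration of "protection" vs "failover" on a small fiber topology.
from collections import deque

links = {("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("B", "D")}

def neighbors(node, blocked=frozenset()):
    """Yield nodes reachable from `node` over non-blocked links."""
    for u, v in links:
        if (u, v) in blocked or (v, u) in blocked:
            continue
        if u == node:
            yield v
        if v == node:
            yield u

def shortest_path(src, dst, broken=frozenset(), avoid_links=frozenset()):
    """BFS shortest path, skipping failed links and optionally reserved ones."""
    blocked = set(broken) | set(avoid_links)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors(path[-1], blocked):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Protection: primary and a link-disjoint backup are both allocated up front,
# so switchover after a failure is immediate.
primary = shortest_path("A", "C")
used = frozenset(zip(primary, primary[1:]))
backup = shortest_path("A", "C", avoid_links=used)

# Failover (restoration): nothing is pre-reserved; a new path is computed
# only after the failure is observed.
broken = frozenset({("B", "C")})
restored = shortest_path("A", "C", broken=broken)
```

The trade-off mirrors the article: protection doubles the reserved capacity but recovers instantly, while restoration saves resources at the cost of a recomputation delay.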


Driving business growth through effective productivity strategies

In times of economic uncertainty, it is to be expected that businesses grow more cautious with their spending. However, this can result in missed opportunities to improve productivity in favour of cost reductions. While cutting costs can seem an attractive option in light of economic doubts, it is merely a short-term solution. When businesses hold back from knee-jerk reactions and maintain a focus on sustainable productivity gains, they will find themselves reaping rewards in the long term. Strategic investments in technology solutions are essential to support businesses in driving their productivity strategies forward. With new technology constantly being introduced, there are a lot of options for business decision makers to consider. Most obviously, there are technology features in our ERP systems, and in our project management and collaboration tools, that can be used to facilitate significant flexibility or performance advantages compared to legacy approaches and processes. ... While technology is a vital part of any innovative productivity model, it’s just one piece of the puzzle. It is no use installing modern technology if internal processes remain outdated. Businesses must also look to weed out inefficient practices to improve and streamline resource management. 


Synthetic data’s fine line between reward and disaster

Generating large volumes of training data on demand is appealing compared to slow, expensive gathering of real-world data, which can be fraught with privacy concerns, or just not available. Synthetic data ought to help preserve privacy, speed up development, and be more cost-effective for long-tail scenarios enterprises couldn’t otherwise tackle, she adds. It can even be used for controlled experimentation, assuming you can make it accurate enough. Purpose-built data is ideal for scenario planning and running intelligent simulations, and synthetic data detailed enough to cover entire scenarios could predict future behavior of assets, processes, and customers, which would be invaluable for business planning. ... Created properly, synthetic data mimics statistical properties and patterns of real-world data without containing actual records from the original dataset, says Jarrod Vawdrey, field chief data scientist at Domino Data Lab. And David Cox, VP of AI Models at IBM Research, suggests viewing it as amplifying rather than creating data. “Real data can be extremely expensive to produce, but if you have a little bit of it, you can multiply it,” he says. “In some cases, you can make synthetic data that’s much higher quality than the original. The real data is a sample. It doesn’t cover all the different variations and permutations you might encounter in the real world.”
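Cox's "amplifying" framing can be sketched in a few lines. This toy example fits only a mean and standard deviation from a small invented sample, then draws a much larger synthetic set from the fitted distribution; real synthetic-data pipelines model far richer structure (correlations, categories, tails):

```python
# Minimal sketch of "amplifying" a small real sample: fit simple statistical
# properties and sample a larger synthetic dataset that shares them without
# reusing any original record.

import random
import statistics

random.seed(42)  # reproducible illustration

real_order_values = [102.5, 98.0, 110.3, 95.7, 104.1, 99.9, 107.2, 101.4]

mu = statistics.mean(real_order_values)      # fitted location
sigma = statistics.stdev(real_order_values)  # fitted spread

# 1,000 synthetic records sharing the fitted properties.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

synthetic_mu = statistics.mean(synthetic)
```

This also hints at the "fine line" in the headline: the synthetic set can only be as faithful as the fitted model, so any bias or gap in the original sample is amplified along with the signal.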


AI Interventions to Reduce Cycle Time in Legacy Modernization

As the software becomes difficult to change, businesses may choose to tolerate conceptual drift or compensate for it through their operations. When the difficulty of modifying the software poses a significant enough business risk, a legacy modernization effort is undertaken. Legacy modernization efforts showcase the problem of concept recovery. In these circumstances, recovering a software system’s underlying concept is the labor-intensive bottleneck step to any change. Without it, the business risks a failed modernization or losing customers that depend on unknown or under-considered functionality. ... The goal of a software modernization’s design phase is to perform enough validation of the approach to be able to start planning and development while minimizing the amount of rework that could result due to missed information. Traditionally, substantial lead time is spent in the design phase inspecting legacy source code, producing a target architecture, and collecting business requirements. These activities are time-intensive, mutually interdependent, and usually the bottleneck step in modernization. While exploring how to use LLMs for concept recovery, we encountered three challenges to effectively serving teams performing legacy modernizations: which context was needed and how to obtain it, how to organize context so humans and LLMs can both make use of it, and how to support iterative improvement of requirements documents. 


OWASP proposes a way for enterprises to automatically identify AI agents

“The confusion about ANS versus protocols like MCP, A2A, ACP, and Microsoft Entra is understandable, but there’s an important distinction to make: ANS is a discovery service, not a communication protocol,” Narajala said. “MCP, A2A and ACP define how agents talk to each other once connected, like HTTP for web. ANS defines how agents find and verify each other before communication, like DNS for web. Microsoft Entra provides identity services, but primarily within Microsoft’s ecosystem.” ... “We’re fast approaching the point where the need for a standard to identify AI agents becomes painfully obvious. Right now, it’s a mess. Companies are spinning up agents left and right, with no trusted way to know what they are, what they do, or who built them,” Tvrdik said. “The Wild West might feel exciting, but we all know how most of those stories end. And it’s not secure.” As for ANS, he said, “It makes sense in theory. Treat agents like domains. Give them names, credentials, and a way to verify who’s talking to what. That helps with security, sure, but also with keeping things organized. Without it, we’re heading into chaos.” But Tvrdik stressed that the deployment mechanisms will ultimately determine if ANS works.
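The "DNS for agents" analogy suggests a simple mental model, sketched below. The class, fields, and agent names are illustrative inventions, not the actual OWASP ANS schema: a registry resolves an agent name to its declared capabilities and lets the caller verify a key fingerprint before any communication starts:

```python
# Hedged sketch of a DNS-like discovery service for agents: lookup and
# verification happen before any agent-to-agent protocol (MCP, A2A, ...) runs.

import hashlib

class AgentNameService:
    def __init__(self):
        self._records = {}

    def register(self, name: str, capabilities: list, public_key: bytes):
        """Record who an agent is, what it does, and a credential fingerprint."""
        self._records[name] = {
            "capabilities": capabilities,
            "key_fingerprint": hashlib.sha256(public_key).hexdigest(),
        }

    def resolve(self, name: str):
        """Discovery: find an agent's record by name, like a DNS lookup."""
        return self._records.get(name)

    def verify(self, name: str, presented_key: bytes) -> bool:
        """Check a presented credential against the registered fingerprint."""
        record = self._records.get(name)
        if record is None:
            return False
        presented = hashlib.sha256(presented_key).hexdigest()
        return record["key_fingerprint"] == presented

ans = AgentNameService()
ans.register("billing-agent.example", ["invoice.read"], b"agent-public-key")

trusted = ans.verify("billing-agent.example", b"agent-public-key")
impostor = ans.verify("billing-agent.example", b"forged-key")
```

Once `verify` succeeds, the agents would switch to a communication protocol such as MCP or A2A, which is exactly the discovery-versus-communication split Narajala draws.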


Driving DevOps With Smart, Scalable Testing

Testing apps manually isn’t easy and consumes a lot of time and money. Testing complex ones with frequent releases requires an enormous number of human hours when attempted manually. This will affect the release cycle, results will take longer to appear, and if shown to be a failure, you’ll need to conduct another round of testing. What’s more, the chances of doing it correctly, repeatedly, and without any human error are slim. Those factors have driven the development of automation throughout all phases of the testing process, ranging from infrastructure builds to actual testing of code and applications. As for who should write which tests, as a general rule of thumb, it’s a task best suited to software engineers. They should create unit and integration tests as well as UI E2E tests. QA analysts should also be tasked with writing UI E2E test scenarios together with individual product owners. QA teams collaborating with business owners enhance product quality by aligning testing scenarios with real-world user experiences and business objectives. ... AWS CodePipeline can provide completely managed continuous delivery that creates pipelines, orchestrates and updates infrastructure and apps. It also works well with other crucial AWS DevOps services, while integrating with third-party action providers like Jenkins and Github. 
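The division of labor described above might look like this in practice. `apply_discount` is a made-up stand-in for business logic; the unit tests are the kind of small, deterministic checks engineers wire into the pipeline, while E2E scenarios would be specified separately with product owners:

```python
# Illustrative unit-test sketch: fast, repeatable checks on pure logic that
# run on every commit, catching exactly the human-error class that manual
# testing misses.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` as a pipeline stage; a failure stops the build before the slower integration and E2E stages ever start.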


Bridging the Digital Divide: Understanding APIs

While both Event-Driven Architecture (EDA) and Data-Driven Architecture (DDA) are crucial for modern enterprises, they serve distinct purposes, operate on different core principles, and manifest through different architectural characteristics. Understanding these differences is key for enterprise architects to effectively leverage their individual strengths and potential synergies. While EDA is often highly operational and tactical, facilitating immediate responses to specific triggers, DDA can span both operational and strategic domains. A key differentiator between the two lies in the “granularity of trigger.” EDA is typically triggered by fine-grained, individual events—a single mouse click, a specific sensor reading, a new message arrival. Each event is a distinct signal that can initiate a process. DDA, on the other hand, often initiates its processes or derives its triggers from aggregated data, identified patterns, or the outcomes of analytical models that have processed numerous data points. For example, an analytical process in DDA might be triggered by the availability of a complete daily sales dataset, or an alert might be generated when a predictive model identifies an anomaly based on a complex evaluation of multiple data streams over time. This distinction in trigger granularity directly influences the design of processing logic, the selection of underlying technologies, and the expected immediacy and nature of the system’s response.
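The "granularity of trigger" distinction can be shown in a toy sketch (thresholds and readings invented): the EDA handler reacts to every single event as it arrives, while the DDA check fires only once the aggregated daily dataset is complete:

```python
# Toy contrast of trigger granularity: per-event handling (EDA) versus a
# check triggered by an aggregated batch of data (DDA).

daily_readings = []
alerts = []

def on_sensor_event(reading: float):
    """EDA: fine-grained — each individual event is a distinct trigger."""
    daily_readings.append(reading)
    if reading > 100:                       # immediate, per-event response
        alerts.append(f"spike: {reading}")

def on_daily_batch_complete(readings: list) -> str:
    """DDA: coarse-grained — triggered by the complete aggregated dataset."""
    mean = sum(readings) / len(readings)
    return "anomalous day" if mean > 60 else "normal day"

for value in [50, 55, 120, 48, 52]:         # events stream in one at a time
    on_sensor_event(value)

daily_verdict = on_daily_batch_complete(daily_readings)
```

The per-event path must respond in milliseconds to a single reading, while the batch path tolerates latency but reasons over the whole distribution, which is why the two styles lead to different technology choices.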


What good threat intelligence looks like in practice

The biggest shortcoming is often in the last mile, connecting intelligence to real-time detection, response, and risk mitigation. Another challenge is organizational silos. In many environments, the CTI team operates separately from SecOps, incident response, or threat hunting teams. Without seamless collaboration between those functions, threat intelligence remains a standalone capability rather than a force multiplier. This is often where threat intelligence teams can be challenged to demonstrate value into security operations. ... Rather than picking one over the other, CISOs should focus on blending these sources and correlating them with internal telemetry. The goal is to reduce noise, enhance relevance, and produce enriched insights that reflect the organization’s actual threat surface. Feed selection should also consider integration capabilities — intelligence is only as useful as the systems and people that can act on it. When threat intelligence is operationalized, a clear picture can be formed from the variety of available threat feeds. ... The threat intel team should be seen not as another security function, but as a strategic partner in risk reduction and decision support. CISOs can encourage cross-functional alignment by embedding CTI into security operations workflows, incident response playbooks, risk registers, and reporting frameworks.


4 ways to safeguard CISO communications from legal liabilities

“Words matter incredibly in any legal proceeding,” Brown agreed. “The first thing that will happen will be discovery. And in discovery, they will collect all emails, all Teams, all Slacks, all communication mechanisms, and then run queries against that information.” Speaking with professionalism is not only a good practice in building an effective cybersecurity program, but it can go a long way to warding off legal and regulatory repercussions, according to Scott Jones, senior counsel at Johnson & Johnson. “The seriousness and the impact of your words and all other aspects of how you conduct yourself as a security professional can have impacts not only on substantive cybersecurity, but also what harms might befall your company either through an enforcement action or litigation,” he said. ... CISOs also need to pay attention to what they say based on the medium in which they are communicating. Pay attention to “how we communicate, who we’re communicating with, what platforms we’re communicating on, and whether it’s oral or written,” Angela Mauceri, corporate director and assistant general counsel for cyber and privacy at Northrop Grumman, said at RSA. “There’s a lasting effect to written communications.” She added, “To that point, you need to understand the data governance and, more importantly, the data retention policy of those electronic communication platforms, whether it exists for 60 days, 90 days, or six months.”