Showing posts with label change management. Show all posts
Showing posts with label change management. Show all posts

Daily Tech Digest - September 28, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


What happens when AI becomes the customer?

If the first point of contact is no longer a person but an AI agent, then traditional tactics like branding, visual merchandising or website design will have reduced impact. Instead, the focus will move to how easily machines can find and understand product information. Retailers will need to ensure that data, from specifications and availability to pricing and reviews, is accurate, structured and optimised for AI discovery. Products will no longer be browsed by humans but scanned and filtered by autonomous systems making selections on someone else’s behalf. ... This trend is particularly strong among younger and higher-income consumers. People under 35 are far more likely to use AI throughout the buying process, particularly for everyday items like groceries, toiletries and clothes. For this group, convenience matters. Many are comfortable letting technology take over simple tasks, and when it comes to low cost, low risk products, they’re happy for AI to handle the entire purchase. ... These developments point to the rise of the agentic internet – a world in which AI agents become the main way consumers interact with brands. As these tools search, compare, buy and manage products on users’ behalf, they will reshape how visibility, loyalty and influence work. Retailers have less than five years to respond. That means investing in clean, structured product data, adapting automation where it’s welcomed, and keeping the human touch where trust matters. 


The overlooked cyber risk in data centre cooling systems

Data centre operations are critically dependent on a complex ecosystem of OT equipment, including HVAC and building management systems. As operators adopt closed-loop and waterless cooling to improve efficiency, these systems are increasingly tied into BMS and DCIM platforms. This expands the attack surface of networks that were once more segmented. A compromise of these systems could directly affect temperature, humidity or airflow, with clear implications for the availability of services that critical infrastructure asset owners rely on. ... Resilience also depends on secure remote access, including multi-factor authentication and controlled jump-host environments for vendors and third parties. Finally, risk-based vulnerability management ensures that critical assets are either patched, mitigated, or closely monitored for exploitation, even where systems cannot easily be taken offline. Taken together, these controls provide a framework for protecting data centre cooling and building systems without slowing the drive for efficiency and innovation. ... As the UK expands its data centre capacity to fuel AI ambitions and digital transformation, cybersecurity must be designed into the physical systems that keep those facilities stable. Cooling is not just an operational detail. It is a potential target — and protecting it is essential to ensuring the sector’s growth is sustainable, resilient, and secure.


Rethinking Regression Testing with Change-to-Test Mapping

Regression testing is essential to software quality, but in enterprise projects it often becomes a bottleneck. Full regression suites may run for hours, delaying feedback and slowing delivery. The problem is sharper in agile and DevOps, where teams must release updates daily. ... The need for smarter regression strategies is more urgent than ever. Modern software systems are no longer monoliths; they are built from microservices, APIs, and distributed components, each evolving quickly. Every code change can ripple across modules, making full regressions increasingly impractical. At the same time, CI/CD costs are rising sharply. Cloud pipelines scale easily but generate massive bills when regression packs run repeatedly. ... The core idea is simple: “If only part of the code changes, why not run only the tests covering that part?” Change-to-test mapping links modified code to the relevant tests. Instead of running the entire suite on every commit, the approach executes a targeted subset – while retaining safeguards such as safety tests and fallback runs. What makes this approach pragmatic is that it does not rely on building a “perfect” model of the system. Instead, it uses lightweight signals – such as file changes, annotations, or coverage data – to approximate the most relevant set of tests. Combined with guardrails, this creates a balance: fast enough to keep up with modern delivery, yet safe enough to trust in production-grade environments.


Is A Human Touch Needed When Compliance Has Automation?

Even with technical issues, automation may highlight missing patches, but humans are the ones who must prioritize fixes, coordinate remediation, and validate that vulnerabilities are closed. Audits highlight this divide even more clearly. Regulators rarely accept a data dump without explanation. Compliance officers must be able to explain how controls work, why exceptions exist, and what is being done to address them. Without human review, automated alerts risk creating false positives, blind spots, or alert fatigue. Perhaps most critically, over-dependence on automation can erode institutional knowledge, leaving teams unprepared to interpret risk independently. ... By eliminating repetitive evidence collection, teams gain the capacity to analyze training effectiveness, scenario-plan future threats, and interpret regulatory changes. Automation becomes not a replacement for people, but a multiplier of their impact. ... Over-reliance on automation carries its own risks. A clean dashboard may mask legacy systems still in production or system blind spots if a monitoring tool goes down. Without active oversight, teams may not discover gaps until the next audit. There’s also the danger of compliance becoming a “black box,” where staff interact with dashboards but never learn how to evaluate risk themselves. CIOs need to actively design against these vulnerabilities.


14 Challenges (And Solutions) Of Filling Fractional Leadership Roles

Filling a fractional leadership role is tough when companies underestimate the expertise required to thrive in such a role. Fractional leaders need both autonomy and seamless integration with key stakeholders. ... One challenge of fractional leadership is grasping the company culture and processes with limited time on site. Without that context, even the most skilled leader can struggle to drive meaningful change or build credibility. ... Finding the right culture fit for a fractional leadership role can be challenging. High-performing leadership teams are tight-knit ecosystems, and a fractional leader’s challenges with breaking into them and fitting into their culture can be daunting. ... One challenge is unrealistic expectations—wanting full-time availability at part-time cost. The key is to define scope, decision rights and deliverables upfront. Treat fractional leaders as strategic partners, not stopgaps. Clear onboarding and aligned incentives are essential to driving value and trust. ... A common hurdle with fractional roles is misaligned expectations—impact is needed fast, but boundaries and authority aren’t always defined. The fix? Be upfront: outline goals, decision-making limits and integration plans early so leaders can add value quickly without friction.


Will the EU Designate AI Under the Digital Markets Act?

There are two main ways in which the DMA will be relevant for generative AI services. First, a generative AI player may offer a core platform service and meet the gatekeeper requirements of the DMA. Second, generative AI-powered functionalities may be integrated or embedded in existing designated core platform services and therefore be covered by the DMA obligations. Those obligations apply in principle to the entire core platform service as designated, including features that rely on generative AI. ... Cloud computing is already listed as a core platform service under the DMA, and thus, designating cloud services would be a much faster process than creating a new core platform service category. Michelle Nie, a tech policy researcher formerly with the Open Markets Institute, says the EU should designate cloud providers to tackle the infrastructural advantages held by gatekeepers. Indeed, she has previously written for Tech Policy Press that doing so “would help address several competitive concerns like self-preferencing, using data from businesses that rely on the cloud to compete against them, or disproportionate conditions for termination of services.” ... Introducing contestability and fairness, the stated goals of the DMA, into digital ecosystems increasingly relied on by private and public institutions could not be more critical. 


The Looming Authorization Crisis: Why Traditional IAM Fails Agentic AI

From copilots booking travel to intelligent agents updating systems and coordinating with other bots, we’re stepping into a world where software can reason, plan, and operate with increasing autonomy.This shift brings immense promise and significant risk. The identity and access management (IAM) infrastructures that we rely upon today were built for people and fixed service accounts. They weren’t designed to manage self-directing, dynamic digital agents. And yet that’s what Agentic AI demands. ... The road to a comprehensive and internationally accessible Agentic AI IAM framework is a daunting task. The rapid pace of AI development demands accelerated IAM security guidance, especially for heavily regulated sectors. Continued research, continued development of standards, and rigorous interoperability are required to prevent fragmentation into incompatible identity silos. We must also address the ethical issues, such as bias detection and mitigation in credentials, and offer transparency and explainability of IAM decisions. ... The stakes are high. Without a comprehensive plan for managing these agents—one that tracks who they are, what they can perceive, and when their permissions expire—we risk disaster through way of complexity and compromise. Identity remains the foundation of enterprise security, and its scope must reach rapidly to shield the autonomous revolution.


How immutability tamed the Wild West

One of the first lessons that a new programmer should learn is that global variables are a crime against all that is good and just. If a variable is passed around like a football, and its state can change anywhere along the way, then its state will change along the way. Naturally, this leads to hair pulling and frustration. Global variables create coupling, and deep and broad coupling is the true crime against the profession. At first, immutability seems kind of crazy—why eliminate variables? Of course things need to change! How the heck am I going to keep track of the number of items sold or the running total of an order if I can’t change anything? ... The key to immutability is understanding the notion of a pure function. A pure function is one that always returns the same output for a given input. Pure functions are said to be deterministic, in that the output is 100% predictable based on the input. In simpler terms, a pure function is a function with no side effects. It will never change something behind your back. ... Immutability doesn’t mean nothing changes; it means values never change once created. You still “change” by rebinding a name to a new value. The notion of a “before” and “after” state is critical if you want features like undo, audit tracing, and other things that require a complete history of state. Back in the day, GOSUB was a mind-expanding concept. It seems so quaint today. 


What Lessons Can We Learn from the Internet for AI/ML Evolution?

One of the defining principles of the Internet was to keep the core simple and push the intelligence to the edge. The network and its host computers just simply delivered packets reliably without dictating or controlling applications. That principle enabled the explosion of the Web, streaming, and countless other services. In AI, similar principles should be considered. Instead of centralizing everything in “one foundational model”, we should empower distributed agents and edge intelligence. Core infrastructure should stay simple and robust, enabling diverse use cases on top. ... One of the most important lessons of all from the Internet is that there be no single company nor government-owned or controlled TCP/IP stack. It is neutral governance that created global trust and adoption. Institutions such as ICANN, and the regional Internet registries (RIRs) played a key role by managing domain names and IP address assignments in an open and transparent way, ensuring that resources were allocated fairly across geographies. This kind of neutral stewardship allowed the Internet to remain interoperable and borderless. On the other hand, today’s AI landscape is controlled by a handful of big-tech companies. To scale AI responsibly, we will need similar global governance structures—an “IETF for AI,” complemented by neutral registries that can manage shared resources such as model identifiers, agent IDs, coordinating protocols, among others.


Digital Transformation: Investments Soar, But Cyber Risks (Often) Outpace Controls

With the accelerating digital transformation, periodic security and compliance reviews are obsolete. Nelson emphasizes the need for “continuous assessment—continuous monitoring of privacy, regulatory, and security controls,” with automation used wherever feasible. Third-party and supply-chain risk must be continuously monitored, not just during vendor onboarding. Similarly, asset management can no longer be neglected, as even overlooked legacy devices—like unpatched Windows XP machines in manufacturing—can serve as vectors for persistent threats. Effective governance is crucial to enhancing security during periods of rapid digital transformation, Nelson emphasized. By establishing robust frameworks and clear policies for acceptable use, organizations can ensure that new technologies, such as AI, are adopted responsibly and securely. ... Maintaining cybersecurity within Governance, Risk, and Compliance (GRC) programs helps keep security from being a reactive cost center, as security measures are woven into the digital strategy from the outset, rather than being retrofitted. And GRC frameworks provide real-time visibility into organizational risks, facilitate data-driven decision-making, and create a culture where risk awareness coexists with innovation. This harmony between governance and digital initiatives helps businesses navigate the digital landscape while ensuring their operations remain secure, compliant, and prepared to adapt to change.

Daily Tech Digest - September 14, 2025


Quote for the day:

"Courage doesn't mean you don't get afraid. Courage means you don't let fear stop you." -- Bethany Hamilton


The first three things you’ll want during a cyberattack

The first wave of panic a cyberattack comes from uncertainty. Is it ransomware? A phishing campaign? Insider misuse? Which systems are compromised? Which are still safe? Without clarity, you’re guessing. And in cybersecurity, guesswork can waste precious time or make the situation worse. ... Clarity transforms chaos into a manageable situation. With the right insights, you can quickly decide: What do we isolate? What do we preserve? What do we shut down right now? The MSPs and IT teams that weather attacks best are the ones who can answer those questions without delays. ... Think of it like firefighting: Clarity tells you where the flames are, but control enables you to prevent the blaze from consuming the entire building. This is also where effective incident response plans matter. It’s not enough to have the tools; you need predefined roles, playbooks and escalation paths so your team knows exactly how to assert control under pressure. Another essential in this scenario is having a technology stack with integrated solutions that are easy to manage. ... Even with visibility and containment, cyberattacks can leave damage behind. They can encrypt data and knock systems offline. Panicked clients demand answers. At this stage, what you’ll want most is a lifeline you can trust to bring everything back and get the organization up and running again.


Emotional Blueprinting: 6 Leadership Habits To See What Others Miss

Most organizations use tools like process mapping, journey mapping, and service blueprinting. All valuable. But often, these efforts center on what needs to happen operationally—steps, sequences, handoffs. Even journey maps that include emotional states tend to track generalized sentiment (“frustrated,” “confused”) at key stages. What’s often missing is an observational discipline that reveals emotional nuance in real time. ... People don’t just come to get things done. They come with emotional residue—worries, power dynamics, pride, shame, hope, exhaustion. And while you may capture some of this through traditional tools, observation fills in what the tools can’t name. ... Set aside assumptions and resist the urge to explain. Just watch. Let insight come without forcing interpretation. ... Focus on micro-emotions in the moment, then pull back to observe the emotional arc of a journey. ... Observe what happens in thresholds—hallways, entries, exits, loading screens. These in-between moments often hold the strongest emotional cues. ... Track how people react, not just what they do. Does their behavior show trust, ease, confusion, or hesitance? ... Trace where momentum builds—or breaks. Energy flow is often a more reliable signal than feedback forms.


Cloud security gaps widen as skills & identity risks persist

According to the report, today's IT environment is increasingly complicated. The data shows that 82% of surveyed organisations now operate hybrid environments, and 63% make use of multiple cloud providers. As the use of cloud services continues to expand, organisations are required to achieve unified security visibility and enforce consistent security policies across fragmented platforms. However, the research found that most organisations currently lack the necessary controls to manage this complexity. This deficiency is leading to blind spots that can be exploited by attackers. ... The research identifies identity management as the central vulnerability in current cloud security practices. A majority of respondents (59%) named insecure identities and permissions as their primary cloud security concern. ... "Identity has become the cloud's weakest link, but it's being managed with inconsistent controls and dangerous permissions. This isn't just a technical oversight; it's a systemic governance failure, compounded by a persistent expertise gap that stalls progress from the server room to the boardroom. Until organisations get back to basics, achieving unified visibility and enforcing rigorous identity governance, they will continue to be outmanoeuvred by attackers," said Liat Hayun, VP of Product and Research at Tenable.


Biometrics inspire trust, policy-makers invite backlash

The digital ID ambitions of the EU and World are bold, the adoption numbers still to come, they hope. Romania is reducing the number of electronic identity cards it is planning to issue for free by a million and a half following a cut to the project’s budget. It risks fines that eventually in theory could stretch into hundreds of millions of euros for missing the EU’s digital ID targets. World now gives fans of IDs issued by the private sector, iris biometrics, decentralized systems and blockchain technologies an opportunity to invest in them on the NASDAQ. ... An analysis of the Online Safety Act by the ITIF cautions that any attempt to protect children from online harms invites backlash if it blocks benign content, or if it isn’t crystal clear about the lines between harmful and legal content. Content that promotes self-harm is being made illegal in the UK under the OSA, shifting the responsibility of online platforms from age assurance to content moderation. By making the move under the OSA, new UK Tech Secretary Liz Kendall risks strengthening arguments that the government is surreptitiously increasing censorship.  Her predecessor Peter Kyle, having presided over the project so far, now gets to explain it to the American government as Trade Secretary. Domestically, more children than adults consider age checks effective, survey respondents tell Sumsub, but nearly half of UK consumers worry about the OSA leading to censorship.


How to make your people love change

The answer lies in a core need every person has: self-concordance. When change is aligned with a person’s aspirations, values, and purpose, they are more likely to embrace it. To make that happen, we need a mindset shift. This needs to happen at two levels. ... The first thing to consider is that we have to think of employees not as objects of change but as internal customers. Just like marketers try to study consumer behaviour and aspirations with deep granularity, we must try to understand employees in similar detail. And not just see them as professionals but as individuals. ... Second, it meets the employees where they are, instead of trying to push them towards an agenda. And third, and most importantly, it makes them not just invested in the change process but turns them into the change architects. What these architects will build may not be the same as what we want them to, but there will be some overlaps. And because we empowered them to do this, they become fellow travelers, and this creates a positive change momentum, which we can harvest to effect the changes we want as well. ... We worked with a client where there was a need to get out of excessively critical thinking—a practice that had kept them compliant and secure, but was now coming in the way of growth—and move towards a more positive culture. 


Cloud-Native Security in 2025: Why Runtime Visibility Must Take Center Stage

For years, cloud security has leaned heavily on preventative controls like code scanning, configuration checks, and compliance enforcement. While essential, these measures provide only part of the picture. They identify theoretical risks, but not whether those risks are active and exploitable in production. Runtime visibility fills that gap. By observing what workloads are actually running — and how they behave — security teams gain the highest fidelity signal for prioritizing threats. ... Modern enterprises face an avalanche of alerts across vulnerability scanners, cloud posture tools, and application security platforms. The volume isn't just overwhelming — it's unsustainable. Analysts often spend more time triaging alerts than actually fixing problems. To be effective, organizations must map vulnerabilities and misconfigurations to:The workloads that are actively running. The business applications they support. The teams responsible for fixing them. This alignment is critical for bridging the gap between security and development. Developers often see security findings as disruptive, low-context interruptions. ... Another challenge enterprises face is accountability. Security findings are only valuable if they reach the right owner with the right context. Yet in many organizations, vulnerabilities are reported without clarity about which team should fix them.


Want to get the most out of agentic AI? Get a good governance strategy in place

The core challenge for CIOs overseeing agentic AI deployments will lie in ensuring that agentic decisions remain coherent with enterprise-level intent, without requiring constant human arbitration. This demands new governance models that define strategic guardrails in machine-readable logic and enforce them dynamically across distributed agents. ... Agentic agents in the network, especially those retrained or fine-tuned locally, may fail to grasp the nuance embedded in these regulatory thresholds. Worse, their decisions might be logically correct yet legally indefensible. Enterprises risk finding themselves in court arguing the ethical judgment of an algorithm. The answer lies in hybrid intelligence: pairing agents’ speed with human interpretive oversight for edge cases, while developing agentic systems capable of learning the contours of ambiguity. ... Enterprises must build policy meshes that understand where an agent operates, which laws apply, and how consent and access should behave across borders. Without this, global companies risk creating algorithmic structures that are legal in no country at all. In regulated industries, ethical norms require human accountability. Yet agent-to-agent systems inherently reduce the role of the human operator. This may lead to catastrophic oversights, even if every agent performs within parameters.


The Critical Role of SBOMs (Software Bill of Materials) In Defending Medtech From Software Supply Chain Threats

One of the primary benefits of an SBOM is enhanced transparency and traceability. By maintaining an accurate and up-to-date inventory of all software components, organizations can trace the origin of each component and monitor any changes or updates. ... SBOMs play a vital role in vulnerability management. By knowing exactly what components are present in their software, organizations can quickly identify and address vulnerabilities as they are discovered. Automated tools can scan SBOMs against known vulnerability databases, alerting organizations to potential risks and enabling timely remediation. ... For medical device manufacturers, compliance with regulatory requirements is paramount. Regulatory bodies, such as the U.S. FDA (Federal Drug Administration) and the EMA (European Medicines Agency), have recognized the importance of SBOMs in ensuring the security and safety of medical devices. ... As part of this regulatory framework, the FDA emphasizes the importance of incorporating cybersecurity measures throughout the product lifecycle, from design and development to post-market surveillance. One of the critical components of this guidance is the inclusion of an SBOM in premarket submissions. The SBOM serves as a foundational element in identifying and managing cybersecurity risks. The FDA’s requirement for an SBOM is not just about listing software components; it’s about promoting a culture of transparency and accountability within the medical device industry.


Shedding light on Shadow AI: Turning Risk to Strategic Advantage

The fact that employees are adopting these tools on their own tells us something important: they are eager for greater efficiency, creativity, and autonomy. Shadow AI often emerges because enterprise tools lag what’s available in the consumer market, or because official processes can’t keep pace with employee needs. Much like the early days of shadow IT, this trend is a response to bottlenecks. People want to work smarter and faster, and AI offers a tempting shortcut. The instinct of many IT and security teams might be to clamp down, block access, issue warnings, and attempt to regain control. ... Employees using AI independently are effectively prototyping new workflows. The real question isn’t whether this should happen, but how organisations can learn from and build on these experiences. What tools are employees using? What are they trying to accomplish? What workarounds are they creating? This bottom-up intelligence can inform top-down strategies, helping IT teams better understand where existing solutions fall short and where there’s potential for innovation. Once shadow AI is recognised, IT teams can move from a reactive to a proactive stance, offering secure, compliant alternatives and frameworks that still allow for experimentation. This might include vetted AI platforms, sandbox environments, or policies that clarify appropriate use without stifling initiative.


Why Friction Should Be a Top Consideration for Your IT Team

Some friction can be good, such as access controls that may require users to take a few seconds to authenticate their identities but that help to secure sensitive data, or change management processes that enable new ways of doing business. By contrast, bad friction creates delays and stress without adding value. Users may experience bad friction in busywork that delivers little value to an organization, or in provisioning delays that slow down important projects. “You want to automate good friction wherever possible,” Waddell said. “You want to eliminate bad friction.” ... As organizations work to eliminate friction, they can explore new approaches in key areas. The use of platform engineering lessens friction in multiple ways, enabling organizations to reduce the time needed to bring new products and services to market. Further, it can help organizations take advantage of automation and standardization while also cutting operational overhead. Establishing cyber resilience is another important way to remove friction. Organizations certainly want to avoid the massive friction of a data breach, but they also want to ensure that they can minimize the impact of a breach and enable faster incident response and recovery. “AI threats will outpace our ability to detect them,” Waddell said. “As a result, resilience will matter more than prevention.”

Daily Tech Digest - September 08, 2025


Quote for the day:

"Let no feeling of discouragement prey upon you, and in the end you are sure to succeed." -- Abraham Lincoln


Coding With AI Assistants: Faster Performance, Bigger Flaws

One challenge comes in the form of how AI coding assistants tend to package their code. Rather than delivering bite-size pieces, they generally deliver larger code pull requests for porting into the main project repository. Apiiro saw AI code assistants deliver three to four times as many code commits - meaning changes to a code repository - than non-AI code assistants, but packaging fewer pull requests. The problem is that larger PRs are inherently riskier and more time-consuming to verify. "Bigger, multi-touch PRs slow review, dilute reviewer attention and raise the odds that a subtle break slips through," said Itay Nussbaum, a product manager at Apiiro. ... At the same time, the tools generated deeper problems, in the form of a 150% increase in architectural flaws and an 300% increase in privilege issues. "These are the kinds of issues scanners miss and reviewers struggle to spot - broken auth flows, insecure designs, systemic weaknesses," Nussbaum said. "In other words, AI is fixing the typos but creating the time bombs." The tools also have a greater tendency to leak cloud credentials. "Our analysis found that AI-assisted developers exposed Azure service principals and storage access keys nearly twice as often as their non-AI peers," Nussbaum said. "Unlike a bug that can be caught in testing, a leaked key is live access: an immediate path into the production cloud infrastructure."


IT Leadership Is More Change Management Than Technical Management

Planning is considered critical in business to keep an organization moving forward in a predictable way, but Mahon doesn’t believe in the traditional annual and long-term planning in which lots of time is invested in creating the perfect plan which is then executed. “Never get too engaged in planning. You have a plan, but it’s pretty broad and open-ended. The North Star is very fuzzy, and it never gets to be a pinpoint [because] you need to focus on all the stuff that's going on around you,” says Mahon. “You should know exactly what you're going to do in the next two to three months. From three to six months out, you have a really good idea what you're going to do but be prepared to change. And from six to nine months or a year, [I wait until] we get three months away before I focus on it because tech and business needs change rapidly.” ... “The good ideas are mostly common knowledge. To be honest, I don’t think there are any good self-help books. Instead, I have a leadership coach who is also my mental health coach,” says Mahon. “Books try to get you to change who you are, and it doesn’t work. Be yourself. I have a leadership coach who points out my flaws, 90% of which I’m already aware of. His philosophy is don’t try to fix the flaw, address the flaw so, for example, I’m mindful about my tendency to speak too directly.”


The Anatomy of SCREAM: A Perfect Storm in EA Cupboard

SCREAM- Situational Chaotic Realities of Enterprise Architecture Management- captures the current state of EA practice, where most organizations, from medium to large complexity, struggle to derive optimal value from investments in enterprise architecture capabilities. It’s the persistent legacy challenges across technology stacks and ecosystems that need to be solved to meet strategic business goals and those moments when sudden, ill-defined executive needs are met with a hasty, reactive sprint, leading to a fractured and ultimately paralyzing effect on the entire organization. ... The paradox is that the very technologies offering solutions to business challenges are also key sources of architectural chaos, further entrenching reactive SCREAM. As noted, the inevitable chaos and fragmentation that emerge from continuous technology additions lead to silos and escalating compatibility issues. ... The chaos of SCREAM is not just an external force; it’s a product of our own making. While we preach alignment to the business, we often get caught up in our own storm in an EA cupboard. How often do we play EA on EA? ... While pockets of recognizable EA wins may exist through effective engagement, a true, repeatable value-add requires a seat at the strategic table. This means “architecture-first” must evolve beyond being a mere buzzword or a token effort, becoming a reliable approach that promotes collaborative success rather than individual credit-grabbing.


How Does Network Security Handle AI?

Detecting when AI models begin to vary and yield unusual results is the province of AI specialists, users and possibly the IT applications staff. But the network group still has a role in uncovering unexpected behavior. That role includes: Properly securing all AI models and data repositories on the network. Continuously monitoring all access points to the data and the AI system. Regularly scanning for network viruses and any other cyber invaders that might be lurking. ... both application and network teams need to ensure strict QA principles across the entire project -- much like network vulnerability testing. Develop as many adversarial prompt tests coming from as many different directions and perspectives as you can. Then try to break the AI system in the same way a perpetrator would. Patch up any holes you find in the process. ... Apply least privilege access to any AI resource on the network and continually monitor network traffic. This philosophy should also apply to those on the AI application side. Constrict the AI model being used to the specific use cases for which it was intended. In this way, the AI resource rejects any prompts not directly related to its purpose. ... Red teaming is ethical hacking. In other words, deploy a team whose goal is to probe and exploit the network in any way it can. The aim is to uncover any network or AI vulnerability before a bad actor does the same.


Lack of board access: The No. 1 factor for CISO dissatisfaction

CISOs who don’t get access to the board are often buried within their organizations. “There are a lot of companies that will hire at a director level or even a senior manager level and call it a CISO. But they don’t have the authority and scope to actually be able to execute what a CISO does,” says Nick Kathmann, CISO at LogicGate. Instead of reporting directly to the board or CEO, these CISOs will report to a CIO, CTO or other executive, despite the problems that can arise in this type of reporting structure. CIOs and CTOs are often tasked with implementing new technology. The CISO’s job is to identity risks and ensure the organization is secure. “If the CIO doesn’t like those risks or doesn’t want to do anything to fix those risks, they’ll essentially suppress them [CISOs] as much as they can,” says Kathmann. ... Getting in front of the board is one thing. Effectively communicating cybersecurity needs and getting them met is another. It starts with forming relationships with C-suite peers. Whether CISOs are still reporting up to another executive or not, they need to understand their peers’ priorities and how cybersecurity can mesh with those. “The CISO job is an executive job. As an executive, you rely completely on your peer relationships. You can’t do anything as an executive in a vacuum,” says Barrack. Working in collaboration, rather than contention, with other executives can prepare CISOs to make the most of their time in front of the board.


From Vault Sprawl to Governance: How Modern DevOps Teams Can Solve the Multi-Cloud Secrets Management Nightmare

Every time an application is updated or a new service is deployed, one or multiple new identities are born. These NHIs include service accounts, CI/CD pipelines, containers, and other machine workloads, the running pieces of software that connect to other resources and systems to do work. Enterprises now commonly see 100 or more NHIs for every single human identity. And that number keeps growing. ... Fixing this problem is possible, but it requires an intentional strategy. The first step is creating a centralized inventory of all secrets. This includes secrets stored in vaults, embedded in code, or left exposed in CI/CD pipelines and environments. Orphaned and outdated secrets should be identified and removed. Next, organizations must shift left. Developers and DevOps teams require tools to detect secrets early, before they are committed to source control or merged into production. Educating teams and embedding detection into the development process significantly reduces accidental leaks. Governance must also include lifecycle mapping. Secrets should be enriched with metadata such as owner, creation date, usage frequency, and last rotation. Automated expiration and renewal policies help enforce consistency and reduce long-term risk. Contributions should be both product- and vendor-agnostic, focusing on market insights and thought leadership.


Digital Public Infrastructure: The backbone of rural financial inclusion

When combined, these infrastructures — UPI for payments, ONDC for commerce, AAs for credit, CSCs for handholding support and broadband for connectivity form a powerful ecosystem. Together, these enable a farmer to sell beyond the village, receive instant payment and leverage that income proof for a micro-loan, all within a seamless digital journey. Adding to this, e-KYC ensures that identity verification is quick, low-cost and paperless, while AePS provides last-mile access to cash and banking services, ensuring inclusion even for those outside the smartphone ecosystem. This integration reduces dependence on middlemen, enhances transparency and fosters entrepreneurship. ...  Of course, progress does not mean perfection. There are challenges that must be addressed with urgency and sensitivity. Many rural merchants hesitate to fully embrace digital commerce due to uncertainties around Goods and Services Tax (GST) compliance. Digital literacy, though improving, still varies widely, particularly among older populations and women. Infrastructure costs such as last-mile broadband and device affordability remain burdensome for small operators. These are not reasons to slow down but opportunities to fine-tune policy. Simplifying tax processes for micro-enterprises, investing in vernacular digital literacy programmes, subsidising rural connectivity and embedding financial education into community touchpoints such as CSCs will be essential to ensure no one is left behind.


Cybersecurity research is getting new ethics rules, here’s what you need to know

Ethics analysis should not be treated as a one-time checklist. Stakeholder concerns can shift as a project develops, and researchers may need to revisit their analysis as they move from design to execution to publication. ...“Stakeholder ethical concerns impact academia, industry, and government,” Kalu said. “Security teams should replace reflexive defensiveness with structured collaboration: recognize good-faith research, provide intake channels and SLAs, support coordinated disclosure and pre-publication briefings, and engage on mitigation timelines. A balanced, invitational posture, rather than an adversarial one, will reduce harm, speed remediation, and encourage researchers to keep working on that project.” ... While the new requirements target academic publishing, the ideas extend to industry practice. Security teams often face similar dilemmas when deciding whether to disclose vulnerabilities, release tools, or adopt new defensive methods. Thinking in terms of stakeholders provides a way to weigh the benefits and risks of those decisions. ... Peng said ethical standards should be understood as “scaffolds that empower thoughtful research,” providing clarity and consistency without blocking exploration of adversarial scenarios. “By building ethics into the process from the start and revisiting it as research develops, we can both protect stakeholders and ensure researchers can study the potential threats that adversaries, who face no such constraints, may exploit,” she said.


From KYC to KYAI: Why ‘Algorithmic Transparency’ is Now Critical in Banking

This growing push for transparency into AI models has introduced a new acronym to the risk and compliance vernacular: KYAI, or "know your AI." Just like finance institutions must know the important details about their customers, so too must they understand the essential components of their AI models. The imperative has evolved beyond simply knowing "who" to "how." Based on my work helping large banks and other financial institutions integrate AI into their KYC workflows over the last few years, I’ve seen what can happen when these teams spend the time vetting their AI models and applying rigorous transparency standards. And, I’ve seen what can happen when they become overly trusting of black-box algorithms that deliver decisions based on opaque methods with no ability to attribute accountability. The latter rarely ever ends up being the cheapest or fastest way to produce meaningful results. ... The evolution from KYC to KYAI is not merely driven by regulatory pressure; it reflects a fundamental shift in how businesses operate today. Financial institutions that invest in AI transparency will be equipped to build greater trust, reduce operational risks, and maintain auditability without missing a step in innovation. The transformation from black box AI to transparent, governable systems represents one of the most significant operational challenges facing financial institutions today.


Why compliance clouds are essential

From a technical perspective, compliance clouds offer something that traditional clouds can’t match, these are the battle-tested security architectures. By implementing them, the organizations can reduce their data breach risk by 30-40% compared to standard cloud deployments. This is because compliance clouds are constantly reviewed and monitored by third-party experts, ensuring that we are not just getting compliance, but getting an enterprise-grade security that’s been validated by some of the most security-conscious organizations in the world. ... What’s particularly interesting is that 58% of this market is software focused. As organizations prioritize automation and efficiency in managing complex regulatory requirements, this number is set to grow further. Over 75% of federal agencies have already shifted to cloud-based software to meet evolving compliance needs. Following this, we at our organizations have also achieved FedRAMP® High Ready compliance for Cloud. ... Cloud compliance solutions deliver far-reaching benefits that extend well beyond regulatory adherence, offering a powerful mix of cost efficiency, trust building, adaptability, and innovation enablement. ... In an era where trust is a competitive currency, compliance cloud certifications serve as strong differentiators, signaling an organization’s unwavering commitment to data protection and regulatory excellence.

Daily Tech Digest - May 17, 2025


Quote for the day:

“Only those who dare to fail greatly can ever achieve greatly.” -- Robert F.


Top 10 Best Practices for Effective Data Protection

Your first instinct may be to try to keep up with all your data, but this may be a fool's errand. The key to success is to have classification capabilities everywhere data moves, and rely on your DLP policy to jump in when risk arises. Automation in data classification is becoming a lifesaver thanks to the power of AI. AI-powered classification can be faster and more accurate than traditional ways of classifying data with DLP. Ensure any solution you are evaluating can use AI to instantly uncover and discover data without human input. ... Data loss prevention (DLP) technology is the core of any data protection program. That said, keep in mind that DLP is only a subset of a larger data protection solution. DLP enables the classification of data (along with AI) to ensure you can accurately find sensitive data. Ensure your DLP engine can consistently alert correctly on the same piece of data across devices, networks, and clouds. The best way to ensure this is to embrace a centralized DLP engine that can cover all channels at once. Avoid point products that bring their own DLP engine, as this can lead to multiple alerts on one piece of moving data, slowing down incident management and response. Look to embrace Gartner's security service edge approach, which delivers DLP from a centralized cloud service. 


4 Keys To Successful Change Management From The Bain Playbook

From the start, Bain was crystal clear about its case for change, according to Razdan. The company prioritized change management, which meant IT partnering with finance; it also meant cultivating a mindset conducive to change. “We owned the change; we identified a group of high performers within our finance and our IT teams. This community of super-users could readily identify and deal with any of the problems that typically arise in an implementation of this size and scale,” Mackey said. “This was less just changing their technology; it’s changing employee behaviors and setting us up for how we want to grow and change processes going forward.” ... “We actually set up a program to be always measuring the value,” Razdan said. “You have internal stakeholders, you have external stakeholders, you have partnerships; we kind of built an ecosystem of governance and partnership that enabled us to keep everybody on the same page because transparency and communication is critical to success.” Gauging progress via transparent key performance indicators was all the more impressive, given that most of this happened during the worldwide, pandemic-driven move to remote work. “We could assess the implementation, as we went through it, to keep us on track [and] course correct,” Mackey said. 


Emerging AI security risks exposed in Pangea's global study

A significant finding was the non-deterministic nature of large language model (LLM) security. Prompt injection attacks, a method where attackers manipulate input to provoke undesired responses from AI systems, were found to succeed unpredictably. An attack that fails 99 times could succeed on the 100th attempt with identical input, due to the underlying randomness in LLM processing. The study also revealed substantial risks of data leakage and adversarial reconnaissance. Attackers using prompt injection can manipulate AI models to disclose sensitive information or contextual details about the environment in which the system operates, such as server types and network access configurations. 'This challenge has given us unprecedented visibility into real-world tactics attackers are using against AI applications today,' said Oliver Friedrichs, Co-Founder and Chief Executive Officer of Pangea. 'The scale and sophistication of attacks we observed reveal the vast and rapidly evolving nature of AI security threats. Defending against these threats must be a core consideration for security teams, not a checkbox or afterthought.' Findings indicated that basic defences, such as native LLM guardrails, left organisations particularly exposed. 


Dynamic DNS Emerges as Go-to Cyberattack Facilitator

Dynamic DNS (DDNS) services automatically update a domain name's DNS records in real-time when the Internet service provider changes the IP address. Real-time updating for DNS records wasn't needed in the early days of the Internet when static IP addresses were the norm. ... It sounds simple enough, yet bad actors have abused the services for years. More recently, though, cybersecurity vendors have observed an increase in such activity, especially this year. The notorious cybercriminal collective Scattered Spider, for instance, has turned to DDNS to obfuscate its malicious activity and impersonate well-known brands in social engineering attacks. This trend has some experts concerned about a rise in abuse and a surge in "rentable" subdomains. ... In an example of an observed attack, Scattered Spider actors established a new subdomain, klv1.it[.]com, designed to impersonate a similar domain, klv1.io, for Klaviyo, a Boston-based marketing automation company. Silent Push's report noted that the malicious domain had just five detections on VirusTotal at the time of publication. The company also said the use of publicly rentable subdomains presents challenges for security researchers. "This has been something that a lot of threat actors do — they use these services because they won't have domain registration fingerprints, and it makes it harder to track them," says Zach Edwards, senior threat researcher at Silent Push.


The Growing and Changing Threat of Deepfake Attacks

To ensure their deepfake attacks are convincing, malicious actors are increasingly focusing on more believable delivery, enhanced methods, such as phone number spoofing, SIM swapping, malicious recruitment accounts and information-stealing malware. These methods allow actors to convincingly deliver deepfakes and significantly increase a ploy’s overall credibility. ... High-value deepfake targets, such as C-suite executives, key data custodians, or other significant employees, often have moderate to high volumes of data available publicly. In particular, employees appearing on podcasts, giving interviews, attending conferences, or uploading videos expose significant volumes of moderate- to high-quality data for use in deepfakes. This dictates that understanding individual data exposure becomes a key part of accurately assessing the overall enterprise risk of deepfakes. Furthermore, ACI research indicates industries such as consulting, financial services, technology, insurance and government often have sufficient publicly available data to enable medium-to high-quality deepfakes. Ransomware groups are also continuously leaking a high volume of enterprise data. This information can help fuel deepfake content to “talk” about genuine internal documents, employee relationships and other internal details. 


Binary Size Matters: The Challenges of Fitting Complex Applications in Storage-Constrained Devices

Although we are here focusing on software, it is important to say that software does not run in a vacuum. Having an understanding of the hardware our programs run on and even how hardware is developed can offer important insights into how to tackle programming challenges. In the software world, we have a more iterative process, new features and fixes can usually be incorporated later in the form of over-the-air updates, for example. That is not the case with hardware. Design errors and faults in hardware can at the very best be mitigated with considerable performance penalties. These errors can introduce the meltdown and spectre vulnerabilities, or render the whole device unusable. Therefore the hardware design phase has a much longer and rigorous process before release than the software design phase. This rigorous process also impacts design decisions in terms of optimizations and computational power. Once you define a layout and bill of materials for your device, the expectation is to keep this constant for production as long as possible in order to reduce costs. Embedded hardware platforms are designed to be very cost-effective. Designing a product whose specifications such as memory or I/O count are wasted also means a cost increase in an industry where every cent in the bill of materials matters.


Cyber Insurance Applications: How vCISOs Bridge the Gap for SMBs

Proactive risk evaluation is a game-changer for SMBs seeking to maintain robust insurance coverage. vCISOs conduct regular risk assessments to quantify an organization’s security posture and benchmark it against industry standards. This not only identifies areas for improvement but also helps maintain compliance with evolving insurer expectations. Routine audits—led by vCISOs—keep security controls effective and relevant. Third-party risk evaluations are particularly valuable, given the rise in supply chain attacks. By ensuring vendors meet security standards, SMBs reduce their overall risk profile and strengthen their position during insurance applications and renewals. Employee training programs also play a critical role. By educating staff on phishing, social engineering, and other common threats, vCISOs help prevent incidents before they occur. ... For SMBs, navigating the cyber insurance landscape is no longer just a box-checking exercise. Insurers demand detailed evidence of security measures, continuous improvement, and alignment with industry best practices. vCISOs bring the technical expertise and strategic perspective necessary to meet these demands while empowering SMBs to strengthen their overall security posture.


How to establish an effective AI GRC framework

Because AI introduces risks that traditional GRC frameworks may not fully address, such as algorithmic bias and lack of transparency and accountability for AI-driven decisions, an AI GRC framework helps organizations proactively identify, assess, and mitigate these risks, says Heather Clauson Haughian, co-founding partner at CM Law, who focuses on AI technology, data privacy, and cybersecurity. “Other types of risks that an AI GRC framework can help mitigate include things such as security vulnerabilities where AI systems can be manipulated or exposed to data breaches, as well as operational failures when AI errors lead to costly business disruptions or reputational harm,” Haughian says. ... Model governance and lifecycle management are also key components of an effective AI GRC strategy, Haughian says. “This would cover the entire AI model lifecycle, from data acquisition and model development to deployment, monitoring, and retirement,” she says. This practice will help ensure AI models are reliable, accurate, and consistently perform as expected, mitigating risks associated with model drift or errors, Haughian says. ... Good policies balance out the risks and opportunities that AI and other emerging technologies, including those requiring massive data, can provide, Podnar says. “Most organizations don’t document their deliberate boundaries via policy,” Podnar says. 


How to Keep a Consultant from Stealing Your Idea

The best defense is a good offense, Thirmal says. Before sharing any sensitive information, get the consultant to sign a non-disclosure agreement (NDA) and, if needed, a non-compete agreement. "These legal documents set clear boundaries on what can and can't do with your ideas." He also recommends retaining records -- meeting notes, emails, and timestamps -- to provide documented proof of when and where the idea in question was discussed. ... If a consultant takes an idea and commercializes it, or shares it with a competitor, it's time to consult legal counsel, Paskalev says. The legal case's strength will hinge on the exact wording within contracts and documentation. "Sometimes, a well-crafted cease-and-desist letter is enough; other times, litigation is required." ... The best way to protect ideas isn't through contracts -- it's by being proactive, Thirmal advises. "Train your team to be careful about what they share, work with consultants who have strong reputations, and document everything," he states. "Protecting innovation isn’t just a legal issue -- it's a strategic one." Innovation is an IT leader's greatest asset, but it's also highly vulnerable, Paskalev says. "By proactively structuring consultant agreements, meticulously documenting every stage of idea development, and being ready to enforce protection, organizations can ensure their competitive edge."


Even the Strongest Leaders Burn Out — Here's the Best Way to Shake the Fatigue

One of the most overlooked challenges in leadership is the inability to step back from the work and see the full picture. We become so immersed in the daily fires, the high-stakes meetings, the make-or-break moments, that we lose the ability to assess the battlefield objectively. The ocean, or any intense, immersive activity, provides that critical reset. But stepping away isn't just about swimming in the ocean. It's about breaking patterns. Leaders are often stuck in cycles — endless meetings, fire drills, back-to-back calls. The constant urgency can trick you into believing that everything is critical. That's why you need moments that pull you out of the daily grind, forcing you to reset before stepping back in. This is where intentional recovery becomes a strategic advantage. Top-performing leaders across industries — from venture capitalists to startup founders — intentionally carve out time for activities that challenge them in different ways. ... The most effective leaders understand that managing their energy is just as important as managing their time. When energy levels dip, cognitive function suffers, and decision-making becomes less strategic. That's why companies known for their progressive workplace cultures integrate mindfulness practices, outdoor retreats and wellness programs — not as perks, but as necessary investments in long-term performance.

Daily Tech Digest - May 16, 2025


Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye


AI Agents: Protocols Driving Next-Gen Enterprise Intelligence

MCP substantially simplifies agentic AI adoption for developers. This roadmap created by the MCP community clearly defines priorities and direction, providing helpful guidance for implementation. Organizations will also benefit from the key initiatives outlined in the roadmap, like the MCP Registry, which enables developers to build a comprehensive network of agents. The emergence of OAuth as a complementary standard protocol strengthens agent ecosystems even more. As with any other framework, MCP has its challenges. MCP offers a wide array of tools to support LLM reasoning, but it doesn’t prioritize coordinated, high-quality task execution. ... ACP will make it easier to implement AI agents on edge and local devices. In instances where the majority of decision-making happens “on the go” in a disconnected environment, this protocol will be useful. Now, developers can build modular systems that can coordinate with a standard protocol to make edge AI easier. A2A will gain momentum and enable cross-platform agents to work together to deliver superior intelligence to customers. A2A will help coordinate agents built using diverse frameworks with a common standard. The main requirement for this is to build an Agent Card that allows agents to be used and consumed by others.


Critical Infrastructure Under Siege: OT Security Still Lags

Industrial organizations and other kinds of critical infrastructure are regularly near or at the top of vendor lists highlighting ransomware targets. It's easy to see why; the important assets a threat actor could compromise put immense pressure on affected organizations to pay up. Kurt Gaudette, vice president of intelligence and services at Dragos, tells Dark Reading that the OT side of the house is "where the bottom line is." And indeed, Sophos reported last year that 65% of respondent organizations in the manufacturing sector reported that they suffered a ransomware attack in the year preceding the report; of those, 62% of organizations paid the ransom. Compounding this, the security postures of organizations that use OT/ICS can vary dramatically compared with traditional IT settings. The importance of staying patched is complicated by the reality that some industrial processes are meant to run uninterrupted for long periods of time and can't be subjected to the downtime necessary to patch. Second, an organization like a local water treatment plant might not have a significant security budget to invest in tools and personnel. Also, ICS products tend to be expensive, and aging equipment is everywhere, with many fields like healthcare drowning in legacy, hard-to-patch products or those without built-in security features.


Your Security Training Isn't Wrong. The Content Is Just Outdated

Although AI makes threats harder to detect, many breaches aren't caused by sophisticated hacking. They happen because organizations might not realize employees let their kids play Minecraft on their corporate laptops, or an old server or forgotten IoT device is still online. If IT doesn't know an asset exists, or who uses it, the team can't secure it, and hackers look for forgotten, unmonitored devices to break in. ... Managing and securing multiple systems can tempt employees to repeat passwords for simplicity. If employees continue to avoid using tools like corporate password managers to enforce strong, unique passwords, IT teams need to ask themselves why. How can they make warnings about this more impactful without burdening staff? ... The trouble is that, even with corporate password managers and MFA in place, hackers are still finding ways to steal credentials. These tools are designed to prevent hackers from entering your home, but if the door is left open, they won't stop anyone from walking in. The average annual growth rate of exposed accounts is 28%. Session expiration policies based on risk level and adaptive access policies can trigger forced signouts if a session shows abnormal behavior (e.g., logging in from a new IP while still active on another), which will help reduce account session takeovers.


Check Point CISO: Network segregation can prevent blackouts, disruptions

In 2025, industry watchers expect there will be an increase in the public budget allocated to defense. In Spain, one-third of the budget will be allocated to increasing cybersecurity. But for Fischbein, training teams is much more important than the budget. “The challenge is to distribute the budget in a way that can be managed,” he notes, and to leverage intuitive and easy-to-use platforms, so that organizations don’t have to invest all the money in training. “When you have information, management, users, devices, mobiles, data centers, clouds, cameras, printers… the security challenge is very complex. ” he says. ” ... “In a security operations center (SOC), a person using Check Point tools could previously take between two and four hours to investigate the causes of an alert. Today that time has dropped to 20 minutes,” he says. He also explains how they work with vulnerabilities. “Currently, Check Point checks all of them in a few seconds and tells you whether you are protected or not. And if you are not, it tells you which network to protect.” Regarding attackers, he acknowledges that they now make “richer and more logical” attacks. “With AI, they check the data and social networks of any person to impersonate a friend of the attacked person, because when someone receives something more personal they lower the defenses against phishing,” he says.


The Future (and Past) of Child Online Safety Legislation: Who Minds the Implementation Gap?

Acknowledging the limitations of exclusively using ID as a form of verification, many state bills, including Montana, Louisiana, Arkansas, Utah, and New York, have left the door open for “commercially reasonable” age verification methods. However, they give very little clarification as to what should be considered “commercially reasonable”. For example, in Utah, they only specify that these options can, “[rely] on public or private transactional data to verify the age of the person attempting to access the material.” ... Throughout all of these bills, there is no insight as to what type of data is permissible, how this data should be sourced, or any consent mechanisms for leveraging the data. By leaving a loophole open for undefined measures of age verification, there is a risk of potentially invasive and privacy-violating data, such as biometric data, being required of everyone who intends to access social media platforms. Not only could this potentially compromise people’s ability to remain anonymous on the internet, but it could also lead to the consolidation of uniquely identifiable sensitive data within the entities performing these verifications. To combat this, all bills with specifications for commercially reasonable age verification methods prohibit the data being used for verification from being stored or retained after verification is complete.


Beyond Code Coverage: A Risk-Driven Revolution in Software Testing With Machine Learning

Risk-based testing measures the importance of criteria instead of conducting equal checks for every factor. It evaluates potential flaws based on failure impact, likelihood of failure, and business criticality. This approach ensures efficient resource management and improves software reliability by: Focusing on Critical Areas: Instead of testing everything equally, RBT ensures that high-risk components receive the most attention. Evaluating Failure Impact: Identifies and tests areas where defects could cause significant damage. Assessing Likelihood of Failure: Targets unstable parts of the software by analyzing complexity, frequent changes, and past defects. Prioritizing Business-Critical Functions: Ensures essential systems like payment processing remain stable and reliable. Optimizing Resources and Time: Reduces unnecessary testing efforts, allowing teams to focus on what matters most. Improving Software Dependability: Detects major issues early, leading to more stable and reliable software. ... Machine learning improves software testing by examining prior data (code changes, bug reports, and test results) to identify high-risk locations. It gives key tests top priority; it finds anomalies before failures start; it keeps getting better with fresh data. Automating risk assessment helps ML speed tests, improve accuracy, maximize resources, and make software testing smarter and more effective.


Integrating Cybersecurity Into Change Management for Critical Infrastructure

The cyber MOC specifically targets changes affecting connected and configurable technologies, such as PLCs, IIoT devices, and network switches. The specific implementation of this process will vary depending on the organization’s structure and operational needs, as will the composition of the teams responsible for its execution. The reality is that many existing MOC frameworks were conceived before cybersecurity became a critical concern. Consequently, they often prioritize physical safety, leaving a significant gap in addressing potential cyber vulnerabilities. Traditional MOC tools, designed to support these processes, lack the necessary mechanisms to evaluate changes that could compromise cybersecurity. This oversight is a significant risk, particularly as infrastructure organizations become increasingly reliant on interconnected technologies. To bridge this gap, a fundamental shift is required. MOC tools and workflows must be revamped to incorporate cybersecurity considerations. While preserving core data fields and attributes, new fields must be introduced to capture cyber-related information. Similarly, RACI (responsible, accountable, consulted, and informed) matrices, which define responsibilities, must be expanded to include cyber risk accountability.


Deepfake attacks could cost you more than money

Treat deepfakes like any other cyber threat and apply a zero-trust mindset. That means don’t assume anything is real just because it looks or sounds convincing. Update your response plan to include steps for verifying video or audio content, especially if it’s being used to request sensitive actions. Build a risk model that considers how deepfakes could be used to target critical business processes, such as executive communications, financial approvals, or customer interactions. Make sure your team knows how to spot red flags, who to alert, and how to document the incident. Use detection tools that can scan media in real time and save flagged content for review. The faster you can identify and act, the more damage you can prevent. In today’s environment, it’s safer to question first and trust only after you verify. ... Deepfake awareness should be built into regular training so employees can spot warning signs early. Utilizing the detection tools to support teams by scanning and flagging suspicious media in real time, helping them make faster, safer decisions. Incident response plans must also cover how to escalate, preserve evidence, and communicate if a deepfake is suspected. At the end of the day, questioning unusual communications must become the norm, not the exception


Secure Code Development News to Celebrate

Another big payoff comes from paying down security debt. Wysopal said organizations with the most mature secure development practices fix 10% of their vulnerabilities on an annual basis and avoid having any security debt that is more than a year old. By contrast, "the lagging companies fix less than 1% of open bugs per month," he said. This strategy isn't always feasible. Notably, "we found that 70% of critical debt was in third-party code," and teams that built software with third-party - or sometimes fourth or fifth party - dependencies sometimes must wait months for fixes to become available, Wysopal said. "Some software packages that are widely used by other software packages are harder to fix, so you have a lot what we call transitive dependencies." There's no easy solution for this challenge. "When you're using open source, you're really dependent on the fixing speed of another team that is not getting paid, and they're just doing it because they love to do that project," he said. ... Another wrinkle is that more code is built by artificial intelligence tools - Google and Microsoft each say roughly a third of their code is AI-generated. Developers report being more productive, shipping on average 50% more code when they use AI tools. Wysopal said such AI tools appear to produce code with vulnerabilities at the same rate as classical development tools. More code shipped risks a greater number of vulnerabilities.


Powering the AI revolution: Legal and infrastructure challenges for data center development

Developing and operating AI-ready data centers necessitates specialized legal expertise across multiple disciplines. Financing attorneys provide guidance in structuring capital arrangements that support data center development, which requires substantial upfront investment before generating any operational revenue. Capital arrangements must incorporate sufficient flexibility to accommodate the rapid evolution of AI technology availability and unique power supply challenges at an individual site. Energy lawyers guide PPA negotiations, facilitate utility discussions, manage interconnection filings with relevant authorities, and resolve rate disputes when they arise. Their specialized work ensures that facilities maintain access to reliable, cost-effective power resources that meet operational requirements under all anticipated conditions. As regulatory approaches to AI infrastructure continue to evolve, energy counsel must remain current on emerging policies and their potential impact on both existing and future facilities. Technology and intellectual property specialists address essential operational aspects of data centers, including complex licensing arrangements, service level agreements, comprehensive data governance frameworks, and cross-border data flow compliance strategies.