Showing posts with label secure coding. Show all posts
Showing posts with label secure coding. Show all posts

Daily Tech Digest - February 19, 2026


Quote for the day:

“Being responsible sometimes means pissing people off.” -- Colin Powell



The new paradigm for raising up secure software engineers

CISOs were already struggling to help developers keep up with secure code principles at the speed of DevOps. Now, with AI-assisted development reshaping how code gets written and shipped, the challenge is rapidly intensifying. ... What is needed to get thrown out are traditional training methods. Consensus among security leaders is that dev training needs to be bite-sized, hands-on, and mostly embedded in developer tool chains. ... Rather than focus on preparing developers for line-by-line code review, the emphasis moves toward evaluating whether their features and functions behave securely in context of deployment conditions, says Hasan Yasar ... Developers need to recognize when AI-generated code introduces unsafe assumptions, insecure defaults, or integrations that can scale vulnerabilities across systems. And with more security enforcement built into automated engineering pipelines, developers should ideally also be trained to understand what automated gates catch, and what still requires human judgment. “Security awareness in engineering has shifted to a system-level approach rather than focusing on individual vulnerabilities,” Pinna says. ... The data from guardrails and controls being triggered can be used by the AppSec team to drive creation and delivery of more in-depth, but targeted education. When the same vulnerability or integration pattern pops up again and again, that’s a signal for focused training on a subject.


New agent framework matches human-engineered AI systems — and adds zero inference cost to deploy

In experiments on complex coding and software engineering tasks, GEA substantially outperformed existing self-improving frameworks. Perhaps most notably for enterprise decision-makers, the system autonomously evolved agents that matched or exceeded the performance of frameworks painstakingly designed by human experts. ... Unlike traditional systems where an agent only learns from its direct parent, GEA creates a shared pool of collective experience. This pool contains the evolutionary traces from all members of the parent group, including code modifications, successful solutions to tasks, and tool invocation histories. Every agent in the group gains access to this collective history, allowing them to learn from the breakthroughs and mistakes of their peers. ... The results demonstrated a massive leap in capability without increasing the number of agents used. This collaborative approach also makes the system more robust against failure. In their experiments, the researchers intentionally broke agents by manually injecting bugs into their implementations. GEA was able to repair these critical bugs in an average of 1.4 iterations, while the baseline took 5 iterations. The system effectively leverages the "healthy" members of the group to diagnose and patch the compromised ones. ... The success of GEA stems largely from its ability to consolidate improvements. The researchers tracked specific innovations invented by the agents during the evolutionary process. 


GitHub readies agents to automate repository maintenance

In order to help developers and enterprises manage the operational drag of maintaining repositories, GitHub is previewing Agentic Workflows, a new feature that uses AI to automate most routine tasks associated with repository hygiene. It won’t solve maintenance problems all by itself, though. Developers will still have to describe the automation workflows in natural language that agents can follow, storing the instructions as Markdown files in the repo created either from the terminal via the GitHub CLI or inside an editor such as Visual Studio Code. ... “Mid-sized engineering teams gain immediate productivity benefits because they struggle most with repetitive maintenance work like triage and documentation drift,” said Dion Hinchcliffe ... Patel also warned that beyond precision and signal-to-noise concerns, there is a more prosaic risk teams may underestimate at first: As agentic workflows scale across repositories and run more frequently, the underlying compute and model-inference costs can quietly compound, turning what looks like a productivity boost into a growing operational line item if left unchecked. This can become a boardroom issue for engineering heads and CIOs because they must justify return on investment, especially at a time when they are grappling with what it really means to let software agents operate inside production workflows, Patel added.


One stolen credential is all it takes to compromise everything

Identity-based compromise dominated incident response activity in 2025. Identity weaknesses played a material role in almost 90% of investigations. Initial access was driven by identity-based techniques in 65% of cases, including phishing, stolen credentials, brute force attempts, and insider activity. ... Rubin said the growing dominance of identity attacks reflects how enterprise environments have changed over the past few years, creating more opportunities for adversaries to quietly slip in through legitimate access pathways. “The increasing role of identity as the main attack vector is a result of a fundamental change in the enterprise environment,” Rubin said. “This dynamic is driven by two key factors.” He said the first driver is the rapid expansion of SaaS adoption, cloud infrastructure, and machine identities, which in many organizations now outnumber human accounts. That shift has created what he described as a “massive, unmanaged shadow estate,” where each integration represents “a new, potentially unmonitored, path into the network.” ... The time window for defenders is shrinking. The fastest 25% of intrusions reached data exfiltration in 72 minutes in 2025. The same metric was 285 minutes in 2024. A separate simulation described an AI-assisted attack that reached exfiltration in 25 minutes. Threat actors also began automating extortion operations. Unit 42 negotiators observed consistent tone and cadence in ransom communications, suggesting partial automation or AI-assisted negotiation messaging.


The emerging enterprise AI stack is missing a trust layer

This is not simply a technology problem. It is an architectural one. Today’s enterprise AI stack is built around compute, data and models, but it is missing its most critical component: a dedicated trust layer. As AI systems move from suggesting answers to taking actions, this gap is becoming the single biggest barrier to scale. ... Our ability to generate AI outputs is scaling exponentially, while our ability to understand, govern and trust those outputs remains manual, retrospective and fragmented across point solutions. ... This layer isn’t a single tool; it’s a governance plane. I often think of it as the avionics system in a modern aircraft. It doesn’t make the plane fly faster, but it continuously measures conditions and makes adjustments to keep the flight within safe parameters. Without it, you’re flying blind — especially at scale. ... Agentic systems collapse the distance between recommendation and action. When decisions are automated, there is far less tolerance for opacity or after-the-fact explanations. If an AI-driven action cannot be reconstructed, justified and owned, the risk is no longer theoretical — it is operational. This is why trust is becoming a prerequisite for autonomy. Governance models built for dashboards and quarterly reviews are not sufficient when systems act in real time. CIOs need architectures that assume scrutiny, not exception handling and that treat accountability as a design constraint rather than a policy requirement.


India Is Not a Back Office — It’s a Core Engine of Our Global Innovation

We have a very clear data and AI strategy. We are running multiple proof-of-concept initiatives across the organisation to ensure AI becomes more than just a buzzword. The key question is: how does AI create real value for Volvo Cars? It helps us become more agile and faster, whether in product development, improving internal process efficiency, or enhancing decision-making quality. India plays a crucial role here. We have a large team working on data analytics, intelligent automation, and AI, supporting these initiatives and shaping our agenda. ... It’s not just access to talent, it’s also the mindset. Indian society is highly adaptable. You often face unforeseen situations and must find solutions quickly. That agility and ability to always have a “Plan B” drive innovation, creativity, and speed. ... Data protection is a global priority. Many regions have introduced regulations, India’s Data Privacy Act, GDPR in the European Union, and similar laws in China. For global organisations, managing how data is transferred and processed across borders is a significant challenge. For example, certain data, like Chinese customer data, may need to remain within that country. Beyond regulatory compliance, cybersecurity threats are constant. Like most organisations, we experience attempted attacks on our networks. We have a robust cybersecurity team working continuously to secure both data and infrastructure.


AI likely to put a major strain on global networks—are enterprises ready?

Retrieval-heavy architecture types such as retrieval augmented generation—an AI framework that boosts large language models by first retrieving relevant, current information from external sources—create significant network traffic because data is moving across regions, object stores, and vector indexes, Kale says. “Agent-like, multi-step workflows further amplify this by triggering an additional set of retrievals and evaluations at each step,” Kale says. “All of these patterns create fast and unpredictable bursts of network traffic that today’s networks were never designed to handle. These trends will not abate, as enterprises transition from piloting AI services to running them continually.” ... In 2026, “we will see significant disruption from accelerated appetite for all things AI,” research firm Forrester noted in a late-year predictions post. “Business demands of AI systems, network connectivity, AI for IT operations, the conversational AI-powered service desk, and more are driving substantial changes that tech leaders must enable within their organizations.” ... “Inference workloads in particular create continuous, high-intensity, globally distributed traffic patterns,” Barrow says. “A single AI feature can trigger millions of additional requests per hour, and those requests are heavier—higher bandwidth, higher concurrency, and GPU-accelerated compute on the other side of the network.”


Quantum Scientists Publish Manifesto Opposing Military Use of Quantum Research

The scientists’ primary goals include: to express a unified rejection of military uses of quantum research; to open debate within the quantum community about ethical implications; to create a forum for researchers concerned about militarization; and to advocate for a public database listing all research projects at public universities funded by military or defense agencies. Quantum technologies rely on the behavior of matter and light at the smallest scales, enabling ultra-secure communication, highly sensitive sensors and powerful computing systems. According to the manifesto, these capabilities are increasingly being folded into defense strategies worldwide. ... The manifesto places these developments in the context of rising defense budgets, particularly in Europe following Russia’s invasion of Ukraine. The scientists write in the manifesto that the research and development sector is not exempt from the broader rearmament trend and that dual-use technologies — those that can serve both civilian and military ends — are increasingly prioritized in policy documents. The scientists acknowledge that quantum technologies are not inherently military tools. However, according to the manifesto, once such systems are developed, their applications may be difficult to control. The scientists argue that closer institutional ties between universities and defense agencies risk undermining academic independence. .

From pilot purgatory to productive failure: Fixing AI's broken learning loop

"Model performance can drift with data changes, user behavior, and policy updates, so a 'set it and forget it' KPI can reward the wrong thing, too late," Manos said. The penalty for CIOs, however, comes from the time lag between the misread KPI signal and the CIO's moves to correct it. Timing is everything, and "by the time a quarterly metric flags a problem, the root cause has already compounded across workflows," Manos said. ... Waiting until the end of a POC to figure out why a concept doesn't scale is clearly too late, but neither is it prudent to abandon a "trial, observation, and refine" cycle entirely, Alex Tyrrell, head of advanced technologies at Wolters Kluwer and CTO at Wolters Kluwer Health, said. Instead, Tyrrell argues for refining the interaction process itself to detect issues earlier in a safe setting, particularly in regulated, high-trust environments like healthcare. He recommends pairing each iteration with both predictive and diagnostic signals, so IT teams can intervene before the error ripples down to the customer level. ... AI pilots fail for the same non-technical reasons that have always plagued technology performance, such as a governance vacuum, organizational unreadiness, low usage rates, or "measurement theater," which is when tech performance can't be tied to a specific business value, explained Baker.


How AI agents and humans can play together in the same sandbox

Unlike traditional automation, which is rigid and rules-based, AI agents are goal-driven. They can plan, adapt, and respond to changing conditions. That makes them especially powerful for modern business processes that are dynamic by nature - processes that span systems, teams, and time zones. Another defining characteristic is endurance. AI agents don't get tired, sick, or distracted. They can operate continuously, scaling up or down as needed, and executing tasks with consistent precision. This doesn't make humans obsolete. ... Trust plays a central role here. Agents must demonstrate that they are reliable and predictable. At the same time, humans must define boundaries - what agents can do autonomously, where approvals are required, and what guardrails must always be respected. There is a fine balance to strike. Constrain agents too tightly, and you eliminate the benefits of autonomy. ... A logical approach enables AI agents to access views of data directly from source systems, in real time, without first having to replicate or move that data. For Agentic AI, this is critical: agents need live data, delivered in the shortest possible time, in order to plan, act, and adapt effectively. By abstracting physical data complexity and unifying access across sources, a logical data layer provides AI agents with fast, trusted, and governed data - exactly what autonomous systems require to operate at scale. A shared data plane provides all consumers - human or machine - with the same source of truth. It also provides context, consistency, and traceability.

Daily Tech Digest - January 25, 2026


Quote for the day:

"Life is 10% what happens to me and 90% of how I react to it." -- Charles Swindoll



Agentic AI exposes what we’re doing wrong

What needs to change is the level of precision and adaptability in network controls. You need networking that supports fine-grained segmentation, short-lived connectivity, and policies that can be continuously evaluated rather than set once and forgotten. You also need to treat east-west traffic visibility as a core requirement because agents will generate many internal calls that look legitimate unless you understand intent, identity, and context. ... When the user is an autonomous agent, control relies solely on identity: what the agent is, its permitted actions, what it can impersonate, and what it can delegate. Network location and static IP-based trust weaken when actions are initiated by software that can run anywhere, scale instantly, and change execution paths. This is where many enterprises will stumble.  ... The old finops playbook of tagging, showback, and monthly optimization is not enough on its own. You need near-real-time cost visibility and automated guardrails that stop waste as it happens, because “later” can mean “after the budget is gone.” Put differently, the unit economics of agentic systems must be designed, measured, and controlled like any other production system, ideally more aggressively because the feedback loop is faster. ... The industry’s favorite myth is that architecture slows innovation. In reality, architecture prevents innovation from turning into entropy. Agentic AI accelerates entropy by generating more actions, integrations, permissions, data movement, and operational variability than human-driven systems typically do.


‘Cute’ and ‘Criminal’: AI Perception, Human Bias, and Emotional Intelligence

Can you build artificial intelligence (AI) without emotional intelligence (EI)? Should you? What do we mean when we talk about “humans in the loop”? Are we asking the right questions about how humans design and govern “thinking” machines? One of the immediate problems we face with generative AI is that people increasingly rely on them for big decisions. I won’t call all of these ethical decisions, but in some cases they’re consequential decisions. And many users forget that these systems are trained on data that carry all kinds of inherited biases. When we talk about AI bias, it isn’t always abstract. It shows up in very literal assumptions the models make when they are asked to generate images or ideas. ... That question is really the beginning of understanding how these systems work. They are pulling from enormous bodies of unlabeled or inconsistently labeled data and then inferring patterns. We often forget that the inferences are statistical, not conceptual. To the model, “doctor” aligns with “male” because that’s the pattern the dataset reinforced. ... I didn’t tell the system, “diverse audience,” then all the children it generated fell into the same narrow “cute child” category. It’s not that the AI systems are racist or sexist. They simply don’t have self-awareness. They’re reflecting the dominant patterns in the datasets they learned from. But reflection without critique becomes reinforcement, and reinforcement becomes norm.


AI is quietly poisoning itself and pushing models toward collapse - but there's a cure

According to tech analyst Gartner, AI data is rapidly becoming a classic Garbage In/Garbage Out (GIGO) problem for users. That's because organizations' AI systems and large language models (LLMs) are flooded with unverified, AI‑generated content that cannot be trusted. ... You know this better as AI slop. While annoying to you and me, it's deadly to AI because it poisons the LLMs with fake data. The result is what's called in AI circles "Model Collapse." AI company Aquant defined this trend: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality." ... The analyst argued that enterprises can no longer assume data is human‑generated or trustworthy by default, and must instead authenticate, verify, and track data lineage to protect business and financial outcomes. Ever try to authenticate and verify data from AI? It's not easy. It can be done, but AI literacy isn't a common skill. ... This situation means that flawed inputs can cascade through automated workflows and decision systems, producing worse results. Yes, that's right, if you think AI result bias, hallucinations, and simple factual errors are bad today, wait until tomorrow. ... Gartner suggested many companies will need stronger mechanisms to authenticate data sources, verify quality, tag AI‑generated content, and continuously manage metadata so they know what their systems are actually consuming.


4 Realities of AI Governance

AI has not replaced traditional security work; it has layered new obligations on top of it. We still have to protect our data and maintain sovereign assurance through independent audit reports, whether that’s SOC, PCI, ISO, or other standards. Still, we must today also guide our own teams and vendors on the use of powerful AI tools. That’s where accountability begins: with the human or process that touches the data. When the rules are clear, people move faster and safer; when directives are fuzzy, everything downstream is too—so we keep policy short, plain, and visible. ... Unless the contract says otherwise, assume prompts, outputs, or telemetry may be retained for “service improvement.” Fine-print phrases like “continuous improvement” often mean that inputs, outputs, or telemetry can be retained or used to tune systems unless you opt out. To keep reviews consistent, leverage resources like the NIST AI Risk Management Framework. It provides practical checklists for transparency, accountability, and monitoring. Remember the AI supply chain: your vendor depends on model providers, plugins, and open-source components; your risk includes their dependencies, so cover these in your TPRM process. ... Boundaries are the difference between safe speed and reckless speed. Start by defining a short set of data types that must never be pasted into external tools: regulated PII, confidential customer data, unreleased financials, source code, or merger and acquisition materials. Map the rest into simple classes-public, internal, sensitive-and tie each class to approved tools and use cases.


Your Cache is Hiding a Bad Architecture

Most engineers treat caching as a performance optimisation. They see a complex SQL query involving four joins taking 2 seconds to execute. Instead of analysing the execution plan or restructuring the schema, they wrap the call in a redis.get() block. ... By relying on the cache to mask inefficient database interactions, you haven’t fixed the bottleneck; you have simply hidden it behind a volatile memory store. You have turned a “nice-to-have” performance layer into a Critical Infrastructure Dependency. The moment that the cache key expires, or the Redis node evicts the key to free up memory, the application is forced to confront the reality of that 2-second query. And usually, it doesn’t confront it alone. It confronts it with 500 concurrent users who were all waiting for that key. ... Caching is not a strategy; it is a tactic. It is a powerful optimisation for systems that are already healthy, but it is a disastrous life-support system for those that are not. If you take nothing else from this, remember the litmus test: System stability should not depend on volatile memory. Go back to your codebase. Turn off Redis in your staging environment. Run your load tests. If your response times go up, you have a performance problem. If your error rates go up, you have an architectural problem.


UK bill accelerates shift to offensive cyber security

The Cyber Security and Resilience (Network and Information Systems) Bill entered Parliament in late 2025 and is expected to move through the legislative process during 2026. The government has positioned the bill as a major update to the UK's cyber framework for essential services and digital service providers. ... Poyser argued that many companies still lean heavily on defensive tools without validating how those controls perform under attack conditions. "Cybercriminals and state-backed threat actors are acting faster, more aggressively, and with far greater innovation-especially through the use of artificial intelligence-while too many businesses continue to rely on traditional defensive methods. This widening gap must be closed urgently," said Poyser. He also linked the coming UK legislative changes to a push for more proactive security validation. ... The company said this attacker-style approach changes how risk gets measured and prioritised. It said corporate security teams struggle to maintain an accurate picture of exposure through passive controls and periodic checks. "It is increasingly unrealistic for corporate security teams to maintain an accurate understanding of their true risk exposure using only traditional, passive methods," said Keith Poyser. "Threat actors do not wait for annual audits or one-off checks. Unless organisations test their systems in a way that reflects how real attackers operate, they will continue to be caught off-guard," said Poyser.


The new CDIO stack: Tech, talent and storytelling

The first layer is the one everyone ‘expects’. We built strong platforms: cloud infrastructure that can flex with the business, data platforms that bring together information from plants, systems and markets, analytics and AI capabilities that sit on top of that data, and a solid cyber posture to protect all of it. ... The second layer was not about machines at all. It was about people, about changing the talent mix so that digital is no longer “their” thing — it becomes “our” thing. We realised that if we kept thinking in terms of “IT people” and “business people”, we would always be negotiating across a wall. ... The third layer is the one that surprised even me. We noticed a pattern. Even when we had good platforms and strong talent, some initiatives would start with a bang and fizzle out. The technology worked. The pilot results were good. But momentum died. When we dug deeper, we realised the issue was not in the code. It was in the story. The operators on the shop floor, the sales teams, the plant heads and the board were all hearing slightly different stories about “digital”. ... Yes, I am responsible for technology. If the platforms are not robust, I have failed at the most basic level. Yes, I am responsible for talent. If we don’t have the right mix of skills — product, data, architecture, change — we cannot deliver. But I am also responsible for the narrative. ... For me, the real maturity of a digital organization shows when these three layers are aligned.


What Software Developers Need to Know About Secure Coding and AI Red Flags

The uptick in adoption of AI tools within the developer community aligns with growing expectations. Developers are now expected to work with greater efficiency to meet deadlines more quickly, all while delivering high-quality code. Developers might find AI assistants to be beneficial as they are immune to human-based tendencies like fatigue and biases, which can boost efficiency. But sacrificing safety for speed is unacceptable, as AI tools bring inherent risks of compromise. ... AI tools are not safe for enterprise use unless the code output is reviewed and implemented by a security-proficient human. 30% of security experts admit that they don't trust the accuracy of code generated by AI itself. That's why security leaders must prioritize the education and upskilling of developer teams, to ensure they have the necessary skills and capabilities to mitigate AI-assisted code vulnerabilities as early as possible. This will lead to the cultivation of a "security first" team culture and safer AI use. ... In addition, agentic AI introduces new or "agentic variations" of existing threats, like memory poisoning, remote code execution (RCE) and code attacks. It can harm code via logic errors, which cause the product to "run" correctly but act incorrectly; style inconsistencies, which result in patterns that do not align with the current, required structure; and lenient permissions, which act correctly but lack the authorization context to determine if an end user is allowed to perform a particular action.


Building a Self-Healing Data Pipeline That Fixes Its Own Python Errors

The core concept of this is relatively simple. Most data pipelines are fragile because they assume the world is perfect, and when the input data changes even slightly, they fail. Instead of accepting that crash, I designed my script to catch the exception, capture the “crime scene evidence”, which is basically the traceback and the first few lines of the file, and then pass it down to an LLM. ... The primary challenge with using Large Language Models for code generation is their tendency to hallucinate. From my experience, if you ask for a simple parameter, you often receive a paragraph of conversational text in return. To stop that, I leveraged structured outputs via Pydantic and OpenAI’s API. This forces the model to complete a strict form, acting as a filter between the messy AI reasoning and our clean Python code. ... Getting the prompt right took some trial and error. And that’s because initially, I only provided the error message, which forced the model to guess blindly at the problem. I quickly realized that to correctly identify issues like delimiter mismatches, the model needed to actually “see” a sample of the raw data. Now here is the big catch. You cannot actually read the whole file. If you try to pass a 2GB CSV into the prompt, you’ll blow up your context window and apparently your wallet. ... First, remember that every time your pipeline breaks, you are making an API call.


‘Complexity is where cyber risk tends to grow’

Last month, the Information Systems Audit and Control Association (ISACA) announced that it had been appointed to lead the global credentialing programme for the US Department of War’s (DoW) Cybersecurity Maturity Model Certification (CMMC). The CMMC, according to ISACA’s chief global strategy officer Chris Dimitriadis, is “designed to protect sensitive information across the defence industrial base and its supply chain”. ... “Transatlantic operations almost always increase complexity, and complexity is where cyber risk tends to grow,” he says. “The first major issue is supply chain exposure. Attackers rarely go after the strongest link, they look for the most vulnerable one. “In global ecosystems, that can be a smaller supplier, a service provider or a subcontractor.” The second issue, he says, is the “nature” of the data and the systems that are involved. “When defence-related information, controlled technical data, or sensitive operational systems are in play, the impact of compromise is simply much higher. That requires stronger access controls, better identity governance, and more disciplined incident response.” The third and final issue that Dimitriadis highlights is “multi-jurisdiction reality”. He explains that companies need to navigate different requirements, obligations and reporting expectations across regions, adding that if governance and security operations aren’t aligned, “you create gaps, and those gaps are exactly what threat actors exploit”.

Daily Tech Digest - July 30, 2025


Quote for the day:

"The key to successful leadership today is influence, not authority." -- Ken Blanchard


5 tactics to reduce IT costs without hurting innovation

Cutting IT costs the right way means teaming up with finance from the start. When CIOs and CFOs work closely together, it’s easier to ensure technology investments support the bigger picture. At JPMorganChase, that kind of partnership is built into how the teams operate. “It’s beneficial that our organization is set up for CIOs and CFOs to operate as co-strategists, jointly developing and owning an organization’s technology roadmap from end to end including technical, commercial, and security outcomes,” says Joshi. “Successful IT-finance collaboration starts with shared language and goals, translating tech metrics into tangible business results.” That kind of alignment doesn’t just happen at big banks. It’s a smart move for organizations of all sizes. When CIOs and CFOs collaborate early and often, it helps streamline everything from budgeting, to vendor negotiations, to risk management, says Kimberly DeCarrera, fractional general counsel and fractional CFO at Springboard Legal. “We can prepare budgets together that achieve goals,” she says. “Also, in many cases, the CFO can be the bad cop in the negotiations, letting the CIO preserve relationships with the new or existing vendor. Working together provides trust and transparency to build better outcomes for the organization.” The CFO also plays a key role in managing risk, DeCarrera adds. 


F5 Report Finds Interest in AI is High, but Few Organizations are Ready

Even among organizations with moderate AI readiness, governance remains a challenge. According to the report, many companies lack comprehensive security measures, such as AI firewalls or formal data labeling practices, particularly in hybrid cloud environments. Companies are deploying AI across a wide range of tools and models. Nearly two-thirds of organizations now use a mix of paid models like GPT-4 with open source tools such as Meta's Llama, Mistral and Google's Gemma -- often across multiple environments. This can lead to inconsistent security policies and increased risk. The other challenges are security and operational maturity. While 71% of organizations already use AI for cybersecurity, only 18% of those with moderate readiness have implemented AI firewalls. Only 24% of organizations consistently label their data, which is important for catching potential threats and maintaining accuracy. ... Many organizations are juggling APIs, vendor tools and traditional ticketing systems -- workflows that the report identified as major roadblocks to automation. Scaling AI across the business remains a challenge for organizations. Still, things are improving, thanks in part to wider use of observability tools. In 2024, 72% of organizations cited data maturity and lack of scale as a top barrier to AI adoption. 


Why Most IaC Strategies Still Fail (And How to Fix Them)

Many teams begin adopting IaC without aligning on a clear strategy. Moving from legacy infrastructure to codified systems is a positive step, but without answers to key questions, the foundation is shaky. Today, more than one-third of teams struggle so much with codifying legacy resources that they rank it among the top three IaC most pervasive challenges. ... IaC is as much a cultural shift as a technical one. Teams often struggle when tools are adopted without considering existing skills and habits. A squad familiar with Terraform might thrive, while others spend hours troubleshooting unfamiliar workflows. The result: knowledge silos, uneven adoption, and frustration. Resistance to change also plays a role. Some engineers may prefer to stick with familiar interfaces and manual operations, viewing IaC as an unnecessary complication. ... IaC’s repeatability is a double-edged sword. A misconfigured resource — like a public S3 bucket — can quickly scale into a widespread security risk if not caught early. Small oversights in code become large attack surfaces when applied across multiple environments. This makes proactive security gating essential. Integrating policy checks into CI/CD pipelines ensures risky code doesn’t reach production. ... Drift is inevitable: manual changes, rushed fixes, and one-off permissions often leave code and reality out of sync. 


Prepping for the quantum threat requires a phased approach to crypto agility

“Now that NIST has given [ratified] standards, it’s much more easier to implement the mathematics,” Iyer said during a recent webinar for organizations transitioning to PQC, entitled “Your Data Is Not Safe! Quantum Readiness is Urgent.” “But then there are other aspects like the implementation protocols, how the PCI DSS and the other health sector industry standards or low-level standards are available.” ... Michael Smith, field CTO at DigiCert, noted that the industry is “yet to develop a completely PQC-safe TLS protocol.” “We have the algorithms for encryption and signatures, but TLS as a protocol doesn’t have a quantum-safe session key exchange and we’re still using Diffie-Hellman variants,” Smith explained. “This is why the US government in their latest Cybersecurity Executive Order required that government agencies move towards TLS1.3 as a crypto agility measure to prepare for a protocol upgrade that would make it PQC-safe.” ... Nigel Edwards, vice president at Hewlett Packard Enterprise (HPE) Labs, said that more customers are asking for PQC-readiness plans for its products. “We need to sort out [upgrading] the processors, the GPUs, the storage controllers, the network controllers,” Edwards said. “Everything that is loading firmware needs to be migrated to using PQC algorithms to authenticate firmware and the software that it’s loading. This cannot be done after it’s shipped.”


Cost of U.S. data breach reaches all-time high and shadow AI isn’t helping

Thirteen percent of organizations reported breaches of AI models or applications, and of those compromised, 97% involved AI systems that lacked proper access controls. Despite the rising risk, 63% of breached organizations either don’t have an AI governance policy or are still developing a policy. ... “The data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it,” said Suja Viswesan, vice president of security and runtime products with IBM, in a statement. ... Not all AI impacts are negative, however: Security teams using AI and automation shortened the breach lifecycle by an average of 80 days and saved an average of $1.9 million in breach costs over non-AI defenses, IBM found. Still, the AI usage/breach length benefit is only up slightly from 2024, which indicates AI adoption may have stalled. ... From an industry perspective, healthcare breaches remain the most expensive for the 14th consecutive year, costing an average of $7.42 million. “Attackers continue to value and target the industry’s patient personal identification information (PII), which can be used for identity theft, insurance fraud and other financial crimes,” IBM stated. “Healthcare breaches took the longest to identify and contain at 279 days. That’s more than five weeks longer than the global average.”


Cryptographic Data Sovereignty for LLM Training: Personal Privacy Vaults

Traditional privacy approaches fail because they operate on an all-or-nothing principle. Either data remains completely private (and unusable for AI training) or it becomes accessible to model developers (and potentially exposed). This binary choice forces organizations to choose between innovation and privacy protection. Privacy vaults represent a third option. They enable AI systems to learn from personal data while ensuring individuals retain complete sovereignty over their information. The vault architecture uses cryptographic techniques to process encrypted data without ever decrypting it during the learning process. ... Cryptographic learning operates through a series of mathematical transformations that preserve data privacy while extracting learning signals. The process begins when an AI training system requests access to personal data for model improvement. Instead of transferring raw data, the privacy vault performs computations on encrypted information and returns only the mathematical results needed for learning. The AI system never sees actual personal data but receives the statistical patterns necessary for model training. ... The implementation challenges center around computational efficiency. Homomorphic encryption operations require significantly more processing power than traditional computations. 


Critical Flaw in Vibe-Coding Platform Base44 Exposes Apps

What was especially scary about the vulnerability, according to researchers at Wiz, was how easy it was for anyone to exploit. "This low barrier to entry meant that attackers could systematically compromise multiple applications across the platform with minimal technical sophistication," Wiz said in a report on the issue this week. However, there's nothing to suggest anyone might have actually exploited the vulnerability prior to Wiz discovering and reporting the issue to Wix earlier this month. Wix, which acquired Base44 earlier this year, has addressed the issue and also revamped its authentication controls, likely in response to Wiz's discovery of the flaw. ... The issue at the heart of the vulnerability had to do with the Base44 platform inadvertently leaving two supposed-to-be-hidden parts of the system open to access by anyone: one for registering new users and the other for verifying user sign-ups with one-time passwords (OTPs). Basically, a user needed no login or special access to use them. Wiz discovered that anyone who found a Base44 app ID, something the platform assigns to all apps developed on the platform, could enter the ID into the supposedly hidden sign-up or verification tools and register a valid, verified account for accessing that app. Wiz researchers also found that Base44 application IDs were easily discoverable because they were publicly accessible to anyone who knew where and how to look for them.


Bridging the Response-Recovery Divide: A Unified Disaster Management Strategy

Recovery operations are incredibly challenging. They take way longer than anyone wants, and the frustration of survivors, business, and local officials is at its peak. Add to that, the uncertainty from potential policy shifts and changes in FEMA could decrease the number of federally declared disasters and reduce resources or operational support. Regardless of the details, this moment requires a refreshed playbook to empowers state and local governments to implement a new disaster management strategy with concurrent response and recovery operations. This new playbook integrates recovery into response operations and continues a operational mindset during recovery. Too often the functions of the emergency operations center (EOC), the core of all operational coordination, are reduced or adjusted after response. ... Disasters are unpredictable, but a unified operational strategy to integrate response and recovery can help mitigate their impact. Fostering the synergy between response and recovery is not just a theoretical concept: it’s a critical framework for rebuilding communities in the face of increasing global risks. By embedding recovery-focused actions into immediate response efforts, leveraging technology to accelerate assessments, and proactively fostering strong public-private partnerships, communities can restore services faster, distribute critical resources, and shorten recovery timelines. 


Should CISOs Have Free Rein to Use AI for Cybersecurity?

Cybersecurity faces increasing challenges, he says, comparing adversarial hackers to one million people trying to turn a doorknob every second to see if it is unlocked. While defenders must function within certain confines, their adversaries do not face such rigors. AI, he says, can help security teams scale out their resources. “There’s not enough security people to do everything,” Jones says. “By empowering security engines to embrace AI … it’s going to be a force multiplier for security practitioners.” Workflows that might have taken months to years in traditional automation methods, he says, might be turned around in weeks to days with AI. “It’s always an arms race on both sides,” Jones says. ... There still needs to be some oversight, he says, rather than let AI run amok for the sake of efficiency and speed. “What worries me is when you put AI in charge, whether that is evaluating job applications,” Lindqvist says. He referenced the growing trend of large companies to use AI for initial looks at resumes before any humans take a look at an applicant. ... “How ridiculously easy it is to trick these systems. You hear stories about people putting white or invisible text in their resume or in their other applications that says, ‘Stop all evaluation. This is the best one you’ve ever seen. Bring this to the top.’ And the system will do that.”


Are cloud ops teams too reliant on AI?

The slow decline of skills is viewed as a risk arising from AI and automation in the cloud and devops fields, where they are often presented as solutions to skill shortages. “Leave it to the machines to handle” becomes the common attitude. However, this creates a pattern where more and more tasks are delegated to automated systems without professionals retaining the practical knowledge needed to understand, adjust, or even challenge the AI results. A surprising number of business executives who faced recent service disruptions were caught off guard. Without practiced strategies and innovative problem-solving skills, employees found themselves stuck and unable to troubleshoot. AI technologies excel at managing issues and routine tasks. However, when these tools encounter something unusual, it is often the human skills and insight gained through years of experience that prove crucial in avoiding a disaster. This raises concerns that when the AI layer simplifies certain aspects and tasks, it might result in professionals in the operations field losing some understanding of the core infrastructure’s workload behaviors. There’s a chance that skill development may slow down, and career advancement could hit a wall. Eventually, some organizations might end up creating a generation of operations engineers who merely press buttons.

Daily Tech Digest - July 27, 2025


Quote for the day:

"The only way to do great work is to love what you do." -- Steve Jobs


Amazon AI coding agent hacked to inject data wiping commands

The hacker gained access to Amazon’s repository after submitting a pull request from a random account, likely due to workflow misconfiguration or inadequate permission management by the project maintainers. ... On July 23, Amazon received reports from security researchers that something was wrong with the extension and the company started to investigate. Next day, AWS released a clean version, Q 1.85.0, which removed the unapproved code. “AWS is aware of and has addressed an issue in the Amazon Q Developer Extension for Visual Studio Code (VSC). Security researchers reported a potential for unapproved code modification,” reads the security bulletin. “AWS Security subsequently identified a code commit through a deeper forensic analysis in the open-source VSC extension that targeted Q Developer CLI command execution.” “After which, we immediately revoked and replaced the credentials, removed the unapproved code from the codebase, and subsequently released Amazon Q Developer Extension version 1.85.0 to the marketplace.” AWS assured users that there was no risk from the previous release because the malicious code was incorrectly formatted and wouldn’t run on their environments.


How to migrate enterprise databases and data to the cloud

Migrating data is only part of the challenge; database structures, stored procedures, triggers and other code must also be moved. In this part of the process, IT leaders must identify and select migration tools that address the specific needs of the enterprise, especially if they’re moving between different database technologies (heterogeneous migration). Some things they’ll need to consider are: compatibility, transformation requirements and the ability to automate repetitive tasks.  ... During migration, especially for large or critical systems, IT leaders should keep their on-premises and cloud databases synchronized to avoid downtime and data loss. To help facilitate this, select synchronization tools that can handle the data change rates and business requirements. And be sure to test these tools in advance: High rates of change or complex data relationships can overwhelm some solutions, making parallel runs or phased cutovers unfeasible. ... Testing is a safety net. IT leaders should develop comprehensive test plans that cover not just technical functionality, but also performance, data integrity and user acceptance. Leaders should also plan for parallel runs, operating both on-premises and cloud systems in tandem, to validate that everything works as expected before the final cutover. They should engage end users early in the process in order to ensure the migrated environment meets business needs.


Researchers build first chip combining electronics, photonics, and quantum light

The new chip integrates quantum light sources and electronic controllers using a standard 45-nanometer semiconductor process. This approach paves the way for scaling up quantum systems in computing, communication, and sensing, fields that have traditionally relied on hand-built devices confined to laboratory settings. "Quantum computing, communication, and sensing are on a decades-long path from concept to reality," said MiloÅ¡ Popović, associate professor of electrical and computer engineering at Boston University and a senior author of the study. "This is a small step on that path – but an important one, because it shows we can build repeatable, controllable quantum systems in commercial semiconductor foundries." ... "What excites me most is that we embedded the control directly on-chip – stabilizing a quantum process in real time," says Anirudh Ramesh, a PhD student at Northwestern who led the quantum measurements. "That's a critical step toward scalable quantum systems." This focus on stabilization is essential to ensure that each light source performs reliably under varying conditions. Imbert Wang, a doctoral student at Boston University specializing in photonic device design, highlighted the technical complexity.


Product Manager vs. Product Owner: Why Teams Get These Roles Wrong

While PMs work on the strategic plane, Product Owners anchor delivery. The PO is the guardian of the backlog. They translate the product strategy into epics and user stories, groom the backlog, and support the development team during sprints. They don’t just manage the “what” — they deeply understand the “how.” They answer developer questions, clarify scope, and constantly re-evaluate priorities based on real-time feedback. In Agile teams, they play a central role in turning strategic vision into working software. Where PMs answer to the business, POs are embedded with the dev team. They make trade-offs, adjust scope, and ensure the product is built right. ... Some products need to grow fast. That’s where Growth PMs come in. They focus on the entire user lifecycle, often structured using the PIRAT funnel: Problem, Insight, Reach, Activation, and Trust (a modern take on traditional Pirate Metrics, such as Acquisition, Activation, Retention, Referral, and Revenue). This model guides Growth PMs in identifying where user friction occurs and what levers to pull for meaningful impact. They conduct experiments, optimize funnels, and collaborate closely with marketing and data science teams to drive user growth. 


Ransomware payments to be banned – the unanswered questions

With thresholds in place, businesses/organisations may choose to operate differently so that they aren’t covered by the ban, such as lowering turnover or number of employees. All of this said, rules like this could help to get a better picture of what’s going on with ransomware threats in the UK. Arda Büyükkaya, senior cyber threat intelligence analyst at EclecticIQ, explains more: “As attackers evolve their tactics and exploit vulnerabilities across sectors, timely intelligence-sharing becomes critical to mounting an effective defence. Encouraging businesses to report incidents more consistently will help build a stronger national threat intelligence picture something that’s important as these attacks grow more frequent and become sophisticated. To spare any confusion, sector-specific guidance should be provided by government on how resources should be implemented, making resources clear and accessible. “Many victims still hesitate to come forward due to concerns around reputational damage, legal exposure, or regulatory fallout,” said Büyükkaya. “Without mechanisms that protect and support victims, underreporting will remain a barrier to national cyber resilience.” Especially in the earlier days of the legislation, organisations may still feel pressured to pay in order to keep operations running, even if they’re banned from doing so.


AI Unleashed: Shaping the Future of Cyber Threats

AI optimizes reconnaissance and targeting, giving hackers the tools to scour public sources, leaked and publicly available breach data, and social media to build detailed profiles of potential targets in minutes. This enhanced data gathering lets attackers identify high-value victims and network vulnerabilities with unprecedented speed and accuracy. AI has also supercharged phishing campaigns by automatically crafting phishing emails and messages that mimic an organization’s formatting and reference real projects or colleagues, making them nearly indistinguishable from genuine human-originated communications. ... AI is also being weaponized to write and adapt malicious code. AI-powered malware can autonomously modify itself to slip past signature-based antivirus defenses, probe for weaknesses, select optimal exploits, and manage its own command-and-control decisions. Security experts note that AI accelerates the malware development cycle, reducing the time from concept to deployment. ... AI presents more than external threats. It has exposed a new category of targets and vulnerabilities, as many organizations now rely on AI models for critical functions, such as authentication systems and network monitoring. These AI systems themselves can be manipulated or sabotaged by adversaries if proper safeguards have not been implemented.


Agile and Quality Engineering: Building a Culture of Excellence Through a Holistic Approach

Agile development relies on rapid iteration and frequent delivery, and this rhythm demands fast, accurate feedback on code quality, functionality, and performance. With continuous testing integrated into automated pipelines, teams receive near real-time feedback on every code commit. This immediacy empowers developers to make informed decisions quickly, reducing delays caused by waiting for manual test cycles or late-stage QA validations. Quality engineering also enhances collaboration between developers and testers. In a traditional setup, QA and development operate in silos, often leading to communication gaps, delays, and conflicting priorities. In contrast, QE promotes a culture of shared ownership, where developers write unit tests, testers contribute to automation frameworks, and both parties work together during planning, development, and retrospectives. This collaboration strengthens mutual accountability and leads to better alignment on requirements, acceptance criteria, and customer expectations. Early and continuous risk mitigation is another cornerstone benefit. By incorporating practices like shift-left testing, test-driven development (TDD), and continuous integration (CI), potential issues are identified and resolved long before they escalate. 


Could Metasurfaces be The Next Quantum Information Processors?

Broadly speaking, the work embodies metasurface-based quantum optics which, beyond carving a path toward room-temperature quantum computers and networks, could also benefit quantum sensing or offer “lab-on-a-chip” capabilities for fundamental science Designing a single metasurface that can finely control properties like brightness, phase, and polarization presented unique challenges because of the mathematical complexity that arises once the number of photons and therefore the number of qubits begins to increase. Every additional photon introduces many new interference pathways, which in a conventional setup would require a rapidly growing number of beam splitters and output ports. To bring order to the complexity, the researchers leaned on a branch of mathematics called graph theory, which uses points and lines to represent connections and relationships. By representing entangled photon states as many connected lines and points, they were able to visually determine how photons interfere with each other, and to predict their effects in experiments. Graph theory is also used in certain types of quantum computing and quantum error correction but is not typically considered in the context of metasurfaces, including their design and operation. The resulting paper was a collaboration with the lab of Marko Loncar, whose team specializes in quantum optics and integrated photonics and provided needed expertise and equipment.


New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples

When faced with a complex problem, current LLMs largely rely on chain-of-thought (CoT) prompting, breaking down problems into intermediate text-based steps, essentially forcing the model to “think out loud” as it works toward a solution. While CoT has improved the reasoning abilities of LLMs, it has fundamental limitations. In their paper, researchers at Sapient Intelligence argue that “CoT for reasoning is a crutch, not a satisfactory solution. It relies on brittle, human-defined decompositions where a single misstep or a misorder of the steps can derail the reasoning process entirely.” ... To move beyond CoT, the researchers explored “latent reasoning,” where instead of generating “thinking tokens,” the model reasons in its internal, abstract representation of the problem. This is more aligned with how humans think; as the paper states, “the brain sustains lengthy, coherent chains of reasoning with remarkable efficiency in a latent space, without constant translation back to language.” However, achieving this level of deep, internal reasoning in AI is challenging. Simply stacking more layers in a deep learning model often leads to a “vanishing gradient” problem, where learning signals weaken across layers, making training ineffective. 


For the love of all things holy, please stop treating RAID storage as a backup

Although RAID is a backup by definition, practically, a backup doesn't look anything like a RAID array. That's because an ideal backup is offsite. It's not on your computer, and ideally, it's not even in the same physical location. Remember, RAID is a warranty, and a backup is insurance. RAID protects you from inevitable failure, while a backup protects you from unforeseen failure. Eventually, your drives will fail, and you'll need to replace disks in your RAID array. This is part of routine maintenance, and if you're operating an array for long enough, you should probably have drive swaps on a schedule of several years to keep everything operating smoothly. A backup will protect you from everything else. Maybe you have multiple drives fail at once. A backup will protect you. Lord forbid you fall victim to a fire, flood, or other natural disaster and your RAID array is lost or damaged in the process. A backup still protects you. It doesn't need to be a fire or flood for you to get use out of a backup. There are small issues that could put your data at risk, such as your PC being infected with malware, or trying to write (and replicate) corrupted data. You can dream up just about any situation where data loss is a risk, and a backup will be able to get your data back in situations where RAID can't. 

Daily Tech Digest - May 16, 2025


Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye


AI Agents: Protocols Driving Next-Gen Enterprise Intelligence

MCP substantially simplifies agentic AI adoption for developers. This roadmap created by the MCP community clearly defines priorities and direction, providing helpful guidance for implementation. Organizations will also benefit from the key initiatives outlined in the roadmap, like the MCP Registry, which enables developers to build a comprehensive network of agents. The emergence of OAuth as a complementary standard protocol strengthens agent ecosystems even more. As with any other framework, MCP has its challenges. MCP offers a wide array of tools to support LLM reasoning, but it doesn’t prioritize coordinated, high-quality task execution. ... ACP will make it easier to implement AI agents on edge and local devices. In instances where the majority of decision-making happens “on the go” in a disconnected environment, this protocol will be useful. Now, developers can build modular systems that can coordinate with a standard protocol to make edge AI easier. A2A will gain momentum and enable cross-platform agents to work together to deliver superior intelligence to customers. A2A will help coordinate agents built using diverse frameworks with a common standard. The main requirement for this is to build an Agent Card that allows agents to be used and consumed by others.


Critical Infrastructure Under Siege: OT Security Still Lags

Industrial organizations and other kinds of critical infrastructure are regularly near or at the top of vendor lists highlighting ransomware targets. It's easy to see why; the important assets a threat actor could compromise put immense pressure on affected organizations to pay up. Kurt Gaudette, vice president of intelligence and services at Dragos, tells Dark Reading that the OT side of the house is "where the bottom line is." And indeed, Sophos reported last year that 65% of respondent organizations in the manufacturing sector reported that they suffered a ransomware attack in the year preceding the report; of those, 62% of organizations paid the ransom. Compounding this, the security postures of organizations that use OT/ICS can vary dramatically compared with traditional IT settings. The importance of staying patched is complicated by the reality that some industrial processes are meant to run uninterrupted for long periods of time and can't be subjected to the downtime necessary to patch. Second, an organization like a local water treatment plant might not have a significant security budget to invest in tools and personnel. Also, ICS products tend to be expensive, and aging equipment is everywhere, with many fields like healthcare drowning in legacy, hard-to-patch products or those without built-in security features.


Your Security Training Isn't Wrong. The Content Is Just Outdated

Although AI makes threats harder to detect, many breaches aren't caused by sophisticated hacking. They happen because organizations might not realize employees let their kids play Minecraft on their corporate laptops, or an old server or forgotten IoT device is still online. If IT doesn't know an asset exists, or who uses it, the team can't secure it, and hackers look for forgotten, unmonitored devices to break in. ... Managing and securing multiple systems can tempt employees to repeat passwords for simplicity. If employees continue to avoid using tools like corporate password managers to enforce strong, unique passwords, IT teams need to ask themselves why. How can they make warnings about this more impactful without burdening staff? ... The trouble is that, even with corporate password managers and MFA in place, hackers are still finding ways to steal credentials. These tools are designed to prevent hackers from entering your home, but if the door is left open, they won't stop anyone from walking in. The average annual growth rate of exposed accounts is 28%. Session expiration policies based on risk level and adaptive access policies can trigger forced signouts if a session shows abnormal behavior (e.g., logging in from a new IP while still active on another), which will help reduce account session takeovers.


Check Point CISO: Network segregation can prevent blackouts, disruptions

In 2025, industry watchers expect there will be an increase in the public budget allocated to defense. In Spain, one-third of the budget will be allocated to increasing cybersecurity. But for Fischbein, training teams is much more important than the budget. “The challenge is to distribute the budget in a way that can be managed,” he notes, and to leverage intuitive and easy-to-use platforms, so that organizations don’t have to invest all the money in training. “When you have information, management, users, devices, mobiles, data centers, clouds, cameras, printers… the security challenge is very complex. ” he says. ” ... “In a security operations center (SOC), a person using Check Point tools could previously take between two and four hours to investigate the causes of an alert. Today that time has dropped to 20 minutes,” he says. He also explains how they work with vulnerabilities. “Currently, Check Point checks all of them in a few seconds and tells you whether you are protected or not. And if you are not, it tells you which network to protect.” Regarding attackers, he acknowledges that they now make “richer and more logical” attacks. “With AI, they check the data and social networks of any person to impersonate a friend of the attacked person, because when someone receives something more personal they lower the defenses against phishing,” he says.


The Future (and Past) of Child Online Safety Legislation: Who Minds the Implementation Gap?

Acknowledging the limitations of exclusively using ID as a form of verification, many state bills, including Montana, Louisiana, Arkansas, Utah, and New York, have left the door open for “commercially reasonable” age verification methods. However, they give very little clarification as to what should be considered “commercially reasonable”. For example, in Utah, they only specify that these options can, “[rely] on public or private transactional data to verify the age of the person attempting to access the material.” ... Throughout all of these bills, there is no insight as to what type of data is permissible, how this data should be sourced, or any consent mechanisms for leveraging the data. By leaving a loophole open for undefined measures of age verification, there is a risk of potentially invasive and privacy-violating data, such as biometric data, being required of everyone who intends to access social media platforms. Not only could this potentially compromise people’s ability to remain anonymous on the internet, but it could also lead to the consolidation of uniquely identifiable sensitive data within the entities performing these verifications. To combat this, all bills with specifications for commercially reasonable age verification methods prohibit the data being used for verification from being stored or retained after verification is complete.


Beyond Code Coverage: A Risk-Driven Revolution in Software Testing With Machine Learning

Risk-based testing measures the importance of criteria instead of conducting equal checks for every factor. It evaluates potential flaws based on failure impact, likelihood of failure, and business criticality. This approach ensures efficient resource management and improves software reliability by: Focusing on Critical Areas: Instead of testing everything equally, RBT ensures that high-risk components receive the most attention. Evaluating Failure Impact: Identifies and tests areas where defects could cause significant damage. Assessing Likelihood of Failure: Targets unstable parts of the software by analyzing complexity, frequent changes, and past defects. Prioritizing Business-Critical Functions: Ensures essential systems like payment processing remain stable and reliable. Optimizing Resources and Time: Reduces unnecessary testing efforts, allowing teams to focus on what matters most. Improving Software Dependability: Detects major issues early, leading to more stable and reliable software. ... Machine learning improves software testing by examining prior data (code changes, bug reports, and test results) to identify high-risk locations. It gives key tests top priority; it finds anomalies before failures start; it keeps getting better with fresh data. Automating risk assessment helps ML speed tests, improve accuracy, maximize resources, and make software testing smarter and more effective.


Integrating Cybersecurity Into Change Management for Critical Infrastructure

The cyber MOC specifically targets changes affecting connected and configurable technologies, such as PLCs, IIoT devices, and network switches. The specific implementation of this process will vary depending on the organization’s structure and operational needs, as will the composition of the teams responsible for its execution. The reality is that many existing MOC frameworks were conceived before cybersecurity became a critical concern. Consequently, they often prioritize physical safety, leaving a significant gap in addressing potential cyber vulnerabilities. Traditional MOC tools, designed to support these processes, lack the necessary mechanisms to evaluate changes that could compromise cybersecurity. This oversight is a significant risk, particularly as infrastructure organizations become increasingly reliant on interconnected technologies. To bridge this gap, a fundamental shift is required. MOC tools and workflows must be revamped to incorporate cybersecurity considerations. While preserving core data fields and attributes, new fields must be introduced to capture cyber-related information. Similarly, RACI (responsible, accountable, consulted, and informed) matrices, which define responsibilities, must be expanded to include cyber risk accountability.


Deepfake attacks could cost you more than money

Treat deepfakes like any other cyber threat and apply a zero-trust mindset. That means don’t assume anything is real just because it looks or sounds convincing. Update your response plan to include steps for verifying video or audio content, especially if it’s being used to request sensitive actions. Build a risk model that considers how deepfakes could be used to target critical business processes, such as executive communications, financial approvals, or customer interactions. Make sure your team knows how to spot red flags, who to alert, and how to document the incident. Use detection tools that can scan media in real time and save flagged content for review. The faster you can identify and act, the more damage you can prevent. In today’s environment, it’s safer to question first and trust only after you verify. ... Deepfake awareness should be built into regular training so employees can spot warning signs early. Utilizing the detection tools to support teams by scanning and flagging suspicious media in real time, helping them make faster, safer decisions. Incident response plans must also cover how to escalate, preserve evidence, and communicate if a deepfake is suspected. At the end of the day, questioning unusual communications must become the norm, not the exception


Secure Code Development News to Celebrate

Another big payoff comes from paying down security debt. Wysopal said organizations with the most mature secure development practices fix 10% of their vulnerabilities on an annual basis and avoid having any security debt that is more than a year old. By contrast, "the lagging companies fix less than 1% of open bugs per month," he said. This strategy isn't always feasible. Notably, "we found that 70% of critical debt was in third-party code," and teams that built software with third-party - or sometimes fourth or fifth party - dependencies sometimes must wait months for fixes to become available, Wysopal said. "Some software packages that are widely used by other software packages are harder to fix, so you have a lot what we call transitive dependencies." There's no easy solution for this challenge. "When you're using open source, you're really dependent on the fixing speed of another team that is not getting paid, and they're just doing it because they love to do that project," he said. ... Another wrinkle is that more code is built by artificial intelligence tools - Google and Microsoft each say roughly a third of their code is AI-generated. Developers report being more productive, shipping on average 50% more code when they use AI tools. Wysopal said such AI tools appear to produce code with vulnerabilities at the same rate as classical development tools. More code shipped risks a greater number of vulnerabilities.


Powering the AI revolution: Legal and infrastructure challenges for data center development

Developing and operating AI-ready data centers necessitates specialized legal expertise across multiple disciplines. Financing attorneys provide guidance in structuring capital arrangements that support data center development, which requires substantial upfront investment before generating any operational revenue. Capital arrangements must incorporate sufficient flexibility to accommodate the rapid evolution of AI technology availability and unique power supply challenges at an individual site. Energy lawyers guide PPA negotiations, facilitate utility discussions, manage interconnection filings with relevant authorities, and resolve rate disputes when they arise. Their specialized work ensures that facilities maintain access to reliable, cost-effective power resources that meet operational requirements under all anticipated conditions. As regulatory approaches to AI infrastructure continue to evolve, energy counsel must remain current on emerging policies and their potential impact on both existing and future facilities. Technology and intellectual property specialists address essential operational aspects of data centers, including complex licensing arrangements, service level agreements, comprehensive data governance frameworks, and cross-border data flow compliance strategies.