Daily Tech Digest - October 28, 2025


Quote for the day:

"Ideas are easy, implementation is hard." -- Guy Kawasaki



India’s AI Paradox: Why We Need Cloud Sovereignty Before Model Sovereignty

Cloud sovereignty has become the new pillar supporting national security and control over infrastructure, data, and digital operations. It has the capacity to safeguard the country’s national interests, including (but not limited to) industrial data, citizen information, and AI workloads. For India, specifically, building a sovereign digital infrastructure guarantees continuity and trust. It gives the country power to enforce its own data laws, manage computing resources for homegrown AI systems, and stay insulated from the tremors of foreign policy decisions or transnational outages. It’s the digital equivalent of producing energy at home—self-reliant, secure, and governed by national priorities. ... Sovereign infrastructure is less a matter of where data sits and more about who controls it and how securely it is managed. As systems grow more connected and AI workloads spread across networks, security needs to be built into every layer of technology, not added as an afterthought. That’s where edge computing and modern cloud-security frameworks come in. ... There is a real cost involved in neglecting cloud sovereignty. If our AI models continue to depend upon infrastructure that lies outside our jurisdiction, any changes in foreign regulations might suddenly restrict access to critical training datasets.


Do CISOs need to rethink service provider risk?

Security leaders face mounting pressure from boards to provide assurance about third-party risks, while service provider vetting processes are becoming more onerous — a growing burden for both CISOs and their providers. At the same time, AI is becoming integrated into more business systems and processes, opening new risks. CISOs may be forced to rethink their vetting processes with partners to maintain a focus on risk reduction while treating partnerships as a shared responsibility. ... When looking to engage a service provider, his vetting process starts with building relationships first and then working towards a formal partnership and delivery of services. He believes dialogue helps establish trust and transparency and underpins the partnership approach. “A lot of that is ironed out in that really undocumented process. You build up those relationships first, and then the transactional piece comes after that.” ... “If your questions stop once the form is complete, you’ve missed the chance to understand how a partner really thinks about security,” Thiele says. “You learn a lot more from how they explain their risk decisions than from a yes/no tick box.” Transparency and collaboration are at the heart of stronger partnerships. “You can’t outsource accountability, but you can become mature in how you manage shared responsibility,” Thiele says. ... With AI, Cruz has started to monitor vendors acquiring ISO 42001 certification for AI governance. “It’s a trend I’m seeing in some of the work that we’re doing,” she says.


The Silent Technical Debt: Why Manual Remediation Is Costing You More Than You Think

A far more challenging and costly form of this debt has silently embedded itself into the daily operations of nearly every software development team, and most leaders don’t even have a line item for it. This liability is remediation debt: the ever-growing cost of manually fixing vulnerabilities in the open source components that form the backbone of modern applications. For years, we’ve accepted this process as a necessary chore. A scanner finds a flaw, an alert is sent, and a developer is pulled from their work to hunt down a patch. ... The complexity doesn’t stop there. The report reveals that 65% of manual remediation attempts for a single critical vulnerability require updating at least five additional “transitive” dependencies (that is, dependencies of dependencies). This is the dreaded “dependency conundrum” that developers lament, where fixing one problem creates a cascade of new compatibility issues. ... It’s time to reframe our way of dealing with this: the goal is not just to find vulnerabilities faster but to remediate them instantly. The path forward lies in shifting from manual labor to intelligent remediation. This means evolving beyond tools that simply populate dashboards with problems and embracing platforms that solve them at their source. Imagine a system where a vulnerability is identified, and instead of creating a ticket, the platform automatically builds, tests, and delivers a fully patched and compatible version of the necessary component directly to the developer.
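To make the “dependency conundrum” concrete, here is a minimal sketch of how patching one direct dependency drags in everything it pulls in transitively. The package names and the graph are purely illustrative assumptions, not taken from the report.

    from collections import deque

    # Hypothetical dependency graph: package -> direct dependencies.
    DEPENDENCY_GRAPH = {
        "web-framework": ["http-client", "template-engine"],
        "http-client": ["tls-lib", "url-parser"],
        "template-engine": ["sandbox-runtime"],
        "tls-lib": [],
        "url-parser": [],
        "sandbox-runtime": [],
    }

    def transitive_dependencies(package, graph):
        """Walk the graph breadth-first and collect every transitive dependency."""
        seen, queue = set(), deque(graph.get(package, []))
        while queue:
            dep = queue.popleft()
            if dep not in seen:
                seen.add(dep)
                queue.extend(graph.get(dep, []))
        return seen

    # Patching "web-framework" means reviewing five other components as well,
    # which is exactly the cascade the 65% figure describes.
    print(transitive_dependencies("web-framework", DEPENDENCY_GRAPH))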


AI Isn’t Coming for Data Jobs – It’s Coming for Data Chaos

Data chaos arises when organizations lose control of their information landscape. It’s the confusion born from fragmentation, duplication, and inconsistency when multiple versions of “truth” compete for authority. Poor data quality and disconnected data governance processes often amplify this chaos, which manifests as conflicting reports, inaccurate dashboards, mismatched customer profiles, and entire departments working from isolated datasets that refuse to align. ... Recent industry analyses reveal an accelerating imbalance in the data economy. While nearly 90% of the world’s data has been generated in just the past two years, data professionals and data stewards represent only about 3% of the enterprise workforce, creating a widening gap between information growth and the human capacity to govern it. ... Data chaos doesn’t just strain systems, it strains people. As enterprises struggle to keep pace with growing data volume and complexity, the very professionals tasked with managing it find themselves overwhelmed by maintenance work. ... When applied strategically, AI can transform the data management lifecycle from ingestion to governance, reducing human toil and freeing engineers to focus on design, quality, and strategy. Paired with an intelligent data catalog, these systems make information assets instantly discoverable and reusable across business domains. AI-driven data classification tools now tag, cluster, and prioritize assets automatically, reducing manual oversight.


Why IT projects still fail

Failure today means an IT project doesn’t deliver expected benefits, according to CIOs, project leaders, researchers, and IT consultants. Failure can also mean a project doesn’t produce returns, runs so late as to be obsolete when completed, or doesn’t engage users, who then shun it. ... IT leaders and now business leaders, too, get enamored with technologies, despite years of admonishments not to do so. The result is a misalignment between the project objectives and business goals, experienced CIOs and veteran project managers say. ... Stettler says a business owner with clear accountability is needed to ensure that business resources are available when required as well as to ensure process changes and worker adoption happen. He notes that having CIOs — instead of a business owner — try to make those things happen “would be a tail-wagging-the-dog scenario.” ... “Executives need to make more time and engage across all levels of the program. They can’t just let the leaders come talk to them. They need to do spot checks and quality reviews of deliverable updates, and check in with those throughout the program,” Stettler says. “And they have to have the attitude of ‘Bring stuff to me when I can be helpful.’” ... Phillips acknowledges that project teams don’t usually overlook entire divisions, but they sometimes fail to identify and include all the stakeholders they should in the project process. Consequently, they miss key requirements to include, regulations to consider, and opportunities to capitalize on.



The Human Plus AI Quotient: Inside Ascendion's strategy to make AI an amplifier of human talent

Technical skills evolve—mainframes lasted forty years, client-server about twenty, and digital waves even less. Skills will come and go, so we focus on candidates with a strong willingness to learn and invest in themselves. That’s foundational. What’s changed now is the importance of being open to AI. We don’t require deep AI expertise at the outset, but we do look for those who are ready to embrace it. This approach explains why our workforce is so quick to adapt to AI—it’s ingrained in how we hire and develop our people. ... The war for talent has always existed—it’s just the scale and timing that change. For us, the quality of work and the opportunities we provide are key to retention. Being fundamentally an AI-first company is a big differentiator, and our “AI-first” mindset is wired into our DNA. Our employees see a real difference in how we approach projects, always asking how AI can add value. We’ve created an environment that encourages experimentation and learning, and the IP our teams develop—sometimes even around best practices for AI adoption—becomes part of our organisational knowledge base. ... The good news is that for a large cross-section of the workforce, "skilling in AI" is not about mastery of mathematics; it's about improving English writing skills to prompt effectively. We often share prompt libraries with clients because the ability to ask the right question and interpret the output is a significant win.


Recruitment Class: What CIOs Want in Potential New Hires

Candidates should be comfortable operating in a very complex, deep digital ecosystem, Avetisyan said. Now, digital fluency means much more than knowing how to use a certain tool that is currently popular, including AI tools. There needs to be an awareness of the broader implications and responsibilities that come with implementing AI. "It's about integrating AI responsibly and designing for accessibility," Avetisyan said -- both of which represent big challenges that must be tackled and kept continuously top of mind. AI should elevate user experiences. ... There's still a need to demonstrate technical skills alongside human skills such as problem-solving, communication, and ethical awareness, she said. "You can't just be an exceptional coder and right away be effective in our organization if you don't understand all these other aspects," she said. One more thing: While vibe coding -- letting AI shoulder much or most of the work -- is a buzzy concept, she said she is not ready to turn her shop of developers into vibe coders. A more grounded approach to teaching AI fluency is -- or should be -- the educational mission. ... As for programming? A programmer is still a programmer, but the job has evolved to become more strategic, Ruch said. Technical talent will be needed; however, the first few revisions of code will be pre-written based on the specifications given to AI, he said.


Do programming certifications still matter?

“Certifications are shifting from a checkbox to a compass. They’re less about proving you memorized syntax and more about proving you can architect systems, instruct AI coding assistants, and solve problems end-to-end,” says Faizel Khan, lead AI engineer at Landing Point, an executive search and recruiting firm. ... Certifications really do two things, Khan adds. “First, they force you to learn by doing,” he says. “If you’re taking AWS Solutions Architect or Terraform, you don’t pass by guessing—you plan, build, and test systems. That practice matters. Second, they act as a public signal. Think of it like a micro-degree. You’re not just saying, ‘I know cloud.’ You’re showing you’ve crossed a bar that thousands of other engineers recognize.” But there are cons, too. “In tech, employers don’t just want credentials, they want proof you can deliver,” says Kevin Miller, CTO at IFS. “Programming certifications can be a valuable indicator of your baseline knowledge and competencies, especially if you’re early in your career or pivoting into tech, but their importance is dwindling.” ... “I’m more interested in a candidate’s attitude and aptitude: what problems they’ve solved, what they’ve built, and how they’ve approached challenges,” Watts says. “Certifications can show commitment and discipline, and they’re especially useful in highly specialized roles. But I’m cautious when someone presents a laundry list of certifications with little evidence of real-world application.”


Guarding the Digital God: The Race to Secure Artificial Intelligence

Securing an AI is fundamentally different from securing a traditional computer network. A hacker doesn’t need to breach a firewall if they can manipulate the AI’s “mind” itself. The attack vectors are subtle, insidious, and entirely new. ... The debate over whether people or AI should lead this effort presents a false choice. The only viable path forward is a deep, symbiotic partnership. We must build a system where the AI is the frontline soldier and the human is the strategic commander. The guardian AI should handle the real-time, high-volume defense: scanning trillions of data points, flagging suspicious queries, and patching low-level vulnerabilities at machine speed. The human experts, in turn, must set the strategy. They define the ethical red lines, design the security architecture, and, most importantly, act as the ultimate authority for critical decisions. If the guardian AI detects a major, system-level attack, it shouldn’t act unilaterally; it should quarantine the threat and alert a human operator who makes the final call. ... The power of artificial intelligence is growing at an exponential rate, but our strategies for securing it are lagging dangerously behind. The threats are no longer theoretical. The solution is not a choice between humans and AI, but a fusion of human strategic oversight and AI-powered real-time defense. For a nation like the United States, developing a comprehensive national strategy to secure its AI infrastructure is not optional.
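The division of labor described above can be sketched in a few lines: the guardian AI contains routine threats at machine speed, but anything above a severity threshold is quarantined and escalated to a human operator for the final call. The threshold, signal fields, and function names below are hypothetical illustrations, not any real product's API.

    from dataclasses import dataclass

    @dataclass
    class ThreatSignal:
        source: str
        severity: float  # 0.0 (benign) to 1.0 (system-level attack)

    def quarantine(source):
        print(f"[guardian] isolating {source}")

    def notify_human_operator(signal):
        print(f"[guardian] paging on-call analyst: severity={signal.severity:.2f}")

    def handle_threat(signal, escalation_threshold=0.8):
        if signal.severity < escalation_threshold:
            # Routine defense: contain at machine speed, log for later review.
            return f"auto-contained {signal.source}"
        # Major incident: quarantine first, then wait for the human's final call.
        quarantine(signal.source)
        notify_human_operator(signal)
        return f"quarantined {signal.source}; awaiting human decision"

    print(handle_threat(ThreatSignal("query-stream-42", severity=0.93)))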


Managing legacy medical devices that can no longer be patched

First, hospitals need to recognize that it is rarely possible to instantaneously remove a medical device, but what you can do is build a wall around that device so that only trusted, validated network traffic will be able to reach the device. Secondly, close collaboration with vendors is critical to understand available upgrade paths. Most vendors don’t want customers running legacy technologies that heighten security risk. From my perspective, if a device is too old to be secured, that’s a serious concern. Collaborate with your providers early and be transparent about budget and timeline constraints. This enables vendors to design a phased roadmap for replacing legacy systems, steadily reducing security risk over time. ... We can take a cue from manufacturing, where cyber resilience is essential to limiting the impact of attacks on the production line and broader ecosystem. No single breach should be able to bring down the entire operation. Yet many organizations still run forgotten, outdated systems. It’s critical to retire legacy assets, streamline the environment, and continuously identify and manage risk. ... We’ve seen meaningful progress when dozens of technology vendors pledged to self-regulate and build cyber resilience into their products from the outset. Unfortunately, that momentum has slowed. In my experience, however, the strongest gains often come from non‑legislative, industry‑led initiatives, when organizations voluntarily choose to prioritize security.

Daily Tech Digest - October 27, 2025


Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford


AWS Outage Is Just the Latest Internet Glitch Banks Must Insulate Against

If clouds fail or succumb to cyberattacks, the damage can be enormous, limited only by the maliciousness and creativity of the hacker and the redundancy and resilience of the defenses that users have in place. ... As I describe in The Unhackable Internet, we are already way down the rabbit hole of cyber insecurity. It would take a massive coordinated global effort to secure the current internet. That is unlikely to happen. Therefore, the most realistic business strategy is to assume the inevitable: A glitch, human error or a successful breach or cloud failure will occur. That means systems must be in place to distribute patches, resume operations, reconstruct networks, and recover lost data. Redundancy is a necessary component to get back online, but how much redundancy is feasible or economically sustainable? And will those backstops actually work? ... Given these ever-increasing challenges and cyber incursions in the financial services business, I have argued for a fundamental change in regulation — one that will keep regulators on the cutting edge of digital and cybersecurity developments. To accomplish that, regulation should be a more collaborative experience that invests the financial industry in its own oversight and systemic security. This effort should include industry executives and their staffs. Their expertise in the oversight process would enrich the quality of regulation, particularly from the perspective of strengthening the cyber defenses of the industry.


The 10 biggest issues CISOs and cyber teams face today

“It’s not finger-pointing; we’re all learning,” Lee says. “Business is now expected to embrace and move quickly with AI. Boards and C-level executives are saying, ‘We have to lean into this more’ and then they turn to security teams to support AI. But security doesn’t fully understand the risk. No one has this down because it’s moving so fast.” As a result, many organizations skip security hardening in their rush to embrace AI. But CISOs are catching up. ... Moreover, Todd Moore, global vice president of data security at Thales, says CISOs are facing a torrent of AI-generated data — generally unstructured data such as chat logs — that needs to be secured. “In some aspects, AI is becoming the new insider threat in organizations,” he says. “The reason why I say it’s a new insider threat is because there’s a lot of information that’s being put in places you never expected. CISOs need to identify and find that data and be able to see if that data is critical and then be able to protect it.” ... “We’re now getting to the stage where no one is off-limits,” says Simon Backwell, head of information security at tech company Benifex and a member of ISACA’s Emerging Trends Working Group. “Attack groups are getting bolder, and they don’t care about the consequences. They want to cause mass destruction.”


The AI Inflection Point Isn’t in the Cloud, It’s at the Edge

Beyond the screen, there is a need for agentic applications that specifically reduce latency and improve throughput. “You need an agentic architecture with several things going on,” Shelby said about using models to analyze the packaging of pharmaceuticals, for instance. “You might need to analyze the defects. Then you might need an LLM with a RAG behind it to do manual lookup. That’s very complex. It might need a lot of data behind it. It might need to be very large. You might need 100 billion parameters.” The analysis, he noted, may require integration with a backend system to perform another task, necessitating collaboration among several agents. AI appliances are then necessary to manage multiagent workflows and larger models. ... The nature of LLMs, Shelby said, requires a person to tell you if the LLM’s output is correct, which in turn impacts how to judge the relevancy of LLMs in edge environments. You can’t simply rely on an LLM to provide a correct answer to a prompt. Consider a camera in the Texas landscape, focusing on an oil pump, Shelby said. “The LLM is like, ‘Oh, there are some campers cooking some food,’ when really there’s a fire” at the oil pump. So, how do you make the process testable in a way that engineers expect, Shelby asked. It requires end-to-end guard rails. And that’s why random, cloud-based LLMs do not yet apply to industrial environments.


Scaling Identity Security in Cloud Environments

One significant challenge organizations face is the disconnect between security and research and development (R&D) teams. This gap can lead to vulnerabilities being overlooked during the development phase, resulting in potential security risks once new systems are operational in cloud environments. To bridge this gap, a collaborative approach involving both teams is essential. Creating a secure cloud environment necessitates an understanding of the specific needs and challenges faced by each department. ... The journey to achieving scalable identity security in cloud environments is ongoing and requires constant vigilance. By integrating NHI management into their cybersecurity strategies, organizations can reduce risks, increase efficiencies, and ensure compliance with regulatory requirements. As the security landscape continues to evolve, staying informed and adaptable remains key. To gain further insights into cybersecurity, you might want to read about some cybersecurity predictions for 2025 and how they may influence your strategies surrounding NHI management. The integration of effective NHI and secrets management into cloud security controls is not just recommended but necessary for safeguarding data. It’s an invaluable part of a broader cybersecurity strategy aimed at minimizing risk and ensuring seamless, secure operations across all sectors.


Owning the Fallout: Inside Blameless Culture

For an organization to truly own the fallout after an incident, there must be a cultural shift from blame to inquiry. A ‘blameless culture’ doesn’t mean it’s a free-for-all, with no accountability. Instead, it’s a circumstance where the first question after an incident isn’t “Who screwed up?” but “What failed — and why?” As Gustavo Razzetti describes, “blame is a sign of an unhealthy culture,” and the goal is to replace it with curiosity. In a blameless postmortem, you break down what happened, map the contributing systemic factors, and focus on where processes, tooling, or assumptions broke down. This mindset aligns with the concept of just culture, which balances accountability and systems thinking. After an incident, the focus is to ask how things went wrong, not whom to punish — unless egregious misconduct is involved. ... The most powerful learning happens in the moment when incident patterns redirect strategic priorities. For example, during post-mortems, a team could discover that under-monitored dependencies cause high-severity incidents. With a resilience mindset, that insight can become an objective: “Build automated dependency-health dashboards by Q2.” When feedback and insights flow into OKRs, teams internalize resilience as part of delivery, not an afterthought. Resilient teams move beyond damage control to institutional learning.


Can your earbuds recognize you? Researchers are working on it

Each person’s ear canal produces a distinct acoustic signature, so the researchers behind EarID designed a method that allows earbuds to identify their wearer by using sound. The earbuds emit acoustic signals into the user’s ear canal, and the reflections from that sound reveal patterns shaped by the ear’s structure. What makes this study stand out is that the authentication process happens entirely on the earbuds themselves. The device extracts a unique binary key based on the user’s ear canal shape and then verifies that key on the paired mobile device. By working with binary keys instead of raw biometric data, the system avoids sending sensitive information over Bluetooth. This helps prevent interception or replay attacks that could expose biometric data. ... A key part of the research is showing that earbuds can handle biometric processing without large hardware or cloud support. EarID runs on a small microcontroller comparable to those found in commercial earbuds. The researchers measured performance on an Arduino platform with an 80 MHz chip and found that it could perform the key extraction in under a third of a second. For comparison, traditional machine learning classifiers took three to ninety times longer to train and process data. This difference could make a real impact if ear canal authentication ever reaches consumer devices, since users expect quick and seamless authentication.
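As a rough illustration of the on-device approach (not the paper’s actual algorithm), the sketch below quantizes acoustic features into a binary key on the earbud and compares only a digest of that key on the paired phone, so raw biometric data never crosses the Bluetooth link. A real system would add error correction to tolerate measurement noise; all feature values here are invented.

    import hashlib
    import hmac

    def features_to_key(features, threshold=0.5):
        """Quantize each acoustic reflection feature to one bit and pack into bytes."""
        bits = "".join("1" if f >= threshold else "0" for f in features)
        return int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")

    def enroll(features):
        """At enrollment, the phone stores only a digest of the key, never raw features."""
        return hashlib.sha256(features_to_key(features)).digest()

    def verify(features, stored_digest):
        """At unlock, recompute the digest and compare in constant time."""
        candidate = hashlib.sha256(features_to_key(features)).digest()
        return hmac.compare_digest(candidate, stored_digest)

    enrolled = enroll([0.82, 0.31, 0.67, 0.12, 0.90, 0.44, 0.73, 0.05])
    print(verify([0.82, 0.31, 0.67, 0.12, 0.90, 0.44, 0.73, 0.05], enrolled))  # True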


What It 'Techs' to Run Real-Time Payments at Scale

Beyond hosting applications, the architecture is designed for scale, reuse and rapid provisioning. APIs and services support multiple verticals including lending, insurance, investments and even quick commerce through a shared infrastructure-as-a-service model. "Every vertical uses the same underlying infra, and we constantly evaluate whether something can be commoditized for the group and then scaled centrally. It's easier to build and scale one accounting stack than reinvent it every time," Nigam said. Early investments in real-time compute systems and edge analytics enable rapid anomaly detection and insights, cutting operational downtime by 30% and improving response times to under 50 milliseconds. A recent McKinsey report on financial infrastructure in emerging economies underscores the importance of edge computation and near-real-time monitoring for high-volume payments networks - a model increasingly being adopted by global fintech leaders to ensure both speed and reliability. ... Handling spikes and unexpected surges is another critical consideration. India's payments ecosystem experiences predictable peaks - including festival seasons or IPL weekends - and unpredictable surges triggered by government announcements or regulatory deadlines. When a payments platform is built for population scale, any single merchant or use case does not create a surge at this level. 


Who’s right — the AI zoomers or doomers?

Earlier this week, the Emory Wheel editorial board published an opinion column claiming that without regulation, AI will soon outpace humanity’s ability to control it. The post said AI’s uncontrolled evolution threatens human autonomy, free expression, and democracy, stressing that technical development is moving faster than lawmakers can handle. ... Both zoomers and doomers agree that humanity’s fate will be decided when the industry releases AGI or superintelligent AI. But there’s strong disagreement on when that will happen. From OpenAI’s Sam Altman to Elon Musk, Eric Schmidt, Demis Hassabis, Dario Amodei, Masayoshi Son, Jensen Huang, Ray Kurzweil, Louis Rosenberg, Geoffrey Hinton, Mark Zuckerberg, Ajeya Cotra, and Jürgen Schmidhuber — all predict AGI arriving sometime between later this year and the end of this decade. ... Some say we need strict global rules, maybe like those for nuclear weapons. Others say strong laws would slow progress, stop new ideas, and give the benefits of AI to China. ... AI is already causing harms. It contributes to privacy invasion, disinformation and deepfakes, surveillance overreach, job displacement, cybersecurity threats, child and psychological harms, environmental damage, erosion of human creativity and autonomy, economic and political instability, manipulation and loss of trust in media, unjust criminal justice outcomes, and other problems.


Powering Data in the Age of AI: Part 3 – Inside the AI Data Center Rebuild

You can’t design around AI the way data centers used to handle general compute. The loads are heavier, the heat is higher, and the pace is relentless. You start with racks that pull more power than entire server rooms did a decade ago, and everything around them has to adapt. New builds now work from the inside out. Engineers start with workload profiles, then shape airflow, cooling paths, cable runs, and even structural supports based on what those clusters will actually demand. In some cases, different types of jobs get their own electrical zones. That means separate cooling loops, shorter throw cabling, dedicated switchgear — multiple systems, all working under the same roof. Power delivery is changing, too. In a conversation with BigDATAwire, David Beach, Market Segment Manager at Anderson Power, explained, “Equipment is taking advantage of much higher voltages and simultaneously increasing current to achieve the rack densities that are necessary. This is also necessitating the development of components and infrastructure to properly carry that power.” ... We know that hardware alone doesn’t move the needle anymore. The real advantage comes from pushing it online quickly, without getting bogged down by power, permits, and other obstacles. That’s where the cracks are beginning to open.


Strategic Domain-Driven Design: The Forgotten Foundation of Great Software

The strategic aspect of DDD is often overlooked because many people do not recognize its importance. This is a significant mistake when applying DDD. Strategic design provides context for the model, establishes clear boundaries, and fosters a shared understanding between business and technology. Without this foundation, developers may focus on modeling data rather than behavior, create isolated microservices that do not represent the domain accurately, or implement design patterns without a clear purpose. ... The first step in strategic modeling is to define your domain, which refers to the scope of knowledge and activities that your software intends to address. Next, we apply the age-old strategy of "divide and conquer," a principle used by the Romans that remains relevant in modern software development. We break down the larger domain into smaller, focused areas known as subdomains. ... Once the language is aligned, the next step is to define bounded contexts. These are explicit boundaries that indicate where a particular model and language apply. Each bounded context encapsulates a subset of the ubiquitous language and establishes clear borders around meaning and responsibilities. Although the term is often used in discussions about microservices, it actually predates that movement. 
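A short sketch helps show what a bounded context looks like in code: the same real-world concept, a customer, is modeled differently inside each context, and an explicit translation guards the boundary. The context names and fields below are illustrative assumptions only.

    from dataclasses import dataclass

    # Sales context: its model of a customer cares about credit.
    @dataclass
    class SalesCustomer:
        customer_id: str
        credit_limit: float

    # Support context: its model of a customer cares about tickets and SLAs.
    @dataclass
    class SupportCustomer:
        customer_id: str
        open_tickets: int
        sla_tier: str

    def to_support_view(sales):
        """Explicit translation at the boundary (an anti-corruption layer)."""
        return SupportCustomer(customer_id=sales.customer_id, open_tickets=0, sla_tier="standard")

    print(to_support_view(SalesCustomer("c-42", credit_limit=10000.0)))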

Daily Tech Digest - October 26, 2025


Quote for the day:

"Everywhere is within walking distance if you have the time." -- Steven Wright


AI policy without proof is just politics

History shows us that regulation without verification rarely works. Imagine if Wall Street firms were allowed to audit their own books, or if pharmaceutical companies could approve their own drugs. The risks would be obvious and unacceptable. Yet, in AI today, much of the information policymakers see about model performance and safety comes straight from the companies developing those systems, leaving regulators dependent on the very firms they are meant to oversee. Self-reporting, intentionally or not, creates structural blind spots. Developers have incentives to highlight strengths and minimize weaknesses, and even honest disclosures can leave out important context. ... The first requirement is independence. Oversight must be based on information that does not come solely from the companies themselves: data that can be inspected, verified and trusted as neutral. Without that independence, even well-intentioned disclosures risk being selective or incomplete. The second requirement is continuity. AI systems evolve quickly, and their performance often shifts once they are deployed in the wild. Benchmarks conducted at launch can’t capture how models change over time, or how they behave across different languages, domains and user needs.  ... AI policy is at a crossroads. The U.S. has set bold goals, but without reliable evaluation, those goals risk becoming little more than rhetoric. Rules set the direction. Proof provides the trust.


5 ways ambitious IT pros can future-proof their tech careers in an age of AI

Successful IT chiefs are expected to be the expert resources for pioneering technology developments. In fact, Daly said the CIOs of the future will demonstrate how AI can fulfill some executive roles and responsibilities. ... David Walmsley, chief digital and technology officer at jewelry specialist Pandora, said up-and-coming IT stars take on responsibilities and opportunities. The disconnected technology organization of old, which relied on outsourcing arrangements for project delivery, has been replaced by a department that works closely with the business to achieve its objectives. "The days of technology leaders leaning back and saying, 'Well, which of my external providers do I blame now?' are long gone," he said. "Everyone can see that technology is a strategic lever for growing the business and helping it succeed in its mission." ... The critical skill for next-generation leaders lies not in chasing every new platform or coding language, but in cultivating the human capacities that allow you to adapt. "Those capabilities include curiosity, critical thinking, collaboration, and an understanding of human behavior," he said. "At LIS, we emphasize interdisciplinary learning precisely because technology never exists in isolation; it is always entangled with psychology, economics, ethics, and culture."


Biometrics increase integrity from age checks to agents, but not when compelled

Biometrics are anchoring trust for established but growing use cases like national IDs even as new use cases are taking off. But surveillance concerns inevitably come with increases in the collection of personal data, particularly when the collection is compelled or involuntary. ... Tech industry group the CCIA took aim at Texas’ app store level age checks, arguing the plan is bound to fail in several ways, including data privacy breaches. One of those alleged likely failures is the accuracy of facial age estimation, but the supporting stat from NIST is outdated, and the new figure is significantly better. Automated license-plate reader-maker Flock and Amazon’s Ring have partnered to share data, allowing law enforcement agencies that use Flock’s investigative platforms to request footage from homeowners. ... The growth of online interactions with credentials that are anchored with biometrics continues unabated, in the form of national ID systems, agentic AI, age checks and identity verification. Juniper Research forecasts digital identity will be an $80 billion global market by 2030, with growth driven by new regulations and credentials. ... Age checks could catalyze digital ID adoption, Luciditi CPO Dan Johnson says on the Biometric Update Podcast. He makes the case for the advantages of adding age assurance to apps by integrating a component, rather than building a standalone branded app.


Weak Data Infrastructure Keeps Most GenAI Projects From Delivering ROI

Kolbeck sees companies investing billions while overlooking adequate storage to support their AI infrastructure as one of the major mistakes corporations make. He said that oversight leads to three key failure factors — festering silos, lack of performance, and uptime dilemmas. The most critical resource for AI is training data. When companies store data across multiple silos, data scientists lack access to essential details. “Storage systems must be able to scale and provide unified access to enable an AI data lake, a centralized and efficient storage for the entire company,” he observed. ... “Early AI projects may work well, but as soon as these projects grow in size [as in more GPUs], these arrays tip over, and that’s when mission-critical workflows grind to a halt,” he said. Kolbeck explained why a scale-out architecture, rather than a scale-up approach, is the better option for handling the massive and unpredictable data demands of modern AI and ML. He cited his company’s experience in making that transition. ... “Developing and training AI technology is still a very experimental process and requires the infrastructure — including storage — to adapt quickly when data scientists develop new ideas,” Kolbeck noted. Real-time performance analytics are critical. Storage administrators need to be able to precisely identify how applications, such as training or other pipeline phases, impact the storage.


When your AI browser becomes your enemy: The Comet security disaster

Your regular Chrome or Firefox browser is basically a bouncer at a club. It shows you what's on the webpage, maybe runs some animations, but it doesn't really "understand" what it's reading. If a malicious website wants to mess with you, it has to work pretty hard — exploit some technical bug, trick you into downloading something nasty or convince you to hand over your password. AI browsers like Comet threw that bouncer out and hired an eager intern instead. This intern doesn't just look at web pages — it reads them, understands them and acts on what it reads. Sounds great, right? Except this intern can't tell when someone's giving them fake orders. ... They can actually do stuff: Regular browsers mostly just show you things. AI browsers can click buttons, fill out forms, switch between your tabs, even jump between different websites. ... They remember everything: Unlike regular browsers that forget each page when you leave, AI browsers keep track of everything you've done across your whole session. ... You trust them too much: We naturally assume our AI assistants are looking out for us. That blind trust means we're less likely to notice when something's wrong. Hackers get more time to do their dirty work because we're not watching our AI assistant as carefully as we should. They break the rules on purpose: Normal web security works by keeping websites in their own little boxes — Facebook can't mess with your Gmail, Amazon can't see your bank account. 


Rewriting the Rules of Software Quality: Why Agentic QA is the Future CIOs Must Champion

From continuous deployment to AI-powered applications, software systems are more dynamic, distributed and adaptive than ever. In this changing environment, static testing frameworks are crumbling. What worked yesterday is increasingly not going to work today, and tomorrow’s risks cannot be addressed using yesterday’s checklists. This is where agentic QA steps in, heralding a transformative approach that integrates autonomous, intelligent agents throughout the entire software lifecycle. ... What distinguishes this model isn’t just its intelligence — it’s its adaptability. In a world where AI models are themselves part of the application stack, QA must account for nondeterminism. Agentic systems are uniquely equipped to do this. When AI-driven components exhibit variable behavior based on internal learning states, traditional test-case comparisons fail for evident reasons. Agentic QA, on the other hand, thrives in uncertainty. It detects anomalies, learns from edge cases, and refines its approach continuously. ... However, it is essential to note that as AI takes over repetitive and complex validations, it enables QA professionals to step up and evolve into curators of quality. Their role is freed up to become more strategic: Defining testing intent, ensuring AI alignment with business goals, interpreting nuanced behaviors, and upholding ethical standards. This shift calls for a cultural transformation.
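One way to handle that nondeterminism is to assert properties and statistical bounds across repeated runs instead of comparing against a single golden output. The stand-in model call, tolerances, and test values below are hypothetical illustrations of the idea, not a specific agentic QA product.

    import random

    def ai_summarizer(text):
        """Stand-in for a nondeterministic model call."""
        return text[: random.randint(40, 60)]

    def test_summary_properties(runs=20):
        source = "Order 8841 was delayed by a customs hold and refunded in full on 12 March."
        failures = 0
        for _ in range(runs):
            summary = ai_summarizer(source)
            # Property checks instead of exact-match assertions.
            if not (0 < len(summary) <= len(source)) or "8841" not in summary:
                failures += 1
        # Tolerate some variance, but flag systematic drift.
        assert failures / runs <= 0.1, f"property failure rate too high: {failures}/{runs}"

    test_summary_properties()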


AI-Powered Ransomware Is the Emerging Threat That Could Bring Down Your Organization

AI fundamentally transforms every phase of ransomware operations through several key capabilities. Enhanced reconnaissance allows malware to autonomously scan security perimeters, identify vulnerabilities, and select precise exploitation tools. This eliminates the need for human operators during initial phases, enabling attacks to spread rapidly across IT environments. Adaptive encryption techniques represent another revolutionary advancement. AI-powered ransomware can analyze system resources and data types to modify encryption algorithms dynamically, making decryption more complex. The malware can prioritize high-value targets by analyzing document content using Natural Language Processing before encryption, ensuring maximum strategic impact. Evasive tactics powered by machine learning enable ransomware to continuously modify its code and behavior patterns. This polymorphic capability makes signature-based detection methods ineffective, as the malware presents different fingerprints with each execution. AI also enables malware to track user presence and activate during off-hours to maximize damage while minimizing detection opportunities. The financial consequences of AI-powered ransomware attacks far exceed traditional threats. ... Small businesses face particularly severe consequences, with 60% of attacked companies closing permanently within six months.


When a Provider's Lights Go Out, How Can CIOs Keep Operations Going?

This may seem obvious, but a thousand companies still lost digital functionality on Monday. Why weren't they better prepared? One answer is that while redundancy isn't new, it also isn't very sexy. In a field full of innovation and growth, redundancy is about slowing down, checking your work, and taking the safest route. It's not surprising if some companies are more excited about investing in new AI capabilities than implementing failsafe protocols. Nor is it necessarily wrong. ... "It is important to invest where failure creates real risk, not just minor inconvenience, or noise," he added. This will look different for companies of different sizes, but particularly for companies within different sectors. Some industries, such as healthcare or finance, require a higher level of redundancy across the board simply because the stakes are greater; lack of access to patient records or financial information could have severe repercussions in terms of safety and public trust, which are far beyond inconvenience or frustration. ... But this isn't as simple as tracing third-party contracts, counting how often one name appears, and shifting some operations away from too-dominant providers. If an organization has partnered predominantly with one provider, it's probably for good reason. As Hitchens explained, working with a single provider can accelerate innovation and simplify management, offering visibility, native integrations and unified tooling.


Three Ways Secure Modern Networks Unlock the True Power of AI

AI is network-bound. As always-on models demand up to 100 times more compute, storage, and bandwidth, traditional networks risk becoming bottlenecks on both capacity and latency. For AI tasks that happen instantly, like self-driving cars or automated stock trading, even tiny delays can cause problems. Modern network infrastructure needs to be more than just fast. It also needs to be safe from cyberattacks and strong enough to handle more AI growth in the future. To realize AI’s full potential, businesses must build purpose-built “AI superhighways”, secure networks designed to scale seamlessly, handling distributed AI workloads across core, cloud, and edge environments. ... The value organizations expect from AI, be it automating workflows, unlocking predictive insights, or powering new digital experiences, depends on more than just compute power or clever algorithms. Furthermore, the demand for real-time machine data from business operations to train AI models is increasing the need for more detailed and extensive networks. This, in turn, accelerates the integration of IT and OT, and expands the adoption of the Internet of Things (IoT) ... The sensitivity of AI data flows is raising the bar for security and compliance. The risks of sticking with outdated infrastructure are stark. 95% of technology leaders say a resilient network is critical to their operations, and 77% have experienced major outages due to congestion, cyberattacks, or misconfigurations.


"It’s not about security, it’s about control" – How EU governments want to encrypt their own comms, but break our private chats

In the wake of ever-larger and more frequent cyberattacks – think of the Salt Typhoon in the US – encryption has become crucial to shield everyone's security, whether the threat is ID theft, scams, or national security risks. Even the FBI urged all Americans to turn to encrypted chats. ... Law enforcement, however, often sees this layer of protection as an obstacle to their investigations, pushing for "lawful access" to encrypted data as a way to combat hideous crimes like terrorism or child abuse. That's exactly where legislation proposals like Chat Control and ProtectEU in the European bloc, or the Online Safety Act in the UK, come from. Yet, people working with encryption know that these solutions are flawed. On a technical level, experts all agree that an encryption backdoor cannot guarantee the same level of online security and privacy we have now. Is it then time to redefine what we mean when we talk about privacy? This is what's probably needed, according to Rocket.Chat's Strategic Advisor, Christian Calcagni. "We need to have a new definition of private communication, and that's a big debate. Encryption or no encryption, what could be the way?" Calcagni is, nonetheless, very critical of the current push to break encryption. He told me: "Why should the government know what I think or what I'm sharing on a personal level? We shouldn't focus only on encryption or not encryption, but on what that means for our privacy, our intimacy."

Daily Tech Digest - October 25, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


The day the cloud went dark

This week, the impossible happened—again. Amazon Web Services, the backbone of the digital economy and the world’s largest cloud provider, suffered a large-scale outage. If you work in IT or depend on cloud services, you didn’t need a news alert to know something was wrong. Productivity ground to a halt, websites failed to load, business systems stalled, and the hum of global commerce was silenced, if only for a few hours. The impact was immediate and severe, affecting everything from e-commerce giants to startups, including my own consulting business. ... Some businesses hoped for immediate remedies from AWS’s legendary service-level agreements. Here’s the reality: SLA credits are cold comfort when your revenue pipeline is in freefall. The truth that every CIO has faced at least once is that even industry-leading SLAs rarely compensate for the true cost of downtime. They don’t make up for lost opportunities, damaged reputations, or the stress on your teams. ... This outage is a wake-up call. Headlines will fade, and AWS (and its competitors) will keep promising ever-improving reliability. Just don’t forget the lesson: No matter how many “nines” your provider promises, true business resilience starts inside your own walls. Enterprises must take matters into their own hands to avoid existential risk the next time lightning strikes.


Application Modernization Pitfalls: Don't Let Your Transformation Fail

Modernizing legacy applications is no longer a luxury — it’s a strategic imperative. Whether driven by cloud adoption, agility goals, or technical debt, organizations are investing heavily in transformation. Yet, for all its potential, many modernization projects stall, exceed budgets, or fail to deliver the expected business value. Why? The transition from a monolithic legacy system to a flexible, cloud-native architecture is a complex undertaking that involves far more than just technology. It's a strategic, organizational, and cultural shift. And that’s where the pitfalls lie. ... Application modernization is not just a technical endeavor — it’s a strategic transformation that touches every layer of the organization. From legacy code to customer experience, from cloud architecture to compliance posture, the ripple effects are profound. Yet, the most overlooked ingredient in successful modernization isn’t technology — it’s leadership: Leadership that frames modernization as a business enabler, not a cost center; Leadership that navigates complexity with clarity, acknowledging legacy constraints while championing innovation; Leadership that communicates with empathy, recognizing that change is hard and adoption is earned, not assumed. Modernization efforts fail not because teams lack skill, but because they lack alignment. 


CIOs will be on the hook for business-led AI failures

While some business-led AI projects include CIO input, AI experts have seen many organizations launch AI projects without significant CIO or IT team support. When other departments launch AI projects without heavy IT involvement, they may underestimate the technical work needed to make the projects successful, says Alek Liskov, chief AI officer at data refinery platform provider Datalinx AI. ... “Start with the tech folks in the room first, before you get much farther,” he says. “I still see many organizations where there’s either a disconnect between business and IT, or there’s lack of speed on the IT side, or perhaps it’s just a lack of trust.” Despite the doubts, IT leaders need to be involved from the beginning of all AI projects, adds Bill Finner, CIO at large law firm Jackson Walker. “AI is just another technology to add to the stack,” he says. “Better to embrace it and help the business succeed then to sit back and watch from the bench.” ... “It’s a great opportunity for CIOs to work closely with all the practice areas both on the legal and business professional side to ensure we’re educating everyone on the capabilities of the applications and how they can enhance their day-to-day workflows by streamlining processes,” Finner says. “CIOs love to help the business succeed, and this is just another area where they can show their value.”


Three Questions That Help You Build a Better Software Architecture

You don’t want to create an architecture for a product that no one needs. And in validating the business ideas, you will test assumptions that drive quality attributes like scalability and performance needs. To do this, the MVP has to be more than a Proof of Concept - it needs to be able to scale well enough and perform well enough to validate the business case, but it does not need to answer all questions about scalability and performance ... yet. ... Achieving good performance while scaling can also mean reworking parts of the solution that you’ve already built; solutions that perform well with a few users may break down as load is increased. On the other hand, you may never need to scale to the loads that cause those failures, so overinvesting too early can simply be wasted effort. Many scaling issues also stem from a critical bottleneck, usually related to accessing a shared resource. Spotting these early can inform the team about when, and under what conditions, they might need to change their approach. ... One of the most important architectural decisions that teams must make is to decide how they will know that technical debt has risen too far for the system to be supportable and maintainable in the future. The first thing they need to know is how much technical debt they are actually incurring. One way they can do this is by recording decisions that incur technical debt in their Architectural Decision Record (ADR).
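One lightweight way to act on that is to score the debt each ADR entry incurs and flag when the running total crosses an agreed budget. The point scheme and threshold below are illustrative assumptions, not a standard practice prescribed by the article.

    from dataclasses import dataclass

    @dataclass
    class ArchitecturalDecision:
        adr_id: str
        title: str
        debt_points: int  # team-estimated cost of the shortcut taken (0 = none)

    DECISIONS = [
        ArchitecturalDecision("ADR-007", "Share one database between two services", 8),
        ArchitecturalDecision("ADR-011", "Hard-code region config for launch", 3),
        ArchitecturalDecision("ADR-014", "Skip event versioning for the MVP", 5),
    ]

    DEBT_BUDGET = 12  # agreed ceiling before paydown work gets scheduled

    total_debt = sum(d.debt_points for d in DECISIONS)
    if total_debt > DEBT_BUDGET:
        print(f"Technical debt at {total_debt} points exceeds the budget of {DEBT_BUDGET}: plan paydown")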


Ransomware recovery perils: 40% of paying victims still lose their data

Decryptors are frequently slow and unreliable, John adds. “Large-scale decryption across enterprise environments can take weeks and often fails on corrupted files or complex database systems,” he explains. “Cases exist where the decryption process itself causes additional data corruption.” Even when decryptor tools are supplied, they may contain bugs, or leave files corrupted or inaccessible. Many organizations also rely on untested — and vulnerable — backups. Making matters still worse, many ransomware victims discover that their backups were also encrypted as part of the attack. “Criminals often use flawed or incompatible encryption tools, and many businesses lack the infrastructure to restore data cleanly, especially if backups are patchy or systems are still compromised,” says Daryl Flack, partner at UK-based managed security provider Avella Security and cybersecurity advisor to the UK Government. ... “Setting aside funds to pay a ransom is increasingly viewed as problematic,” Tsang says. “While payment isn’t illegal in itself, it may breach sanctions, it can fuel further criminal activity, and there is no guarantee of a positive outcome.” A more secure legal and strategic position comes from investing in resilience through strong security measures, well-tested recovery plans, clear reporting protocols, and cyber insurance, Tsang advises.


In IoT Security, AI Can Make or Break

Ironically, the same techniques that help defenders also help attackers. Criminals are automating reconnaissance, targeting exposed protocols common in IoT, and accelerating exploitation cycles. Fortinet recently highlighted a surge in AI-driven automated scanning (tens of thousands of scans per second), where IoT and Session Initiation Protocol (SIP) endpoints are probed earlier in the kill chain. That scale turns "long-tail" misconfigurations into early footholds. Worse, AI itself is susceptible to attack. Adversarial ML (machine learning) can blind or mislead detection models, while prompt injection and data poisoning can repurpose AI assistants connected to physical systems. ... Move response left. Anomaly detection without orchestration just creates work. It's important to pre-stage responses such as quarantine VLANs, Access Control List (ACL) updates, Network Access Control (NAC) policies, and maintenance window tickets. This way, high-confidence detections contain first and ask questions second. Finally, run purple-team exercises that assume AI is the target and the tool. This includes simulating prompt injection against your assistants and dashboards; simulating adversarial noise against your IoT Intrusion Detection System (IDS); and testing whether analysts can distinguish "model weirdness" from real incidents under time pressure.


Cyber attack on Jaguar Land Rover estimated to cost UK economy £1.9 billion

Most of the estimated losses stem from halted vehicle production and reduced manufacturing output. JLR’s production reportedly dropped by around 5,000 vehicles per week during the shutdown, translating to weekly losses of approximately £108 million. The shock has cascaded across hundreds of suppliers and service providers. Many firms have faced cash-flow pressures, with some taking out emergency loans. To mitigate the fallout, JLR has reportedly cleared overdue invoices and issued advance payments to critical suppliers. ... The CMC’s Technical Committee urged businesses and policymakers to prioritise resilience against operational disruptions, which now pose the greatest financial risk from cyberattacks. The committee recommended identifying critical digital assets, strengthening segmentation between IT and operational systems, and ensuring robust recovery plans. It also called on manufacturers to review supply-chain dependencies and maintain liquidity buffers to withstand prolonged shutdowns. Additionally, it advised insurers to expand cyber coverage to include large-scale supply chain disruption, and urged the government to clarify criteria for financial support in future systemic cyber incidents.


Thinking Machines challenges OpenAI's AI scaling strategy: 'First superintelligence will be a superhuman learner'

To illustrate the problem with current AI systems, Rafailov offered a scenario familiar to anyone who has worked with today's most advanced coding assistants. "If you use a coding agent, ask it to do something really difficult — to implement a feature, go read your code, try to understand your code, reason about your code, implement something, iterate — it might be successful," he explained. "And then come back the next day and ask it to implement the next feature, and it will do the same thing." The issue, he argued, is that these systems don't internalize what they learn. "In a sense, for the models we have today, every day is their first day of the job," Rafailov said. ... "Think about how we train our current generation of reasoning models," he said. "We take a particular math problem, make it very hard, and try to solve it, rewarding the model for solving it. And that's it. Once that experience is done, the model submits a solution. Anything it discovers—any abstractions it learned, any theorems—we discard, and then we ask it to solve a new problem, and it has to come up with the same abstractions all over again." That approach misunderstands how knowledge accumulates. "This is not how science or mathematics works," he said. ... The objective would fundamentally change: "Instead of rewarding their success — how many problems they solved — we need to reward their progress, their ability to learn, and their ability to improve."
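The contrast between rewarding success and rewarding progress can be sketched as a simple reward-shaping function: each attempt is credited for how much it improves on the agent's own previous best, rather than only for a final solve. The scores and scheme below are a toy illustration, not Thinking Machines' actual method.

    def progress_reward(scores_per_attempt):
        """Reward each attempt by how much it improves on the best score so far."""
        rewards, best = [], None
        for score in scores_per_attempt:
            rewards.append(0.0 if best is None else max(0.0, score - best))
            best = score if best is None else max(best, score)
        return rewards

    # Binary success would pay out 0, 0, 0, 1; progress shaping credits every step forward.
    print(progress_reward([0.2, 0.5, 0.5, 1.0]))  # [0.0, 0.3, 0.0, 0.5]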


Demystifying Data Observability: 5 Steps to AI-Ready Data

Data observability ensures data pipelines capture representative data, both the expected and the messy. By continuously measuring drift, outliers, and unexpected changes, observability creates the feedback loop that allows AI/ML models to learn responsibly. In short, observability is not an add-on; it is a foundational practice for AI-ready data. ... Rather than relying on manual checks after the fact, observability should be continuous and automated. This turns observability from a reactive safety net into a proactive accelerator for trusted data delivery. As a result, every new dataset or transformation can generate metadata about quality, lineage, and performance, while pipelines can include regression tests and alerting as standard practice. ... The key is automation. Rather than policies that sit in binders, observability enables policies as code. In this way, data contracts and schema checks that are embedded in pipelines can validate that inputs remain fit for purpose. Drift detection routines, too, can automatically flag when training data diverges from operational realities while governance rules, from PII handling to lineage, are continuously enforced, not applied retroactively. ... It’s tempting to measure observability in purely technical terms such as the number of alerts generated, data quality scores, or percentage of tables monitored. But the real measure of success is its business impact. Rather than numbers, organizations should ask if it resulted in fewer failed AI deployments. 
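
A minimal "policies as code" sketch of the ideas above, with illustrative column names, thresholds, and a deliberately crude mean-shift drift test rather than any specific product's API: a data contract check and a drift flag that a pipeline could run on every new batch.

```python
# Minimal "policies as code" sketch: a data contract check plus a crude drift flag
# that a pipeline could run on every new batch. Column names, thresholds, and the
# mean-shift test are illustrative assumptions.
from statistics import mean, pstdev

CONTRACT = {"user_id": int, "amount": float, "country": str}

def check_contract(rows):
    """Flag rows whose fields are missing or of the wrong type."""
    issues = []
    for i, row in enumerate(rows):
        for col, typ in CONTRACT.items():
            if col not in row or not isinstance(row[col], typ):
                issues.append(f"row {i}: '{col}' missing or not {typ.__name__}")
    return issues

def drift_flag(reference, current, z=3.0):
    """Crude drift check: is the new batch mean more than z std devs from reference?"""
    mu, sigma = mean(reference), pstdev(reference) or 1e-9
    return abs(mean(current) - mu) > z * sigma

batch = [{"user_id": 1, "amount": 12.5, "country": "IN"},
         {"user_id": "2", "amount": 9.9, "country": "IN"}]  # bad type on purpose
print(check_contract(batch))                           # -> ["row 1: 'user_id' missing or not int"]
print(drift_flag([10, 11, 12, 10, 11], [48, 52, 50]))  # -> True
```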


AI heavyweights call for end to ‘superintelligence’ research

Superintelligence isn’t just hype. It’s a strategic goal determined by a privileged few, and backed by hundreds of billions of dollars in investment, business incentives, frontier AI technology, and some of the world’s best researchers. ... Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains, air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology. Superintelligence could extend this trajectory, but with a crucial difference. People will no longer be in control. The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs. Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species that’s producing greenhouse gases. ... For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs. These are important issues. But they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not the ultimate stated goal of AI companies to create superintelligence.

Daily Tech Digest - October 23, 2025


Quote for the day:

“The more you lose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale



Leadership lessons from NetForm founder Karen Stephenson

Co-creation is a hot buzzword encouraging individuals to integrate and create with each other, but the simplest way to integrate and create is in the mind of one person — if they’re willing to push forward and do it. Even further, what can an integrated team of diverse minds accomplish when they co-create? ... In the age of AI, humans will need to focus on what humans do well. At the moment, at least, that’s making novel connections, thinking by analogy and creating the new. Our single-field approach to learning, qualifications and career ladders makes it hard for us to compete with machines that are often smarter than we are in any given discipline. For that creative spark and to excel at what messy, forgetful, slow, imperfect humans do best, we need to work, think and live differently. In fact, the founders of five of the largest companies in the world are (or were) polymaths — mentally diverse people skilled in multiple disciplines — Bill Gates, Steve Jobs, Warren Buffett, Larry Page and Jeff Bezos. They learn because they’re curious and want to solve problems, not for a career ladder. It’s easier than ever, today, to learn with AI and online materials and to collaborate with tech and humans around the world. All you need to do is open inward to your talents and desires, explore, collect and fuse.


Why cloud and AI projects take longer and how to fix the holdups

In the case of the cloud, the problem is that senior management thinks that the cloud is always cheaper, that you can always cut costs by moving to the cloud. This is despite the recent stories on “repatriation,” or moving cloud applications back into the data center. In the case of cloud projects, most enterprise IT organizations now understand how to assess a cloud project for cost/benefit, so most of the cases where impossible cost savings are promised are caught in the planning phase. For AI, both senior management and line department management have high expectations with respect to the technology, and in the latter case may also have some experience with AI in the form of as-a-service generative AI models available online. About a quarter of these proposals quickly run afoul of governance policies because of problems with data security, and half of this group dies at this point. For the remaining proposals, there is a whole set of problems that emerge. Most enterprises admit that they really don’t understand what AI can do, which obviously makes it hard to frame a realistic AI project. The biggest gap identified is between an AI business goal and a specific path leading to it. One CIO describes the projects offered by user organizations as “invitations to AI fishing trips” because the goal is usually set in business terms, and these would actually require a project simply to identify how the stated goal could be achieved.


Who pays when a multi-billion-dollar data center goes down?

While the Lockton team is looking at everything from immersion cooling to drought, there are a handful of risks where it feels the industry isn't adequately preparing. “The big thing that isn't getting on people's radars in a growing way is customer equipment," Hayhow says. “Looking at this through the lens of the data center owner or developer, it's often very difficult. “It's a bit of an unspoken conversation that the equipment in the white space belongs to the customer. Often you don't have custody over it, you don't have visibility over it, and it’s highly proprietary. But the value of it is growing.” Per square meter of white space, the Lockton partner suggests that the value of the equipment five years from now will be exponentially larger than the value of the equipment five years ago, as more data centers invest in expensive GPUs and other equipment for AI use cases. “Leases have become clearer in terms of placing responsibility for damage to customer equipment more squarely on the shoulders of the owner, developer,” Hayhow says. “We're having that conversation in the US, where the halls are larger, the value of the equipment is greater, and some of the hyperscale customers are being much more prescriptive in terms of wanting to address the topic of damage to our equipment … if you lose 20 megawatts worth of racks of Nvidia chips, the lead time to get those replaced, unless you're building elsewhere, is quite significant.”


AI Agents Need Security Training – Just Like Your Employees

“It may not be as candid as what humans would do during those sessions, but AI agents used by your workforce do need to be trained. They need to understand what your company policies are, including what is acceptable behavior, what data they're allowed to access, what actions they're allowed to take,” Maneval explained. ... “Most AI tools are just trained to do the same thing over and over and so it means decisions are based on assumptions from limited information,” she explained to Infosecurity. “Additionally, most AI tools solve real problems but also create real risks and each solve different problems and creates different risks.” While some cybersecurity experts argue that auditing AI tools is no different to auditing any other software or application, Maneval disagrees. ... Maneval said her “rule of thumb” is that whether you’re dealing with traditional machine learning algorithms, generative AI applications or AI agents, “treat them like any other employees.” This means not only that AI-powered agents should be trained on security policies but also that they should be forced to respect the same security controls as staff, such as role-based access controls (RBAC). “You should look at how you treat your humans and apply those same controls to the AI. You probably do a background check before anyone is hired. Do the same thing with your AI agent. ..."
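
A minimal sketch of that "treat the agent like an employee" rule of thumb, with hypothetical role names, tools, and permission table: every tool call the agent attempts is checked against its assigned role and logged before it executes.

```python
# Sketch of role-based access control applied to an AI agent: the agent is assigned
# a role, and every tool call is checked against that role and audited before it runs.
# Role names, tools, and the permission table are illustrative assumptions.

ROLE_PERMISSIONS = {
    "support_agent": {"read_ticket", "draft_reply"},
    "finance_agent": {"read_invoice", "flag_anomaly"},
}

class PermissionDenied(Exception):
    pass

def guarded_call(role, tool, action, *args, **kwargs):
    """Run a tool on the agent's behalf only if its role permits it; audit either way."""
    allowed = tool in ROLE_PERMISSIONS.get(role, set())
    print(f"audit: role={role} tool={tool} allowed={allowed}")
    if not allowed:
        raise PermissionDenied(f"{role} may not call {tool}")
    return action(*args, **kwargs)

# The support agent can draft a reply but is blocked from reading invoices.
print(guarded_call("support_agent", "draft_reply", lambda text: f"DRAFT: {text}", "thanks!"))
try:
    guarded_call("support_agent", "read_invoice", lambda inv_id: inv_id, "INV-42")
except PermissionDenied as exc:
    print("blocked:", exc)
```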


Why must CISOs slay a cyber dragon to earn business respect?

Why should a security leader need to experience a major cyber incident to earn business colleagues’ respect? Jeff Pollard, VP and principal analyst at Forrester, says this enterprise perception problem is “just part of human nature. If we don’t see the bad thing happening, we don’t appreciate all of the things that were done to prevent that bad thing from happening.” Of course, if an attack turns into an incident and defense goes poorly, “it can easily turn from a hero moment to a scapegoat moment,” Pollard says. Oberlaender, who now works as a cybersecurity consultant, is among those who believe hard-earned experience should be rewarded, but that’s not what he’s seeing in the market today. ... CISOs “feel that they need to fight off an attack to show value, but there are many other successes they can do and show,” says Erik Avakian, technical counselor at Info-Tech Research Group. “Building KPIs is a powerful way to show their value.” ... Chris Jackson, a senior cybersecurity specialist with tech education vendor Pluralsight, reinforces the frustration that many enterprise CISOs feel about the lack of appropriate respect from their colleagues and bosses. “CISOs are a lot like pro sports coaches. It doesn’t matter how well they performed during the season or how many games they won. If they don’t win the championship, it’s seen as a failure, and the coach is often the first to go,” Jackson says. 


The next cyber crisis may start in someone else’s supply chain

Organizations have improved oversight of their direct partners, but few can see beyond the first layer. This limited view leaves blind spots that attackers can exploit, particularly through third-party software or service providers. “We’re in a new generation of risk, one where cyber, geopolitical, technology, political risk, and other factors are converging and reshaping the landscape. The impact on markets and operations is unfolding faster than many organizations can keep up,” said Jim Wetekamp, CEO of Riskonnect. ... Third-party and nth-party risks continue to expose companies to disruption. Most organizations have business continuity plans for supplier disruptions, but their monitoring often stops at direct partners. Only a small fraction can monitor risks across multiple tiers of their supply chain, and some cannot track their critical technology providers at all. Organizations still underestimate how dependent they are on third parties and continue to rely on paper-based continuity plans that offer a false sense of security. ... More companies now have a chief risk officer, but funding for technology and tools has barely moved. Most risk leaders say their budgets have stayed the same even as they are asked to cover more ground. Many are turning to automation and specialized software to do more with what they already have.


Boardroom to War Room: Translating AI-Driven Cyber Risk into Action

Great CISOs today combine strategic leadership, financial knowledge, technological skills, and empathy to turn cybersecurity from a burden on operations into a strong enabler. This change happens faster with artificial intelligence. AI has a lot of potential, but it also makes things more uncertain. It can do things like forecast threats and automate orchestration. CISOs need to see AI problems as more than just technological problems; they need to see them as business risks that need clear communication, openness, and quick response. ... Data and graphics, not storytelling, win over executives. Suggested metrics include: Predictive accuracy - The percentage of risks that AI flagged before a breach compared to the percentage of threats that AI flagged after it happened; Speed of reaction - The average time it took for AI-enabled containment to work compared to manual reaction; False positive rate - Tech teams employed AI to improve alerts and cut down on alert fatigue from X to Y; Third-party model risk - The number of outside model calls that were looked at and accepted; Visual callout suggestion - A mock-up of a dashboard that illustrates AI risk KPIs, a trendline of predictive value, and a drop in incidents. ... Change from being an IT responder who reacts to problems to a strategic AI-enabled risk leader. Take ownership of your AI risk story, keep an eye on third-party models, give your board clear information, and make sure your war room functions quickly.
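
As a rough illustration of how those board-facing KPIs might be computed from incident records, here is a short sketch; the field names and figures are invented for the example, not real telemetry.

```python
# Toy computation of the suggested KPIs from illustrative incident records.

incidents = [
    {"flagged_before_breach": True,  "ai_contain_min": 12, "manual_contain_min": 95},
    {"flagged_before_breach": True,  "ai_contain_min": 18, "manual_contain_min": 140},
    {"flagged_before_breach": False, "ai_contain_min": 40, "manual_contain_min": 180},
]
alerts = {"total": 500, "false_positives": 60}

predictive_accuracy = sum(i["flagged_before_breach"] for i in incidents) / len(incidents)
avg_ai = sum(i["ai_contain_min"] for i in incidents) / len(incidents)
avg_manual = sum(i["manual_contain_min"] for i in incidents) / len(incidents)
false_positive_rate = alerts["false_positives"] / alerts["total"]

print(f"Predictive accuracy: {predictive_accuracy:.0%}")                        # 67%
print(f"Mean containment: AI {avg_ai:.0f} min vs manual {avg_manual:.0f} min")  # 23 vs 138
print(f"False positive rate: {false_positive_rate:.0%}")                        # 12%
```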


Govt. faces questions about why US AWS outage disrupted UK tax office and banking firms

“The narrative of bigger is better and biggest is best has been shown for the lie it always has been,” Owen Sayers, an independent security architect and data protection specialist with a long history of working in the public sector, told Computer Weekly. “The proponents of hyperscale cloud will always say they have the best engineers, the most staff and the greatest pool of resources, but bigger is not always better – and certainly not when countries rely on those commodity global services for their own national security, safety and operations. “Nationally important services must be recognised as best delivered under national control, and as a minimum, the government should be knocking on AWS’s door today and asking if they can in fact deliver a service that guarantees UK uptime,” he said. “Because the evidence from this week’s outage suggests that they cannot.” ... “In light of today’s major outage at Amazon Web Services … why has HM Treasury not designated Amazon Web Services or any other major technology firm as a CTP for the purposes of the Critical Third Parties Regime,” asked Hillier, in the letter. “[And] how soon can we expect firms to be brought into this regime?” Hillier also asked HM Treasury for clarification about whether or not it is concerned about the fact that “seemingly key parts of our IT infrastructure are hosted abroad” given the outage originated from a US-based AWS datacentre region but impacted the activities of Lloyds Bank and also HMRC.


Quantum work, federated learning and privacy: Emerging frontiers in blockchain research

It is possible to imagine a future in which quantum computation serves as the foundation for blockchain consensus. The prospect is alluring: quantum algorithms can tackle problems that classical computers find intractable, and consensus built on them could be more efficient and more resistant to brute-force attacks. The danger, however, is significant: once quantum computers are sufficiently powerful, existing encryption standards can be broken. ... Federated learning is another emerging strand of blockchain research, a machine learning training technique that avoids data centralisation. It allows devices or nodes to contribute updates to a shared model while sensitive data stays local, never moving to a central server where third parties could reach it. ... Privacy is of particular importance today given the increased regulatory pressure on exchanges and cryptocurrency companies. Striking a balance between user privacy and regulatory transparency could prove to be the key to success, and research into privacy-preserving techniques gives a competitive advantage to blockchain developers and to exchanges interested in increasing their influence on the global economy. ... The decade of blockchain research to come will not be characterised by faster transactions or cheaper costs. It will redraw the borders of trust, computation, and privacy in digital economies.
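
To ground the federated-learning idea, here is a toy federated-averaging sketch in which each node fits a trivial one-parameter model on data it never shares and a coordinator combines only the resulting weights; the single-parameter "model" and the node datasets are illustrative assumptions, not a production FL framework.

```python
# Toy federated-averaging sketch: each node fits a one-parameter model (a mean) on
# data it never shares; the coordinator averages only the shared weights.

def local_fit(data):
    """Each node trains locally; raw data never leaves the node."""
    return sum(data) / len(data)

def federated_average(node_weights, node_sizes):
    """Coordinator averages the shared weights, weighted by local dataset size."""
    total = sum(node_sizes)
    return sum(w * n for w, n in zip(node_weights, node_sizes)) / total

node_data = {"node_a": [1.0, 2.0, 3.0], "node_b": [10.0, 12.0], "node_c": [5.0]}
weights = [local_fit(d) for d in node_data.values()]
sizes = [len(d) for d in node_data.values()]
print("global parameter:", federated_average(weights, sizes))  # -> 5.5
```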


Ransomware groups surge as automation cuts attack time to 18 mins

The ransomware group LockBit has recently introduced "LockBit 5.0", reportedly incorporating artificial intelligence for attack randomisation and enhanced targeting options, with a focus on regaining its previous position atop the ransomware ecosystem. Medusa, by contrast, was noted to have fallen behind due in part to lacking widespread automated and customisable features, despite previous activity levels. ReliaQuest's analysis predicts the rise of new groups through the lens of its three-factor model, specifically naming "The Gentlemen" and "DragonForce" as likely to become major threats due to their adoption of advanced technical capabilities. The Gentlemen, for instance, has listed over 30 victims on its data-leak site within its first month of activity, underpinned by automation, prioritised encryption, and endpoint discovery for rapid lateral movement. Conversely, groups such as "Chaos" and "Nova" are likely to remain minor players, lacking the integral features associated with higher victim counts and affiliate recruitment. ... RaaS groups now use automation to reduce breakout times to as little as 18 minutes, making manual intervention too slow. Implement automated containment and response plays to keep pace with attackers. These workflows should automatically isolate hosts, block malicious files, and disable compromised accounts quickly after a critical detection, containing the threat before ransomware can be deployed.
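
A hedged sketch of one such containment play, assuming hypothetical isolate_host, block_hash, disable_account, and open_ticket stand-ins for EDR, firewall, identity, and ticketing integrations: a high-confidence critical detection is contained automatically, and anything below the bar is routed to an analyst instead.

```python
# Sketch of a pre-staged containment play: contain first on high-confidence critical
# detections, escalate to a human otherwise. The four action functions are hypothetical
# stand-ins for EDR, firewall, identity, and ticketing integrations.

CONFIDENCE_THRESHOLD = 0.9

def isolate_host(host):      print(f"[edr] quarantining {host}")
def block_hash(sha256):      print(f"[fw] blocking file hash {sha256}")
def disable_account(user):   print(f"[idp] disabling account {user}")
def open_ticket(summary):    print(f"[itsm] ticket opened: {summary}")

def contain(detection):
    """Contain first, investigate second, once confidence clears the bar."""
    if detection["severity"] != "critical" or detection["confidence"] < CONFIDENCE_THRESHOLD:
        open_ticket(f"analyst review needed: {detection['rule']}")
        return
    isolate_host(detection["host"])
    block_hash(detection["file_sha256"])
    disable_account(detection["user"])
    open_ticket(f"auto-contained: {detection['rule']} on {detection['host']}")

contain({"rule": "ransomware-staging", "severity": "critical", "confidence": 0.97,
         "host": "wks-1042", "user": "svc-backup", "file_sha256": "ab12cd34"})
```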