Daily Tech Digest - December 11, 2025


Quote for the day:

"We become what we think about most of the time, and that's the strangest secret." -- Earl Nightingale



SEON Predicts Fraud’s Next Frontier: Entering the Age of Autonomous Attacks

AI has become a permanent part of the fraud landscape, but not in the way many expected. AI has transformed how we detect and prevent fraud, from adaptive risk scoring to real-time data enrichment, but full autonomy remains out of reach. Fraud detection still depends on human judgment: fraud prevention is a complex interplay of data, intent, and context, and that is where human reasoning continues to matter most. Analysts interpret ambiguity, weigh risk appetite, and understand social signals that no model can fully replicate. What AI can do is amplify that capability. ... The boundary between genuine and synthetic activity is blurring. Generative AI can now simulate human interaction with high accuracy, including realistic typing rhythms, believable navigation flows, and deepfake biometrics that replicate natural variance. The traditional approach of searching for red flags no longer works when those flags can be easily fabricated. The next evolution in fraud detection will come from baselining legitimate human behaviour. By modelling how real users act over time, and looking at their rhythms, routines, and inconsistencies, we can identify the subtle deviations that synthetic agents struggle to mimic. It is the behavioural equivalent of knowing a familiar face in a crowd. Trust comes from recognition, not reaction.
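One way to picture behavioural baselining is to model a user's own typing rhythm and flag sessions that drift from it, or that are suspiciously too regular. The sketch below is purely illustrative: the thresholds, the use of inter-keystroke intervals, and the function names are assumptions, not the method SEON describes, and a production system would combine many more signals.

```python
import statistics

def keystroke_baseline(interval_samples):
    """Build a per-user baseline from historical inter-keystroke intervals (ms)."""
    return {
        "mean": statistics.mean(interval_samples),
        "stdev": statistics.stdev(interval_samples),
    }

def looks_synthetic(baseline, session_intervals, z_threshold=3.0, min_variance=5.0):
    """Flag a session whose rhythm deviates strongly from the user's own baseline,
    or that is *too* regular -- natural typing has variance, scripted input often does not.
    Thresholds here are illustrative placeholders."""
    session_mean = statistics.mean(session_intervals)
    session_stdev = statistics.stdev(session_intervals)
    z = abs(session_mean - baseline["mean"]) / baseline["stdev"]
    return z > z_threshold or session_stdev < min_variance
```

Note the second condition: because generative agents now "replicate natural variance," a real detector would learn each user's variance rather than assume bots are always metronomic.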


The Invisible Vault: Mastering Secrets Management in CI/CD Pipelines

In the high-speed world of modern software development, Continuous Integration and Continuous Deployment (CI/CD) pipelines are the engines of delivery. They automate the process of building, testing, and deploying code, allowing teams to ship faster and more reliably. But this automation introduces a critical challenge: How do you securely manage the "keys to the kingdom"—the API tokens, database passwords, encryption keys, and service account credentials that your applications and infrastructure require? ... A single misstep can expose your entire organization to a devastating data breach. Recent breaches in CI/CD platforms have shown how exposed organizations can be when secrets leak or pipelines are compromised. As pipelines scale, the complexity and risk grow with them. ... The cryptographic algorithms that currently secure nearly all digital communications (like RSA and Elliptic Curve Cryptography used in TLS/SSL) are vulnerable to being broken by a sufficiently powerful quantum computer. While such computers do not yet exist at scale, they represent a future threat that has immediate consequences due to "harvest now, decrypt later" attacks. ... Relevance to CI/CD Secrets Management: The primary risk is in the transport of secrets. The secure channel (TLS) established between your CI/CD runner and your Secrets Manager is the point of vulnerability. To future-proof your pipeline, you need to consider moving towards PQC-enabled protocols.
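A minimal hygiene pattern that follows from the above: pipeline jobs should receive secrets at runtime from the secrets manager integration (typically injected as environment variables), fail fast when one is missing rather than falling back to a default, and never echo raw values into logs. This sketch assumes env-var injection; the variable and function names are illustrative, not tied to any particular CI/CD platform.

```python
import os
import sys

def get_secret(name: str) -> str:
    """Read a secret injected at runtime by the pipeline's secrets store.
    Failing fast beats silently falling back to a hardcoded default."""
    value = os.environ.get(name)
    if not value:
        sys.exit(f"missing required secret: {name}")
    return value

def masked(secret: str) -> str:
    """Safe representation for build logs: never print the raw value."""
    return secret[:2] + "***" if len(secret) > 4 else "***"
```

The same principle applies whether the value comes from a dedicated vault or the CI platform's built-in store: the secret lives outside the repository and exists in the job only for its lifetime.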


Experience Really Matters - But Now You're Fighting AI Hacks

Defenders traditionally rely on understanding the timing and ordering of events. The Anthropic incident shows that AI-driven activity occurs in extremely rapid cycles. Reconnaissance, exploit refinement and privilege escalation can occur through repeated attempts that adjust based on feedback from the environment. This creates a workflow that resembles iterative code generation rather than a series of discrete intrusion stages. Professionals must now account for an adversary that can alter its approach within seconds and can test multiple variations of the same technique without the delays associated with human effort. ... The AI attacker moved across cloud systems, identity structures, application layers and internal services. It interacted fluidly with whatever surface was available. Professionals who have worked primarily within a single domain may now need broader familiarity with adjacent layers of the stack because AI-driven activity does not limit itself to the boundaries of established specializations. ... The workforce shortage in cybersecurity will continue, but the qualifications for advancement are shifting. Organizations will look for professionals who understand both the capabilities and the limitations of AI-driven offense and defense. Those who can read an AI-generated artifact, refine an automated detection workflow, or construct an updated threat model will be positioned for leadership roles.


Is vibe coding the new gateway to technical debt?

The big idea in AI-driven development is that now we can just build applications by describing them in plain English. The funny thing is, describing what an application does is one of the hardest parts of software development; it’s called requirements gathering. ... But now we are riding a vibe. A vibe, in this case, is an unwritten requirement. It is always changing—and with AI, we can keep manifesting these whims at a good clip. But while we are projecting our intentions into code that we don’t see, we are producing hidden effects that add up to masses of technical debt. Eventually, it will all come back to bite us. ... Sure, you can try using AI to fix the things that are breaking, but have you tried it? Have you ever been stuck with an AI assistant confidently running you and your code around in circles? Even with something like Gemini CLI and DevTools integration (where the AI has access to the server and client-side outputs) it can so easily descend into a maddening cycle. In the end, you are mocked by your own unwillingness to roll up your sleeves and do some work. ... If I had to choose one thing that is most compelling about AI coding, it would be the ability to quickly scale from nothing. The moment when I get a whole, functioning something based on not much more than an idea I described? That’s a real thrill. Weirdly, AI also makes me feel less alone at times, like there is another voice in the room.


How to Be a Great Data Steward: Responsibilities and Best Practices

Data is often described as “a critical organizational asset,” but without proper stewardship, it can become a liability rather than an asset. Poor data management leads to inaccurate reporting, compliance violations, and reputational damage. For example, a financial institution that fails to maintain accurate customer records risks incurring regulatory penalties and causing customer dissatisfaction. ... Effective data stewardship is guided by several foundational principles: accountability, transparency, integrity, security, and ethical use. These principles ensure that data remains accurate, secure, and ethically managed across its lifecycle. ... Data stewards can be categorized into several types: business data stewards, technical data stewards, domain or lead data stewards, and operational data stewards. Each plays a unique role in maintaining data quality and compliance in conjunction with other data management professionals, technical teams, and business stakeholders. ... Data stewardship thrives on clarity. Every data steward should have well-defined responsibilities and authority levels, and each data stewardship team should have clear boundaries and expectations identified. This includes specifying who manages which datasets, who ensures compliance, and who handles data quality issues. Clear role definitions prevent duplication of effort and ensure accountability across the organization.
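The "clear role definitions" point lends itself to a concrete artifact: a stewardship registry that records, per dataset, who owns definitions, who owns the technical plumbing, and who answers for compliance. The sketch below is a hypothetical illustration; the team names, field names, and routing rule are invented for the example, not prescribed by any stewardship framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StewardshipRecord:
    dataset: str
    business_steward: str   # owns definitions, business rules, quality issues
    technical_steward: str  # owns schemas, pipelines, access controls
    compliance_owner: str   # accountable for regulatory obligations

# Illustrative registry entry; real registries often live in a data catalog.
REGISTRY = {
    "customer_accounts": StewardshipRecord(
        dataset="customer_accounts",
        business_steward="retail-banking-team",
        technical_steward="data-platform-team",
        compliance_owner="regulatory-affairs",
    ),
}

def who_handles_quality_issue(dataset: str) -> str:
    """Route a data-quality issue to the accountable business steward."""
    return REGISTRY[dataset].business_steward
```

Writing the mapping down, in whatever tool the organization uses, is what prevents the duplication of effort the excerpt warns about: a quality issue has exactly one accountable owner.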


Time for CIOs to ratify an IT constitution

IT governance is simultaneously a massive value multiplier and a must-immediately-take-a-nap-boring topic for executives. For busy moderns, governance is as intellectually palatable as the stale cabbage on the table René Descartes once doubted. How do CIOs get key stakeholders to care passionately and appropriately about how IT decisions are made? ... Everyone agrees that one can’t have a totally centralized, my-way-or-the-highway dictatorship or a totally decentralized you-all-do-whatever-you-want, live-in-a-yurt digital commune. Has the stakeholder base become too numerous, too culturally disparate, and too attitudinally centrifugal to be governed at all? ... Has IT governance sunk to such a state of disrepair that a total rethink is necessary? I asked 30 CIOs and thought leaders what they thought about the current state of IT governance and possible paths forward. The CFO for IT at a state college in the northeast argued that if the CEO, the board of directors, and the CIO were “doing their job, a constitution would not be necessary.” The CIO at a midsize, mid-Florida city argued that writing an effective IT constitution “would be like pushing water up a wall.” ... CIOs need to have a conversation regarding IT rights, privileges, duties, and responsibilities. Are they willing to do so? ... It appears that IT governance is not a hill that CIOs are willing to expend political capital on.


Flash storage prices are surging – why auto-tiering is now essential

Across industries and use cases, a consistent pattern emerges. The majority of data becomes cold shortly after it is created. It is written once, accessed briefly, then retained for long periods without meaningful activity. Cold data does not require low latency, high IOPS, expensive endurance ratings or premium, power-intensive performance tiers. It only needs to be stored reliably at the lowest reasonable cost. Yet during the years when flash was only marginally more expensive than HDD, many organisations placed cold data on flash systems simply because the price difference felt manageable. With today’s economics, that model can no longer scale. ... The rise in ransomware attacks also helped drive flash adoption. Organisations sought faster backups, quicker restores, and higher snapshot retention. Flash delivered these benefits, but the economics are breaking under current pricing conditions. Today, the cost of flash-based backup appliances is rising, long-term retention on flash is becoming unsustainable, and maintaining deep histories on premium media no longer aligns with budget expectations. ... The current flash pricing crisis is more than a temporary spike. It signals a long-term shift in storage economics driven by accelerating AI demand, constrained supply chains, and global data growth. The all-flash mindset of the past decade is now colliding with financial realities that organisations can no longer ignore. Cold data should not be placed on expensive media. 
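The auto-tiering idea the article argues for reduces, at its simplest, to a placement policy keyed on access recency: recently touched data earns flash, aging data moves to cheaper media. The windows and tier names below are illustrative assumptions; real tiering engines also weigh IOPS, capacity pressure, and per-tier cost.

```python
from datetime import datetime, timedelta

def choose_tier(last_access: datetime, now: datetime,
                hot_window: timedelta = timedelta(days=7),
                warm_window: timedelta = timedelta(days=90)) -> str:
    """Place data on media matched to its activity.
    Window sizes are placeholder policy knobs, not recommendations."""
    age = now - last_access
    if age <= hot_window:
        return "flash"          # low latency justifies premium media
    if age <= warm_window:
        return "hdd"            # occasional access, capacity-priced media
    return "object-archive"     # cold retention at the lowest reasonable cost
```

The policy encodes the article's core observation: most data crosses from "flash" to the cheaper tiers shortly after creation, so paying flash prices for its entire retention period no longer scales.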


AI, sustainability and talent gaps reshape industrial growth

A new study by GlobalLogic, a Hitachi Group company, in partnership with HFS Research, reveals a widening divide between industrial enterprises’ ambitions and their real-world readiness for AI, sustainability, and workforce transformation. Despite strong executive push towards modernization, skills shortages, legacy systems, and misaligned priorities continue to stall progress across key industrial segments. ... The findings lay bare the scale of transition ahead: while industries recognize AI and sustainability as foundational for future competitiveness, a lack of talent and weak integration strategies are slowing measurable impact. “Industrial leaders see AI, sustainability, and talent as top priorities, yet struggle to convert these ambitions into tangible results,” said Srini Shankar, President and CEO at GlobalLogic. ... Although operational cost reduction is the top priority today, the study finds that within two years, AI adoption and operational optimization will dominate executive focus. The industrial sector is preparing for a shift from incremental improvements to deep automation and intelligence-led models. ... “Enterprises need to embed sustainability, talent, and technology transitions into both strategy and day-to-day operations,” said Josh Matthews, HFS Research. “Clear outcomes and messaging are essential to show current and future workforces that industrial organizations are shaping — not chasing — the sustainable, tech-driven future.”


When ransomware strikes, who takes the lead -- the CIO or CISO?

"[CIOs and CISOs] will probably have different priorities for when they want to do things; the CIO is going to be more concerned [about the] business side of keeping systems operational, whereas the CISO [wants to know] where is this critical data? Is it being exfiltrated? Having a good incident response plan, planning that stuff out in advance [is necessary so both parties know] what steps they're supposed to take. "The best default to contain the attack is to pull internet connectivity. You don't want to restart a system [or] shut it down, because you can lose forensic evidence. That way, if they are exfiltrating any data, that access stops, so you can begin triaging how they got in and patch that hole up. ... the first three steps come down to confirm, contain and anchor. We want to confirm that blast radius, not hypothesize or theorize what it could be, but what is it really? You'd be surprised at how many teams burn their most valuable hour debating whether it's really ransomware. "Second, contain first, communicate second. I think there's a natural [tendency for] humans to send an all-hands email out, call an emergency meeting and even notify customers. What matters most is to triage and stop the bleeding, isolate those compromised systems and cripple the bad actor's lateral movement. ... "[The best way to contain a ransomware attack] will be different for each organization depending on their architectures, controls and technology, but in general, isolate as completely as possible."


LLM vulnerability patching skills remain limited

Because the models rely on patterns they have learned, a shift in structure can break those patterns. The model may still spot something that looks like the original flaw, but the fix it proposes may no longer land in the right place. That is why a patch that looks reasonable can still fail the exploit test. The weakness remains reachable because the model addressed only part of the issue or chose the wrong line to modify. Another pattern surfaced. When a fix for an artificial variant did appear, it often came from only one model. Others failed on the same case. This shows that each artificial variant pushed the systems in different directions, and only one model at a time managed to guess a working repair. The lack of agreement across models signals that these variants exposed gaps in the patterns the systems depend on. ... OpenAI and Meta models landed behind that mark but contributed steady fixes in several scenarios. The spread shows that gains do not come from one vendor alone. The study also checked overlap. Authentic issues showed substantial agreement between models, while artificial issues showed far less. Only two issues across the entire set were patched by one model and not by any other. This suggests that combining several models adds limited coverage. ... Researchers plan to extend this work in several ways. One direction involves combining output from different LLMs or from repeated runs of the same model, giving the patching process a chance to compare options before settling on one.
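The ensemble direction the researchers mention can be pictured as a simple selection loop: collect candidate patches from several models (or repeated runs), and keep the first one that survives the exploit test. Everything in this sketch is hypothetical scaffolding: the model names are invented, and `passes_exploit_test` stands in for rebuilding the target and re-running the proof-of-concept exploit, which the study uses as ground truth.

```python
def select_patch(candidates, passes_exploit_test):
    """Pick the first candidate patch that actually closes the vulnerability.

    candidates: iterable of (model_name, patch) pairs from different
                LLMs or repeated runs of the same LLM.
    passes_exploit_test: callable that applies the patch and re-runs the
                exploit, returning True only if the weakness is unreachable.
    Returns the winning (model_name, patch) pair, or None if nothing passes.
    """
    for model_name, patch in candidates:
        if passes_exploit_test(patch):
            return model_name, patch
    return None
```

The study's finding tempers expectations for this loop: because authentic issues showed heavy overlap between models and only two issues were fixed by a single model exclusively, adding more models buys limited extra coverage, though comparing repeated runs may still help.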
