
Daily Tech Digest - October 26, 2025


Quote for the day:

"Everywhere is within walking distance if you have the time." -- Steven Wright


AI policy without proof is just politics

History shows us that regulation without verification rarely works. Imagine if Wall Street firms were allowed to audit their own books, or if pharmaceutical companies could approve their own drugs. The risks would be obvious and unacceptable. Yet, in AI today, much of the information policymakers see about model performance and safety comes straight from the companies developing those systems, leaving regulators dependent on the very firms they are meant to oversee. Self-reporting, intentionally or not, creates structural blind spots. Developers have incentives to highlight strengths and minimize weaknesses, and even honest disclosures can leave out important context. ... The first requirement is independence. Oversight must be based on information that does not come solely from the companies themselves: data that can be inspected, verified and trusted as neutral. Without that independence, even well-intentioned disclosures risk being selective or incomplete. The second requirement is continuity. AI systems evolve quickly, and their performance often shifts once they are deployed in the wild. Benchmarks conducted at launch can’t capture how models change over time, or how they behave across different languages, domains and user needs.  ... AI policy is at a crossroads. The U.S. has set bold goals, but without reliable evaluation, those goals risk becoming little more than rhetoric. Rules set the direction. Proof provides the trust.


5 ways ambitious IT pros can future-proof their tech careers in an age of AI

Successful IT chiefs are expected to be the expert resources for pioneering technology developments. In fact, Daly said the CIOs of the future will demonstrate how AI can fulfill some executive roles and responsibilities. ... David Walmsley, chief digital and technology officer at jewelry specialist Pandora, said up-and-coming IT stars take on responsibilities and seize opportunities. The disconnected technology organization of old, which relied on outsourcing arrangements for project delivery, has been replaced by a department that works closely with the business to achieve its objectives. "The days of technology leaders leaning back and saying, 'Well, which of my external providers do I blame now?' are long gone," he said. "Everyone can see that technology is a strategic lever for growing the business and helping it succeed in its mission." ... The critical skill for next-generation leaders lies not in chasing every new platform or coding language, but in cultivating the human capacities that allow you to adapt. "Those capabilities include curiosity, critical thinking, collaboration, and an understanding of human behavior," he said. "At LIS, we emphasize interdisciplinary learning precisely because technology never exists in isolation; it is always entangled with psychology, economics, ethics, and culture."


Biometrics increase integrity from age checks to agents, but not when compelled

Biometrics are anchoring trust for established but growing use cases like national IDs even as new use cases are taking off. But surveillance concerns inevitably come with increases in the collection of personal data, particularly when the collection is compelled or involuntary. ... Tech industry group the CCIA took aim at Texas’ app store level age checks, arguing the plan is bound to fail in several ways, including data privacy breaches. One of those alleged likely failures is the accuracy of facial age estimation, but the supporting stat from NIST is outdated, and the newer figure is significantly better. Automated license-plate reader maker Flock and Amazon’s Ring have partnered to share data, allowing law enforcement agencies that use Flock’s investigative platforms to request footage from homeowners. ... The growth of online interactions with credentials that are anchored with biometrics continues unabated, in the form of national ID systems, agentic AI, age checks and identity verification. Juniper Research forecasts digital identity will be an $80 billion global market by 2030, with growth driven by new regulations and credentials. ... Age checks could catalyze digital ID adoption, Luciditi CPO Dan Johnson says on the Biometric Update Podcast. He makes the case for the advantages of adding age assurance to apps by integrating a component, rather than building a standalone branded app.


Weak Data Infrastructure Keeps Most GenAI Projects From Delivering ROI

Kolbeck sees companies investing billions while overlooking adequate storage to support their AI infrastructure as one of the major mistakes corporations make. He said that oversight leads to three key failure factors — festering silos, lack of performance, and uptime dilemmas. The most critical resource for AI is training data. When companies store data across multiple silos, data scientists lack access to essential details. “Storage systems must be able to scale and provide unified access to enable an AI data lake, a centralized and efficient storage for the entire company,” he observed. ... “Early AI projects may work well, but as soon as these projects grow in size [as in more GPUs], these arrays tip over, and that’s when mission-critical workflows grind to a halt,” he said. Kolbeck explained why a scale-out architecture, rather than a scale-up approach, is the better option for handling the massive and unpredictable data demands of modern AI and ML. He cited his company’s experience in making that transition. ... “Developing and training AI technology is still a very experimental process and requires the infrastructure — including storage — to adapt quickly when data scientists develop new ideas,” Kolbeck noted. Real-time performance analytics are critical. Storage administrators need to be able to precisely identify how applications, such as training or other pipeline phases, impact the storage.


When your AI browser becomes your enemy: The Comet security disaster

Your regular Chrome or Firefox browser is basically a bouncer at a club. It shows you what's on the webpage, maybe runs some animations, but it doesn't really "understand" what it's reading. If a malicious website wants to mess with you, it has to work pretty hard — exploit some technical bug, trick you into downloading something nasty or convince you to hand over your password. AI browsers like Comet threw that bouncer out and hired an eager intern instead. This intern doesn't just look at web pages — it reads them, understands them and acts on what it reads. Sounds great, right? Except this intern can't tell when someone's giving it fake orders. ... They can actually do stuff: Regular browsers mostly just show you things. AI browsers can click buttons, fill out forms, switch between your tabs, even jump between different websites. ... They remember everything: Unlike regular browsers that forget each page when you leave, AI browsers keep track of everything you've done across your whole session. ... You trust them too much: We naturally assume our AI assistants are looking out for us. That blind trust means we're less likely to notice when something's wrong. Hackers get more time to do their dirty work because we're not watching our AI assistant as carefully as we should. They break the rules on purpose: Normal web security works by keeping websites in their own little boxes — Facebook can't mess with your Gmail, Amazon can't see your bank account.
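The "little boxes" the article describes are the browser's same-origin policy, which agentic browsers bypass when one page's content can steer actions on another. A minimal sketch of how origin isolation could be restored for an agent is below; `ActionPolicy` and its methods are invented names for illustration, not any real browser API.

```python
from urllib.parse import urlparse

class ActionPolicy:
    """Hypothetical guard: an agent may only act on the origin where the
    user's instruction was issued, plus an explicit allowlist."""

    def __init__(self, instruction_origin, allowlist=()):
        self.instruction_origin = instruction_origin
        self.allowlist = set(allowlist)

    def _origin(self, url):
        # An origin is the scheme plus host(:port), per the same-origin policy.
        p = urlparse(url)
        return f"{p.scheme}://{p.netloc}"

    def permits(self, target_url):
        origin = self._origin(target_url)
        return origin == self.instruction_origin or origin in self.allowlist

policy = ActionPolicy("https://mail.example.com")
policy.permits("https://mail.example.com/send")   # same origin: allowed
policy.permits("https://evil.example.net/exfil")  # cross-origin: blocked
```

This only illustrates the principle; a real mitigation would also have to separate trusted user instructions from untrusted page content, which is the harder part of the prompt-injection problem.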


Rewriting the Rules of Software Quality: Why Agentic QA is the Future CIOs Must Champion

From continuous deployment to AI-powered applications, software systems are more dynamic, distributed and adaptive than ever. In this changing environment, static testing frameworks are crumbling. What worked yesterday increasingly fails today, and tomorrow’s risks cannot be addressed with yesterday’s checklists. This is where agentic QA steps in, heralding a transformative approach that integrates autonomous, intelligent agents throughout the entire software lifecycle. ... What distinguishes this model isn’t just its intelligence — it’s its adaptability. In a world where AI models are themselves part of the application stack, QA must account for nondeterminism. Agentic systems are uniquely equipped to do this. When AI-driven components exhibit variable behavior based on internal learning states, traditional test-case comparisons fail for evident reasons. Agentic QA, on the other hand, thrives in uncertainty. It detects anomalies, learns from edge cases, and refines its approach continuously. ... However, as AI takes over repetitive and complex validations, it enables QA professionals to step up and evolve into curators of quality. Their role becomes more strategic: defining testing intent, ensuring AI alignment with business goals, interpreting nuanced behaviors, and upholding ethical standards. This shift calls for a cultural transformation.


AI-Powered Ransomware Is the Emerging Threat That Could Bring Down Your Organization

AI fundamentally transforms every phase of ransomware operations through several key capabilities. Enhanced reconnaissance allows malware to autonomously scan security perimeters, identify vulnerabilities, and select precise exploitation tools. This eliminates the need for human operators during initial phases, enabling attacks to spread rapidly across IT environments. Adaptive encryption techniques represent another revolutionary advancement. AI-powered ransomware can analyze system resources and data types to modify encryption algorithms dynamically, making decryption more complex. The malware can prioritize high-value targets by analyzing document content using Natural Language Processing before encryption, ensuring maximum strategic impact. Evasive tactics powered by machine learning enable ransomware to continuously modify its code and behavior patterns. This polymorphic capability makes signature-based detection methods ineffective, as the malware presents different fingerprints with each execution. AI also enables malware to track user presence and activate during off-hours to maximize damage while minimizing detection opportunities. The financial consequences of AI-powered ransomware attacks far exceed traditional threats. ... Small businesses face particularly severe consequences, with 60% of attacked companies closing permanently within six months.


When a Provider's Lights Go Out, How Can CIOs Keep Operations Going?

This may seem obvious, but a thousand companies still lost digital functionality on Monday. Why weren't they better prepared? One answer is that while redundancy isn't new, it also isn't very sexy. In a field full of innovation and growth, redundancy is about slowing down, checking your work, and taking the safest route. It's not surprising if some companies are more excited about investing in new AI capabilities than implementing failsafe protocols. Nor is it necessarily wrong. ... "It is important to invest where failure creates real risk, not just minor inconvenience, or noise," he added. This will look different for companies of different sizes, but particularly for companies within different sectors. Some industries, such as healthcare or finance, require a higher level of redundancy across the board simply because the stakes are greater; lack of access to patient records or financial information could have severe repercussions in terms of safety and public trust, which are far beyond inconvenience or frustration. ... But this isn't as simple as tracing third-party contracts, counting how often one name appears, and shifting some operations away from too-dominant providers. If an organization has partnered predominantly with one provider, it's probably for good reason. As Hitchens explained, working with a single provider can accelerate innovation and simplify management, offering visibility, native integrations and unified tooling.


Three Ways Secure Modern Networks Unlock the True Power of AI

AI is network-bound. As always-on models demand up to 100 times more compute, storage, and bandwidth, traditional networks risk becoming bottlenecks on both capacity and latency. For AI tasks that must happen instantly, like self-driving cars or automated stock trading, even tiny delays can cause problems. Modern network infrastructure needs to be more than just fast. It also needs to be safe from cyberattacks and strong enough to handle more AI growth in the future. To realize AI’s full potential, businesses must build purpose-built “AI superhighways”: secure networks designed to scale seamlessly, handling distributed AI workloads across core, cloud, and edge environments. ... The value organizations expect from AI, be it automating workflows, unlocking predictive insights, or powering new digital experiences, depends on more than just compute power or clever algorithms. Furthermore, the demand for real-time machine data from business operations to train AI models is increasing the need for more detailed and extensive networks. This, in turn, accelerates the integration of IT and OT, and expands the adoption of the Internet of Things (IoT) ... The sensitivity of AI data flows is raising the bar for security and compliance. The risks of sticking with outdated infrastructure are stark. 95% of technology leaders say a resilient network is critical to their operations, and 77% have experienced major outages due to congestion, cyberattacks, or misconfigurations.


"It’s not about security, it’s about control" – How EU governments want to encrypt their own comms, but break our private chats

In the wake of ever-larger and more frequent cyberattacks – think of Salt Typhoon in the US – encryption has become crucial to shield everyone's security, whether the threat is ID theft, scams, or risks to national security. Even the FBI urged all Americans to turn to encrypted chats. ... Law enforcement, however, often sees this layer of protection as an obstacle to their investigations, pushing for "lawful access" to encrypted data as a way to combat hideous crimes like terrorism or child abuse. That's exactly where legislation proposals like Chat Control and ProtectEU in the European bloc, or the Online Safety Act in the UK, come from. Yet, people working with encryption know that these solutions are flawed. On a technical level, experts all agree that an encryption backdoor cannot guarantee the same level of online security and privacy we have now. Is it then time to redefine what we mean when we talk about privacy? This is what's probably needed, according to Rocket.Chat's Strategic Advisor, Christian Calcagni. "We need to have a new definition of private communication, and that's a big debate. Encryption or no encryption, what could be the way?" Calcagni is, nonetheless, very critical of the current push to break encryption. He told me: "Why should the government know what I think or what I'm sharing on a personal level? We shouldn't focus only on encryption or not encryption, but on what that means for our privacy, our intimacy."

Daily Tech Digest - September 08, 2024

The hidden cost of speed

The software development engine within a company is like the power grid: it’s a given that it works, and there are no celebrations or accolades for keeping the lights on. When it fails or goes down, however, everyone’s upset and what’s left is assigning blame and determining culpability. Unfortunately, in many industries, the responsible application and development of software is not considered until there’s a problem. There is no “working well” for a developer in an ecosystem without insight and intuition as to how difficult the workload is for various projects or positions. The black and white reality is simply “Working” or “Not working, what the hell is going on, do we need to fire them, why is everything so slow lately?” This can be incredibly frustrating for developers. In my own experience, the person in the worst position is the developer brought in to clean up another developer’s mess. It’s now your responsibility not only to convince management that they need to slow down to give you time to fix things (which will stall sales), but also to architect everything, orchestrate the rollout, and coordinate with sales goals and marketing.


Tracing The Destructive Path of Ransomware's Evolution

Contemporary attackers carefully select high-value organizations and infrastructure to cripple until substantial ransoms are paid — frequently upwards of seven figures for large corporations, hospitals, pipelines, and municipalities. Present-day ransomware groups’ techniques reflect a chilling professionalization of tactics. They leverage military-grade encryption, identity-hiding cryptocurrencies, data-stealing side efforts, and penetration testing of victims before attacks to determine maximum tolerances. Hackers often gain initial entry by purchasing access to systems from underground brokers, then deploy multipart extortion schemes, including threatening distributed denial-of-service (DDoS) attacks, if demands aren’t promptly met. Ransomware perpetrators also tap advancements like artificial intelligence (AI) to accelerate attacks through malicious code generation, underground dark web communities to coordinate schemes, and initial access markets to reduce overhead. ... Ransomware groups continue to innovate their attack methods. Supply chain attacks have become increasingly common. By compromising a single software supplier, attackers can access the networks of thousands of downstream customers.


Zero-Touch Provisioning Simplifies and Augments State and Local Networks

“With zero-touch provisioning unlocking greater time efficiencies, these agencies can more optimally serve the public,” he says. “For example, research shows that shaving mere seconds off emergency response calls yields more lives saved.” Government agencies also can reach wider and broader audiences and increase constituent trust by delivering crucial food and mobile healthcare services faster. Even agencies with strong budgets can benefit from more efficient spending thanks to zero-touch provisioning, DePreta adds. “By eliminating the need for manual intervention, government agencies can optimize budgets to better serve their communities and become smarter in the way they deliver services. From public services such as mobile healthcare clinics to public safety activities such as emergency response and disaster relief, ZTP enables government agencies to do more with less,” he says. ... “You can take a couple of devices and ship them to a branch, and someone who is not necessarily a technical expert in that branch can unbox them and plug them in. You are then up and running right away,” DeBacker says.


Why employee ‘will’ can make or break transformations

Leaders who focus on making work more meaningful and expressing their appreciation inspire and motivate employees. Previous McKinsey research shows that executives at organizations that invest time and effort in changing employee mindsets from the start are four times more likely than those who don’t to say their change programs were successful. Indeed, employees notice when their bosses don’t change their own behaviors to adapt to the goals of transformation. ... The best ideas for how to implement transformation initiatives may come from frontline employees who are closest to the customer. Organizations that encourage employees to pursue innovation and continuous improvement see a higher share of employees that own initiatives or reach milestones during transformations. ... Once leaders have elevated a core group of employees to own initiatives or milestones, they should turn to empowering a broader group to serve as role models who can activate others. These change leaders—influencers, managers, and supervisors—play a visible role in shaping and amplifying the behaviors that enhance organizational performance while counteracting behaviors that get in the way of success.


Deploying digital twins: 7 challenges businesses can face and how to navigate them

An organization adopting digital twins needs to be well-networked. "The biggest roadblock to digital systems is connectivity, at the network and human levels," Thierry Klein, president of Nokia Bell Labs Solutions Research, told ZDNET. "Digital twins are most effective when multiple digital twins are integrated, but this requires collaboration among stakeholders, a robust digital network, and systems that can be connected to the digital twin." ... The ability to represent physical environments in real time also presents challenges to digital twin environments. "With digital twins, you're generally relying on your model to run parallel with some real-life physical system so you can understand certain effects that might be impacting the system," Naveen Rao, vice president of AI for Databricks, told ZDNET. ... The lack of open, interoperable data standards presents another significant roadblock. "Antiquated technology, legacy proprietary data formats, and analog processes create silos of 'dark data' -- or data that's inaccessible to teams across the asset lifecycle," Shelly Nooner, vice president of innovation and platform for Trimble, told ZDNET.


Why CEOs and Corporate Boards Can’t Afford to Get AI Governance Wrong

The first step in preparing for safe and successful AI adoption is establishing the necessary C-Suite governance structures. This needs to be a point of urgency, as far more advanced and powerful AI capabilities, including Artificial General Intelligence (AGI), where AI may be able to perform human cognitive tasks better than the smartest human being, loom on the horizon. BCG published a leadership report earlier this year entitled “Every C-Suite Member Is Now a Chief AI Officer.” ... Corporate leadership and boards must determine how best to manage the risks and opportunities presented by AI to serve its customers and to protect its stakeholders. To begin with, they must identify where management responsibility should sit, and how these responsibilities should be structured. BCG’s report states that from the CEO on down, there needs to be at minimum, “a basic understanding of GenAI, particularly with respect to security and privacy risks,” adding that business leaders “must have confidence that all decisions strike the right balance between risk and business benefit.”


Get ready for a tumultuous era of GPU cost volatility

Demand is almost certain to increase as companies continue to build AI at a rapid pace. Investment firm Mizuho has said the total market for GPUs could grow tenfold over the next five years to more than $400 billion, as businesses rush to deploy new AI applications. Supply depends on several factors that are hard to predict. They include manufacturing capacity, which is costly to scale, as well as geopolitical considerations — many GPUs are manufactured in Taiwan, whose continued independence is threatened by China. Supplies have already been scarce, with some companies reportedly waiting six months to get their hands on Nvidia’s powerful H100 chips. As businesses become more dependent on GPUs to power AI applications, these dynamics mean that they will need to get to grips with managing variable costs. ... To lock in costs, more companies may choose to manage their own GPU servers rather than renting them from cloud providers. This creates additional overhead but provides greater control and can lead to lower costs in the longer term. Companies may also buy up GPUs defensively: Even if they don’t know how they’ll use them yet, these defensive contracts can ensure they’ll have access to GPUs for future needs — and that their competitors won’t.
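The rent-versus-buy trade-off the article describes comes down to a simple break-even calculation: owning pays off once the accumulated rental savings exceed the purchase price. A minimal sketch under that assumption, with purely illustrative figures (not real market prices):

```python
def breakeven_months(purchase_cost, monthly_ownership_cost, monthly_rental_cost):
    """Months after which owning GPU servers becomes cheaper than renting.

    Returns None when renting is never more expensive per month, i.e.
    buying never pays off on cost alone.
    """
    monthly_saving = monthly_rental_cost - monthly_ownership_cost
    if monthly_saving <= 0:
        return None
    return purchase_cost / monthly_saving

# Illustrative assumption: a $250k server costing $4k/month to operate,
# versus $20k/month to rent equivalent cloud capacity.
months = breakeven_months(250_000, 4_000, 20_000)  # → 15.625 months
```

The sketch ignores depreciation, utilization, and opportunity cost, which in practice can dominate the decision; it only illustrates why longer-horizon, steady workloads push companies toward ownership.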


Optimizing Continuous Deployment at Uber: Automating Microservices in Large Monorepos

The new system, named Up CD, was designed to improve automation and safety. It is tightly integrated with Uber's internal cloud platform and observability tools, ensuring that deployments follow a standardized and repeatable process by default. It prioritized simplicity and transparency, especially in managing monorepos. One key improvement was optimizing deployments by looking at which services were affected by each commit, rather than deploying every service with every code change. This reduced unnecessary builds and gave engineers more clarity over the changes impacting their services. ... Up introduced a unified commit flow for all services, ensuring that each service progressed through a series of deployment stages, each with its own safety checks. These conditions included time delays, deployment windows, and service alerts, ensuring deployments were triggered only when safe. Each stage operated independently, allowing flexibility in customizing deployment flows while maintaining safety. This new approach reduced manual errors and provided a more structured deployment experience.
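The affected-service optimization can be sketched as a simple path-prefix match: deploy only the services whose declared source paths a commit touches. Uber's real dependency analysis is not public, so the service names and dependency table below are invented for illustration.

```python
# Hypothetical monorepo layout: each service declares the directories
# (its own code plus shared libraries) that should trigger a deploy.
SERVICE_DEPS = {
    "payments": ["services/payments/", "libs/money/"],
    "rides":    ["services/rides/", "libs/geo/"],
    "eats":     ["services/eats/", "libs/money/", "libs/geo/"],
}

def affected_services(changed_files):
    """Return only the services whose declared paths a commit touches."""
    hit = set()
    for service, prefixes in SERVICE_DEPS.items():
        if any(f.startswith(p) for f in changed_files for p in prefixes):
            hit.add(service)
    return sorted(hit)

# A commit touching the shared money library redeploys payments and eats,
# but leaves rides alone:
affected_services(["libs/money/currency.py"])  # → ["eats", "payments"]
```

In practice the dependency graph would be derived from the build system rather than hand-maintained, but the filtering principle is the same.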


Cybercriminals increasingly use legitimate software for attacks

The report underscores the growing trend of attackers adopting legitimate tools to evade security measures and deceive security personnel. These tools are used for various malicious activities, including spreading ransomware, conducting network scanning, lateral movement within networks, and establishing command-and-control (C2) operations. Among the tools identified in the report are PDQ Deploy, PSExec, Rclone, SoftPerfect, AnyDesk, ScreenConnect, and WMIC. A series of case studies detailed in the report highlights specific incidents involving these tools. Between September 2023 and August 2024, 22 posts on various criminal forums discussed or shared cracked versions of the SoftPerfect network scanner. ... Remote management and monitoring (RMM) tools like AnyDesk and ScreenConnect are also prominently featured in criminal discussions. An August 2024 post on the RAMP forum described using AnyDesk during a penetration test and recommended disabling secure logon for successful connections. Initial Access Brokers (IABs) frequently sell access to networks through these established remote management and monitoring tool connections.


Principles of Modern Data Infrastructure

Designing a modern data infrastructure to fail fast means creating systems that can quickly detect and handle failures, improving reliability and resilience. If a system goes down, most of the time, the problem is with the data layer not being able to handle the stress rather than the application compute layer. While scaling, when one or more components within the data infrastructure fail, they should fail fast and recover fast. In the meantime, since the data layer is stateful, the whole fail-and-recovery process should minimize data inconsistency as well. ... By default, databases and data stores need to be able to respond quickly to user queries under heavy throughput. Users expect a real-time or near-real-time experience from all applications. Much of the time, even a few milliseconds is too slow. For instance, a web API request may translate to one or a few queries to the primary on-disk database and then a few to even tens of operations to the in-memory data store. For each in-memory data store operation, a sub-millisecond response time is a bare necessity for an expected user experience.
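One common way to make a caller fail fast is to bound every data-layer call with a strict deadline and return a degraded answer instead of queueing behind a sick backend. A minimal sketch, assuming the query functions are stand-ins rather than any real client API:

```python
import concurrent.futures

# Shared worker pool for data-layer calls in this sketch.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def query_with_deadline(fn, timeout_s, fallback):
    """Run a data-layer call with a hard deadline; fail fast to a fallback."""
    future = _pool.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # Fail fast: the caller gets a quick, degraded answer instead of
        # stalling while the data layer struggles under load.
        return fallback

# A tight budget for the in-memory store, a looser one for the disk database:
hot = query_with_deadline(lambda: "cache-hit", timeout_s=0.001, fallback=None)
```

A production version would also cancel or shed the abandoned work and track timeout rates, since silently dropped calls can hide a failing backend.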



Quote for the day:

"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis