Daily Tech Digest - August 23, 2025


Quote for the day:

"Failure is the condiment that gives success its flavor." -- Truman Capote


Enterprise passwords becoming even easier to steal and abuse

Attackers actively target user credentials because they offer the most direct route or foothold into a targeted organization’s network. Once inside, attackers can move laterally across systems, searching for other user accounts to compromise, or they attempt to escalate their privileges and gain administrative control. This hunt for credentials extends beyond user accounts to include code repositories, where developers may have hard-coded access keys and other secrets into application source code. Attacks using valid credentials were successful 98% of the time, according to Picus Security. ... “CISOs and security teams should focus on enforcing strong, unique passwords, using MFA everywhere, managing privileged accounts rigorously and testing identity controls regularly,” Curran says. “Combined with well-tuned DLP and continuous monitoring that can detect abnormal patterns quickly, these measures can help limit the impact of stolen or cracked credentials.” Picus Security’s latest findings reveal a concerning gap between the perceived protection of security tools and their actual performance. An overall protection effectiveness score of 62% contrasts with a shockingly low 3% prevention rate for data exfiltration. “Failures in detection rule configuration, logging gaps and system integration continue to undermine visibility across security operations,” according to Picus Security.


Architecting the next decade: Enterprise architecture as a strategic force

In an age of escalating cyber threats and expanding digital footprints, security can no longer be layered on; it must be architected in from the start. With the rise of AI, IoT and even quantum computing on the horizon, the threat landscape is more dynamic than ever. Security-embedded architectures prioritize identity-first access control, continuous monitoring and zero-trust principles as baseline capabilities. ... Sustainability is no longer a side initiative; it’s becoming a first principle of enterprise architecture. As organizations face pressure from regulators, investors and customers to lower their carbon footprint, digital sustainability is gaining traction as a measurable design objective. From energy-efficient data centers to cloud optimization strategies and greener software development practices, architects are now responsible for minimizing the environmental impact of IT systems. The Green Software Foundation has emerged as a key ecosystem partner, offering measurement standards like software carbon intensity (SCI) and tooling for emissions-aware development pipelines. ... Technology leaders must now foster a culture of innovation, build interdisciplinary partnerships and enable experimentation while ensuring alignment with long-term architectural principles. They must guide the enterprise through both transformation and stability, navigating short-term pressures and long-term horizons simultaneously.


Capitalizing on Digital: Four Strategic Imperatives for Banks and Credit Unions

Modern architectures dissolve the boundary between core and digital. The digital banking solution is no longer a bolt-on to the core; the core and digital come together to form the accountholder experience. That user experience is delivered through the digital channel, but when done correctly, it’s enabled by the modern core. Among other things, the core transformation requires robust use of shared APIs, consistent data structures, and unified development teams. Leading financial institutions are coming to realize that core evaluations now must include an evaluable of their capability to enable the digital experience. Criteria like Availability, Reliability, Real-time, Speed and Security are now emerging as foundational requirements of a core to enable the digital experience. "If your core can’t keep up with your digital, you’re stuck playing catch-up forever," said Jack Henry’s Paul Wiggins, Director of Sales, Digital Engineers. ... Many institutions still operate with digital siloed in one department, while marketing, product, and operations pursue separate agendas. This leads to mismatched priorities — products that aren’t promoted effectively, campaigns that promise features operations can’t support, and technical fixes that don’t address the root cause of customer and member pain points. ... Small-business services are a case in point. Jack Henry’s Strategy Benchmark study found that 80% of CEOs plan to expand these services over the next two years. 


Bentley Systems CIO Talks Leadership Strategy and AI Adoption

The thing that’s really important for a CIO to be thinking about is that we are a microcosm for how all of the business functions are trying to execute the tactics against the strategy. What we can do across the portfolio is represent the strategy in real terms back to the business. We can say: These are all of the different places where we're thinking about investing. Does that match with the strategy we thought we were setting for ourselves? And where is there a delta and a difference? ... When I got my first CIO role, there was all of this conversation about business process. That was the part that I had to learn and figure out how to map into these broader, strategic conversations. I had my first internal IT role at Deutsche Bank, where we really talked about product model a lot -- thinking about our internal IT deliverables as products. When I moved to Lenovo, we had very rich business process and transformation conversations because we were taking the whole business through such a foundational change. I was able to put those two things together. It was a marriage of several things: running a product organization; marrying that to the classic IT way of thinking about business process; and then determining how that becomes representative to the business strategy.


What Is Active Metadata and Why Does It Matter?

Active metadata addresses the shortcomings of passive approaches by automatically updating the metadata whenever an important aspect of the information changes. Defining active metadata and understanding why it matters begins by looking at the shift in organizations’ data strategies from a focus on data acquisition to data consumption. The goal of active metadata is to promote the discoverability of information resources as they are acquired, adapted, and applied over time. ... From a data consumer’s perspective, active metadata adds depth and breadth to their perception of the data that fuels their decision-making. By highlighting connections between data elements that would otherwise be hidden, active metadata promotes logical reasoning about data assets. This is especially so when working on complex problems that involve a large number of disconnected business and technical entities.The active metadata analytics workflow orchestrates metadata management across platforms to enhance application integration, resource management, and quality monitoring. It provides a single, comprehensive snapshot of the current status of all data assets involved in business decision-making. The technology augments metadata with information gleaned from business processes and information systems. 


Godrej Enterprises CHRO on redefining digital readiness as culture, not tech

“Digital readiness at Godrej Enterprises Group is about empowering every employee to thrive in an ever-evolving landscape,” Kaur said. “It’s not just about technology adoption. It’s about building a workforce that is agile, continuously learning, and empowered to innovate.” This reframing reflects a broader trend across Indian industry, where digital transformation is no longer confined to IT departments but runs through every layer of an organisation. For Godrej Enterprises Group, this means designing a workplace where intrapreneurship is rewarded, innovation is constant, and employees are trained to think beyond immediate functions. ... “We’ve moved away from one-off training sessions to creating a dynamic ecosystem where learning is accessible, relevant, and continuous,” she said. “Learning is no longer a checkbox — it’s a shared value that energises our people every day.” This shift is underpinned by leadership development programmes and innovation platforms, ensuring that employees at every level are encouraged to experiment and share knowledge.  ... “We see digital skilling as a core business priority, not just an HR or L&D initiative,” she said. “By making digital skilling a shared responsibility, we foster a culture where learning is continuous, progress is visible, and success is celebrated across the organisation.”


AI is creeping into the Linux kernel - and official policy is needed ASAP

However, before you get too excited, he warned: "This is a great example of what LLMs are doing right now. You give it a small, well-defined task, and it goes and does it. And you notice that this patch isn't, 'Hey, LLM, go write me a driver for my new hardware.' Instead, it's very specific -- convert this specific hash to use our standard API." Levin said another AI win is that "for those of us who are not native English speakers, it also helps with writing a good commit message. It is a common issue in the kernel world where sometimes writing the commit message can be more difficult than actually writing the code change, and it definitely helps there with language barriers." ... Looking ahead, Levin suggested LLMs could be trained to become good Linux maintainer helpers: "We can teach AI about kernel-specific patterns. We show examples from our codebase of how things are done. It also means that by grounding it into our kernel code base, we can make AI explain every decision, and we can trace it to historical examples." In addition, he said the LLMs can be connected directly to the Linux kernel Git tree, so "AI can go ahead and try and learn things about the Git repo all on its own." ... This AI-enabled program automatically analyzes Linux kernel commits to determine whether they should be backported to stable kernel trees. The tool examines commit messages, code changes, and historical backporting patterns to make intelligent recommendations.


Applications and Architecture – When It’s Not Broken, Should You Try to Fix It?

No matter how reliable your application components are, they will need to be maintained, upgraded or replaced at some point. As elements in your application evolve, some will reach end of life status – for example, Redis 7.2 will reach end of life status for security updates in February 2026. Before that point, it’s necessary to assess the available options. For businesses in some sectors like financial services, running out of date and unsupported software is a potential failure for regulations on security and resilience. For example, the Payment Card Industry Data Security Standard version 4.0 enforces that teams should check all their software and hardware is supported every year; in the case of end of life software, teams must also provide a full plan for migration that will be completed within twelve months. ... For developers and software architects, understanding the role that any component plays in the overall application makes it easier to plan ahead. Even the most reliable and consistent component may need to change given outside circumstances. In the Discworld series, golems are so reliable that they become the standard for currency; at the same time, there are so many of them that any problem could affect the whole economy. When it comes to data caching, Redis has been a reliable companion for many developers. 


From cloud migration to cloud optimization

The report, based on insights from more than 2,000 IT leaders, reveals that a staggering 94% of global IT leaders struggle with cloud cost optimization. Many enterprises underestimate the complexities of managing public cloud resources and the inadvertent overspending that occurs from mismanagement, overprovisioning, or a lack of visibility into resource usage. This inefficiency goes beyond just missteps in cloud adoption. It also highlights how difficult it is to align IT cost optimization with broader business objectives. ... This growing focus sheds light on the rising importance of finops (financial operations), a practice aimed at bringing greater financial accountability to cloud spending. Adding to this complexity is the increasing adoption of artificial intelligence and automation tools. These technologies drive innovation, but they come with significant associated costs. ... The argument for greater control is not new, but it has gained renewed relevance when paired with cost optimization strategies. ... With 41% of respondents’ IT budgets still being directed to scaling cloud capabilities, it’s clear that the public cloud will remain a cornerstone of enterprise IT in the foreseeable future. Cloud services such as AI-powered automation remain integral to transformative business strategies, and public cloud infrastructure is still the preferred environment for dynamic, highly scalable workloads. Enterprises will need to make cloud deployments truly cost-effective.


The Missing Layer in AI Infrastructure: Aggregating Agentic Traffic

Software architects and engineering leaders building AI-native platforms are starting to notice familiar warning signs: sudden cost spikes on AI API bills, bots with overbroad permissions tapping into sensitive data, and a disconcerting lack of visibility or control over what these AI agents are doing. It’s a scenario reminiscent of the early days of microservices – before we had gateways and meshes to restore order – only now the "microservices" are semi-autonomous AI routines. Gartner has begun shining a spotlight on this emerging gap. ... Every major shift in software architecture eventually demands a mediation layer to restore control. When web APIs took off, API gateways became essential for managing authentication/authorization, rate limits, and policies. With microservices, service meshes emerged to govern internal traffic. Each time, the need only became clear once the pain of scale surfaced. Agentic AI is on the same path. Teams are wiring up bots and assistants that call APIs independently - great for demos ... So, what exactly is an AI Gateway? At its core, it’s a middleware component – either a proxy, service, or library – through which all AI agent requests to external services are channeled. Rather than letting each agent independently hit whatever API it wants, you route those calls via the gateway, which can then enforce policies and provide central management.



Daily Tech Digest - August 22, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


Leveraging DevOps to accelerate the delivery of intelligent and autonomous care solutions

Fast iteration and continuous delivery have become standard in industries like e-commerce and finance. Healthcare operates under different rules. Here, the consequences of technical missteps can directly affect care outcomes or compromise sensitive patient information. Even a small configuration error can delay a diagnosis or impact patient safety. That reality shifts how DevOps is applied. The focus is on building systems that behave consistently, meet compliance standards automatically, and support reliable care delivery at every step. ... In many healthcare environments, developers are held back by slow setup processes and multi-step approvals that make it harder to contribute code efficiently or with confidence. This often leads to slower cycles and fragmented focus. Modern DevOps platforms help by introducing prebuilt, compliant workflow templates, secure self-service provisioning for environments, and real-time, AI-supported code review tools. In one case, development teams streamlined dozens of custom scripts into a reusable pipeline that provisioned compliant environments automatically. The result was a noticeable reduction in setup time and greater consistency across projects. Building on this foundation, DevOps also play a vital role in development and deployment of the Machine Learning Models. 


Tackling the DevSecOps Gap in Software Understanding

The big idea in DevSecOps has always been this: shift security left, embed it early and often, and make it everyone’s responsibility. This makes DevSecOps the perfect context for addressing the software understanding gap. Why? Because the best time to capture visibility into your software’s inner workings isn’t after it’s shipped—it’s while it’s being built. ... Software Bill of Materials (SBOMs) are getting a lot of attention—and rightly so. They provide a machine-readable inventory of every component in a piece of software, down to the library level. SBOMs are a baseline requirement for software visibility, but they’re not the whole story. What we need is end-to-end traceability—from code to artifact to runtime. That includes:Component provenance: Where did this library come from, and who maintains it? Build pipelines: What tools and environments were used to compile the software? Deployment metadata: When and where was this version deployed, and under what conditions? ... Too often, the conversation around software security gets stuck on source code access. But as anyone in DevSecOps knows, access to source code alone doesn’t solve the visibility problem. You need insight into artifacts, pipelines, environment variables, configurations, and more. We’re talking about a whole-of-lifecycle approach—not a repo review.


Navigating the Legal Landscape of Generative AI: Risks for Tech Entrepreneurs

The legal framework governing generative AI is still evolving. As the technology continues to advance, the legal requirements will also change. Although the law is still playing catch-up with the technology, several jurisdictions have already implemented regulations specifically targeting AI, and others are considering similar laws. Businesses should stay informed about emerging regulations and adapt their practices accordingly. ... Several jurisdictions have already enacted laws that specifically govern the development and use of AI, and others are considering such legislation. These laws impose additional obligations on developers and users of generative AI, including with respect to permitted uses, transparency, impact assessments and prohibiting discrimination. ... In addition to AI-specific laws, traditional data privacy and security laws – including the EU General Data Protection Regulation (GDPR) and U.S. federal and state privacy laws – still govern the use of personal data in connection with generative AI. For example, under GDPR the use of personal data requires a lawful basis, such as consent or legitimate interest. In addition, many other data protection laws require companies to disclose how they use and disclose personal data, secure the data, conduct data protection impact assessments and facilitate individual rights, including the right to have certain data erased. 


Five ways OSINT helps financial institutions to fight money laundering

By drawing from public data sources available online, such as corporate registries and property ownership records, OSINT tools can provide investigators with a map of intricate corporate and criminal networks, helping them unmask UBOs. This means investigators can work more efficiently to uncover connections between people and companies that they otherwise might not have spotted. ... External intelligence can help analysts to monitor developments, so that newer forms of money laundering create fewer compliance headaches for firms. Some of the latest trends include money muling, where criminals harness channels like social media to recruit individuals to launder money through their bank accounts, and trade-based laundering, which allows bad actors to move funds across borders by exploiting international complexity. OSINT helps identify these emerging patterns, enabling earlier intervention and minimizing enforcement risks. ... When it comes to completing suspicious activity reports (SARs), many financial institutions rely on internal data, spending millions on transaction monitoring, for instance. While these investments are unquestionably necessary, external intelligence like OSINT is often neglected – despite it often being key to identifying bad actors and gaining a full picture of financial crime risk. 


The hard problem in data centres isn’t cooling or power – it’s people

Traditional infrastructure jobs no longer have the allure they once did, with Silicon Valley and startups capturing the imagination of young talent. Let’s be honest – it just isn’t seen as ‘sexy’ anymore. But while people dream about coding the next app, they forget someone has to build and maintain the physical networks that power everything. And that ‘someone’ is disappearing fast. Another factor is that the data centre sector hasn’t done a great job of telling its story. We’re seen as opaque, technical and behind closed doors. Most students don’t even know what a data centre is, and until something breaks  it doesn’t even register. That’s got to change. We need to reframe the narrative. Working in data centres isn’t about grey boxes and cabling. It’s about solving real-world problems that affect billions of people around the world, every single second of every day. ... Fixing the skills gap isn’t just about hiring more people. It’s about keeping the knowledge we already have in the industry and finding ways to pass it on. Right now, we’re on the verge of losing decades of expertise. Many of the engineers, designers and project leads who built today’s data centre infrastructure are approaching retirement. While projects operate at a huge scale and could appear exciting to new engineers, we also have inherent challenges that come with relatively new sectors. 


Multi-party computation is trending for digital ID privacy: Partisia explains why

The main idea is achieving fully decentralized data, even biometric information, giving individuals even more privacy. “We take their identity structure and we actually run the matching of the identity inside MPC,” he says. This means that neither Partisia nor the company that runs the structure has the full biometric information. They can match it without ever decrypting it, Bundgaard explains. Partisia says it’s getting close to this goal in its Japan experiment. The company has also been working on a similar goal of linking digital credentials to biometrics with U.S.-based Trust Stamp. But it is also developing other identity-related uses, such as proving age or other information. ... Multiparty computation protocols are closing that gap: Since all data is encrypted, no one learns anything they did not already know. Beyond protecting data, another advantage is that it still allows data analysts to run computations on encrypted data, according to Partisia. There may be another important role for this cryptographic technique when it comes to privacy. Blockchain and multiparty computation could potentially help lessen friction between European privacy standards, such as eIDAS and GDPR, and those of other countries. “I have one standard in Japan and I travel to Europe and there is a different standard,” says Bundgaard. 


MIT report misunderstood: Shadow AI economy booms while headlines cry failure

While headlines trumpet that “95% of generative AI pilots at companies are failing,” the report actually reveals something far more remarkable: the fastest and most successful enterprise technology adoption in corporate history is happening right under executives’ noses. ... The MIT researchers discovered what they call a “shadow AI economy” where workers use personal ChatGPT accounts, Claude subscriptions and other consumer tools to handle significant portions of their jobs. These employees aren’t just experimenting — they’re using AI “multiples times a day every day of their weekly workload,” the study found. ... Far from showing AI failure, the shadow economy reveals massive productivity gains that don’t appear in corporate metrics. Workers have solved integration challenges that stymie official initiatives, proving AI works when implemented correctly. “This shadow economy demonstrates that individuals can successfully cross the GenAI Divide when given access to flexible, responsive tools,” the report explains. Some companies have started paying attention: “Forward-thinking organizations are beginning to bridge this gap by learning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives.” The productivity gains are real and measurable, just hidden from traditional corporate accounting. 


The Price of Intelligence

Indirect prompt injection represents another significant vulnerability in LLMs. This phenomenon occurs when an LLM follows instructions embedded within the data rather than the user’s input. The implications of this vulnerability are far-reaching, potentially compromising data security, privacy, and the integrity of LLM-powered systems. At its core, indirect prompt injection exploits the LLM’s inability to consistently differentiate between content it should process passively (that is, data) and instructions it should follow. While LLMs have some inherent understanding of content boundaries based on their training, they are far from perfect. ... Jailbreaks represent another significant vulnerability in LLMs. This technique involves crafting user-controlled prompts that manipulate an LLM into violating its established guidelines, ethical constraints, or trained alignments. The implications of successful jailbreaks can potentially undermine the safety, reliability, and ethical use of AI systems. Intuitively, jailbreaks aim to narrow the gap between what the model is constrained to generate, because of factors such as alignment, and the full breadth of what it is technically able to produce. At their core, jailbreaks exploit the flexibility and contextual understanding capabilities of LLMs. While these models are typically designed with safeguards and ethical guidelines, their ability to adapt to various contexts and instructions can be turned against them.


The Strategic Transformation: When Bottom-Up Meets Top-Down Innovation

The most innovative organizations aren’t always purely top-down or bottom-up—they carefully orchestrate combinations of both. Strategic leadership provides direction and resources, while grassroots innovation offers practical insights and the capability to adapt rapidly. Chynoweth noted how strategic portfolio management helps companies “keep their investments in tech aligned to make sure they’re making the right investments.” The key is creating systems that can channel bottom-up innovations while ensuring they support the organization’s strategic objectives. Organizations that succeed in managing both top-down and bottom-up innovation typically have several characteristics. They establish clear strategic priorities from leadership while creating space for experimentation and adaptation. They implement systems for capturing and evaluating innovations regardless of their origin. And they create mechanisms for scaling successful pilots while maintaining strategic alignment. The future belongs to enterprises that can master this balance. Pure top-down enterprises will likely continue to struggle with implementation realities and changing market conditions. In contrast, pure bottom-up organizations would continue to lack the scale and coordination needed for significant impact.


Digital-first doesn’t mean disconnected for this CEO and founder

“Digital-first doesn’t mean disconnected – it means being intentional,” she said. For leaders it creates a culture where the people involved feel supported, wherever they’re working, she thinks. She adds that while many organisations found themselves in a situation where the pandemic forced them to establish a remote-first system, very few actually fully invested in making it work well. “High performance and innovation don’t happen in isolation,” said Feeney. “They happen when people feel connected, supported and inspired.” Sentiments which she explained are no longer nice to have, but are becoming a part of modern organisational infrastructure. One in which people are empowered to do their best work on their own terms. ... “One of the biggest challenges I have faced as a founder was learning to slow down, especially when eager to introduce innovation. Early on, I was keen to implement automation and technology, but I quickly realised that without reliable data and processes, these tools could not reach their full potential.” What she learned was, to do things correctly, you have to stop, review your foundations and processes and when you encounter an obstacle, deal with it, because though the stopping and starting might initially be frustrating, you can’t overestimate the importance of clean data, the right systems and personnel alignment with new tech.

Daily Tech Digest - August 21, 2025


Quote for the day:

"The master has failed more times than the beginner has even tried." -- Stephen McCranie


Ghost Assets Drain 25% of IT Budgets as ITAM Confidence Gap Widens

The survey results reveal fundamental breakdowns in communication, trust, and operational alignment that threaten both current operations and future digital transformation initiatives. ... The survey's most alarming finding centers on ghost assets. These are IT resources that continue consuming budget and creating risk while providing zero business value. The phantom resources manifest across the entire technology stack, from forgotten cloud instances to untracked SaaS subscriptions. ... The tool sprawl paradox is striking. Sixty-five percent of IT managers use six or more ITAM tools yet express confidence in their setup. Non-IT roles use fewer tools but report significantly lower integration confidence. This suggests IT teams have adapted to complexity through process workarounds rather than achieving true operational efficiency. ... "Over the next two to three years, I see this confidence gap continuing to widen," Collins said. "This is primarily fueled by the rapid acceleration of hybrid work models, mass migration to the cloud, and the burgeoning adoption of artificial intelligence, creating a perfect storm of complexity for IT asset management teams." Collins noted that the distributed workforce has shattered the traditional, centralized view of IT assets. Cloud migration introduces shadow IT, ghost assets, and uncontrolled sprawl that bypass traditional procurement channels.


Documents: The architect’s programming language

The biggest bottlenecks in the software lifecycle have nothing to do with code. They’re people problems: communication, persuasion, decision-making. So in order to make an impact, architects have to consistently make those things happen, sprint after sprint, quarter after quarter. How do you reliably get the right people in the right place, at the right time, talking about the right things? Is there a transfer protocol or infrastructure-as-code tool that works on human beings? ... A lot of programmers don’t feel confident in their writing skills, though. It’s hard to switch from something you’re experienced at, where quality speaks for itself (programming) to something you’re unfamiliar with, where quality depends on the reader’s judgment (writing). So what follows is a crash course: just enough information to help you confidently write good (even great) documents, no matter who you are. You don’t have to have an English degree, or know how to spell “idempotent,” or even write in your native language. You just have to learn a few techniques. ... The main thing you want to avoid is a giant wall of text. Often the people whose attention your document needs most are the people with the most demands on their time. If you send them a four-page essay, there’s a good chance they’ll never have the time to get through it. 


CIOs at the Crossroads of Innovation and Trust

Consulting firm McKinsey's Technology Trends Outlook 2025 paints a vivid picture: The CIO is no longer a technologist but one who writes a narrative where technology and strategy merge. Four forces together - artificial intelligence at scale, agentic AI, cloud-edge synergy and digital trust - are a perfect segue for CIOs to navigate the technology forces of the future and turn disruption into opportunities. ... As the attack surface continues to expand due to advances in AI, connected devices and cloud tech - and because the regulatory environment is still in a constant flux - achieving enterprise-level cyber resilience is critical. ... McKinsey's data indicates - and it's no revelation - a global shortage of AI, cloud and security experts. But leading companies are overcoming this bottleneck by upskilling their workers. AI copilots train employees, while digital agents handle repetitive tasks. The boundary between human and machine is blurring, and the CIO is the alchemist, creating hybrid teams that drive transformation. If there's a single plot twist for 2025, it's this: Technology innovation is assessed not by experimentation but by execution. Tech leaders have shifted from chasing shiny objects to demanding business outcomes, from adopting new platforms to aligning every digital investment with growth, efficiency and risk reduction.


Bigger And Faster Or Better And Greener? The EU Needs To Define Its Priorities For AI

Since Europe is currently not clear on its priorities for AI development, US-based Big Tech companies can use their economic and discursive power to push their own ambitions onto Europe. Through publications directly aimed at EU policy-makers, companies promote their services as if they are perfectly aligned with European values. By promising the EU can have it all — bigger, faster, greener and better AI — tech companies exploit this flexible discursive space to spuriously position themselves as “supporters” of the EU’s AI narrative. Two examples may illustrate this: OpenAI and Google. ... Big Tech’s promises to develop AI infrastructure faster while optimizing sustainability, enhancing democracy, and increasing competitiveness seem too good to be true — which in fact they are. Not surprisingly, their claims are remarkably low on details and far removed from the reality of these companies’ immense carbon emissions. Bigger and faster AI is simply incompatible with greener and better AI. And yet, one of the main reasons why Big Tech companies’ claims sound agreeable is that the EU’s AI Continent Action Plan fails to define clear conditions and set priorities in how to achieve better and greener AI. So what kind of changes does the EU AI-CAP need? First, it needs to set clear goalposts on what constitutes a democratic and responsible use of AI, even if this happens at the expense of economic competitiveness. 


Myth Or Reality: Will AI Replace Computer Programmers?

The truth is that the role of the programmer, in line with just about every other professional role, will change. Routine, low-level tasks such as customizing boilerplate code and checking for coding errors will increasingly be done by machines. But that doesn’t mean basic coding skills won’t still be important. Even if humans are using AI to create code, it’s critical that we can understand it and step in when it makes mistakes or does something dangerous. This shows that humans with coding skills will still be needed to meet the requirement of having a “human-in-the-loop”. This is essential for safe and ethical AI, even if its use is restricted to very basic tasks. This means entry-level coding jobs don’t vanish, but instead transition into roles where the ability to automate routine work and augment our skills with AI becomes the bigger factor in the success or failure of a newbie programmer. Alongside this, entirely new development roles will also emerge, including AI project management, specialists in connecting AI and legacy infrastructure, prompt engineers and model trainers. We’re also seeing the emergence of entirely new methods of developing software, using generative AI prompts alone. Recently, this has been named "vibe coding" because of the perceived lack of stress and technical complexity in relation to traditional coding.


FinOps as Code – Unlocking Cloud Cost Optimization

FinOps as Code (FaC) is the practice of applying software engineering principles, particularly those from Infrastructure as Code (IaC) to cloud financial management. It considers financial operations, such as cost management and resource allocation, as code-driven processes that can be automated, version-controlled, and collaborated on between the teams in an organization. FinOps as Code blends financial operations with cloud native practices to optimize and manage cloud spending programmatically using code. It enables FinOps principles and guidelines to be coded directly into the CI/CD pipelines. ... When you bring FinOps into your organization, you know where and how you spend your money. FinOps provides a cultural transformation to your organization where each team member is aware of how their usage of the cloud affects your final costs associated with such usage. While cloud spend is no longer merely an IT issue, you should be able to manage your cloud spend properly. ... FinOps as Code (FaC) is an emerging trend enabling the infusion of FinOps principles in the software development lifecycle using Infrastructure as Code (IaC) and automation. It helps embed cost awareness directly into the development process, encouraging collaboration between engineering and finance teams, and improving cloud resource utilization. Additionally, it also empowers your teams to take ownership of their cloud usage in the organization.


6 IT management practices certain to kill IT productivity

Eliminating multitasking is too much to shoot for, because there are, inevitably, more bits and pieces of work than there are staff to work on them. Also, the political pressure to squeeze something in usually overrules the logic of multitasking less. So instead of trying to stamp it out, attack the problem at the demand side instead of the supply side by enforcing a “Nothing-Is-Free” rule. ... Encourage a “culture of process” throughout your organization. Yes, this is just the headline, and there’s a whole lot of thought and work associated with making it real. Not everything can be reduced to an e-zine article. Sorry. ... If you hold people accountable when something goes wrong, they’ll do their best to conceal the problem from you. And the longer nobody deals with a problem, the worse it gets. ... Whenever something goes wrong, first fix the immediate problem — aka “stop the bleeding.” Then, figure out which systems and processes failed to prevent the problem and fix them so the organization is better prepared next time. And if it turns out the problem really was that someone messed up, figure out if they need better training and coaching, if they just got unlucky, if they took a calculated risk, or if they really are a problem employee you need to punish — what “holding people accountable” means in practice.


Resilience and Reinvention: How Economic Shocks Are Redefining Software Quality and DevOps

Reducing investments in QA might provide immediate financial relief, but it introduces longer-term risks. Releasing software with undetected bugs and security vulnerabilities can quickly erode customer trust and substantially increase remediation costs. History demonstrates that neglected QA efforts during financial downturns inevitably lead to higher expenses and diminished brand reputations due to subpar software releases. ... Automation plays an essential role in filling gaps caused by skills shortages. Organizations worldwide face a substantial IT skills shortage that will cost them $5.5 trillion by 2026, according to an IDC survey of North American IT leaders. ... The complexity of the modern software ecosystem magnifies the impact of economic disruptions. Delays or budget constraints in one vendor can create spillover, causing delays and complications across entire project pipelines. These interconnected dependencies magnify the importance of better operational visibility. Visibility into testing and software quality processes helps teams anticipate these ripple effects. ... Effective resilience strategies focus less on budget increases and more on strategic investment in capabilities that deliver tangible efficiency and reliability benefits. Technologies that support centralized testing, automation, and integrated quality management become critical investments rather than optional expenditures.


Current Debate: Will the Data Center of the Future Be AC or DC?

“DC power has been around in some data centers for about 20 years,” explains Peter Panfil, vice president of global power at Vertiv. “400V and 800V have been utilized in UPS for ages, but what is beginning to emerge to cope with the dynamic load shifts in AI are [new] applications of DC.” ... Several technical hurdles must be overcome before DC achieves broad adoption in the data center. The most obvious challenge is component redesign. Nearly every component – from transformers to breakers – must be re-engineered for DC operation. That places a major burden on transformer, PDU, substation, UPS, converter, regulator, and other electrical equipment suppliers. High-voltage DC also raises safety challenges. Arc suppression and fault isolation are more complex. Internal models are being devised to address this problem with solid-state circuit breakers and hybrid protection schemes. In addition, there is no universal standard for DC distribution in data centers, which complicates interoperability and certification. ... On the sustainability front, DC has a clear edge. DC power results in lower conversion losses, which equate to less wasted energy. Further, DC is more compatible with solar PV and battery storage, reducing long-term Opex and carbon costs.


Weak Passwords and Compromised Accounts: Key Findings from the Blue Report 2025

In the Blue Report 2025, Picus Labs found that password cracking attempts succeeded in 46% of tested environments, nearly doubling the success rate from last year. This sharp increase highlights a fundamental weakness in how organizations are managing – or mismanaging – their password policies. Weak passwords and outdated hashing algorithms continue to leave critical systems vulnerable to attackers using brute-force or rainbow table attacks to crack passwords and gain unauthorized access. Given that password cracking is one of the oldest and most reliably effective attack methods, this finding points to a serious issue: in their race to combat the latest, most sophisticated new breed of threats, many organizations are failing to enforce strong basic password hygiene policies while failing to adopt and integrate modern authentication practices into their defenses. ... The threat of credential abuse is both pervasive and dangerous, yet as the Blue Report 2025 highlights, organizations are still underprepared for this form of attack. And once attackers obtain valid credentials, they can easily move laterally, escalate privileges, and compromise critical systems. Infostealers and ransomware groups frequently rely on stolen credentials to spread across networks, burrowing deeper and deeper, often without triggering detection. 

Daly Tech Digest - August 20, 2025


Quote for the day:

"Real difficulties can be overcome; it is only the imaginary ones that are unconquerable." -- Theodore N. Vail


Asian Orgs Shift Cybersecurity Requirements to Suppliers

Cybersecurity audits need to move away from a yearly or quarterly exercise to continuous evaluation, says Security Scorecard's Cobb. As part of that, organizations should look to work with their suppliers to build a relationship that can help both companies be more resilient, he says. "Maybe you do an on-site visit or maybe you do a specific evidence gathering with that supplier, especially if they're a critical supplier based on their grade," Cobb says. "That security rating is a great first step for assessment, and it also will lead into further discussions with that supplier around what things can you do better." And yes, artificial intelligence (AI) is making inroads into monitoring third-party risk profiles as well. Consultancy EY imagines a future where multiple automated agents track information about suppliers and when an event — whether cyber, geopolitical, or meteorological — affects one or more supply chains, will automatically develop plans to mitigate the risk. Pointing out the repeated supply chain shocks from the pandemic, geopolitics, and climate change, EY argues that an automated system is necessary to keep up. When a chemical spill or a cybersecurity breach affects a supplier in Southeast Asia, for example, the system would track the news, predict the impact on a company's supply, and suggest alternate sources, if needed, the EY report stated.


The successes and challenges of AI agents

To really get the benefits, businesses will need to redesign the way work is done. The agent should be placed at the center of the task, with people stepping in only when human judgment is required. There is also the issue of trust. If the agent is only giving suggestions, a person can check the results. But when the agent acts directly, the risks are higher. This is where safety rules, testing systems, and clear records become important. Right now, these systems are still being built. One unexpected problem is that agents often think they are done when they are not. Humans know when a task is finished. Agents sometimes miss that. ... Today, the real barrier goes beyond just technology. It is also how people think about agents. Some overestimate what they can do; others are hesitant to try them. The truth lies in the middle. Agents are strong with goal-based and repeatable tasks. They are not ready to replace deep human thinking yet. ... Still, the direction is clear. In the next two years, agents will become normal in customer support and software development. Writing code, checking it, and merging it will become faster. Agents will handle more of these steps with less need for back-and-forth. As this grows, companies may create new roles to manage agents, needing someone to track how they are used, make sure they follow rules, and measure how much value they bring. This role could be as common as a data officer in the future.


How To Prepare Your Platform For Agentic Commerce

APIs and MCP servers are inherently more agent-friendly but less ubiquitous than websites. They expose services in a structured, scalable way that's perfect for agent consumption. The tradeoff is that you must find a way to allow verified agents to get access to your APIs. This is where some payment processing protocols can help by allowing verified agents to get access credentials that leverage your existing authentication, rate-limiting and abuse-prevention mechanisms to ensure access doesn’t lead to spam or scraping. In many cases, the best path is a hybrid approach: Expand your existing website to allow agent-compatible access and checkout while building key capabilities for agent access via APIs or MCP servers. ... Agents work best with standardized checkouts instead of needing to dodge botblockers and captchas while filling out forms via screenscraping. They need an entirely programmatic checkout process. That means you must move beyond more brittle browser autofill and instead accept tokenized payments directly via API. These tokens can carry pre-authorized payment methods such as tokenized credit cards, digital wallets (e.g., Apple Pay and PayPal), stablecoins or on-chain assets and account-to-account transfers. When combined with identity tokens, these payment tokens allow agents to present a complete, scoped credential that you can inspect and charge instantly. Think Stripe Checkout but for AI.


AI agents alone can’t be trusted in verification

One of the biggest risks comes from what’s known as compounding errors. Even a very accurate AI system – for example, 95% – becomes far less reliable when it’s chained to a series of compounding and related decisions. By the fifth hypothetical step, accuracy would drop to 77% or less. Unlike human teams, these systems don’t raise flags or signal uncertainty. That’s what makes them so risky: when they fail, they tend to do so silently and exponentially. ... This opacity is particularly dangerous in the fight against fraud, which is only getting more advanced. In 2025, fraudsters aren’t using fake passports and bad Photoshop. They’re using AI-generated identities, videos, and documents that are nearly impossible to distinguish from the real thing. Tools like Google’s Veo 3 or open-source image generators allow anyone to produce high-quality synthetic content at scale. ... Responsible and effective use of AI means using multiple models to cross-check results to avoid the domino effect of one error feeding into the next. It means assigning human reviewers to the most sensitive or high-risk cases – especially when fraud tactics evolve faster than models can be retrained. And it means having clear escalation procedures and full audit trails that can stand up to regulatory scrutiny. This hybrid model offers the best of both worlds: the speed and scale of AI, combined with the judgment and flexibility of human experts. As fraud becomes more sophisticated, this balance will be essential. 


AI in the classroom is important for real-world skills, college professors say

The agents can flag unsupported claims in students’ writing and explain why evidence is needed and recommend the use of credible sources, Luke Behnke, vice president of product management at Grammarly, said in an interview. “Colleges recognize it’s their responsibility to prepare students for the workforce, and that now includes AI literacy,” Behnke said. Universities are also implementing AI in their own learning management systems and providing students and staff access to Google’s Gemini, Microsoft’s Copilot and OpenAI’s ChatGPT. ... Cuo asks students not to simply accept whatever results advanced genAI models spit out, as they may be riddled with factual errors and hallucinations. “Students need to select and read more by themselves to create something that people don’t recognize as an AI product,” Cuo said. Some professors are trying to mitigate AI use by altering coursework and assignments, while others prefer not to use it at all, said Paul Shovlin, an assistant professor of AI and digital rhetoric at Ohio University. But students have different requirements and use AI tools for personalized learning, collaboration, and writing, as well as for coursework workflow, Shovlin said. He stressed, however, that ethical considerations, rhetorical awareness, and transparency remain important in demonstrating appropriate use.


Automation Alert Sounds as Certificates Set to Expire Faster

Decreasing the validity time for a certificate offers multiple benefits. As previous certificate revocations have demonstrated, actually revoking every bad certificate in a timely manner, across the broad ecosystem, is a challenge. Having certificates simply expire more frequently helps address that. The CA/Browser Forum also expects an ancillary benefit of "increased consistency of quality, stability and availability of certificate lifecycle management components which enable automated issuance, replacement and rotation of certificates." While such automation won't fix every ill, the forum said that "it certainly helps." ... When it comes to getting the so-called cryptographic agility needed to manage both of those requirements, many organizations say they're not yet there. "While awareness is high, execution is lagging," says a new study from market researcher Omdia. "Many organizations know they need to act but lack clear roadmaps or the internal alignment to do so." ... For managing the much shorter certificate renewal timeframe, only 19% of surveyed organizations say they're "very prepared," with 40% saying they're somewhat prepared and another 40% saying they're not very prepared, and so far continue to rely on manual processes. "Historically, organizations have been able to get by with poor certificate hygiene because cryptography was largely static," said Tim Callan


AI Data Centers Are Coming for Your Land, Water and Power

"Think of them as AI factories." But as data centers grow in size and number, often drastically changing the landscape around them, questions are looming: What are the impacts on the neighborhoods and towns where they're being built? Do they help the local economy or put a dangerous strain on the electric grid and the environment? ... As fast as the AI companies are moving, they want to be able to move even faster. Smith, in that Commerce Committee hearing, lamented that the US government needed to "streamline the federal permitting process to accelerate growth." ... Even as big tech companies invest heavily in AI, they also continue to promote their sustainability goals. Amazon, for example, aims to reach net-zero carbon emissions by 2040. Google has the same goal but states it plans to reach it 10 years earlier, by 2030. With AI's rapid advancement, experts no longer know if those climate goals are attainable, and carbon emissions are still rising. "Wanting to grow your AI at that speed and at the same time meet your climate goals are not compatible," Good says. For its Louisiana data center, Meta has "pledged to match its electricity use with 100% clean and renewable energy" and plans to "restore more water than it consumes," the Louisiana Economic Development statement reads.


Slow and Steady Security: Lessons from the Tortoise and the Hare

In security, it seems that we are constantly confronted by the next shiny object, item du jour, and/or overhyped topic. Along with this seems to come an endless supply of “experts” ready to instill fear in us around the “revolutionized threat landscape” and the “new reality” we apparently now find ourselves in and must come to terms with. Indeed, there is certainly no shortage of distractions in our field. Some of us are likely aware of and conscious of the near-constant tendency for distraction in our field. So how can we avoid falling into the trap of succumbing to the temptation and running after every distraction that comes along? Or, to pose it another way, how can we appropriately invest our time and resources in areas where we are likely to see value and return on that investment? ... All successful security teams are governed by a solid security strategy. While the strategy can be adjusted from time to time as risks and threats evolve, it shouldn’t drift wildly and certainly not in an instant. If the newest thing demands radically altering the security strategy, it’s an indicator that it may be overblown. The good news is that a well-formed security strategy can be adapted to deal with just about anything new that arises in a steady and systematic way, provided that new thing is real.


IBM and Google say scalable quantum computers could arrive this decade

Most notable advances come from qubits built with superconducting circuits, as used in IBM and Google machines. These systems must operate near absolute zero and are notoriously hard to control. Other approaches use trapped ions, neutral atoms, or photons as qubits. While these approaches offer greater inherent stability, scaling up and integrating large numbers of qubits remains a formidable practical challenge. "The costs and technical challenges of trying to scale will probably show which are more practical," said Sebastian Weidt, chief executive at Universal Quantum, a startup developing trapped ions. Weidt emphasized that government support in the coming years could play a decisive role in determining which quantum technologies prove viable, ultimately limiting the field to a handful of companies capable of bringing a system to full scale. Widespread interest in quantum computing is attracting attention from both investors and government agencies. ... These next-generation technologies are still in their early stages, though proponents argue they could eventually surpass today's quantum machines. For now, industry leaders continue refining and scaling legacy architectures developed over years of lab research.


The 6 challenges your business will face in implementing MLSecOps

ML models are often “black boxes”, even to their creators, so there’s little visibility into how they arrive at answers. For security pros, this means limited ability to audit or verify behavior – traditionally a key aspect of cybersecurity. There are ways to circumnavigate this opacity of AI and ML systems: with Trusted Execution Environments (TEEs). These are secure enclaves in which organizations can test models repeatedly in a controlled ecosystem, creating attestation data. ... Models are not static and are shaped by the data they ingest. Thus, data poisoning is a constant threat for ML models that need to be retrained. Organizations must embed automated checks into the training process to enforce a continuously secure pipeline of data. Using information from the TEE and guidelines on how models should behave, AI and ML models can be assessed for integrity and accuracy each time they are given new information. ... Risk assessment frameworks that work for traditional software will not be applicable to the changeable nature of AI and ML programs. Traditional assessments fail to account for tradeoffs specific to ML, e.g., accuracy vs fairness, security vs explainability, or transparency vs efficiency. To navigate this difficulty, businesses must be evaluating models on a case-by-case basis, looking to their mission, use case and context to weigh their risks. 

Daily Tech Digest - August 19, 2025


Quote for the day:

“A great person attracts great people and knows how to hold them together. “ -- Johann Wolfgang von Goethe



What happens when penetration testing goes virtual and gets an AI coach

Researchers from the University of Bari Aldo Moro propose using Cyber Digital Twins (CDTs) and generative AI to create realistic, interactive environments for cybersecurity education. Their framework simulates IT, OT, and IoT systems in a controlled virtual space and layers AI-driven feedback on top. The goal is to improve penetration testing skills and strengthen understanding of the full cyberattack lifecycle. At the center of the framework is the Red Team Knife (RTK), a toolkit that integrates common penetration testing tools like Nmap, theHarvester, sqlmap, and others. What makes RTK different is how it walks learners through the stages of the Cyber Kill Chain model. It prompts users to reflect on next steps, reevaluate earlier findings, and build a deeper understanding of how different phases connect. ... This setup reflects the non-linear nature of real-world penetration testing. Learners might start with a network scan, move on to exploitation, then loop back to refine reconnaissance based on new insights. RTK helps users navigate this process with suggestions that adapt to each situation. The research also connects this training approach to a broader concept called Cyber Social Security, which focuses on the intersection of human behavior, social factors, and cybersecurity. 


7 signs it’s time for a managed security service provider

When your SOC team is ignoring 300 daily alerts and manually triaging what should be automated, that’s your cue to consider an MSSP, says Toby Basalla, founder and principal data consultant at data consulting firm Synthelize. When confusion reigns, who in the SOC team knows which red flag actually means something? Plus, if you’re depending on one person to monitor traffic during off-hours, and that individual is out sick, what happens then? ... Organizations typically realize they need an MSSP when their internal team struggles to keep pace with alerts, incident response, or compliance requirements, says Ensar Seker, CISO at SOCRadar, where he specializes in threat intelligence, ransomware mitigation, and supply chain security. This vulnerability becomes particularly evident after a close call or audit finding, when gaps in visibility, threat detection, or 24/7 coverage become undeniable. ... Many smaller enterprises simply can’t afford the cost of a full-time cybersecurity staff, or even a single dedicated expert. This leaves such organizations particularly vulnerable to all types of attacks. An MSSP can significantly help such organizations by providing a full array of services, including 24/7 monitoring, threat detection, incident response, and access to a broad range of specialized security tools and expertise. “They bring economies of scale, proactive threat intelligence, and a deep understanding of best practices,” Young says.


Cyber Security Responsibilities of Roles Involved in Software Development

Building secure software is crucial as a vulnerable software would be an easy target for the cyber criminals to exploit. There are people, process and technology forming part of the software supply chain and it is very important that all of these plays a role in securing the supply chain. While process and technology play the role of enablers, it is people who should buy-in and adapt to the mindset of ensuring security in every aspect of their routine work. ... This includes developers implementing secure coding techniques, security teams identifying vulnerabilities, and everyone involved staying updated on the latest threats and best practices to prevent potential security breaches. Whatever said and done, the root cause of a vulnerability in a software ultimately boils down to people, because someone somewhere had missed something and thus a security defect creeps in to the supply chain and shows up as a vulnerability. It could be a missed requirement by the Business Analyst or a simple coding mistake by a developer. So, everyone involved in the software development right from gathering requirements to deployment of the software in production environment need to have the sense of cyber security in what they do. Even those involved in support and maintenance of software systems also has a role in keeping the software secure.


Build Boringly Reliable ai Into Your DevOps

Observability for ai is different because “correctness” isn’t binary and inputs are messy. We focus on three pillars: live service metrics, evaluation metrics (task success, hallucination rate), and lineage. The first pillar looks like any microservice: we scrape metrics and trace request/response cycles. We prefer OpenTelemetry for traces because we can tag spans with prompt IDs, model routes, and experiment flags. The benefit is obvious when a perf spike happens and you can isolate it to “experiment=prompt_v17.” ... Costs don’t explode; they creep—one verbose chain-of-thought at a time. We price every inference the same way we price a SQL query: tokens in, tokens out, latency, and downstream work. For a customer-support deflection bot, we discovered that truncating history to the last 6 messages cut average tokens by 41% with no measurable drop in solved-rate over 30 days. That was an easy win. Harder wins come from selective routing: ship easy tasks to a small, fast model; escalate only when confidence is low. ... Data quality makes or breaks ai results. Before we debate model choices, we sanitize inputs, enforce schemas, and redact PII. You don’t want a customer’s credit card to become part of your “context.” We’ve had great results with a lightweight validation layer in the request path and daily batch checks on the source corpora. 


Why Training Won’t Solve the Citizen Developer Security Problem

In most organizations, security training is a core component of cybersecurity frameworks and often a compliance requirement. Helping employees recognize and respond to cyber threats significantly reduces human error, the leading cause of security breaches. That said, traditional security training for technically inclined IT staff and developer teams is already a formidable challenge. Rolling out training for citizen developers—employees with little to no formal IT or security background— is exponentially harder for several reasons ... It’s a well-known fact: security training has always struggled to deliver lasting behavioral change. For two decades, employees have been told, “Don’t click suspicious links in emails.” Yet, click rates on phishing emails remain stubbornly high. Why? Human error is persistent, so training alone is not enough. In response, businesses are layering technology — advanced email gateways, sandboxing, Endpoint Detection and Response (EDR), and real-time URL scanning — around users to compensate for their inevitable lapses in judgment. ... Unfortunately, traditional AppSec tools fall short for no-code apps, which aren’t built line by line and rely on proprietary logic inaccessible to standard code scans. Even with access, interpreting their risks demands specialized cybersecurity expertise, rendering traditional code-scanning tools ineffective.


6 signs of a dying digital transformation

“It’s a fundamental disconnect where the technology being implemented simply isn’t delivering the promised improvements to operations, customer experience, or competitive advantage.” This indicator, he notes, often reveals itself as a growing cynicism within the organization, with teams feeling like they’re simply “doing digital” for its own sake without a clear understanding of the “why” or seeing any real positive impact. ... When users aren’t interested or feel no need to use the transformation’s new tools or applications, it indicates a disconnect between the users, their goals, and actual business outcomes, says Aparna Achanta, IBM Consulting’s cybersecurity strategist and AI governance and transformation leader. To successfully address this issue, Achanta recommends aligning digital transformation with the overall business vision, making sure that the voices of end-users and customers are being heard. ... Strong business leadership, and a willingness to admit mistakes, are essential to digital transformation success, Hochman says. “Too often, enterprises run away from failure.” He notes that such moments are actually golden opportunities to break paradigms and try new approaches. “The more failures a company speaks openly about, the more innovation occurs.” ... “Adoption is the oxygen of transformation,” he says. 


Why Master Data Management Is Even More Important Now

There is a mindset shift that must happen to get people to buy into the cost and the overhead of managing the data in a way that's going to be usable, Thompson says. “It’s knowing how to match technology up with a set of business processes, internal culture, commitment to do things properly and tie [that] to a business outcome that makes sense,” he says. “[T]he level of maturity of some good companies is bad. They’re just bad at managing their data assets.” ... “[MDM] has very real business consequences, and I think that's the part that we can all do better is to start talking about the business outcome, because these business outcomes are so serious and so easy to understand that it shouldn't be hard to get business leaders behind it,” says Thompson. “But if you try to get business leaders behind MDM, it sounds like you want to undertake a science project with their help. It’s not about the MDM, it’s about the business outcome that you can get if you do a great job at MDM.” ... In older organizations, MDM maturity tends to be unevenly distributed. The core data tends to be fairly well organized and managed, but the rest isn’t. The age-old problem of data ownership and a reticence to share data doesn’t help. “The notion of data mesh [is] I’ll manage this piece, and you manage that piece. We’ll be disconnected but we can connect, and you can use it, but don’t mess with it. It’s mine,” says Landry.


How to Future-Proof Your Data and AI Strategy

The earlier you find a software bug, the less expensive it is to fix and the less negative customer impact it has – this is a basic principle of software development. And the value of a shift-left approach becomes even more apparent when applied to data privacy in the age of AI. If you use personal information to train models and realize later that you shouldn’t have, the only solution is to roll back the model, which also rolls back the value of the system and the competitive advantage it was intended to deliver. ... Companies need a scalable approach to determine where to go deep and where to move quickly. Prioritize based on impact by applying stricter controls where AI is high-risk or high-stakes, such as projects where AI is core to the functionality of new solutions or segments of the business. Apply lighter-touch governance where risk is low and build scalable policies that align governance intensity with business context, risk appetite, and innovation goals. ... Future-proofing your data and AI strategy is more than having the right tools and processes; it’s a mindset. If your approach isn’t designed for scalability and agility, it can quickly become a source of friction. A rigid, compliance-focused model makes even the best tools feel ineffective and can result in governance being seen as a bottleneck rather than a value driver.


The Unavoidable ‘SCREAM’: Why Enterprise Architecture Must Transform for the Organization of Tomorrow

In an era where every discussion, whether personal or organizational, is steeped in the pervasive influence of AI and data, one naturally questions the true state of Enterprise Architecture (EA) within most organizations today. Too often, we observe situational chaos and a predominantly reactive posture, where EA teams find themselves supporting hasty executive decisions in a culture of order-taking. Businesses, in turn, perceive Information Technology as slow to deliver, while IT teams, grappling with a perceived lack of business understanding, struggle to demonstrate timely value. This dynamic often leads to organizations becoming vendor-driven, with core architectural management often unaddressed. Despite this, there’s no doubt that the demand for Enterprise Architecture is surging. However, the existing challenges—from the sheer breadth of required skillsets and knowledge to the overwhelming abundance of frameworks to choose from—frequently plunge EA practices into moments of SCREAM: Situational Chaotic Realities of Enterprise Architecture Management. However, among these challenges, there persists a profound desire for adaptive design and resilient enterprise architecture. Significant architectural efforts are indeed undertaken across organizations of all sizes. The equilibrium that every organization truly needs, however, often feels elusive.


Microsoft Morphs Fusion Developers To Full Stack Builders

Citizen development is a thorny subject; allowing business “laypersons” to impact the way software application code is structured, aligned and executed is an unpopular concept with command line purists who would prefer to keep the suits at arm’s length, if not further. ... The central argument from Silver and Cunningham is that it’s really tough to teach businesspeople to code and, equally tough to teach software engineers the principles of business operations. The Redmond pair suggest that Microsoft Power Platform will provide the “scaffolding” for full-stack teams to fuse (yes, okay, we’re not using that word anymore) their two previously quite separate working environments. ... To make full-stack development a reality inside any given organization, Microsoft has said that there will need to be a degree of initial investment into engineering systems and context. This, then, would be the scaffolding. Redmond suggests that new applications will emerge that are architected to support natural language development, augmentation and modification. With boundaries, safeguards and guardrails in place to oversee what AI agents can do when left in the hands of businesspeople, software systems will need to be engineered with enough meta-knowledge to understand the business context of the decisions that might be made without breaking other parts of the system. 

Daily Tech Digest - August 18, 2025


Quote for the day:

"The ladder of success is best climbed by stepping on the rungs of opportunity." -- Ayn Rand


Legacy IT Infrastructure: Not the Villain We Make It Out to Be

Most legacy infrastructure consists of tried-and-true solutions. If a business has been using a legacy system for years, it's a reliable investment. It may not be as optimal from a cost, scalability, or security perspective as a more modern alternative. But in some cases, this drawback is outweighed by the fact that — unlike a new, as-yet-unproven solution — legacy systems can be trusted to do what they claim to do because they've already been doing it for years. The fact that legacy systems have been around for a while also means that it's often easy to find engineers who know how to work with them. Hiring experts in the latest, greatest technology can be challenging, especially given the widespread IT talent shortage. But if a technology has been in widespread use for decades, IT departments don't need to look as hard to find staff qualified to support them. ... From a cost perspective, too, legacy systems have their benefits. Even if they are subject to technical debt or operational inefficiencies that increase costs, sticking with them may be a more financially sound move than undertaking a costly migration to an alternative system, which may itself present unforeseen cost drawbacks. ...  As for security, it's hard to argue that a system with inherent, incurable security flaws is worth keeping around. However, some IT systems can offer security benefits not available on more modern alternatives. 


Agentic AI promises a cybersecurity revolution — with asterisks

“If you want to remove or give agency to a platform tool to make decisions on your behalf, you have to gain a lot of trust in the system to make sure that it is acting in your best interest,” Seri says. “It can hallucinate, and you have to be vigilant in maintaining a chain of evidence between a conclusion that the system gave you and where it came from.” ... “Everyone’s creating MCP servers for their services to have AI interact with them. But an MCP at the end of the day is the same thing as an API. [Don’t make] all the same mistakes that people made when they started creating APIs ten years ago. All these authentication problems and tokens, everything that’s just API security.” ... CISOs need to immediately strap in and grapple with the implications of a technology that they do not always fully control, if for no other reason than their team members will likely turn to AI platforms to develop their security solutions. “Saying no doesn’t work. You have to say yes with guardrails,” says Mesta. At this still nascent stage of agentic AI, CISOs should ask questions, Riopel says. But he stresses that the main “question you should be asking is: How can I force multiply the output or the effectiveness of my team in a very short period of time? And by a short period of time, it’s not months; it should be days. That is the type of return that our customers, even in enterprise-type environments, are seeing.”


Zero Trust: A Strong Strategy for Secure Enterprise

Due to the increasing interconnection of operational changes affecting the financial and social health of digital enterprises, security is assuming a more prominent role in business discussions. Executive leadership is pivotal in ensuring enterprise security. It’s vital for business operations and security to be aligned and coordinated to maintain security. Data governance is integral in coordinating cross-functional activity to achieve this requirement. Executive leadership buy-in coordinates and supports security initiatives, and executive sponsorship sets the tone and provides the resources necessary for program success. As a result, security professionals are increasingly represented in board seats and C-suite positions. In public acknowledgment of this responsibility, executive leadership is increasingly held accountable for security breaches, with some being found personally liable for negligence. Today, enterprise security is the responsibility of multiple teams. IT infrastructure, IT enterprise, information security, product teams, and cloud teams work together in functional unity but require a sponsor for the security program. Zero trust security complements operations due to its strict role definition, process mapping, and monitoring practices, making compliance more manageable and automatable. Whatever the region, the trend is toward increased reporting and compliance. As a result, data governance and security are closely intertwined.


The Role of Open Source in Democratizing Data

Every organization uses a unique mix of tools, from mainstream platforms such as Salesforce to industry-specific applications that only a handful of companies use. Traditional vendors can't economically justify building connectors for niche tools that might only have 100 users globally. This is where open source fundamentally changes the game. The math that doesn't work for proprietary vendors, where each connector needs to generate significant revenue, becomes irrelevant when the users themselves are the builders. ... The truth about AI is that it isn’t about using the best LLMs or the most powerful GPUs. The real truth is that AI is only as good as the data it ingests. I've seen Fortune 500 companies with data locked in legacy ERPs from the 1990s, custom-built internal tools, and regional systems that no vendor supports. This data, often containing decades of business intelligence, remains trapped and unusable for AI training. Long-tail connectors change this equation entirely. When the community can build connectors for any system, no matter how obscure, decades of insights can be unlocked and unleashed. This matters enormously for AI readiness. Training effective models requires real data context, not a selected subset from cloud native systems incorporated just 10 years ago. Companies that can integrate their entire data estate, including legacy systems, gain massive advantages. More data fed into AI leads to better results.


7 Terrifying AI Risks That Could Change The World

Operating generative AI language models requires huge amounts of compute power. This is provided by vast data centers that burn through energy at rates comparable to small nations, creating poisonous emissions and noise pollution. They consume massive amounts of water at a time when water scarcity is increasingly a concern. Critics of the idea that the benefits of AI are outweighed by the environmental harm it causes often believe that this damage will be offset by efficiencies that AI will create. ... The threat that AI poses to privacy is at the root of this one. With its ability to capture and process vast quantities of personal information, there’s no way to predict how much it might know about our lives in just a few short years. Employers increasingly monitoring and analyzing worker activity, the growing number of AI-enabled cameras on our devices, and in our streets, vehicles and homes, and police forces rolling out facial-recognition technology, all raise anxiety that soon no corner will be safe from prying AIs. ... AI enables and accelerates the spread of misinformation, making it quicker and easier to disseminate, more convincing, and harder to detect from Deepfake videos of world leaders saying or doing things that never happened, to conspiracy theories flooding social media in the form of stories and images designed to go viral and cause disruption. 


Quality Mindset: Why Software Testing Starts at Planning

In many organizations, quality is still siloed, handed off to QA or engineering teams late in the process. But high-performing companies treat quality as a shared responsibility. The business, product, development, QA, release, and operations teams all collaborate to define what "good" looks like. This culture of shared ownership drives better business outcomes. It reduces rework, shortens release cycles, and improves time to market. More importantly, it fosters alignment between technical teams and business stakeholders, ensuring that software investments deliver measurable value. ... A strong quality strategy delivers measurable benefits across the entire enterprise. When teams focus on building quality into every stage of the development process, they spend less time fixing bugs and more time delivering innovation. This shift enables faster time to market and allows organizations to respond more quickly to changing customer needs. The impact goes far beyond the development team. Fewer defects lead to a better customer experience, resulting in higher satisfaction and improved retention. At the same time, a focus on quality reduces the total cost of ownership by minimizing rework, preventing incidents, and ensuring more predictable delivery cycles. Confident in their processes and tools, teams gain the agility to release more frequently without the fear of failure. 


Is “Service as Software” Going to Bring Down People Costs?

Tiwary, formerly of Barracuda Networks and now a venture principal and board member, described the phenomenon as “Service as Software” — a flip of the familiar SaaS acronym that points to a fundamental shift. Instead of hiring more humans to deliver incremental services, organizations are looking at whether AI can deliver those same services as software: infinitely scalable, lower cost, always on. ... Yes, “Service as Software” is a clever phrase, but Hoff bristles at the way “agentic AI” is invoked as if it’s already a settled, mature category. He reminds us that this isn’t some radical new direction — we’ve been on the automation journey for decades, from the codification of security to the rise of cloud-based SOC tooling. GenAI is an iteration, not a revolution. And with each iteration comes risk. Automation without full agency can create as many headaches as it solves. Hiring people who understand how to wield GenAI responsibly may actually increase costs — try finding someone who can wrangle KQL, no-code workflows, and privileged AI swarms without commanding a premium salary. ... The future of “Service as Software” won’t be defined by clever turns of phrase or venture funding announcements. It will be defined by the daily grind of adoption, iteration and timing. AI will replace people in some functions. 


Zero-Downtime Critical Cloud Infrastructure Upgrades at Scale

The requirement for performance testing is mandatory when your system handles critical traffic flow. The first step of every upgrade requires you to collect baseline performance data while performing detailed stress tests that replicate actual workload scenarios. The testing process should include both typical happy path executions and edge cases along with peak traffic conditions and failure scenarios to detect performance bottlenecks. ... Every organization should create formal rollback procedures. A defined rollback approach must accompany all migration and upgrade operations regardless of their future utilization plans. Such a system creates a one-way entry system without any exit plan which puts you at risk. The rollback procedures need proper documentation and validation and should sometimes undergo independent testing. ... Never add any additional improvements during upgrades or migrations – not even a single log line. This discipline might seem excessive, but it's crucial for maintaining clarity during troubleshooting. Migrate the system exactly as it is, then tackle improvements in a separate, subsequent deployment. ... The successful implementation of zero-downtime upgrades at scale needs more than technical skills because it requires systematic preparation and clear communication together with experience-based understanding of potential issues.


The Human Side of AI Governance: Using SCARF to Navigate Digital Transformation

Developed by David Rock in 2008, the SCARF model provides a comprehensive framework for understanding human social behavior through five critical domains that trigger either threat or reward responses in the brain. These domains encompass Status (our perceived importance relative to others), Certainty (our ability to predict future outcomes), Autonomy (our sense of control over events), Relatedness (our sense of safety and connection with others), and Fairness (our perception of equitable treatment). The significance of this framework lies in its neurological foundation. These five social domains activate the same neural pathways that govern our physical survival responses, which explains why perceived social threats can generate reactions as intense as those triggered by physical danger. ... As AI systems become embedded in daily workflows, governance frameworks must actively monitor and support the evolving human-AI relationships. Organizations can create mechanisms for publicly recognizing successful human-AI collaborations while implementing regular “performance reviews” that explain how AI decision-making evolves. Establish clear protocols for human override capabilities, foster a team identity that includes AI as a valued contributor, and conduct regular bias audits to ensure equitable AI performance across different user groups.


How security teams are putting AI to work right now

Security teams are used to drowning in alerts. Most are false positives, some are low risk, only a few matter. AI is helping to cut through this mess. Vendors have been building machine learning models to sort and score alerts. These tools learn over time which signals matter and which can be ignored. When tuned well, they can bring alert volumes down by more than half. That gives analysts more time to look into real threats. GenAI adds something new. Instead of just ranking alerts, some tools now summarize what happened and suggest next steps. One prompt might show an analyst what an attacker did, which systems were touched, and whether data was exfiltrated. This can save time, especially for newer analysts. ... “Humans are still an important part of the process. Analysts provide feedback to the AI so that it continues to improve, share environmental-specific insights, maintain continuous oversight, and handle things AI can’t deal with today,” said Tom Findling, CEO of Conifers. “CISOs should start by targeting areas that consume the most resources or carry the highest risk, while creating a feedback loop that lets analysts guide how the system evolves.” ... Entry-level analysts may no longer spend all day clicking through dashboards. Instead, they might focus on verifying AI suggestions and tuning the system.