Daily Tech Digest - August 20, 2025


Quote for the day:

"Real difficulties can be overcome; it is only the imaginary ones that are unconquerable." -- Theodore N. Vail


Asian Orgs Shift Cybersecurity Requirements to Suppliers

Cybersecurity audits need to move away from a yearly or quarterly exercise to continuous evaluation, says SecurityScorecard's Cobb. As part of that, organizations should look to work with their suppliers to build a relationship that can help both companies be more resilient, he says. "Maybe you do an on-site visit or maybe you do a specific evidence gathering with that supplier, especially if they're a critical supplier based on their grade," Cobb says. "That security rating is a great first step for assessment, and it also will lead into further discussions with that supplier around what things can you do better." And yes, artificial intelligence (AI) is making inroads into monitoring third-party risk profiles as well. Consultancy EY imagines a future where multiple automated agents track information about suppliers and, when an event — whether cyber, geopolitical, or meteorological — affects one or more supply chains, automatically develop plans to mitigate the risk. Pointing out the repeated supply chain shocks from the pandemic, geopolitics, and climate change, EY argues that an automated system is necessary to keep up. When a chemical spill or a cybersecurity breach affects a supplier in Southeast Asia, for example, the system would track the news, predict the impact on a company's supply, and suggest alternate sources, if needed, the EY report stated.


The successes and challenges of AI agents

To really get the benefits, businesses will need to redesign the way work is done. The agent should be placed at the center of the task, with people stepping in only when human judgment is required. There is also the issue of trust. If the agent is only giving suggestions, a person can check the results. But when the agent acts directly, the risks are higher. This is where safety rules, testing systems, and clear records become important. Right now, these systems are still being built. One unexpected problem is that agents often think they are done when they are not. Humans know when a task is finished. Agents sometimes miss that. ... Today, the real barrier goes beyond just technology. It is also how people think about agents. Some overestimate what they can do; others are hesitant to try them. The truth lies in the middle. Agents are strong with goal-based and repeatable tasks. They are not ready to replace deep human thinking yet. ... Still, the direction is clear. In the next two years, agents will become normal in customer support and software development. Writing code, checking it, and merging it will become faster. Agents will handle more of these steps with less need for back-and-forth. As this grows, companies may create new roles to manage agents, needing someone to track how they are used, make sure they follow rules, and measure how much value they bring. This role could be as common as a data officer in the future.


How To Prepare Your Platform For Agentic Commerce

APIs and MCP servers are inherently more agent-friendly but less ubiquitous than websites. They expose services in a structured, scalable way that's perfect for agent consumption. The tradeoff is that you must find a way to allow verified agents to get access to your APIs. This is where some payment processing protocols can help by allowing verified agents to get access credentials that leverage your existing authentication, rate-limiting and abuse-prevention mechanisms to ensure access doesn’t lead to spam or scraping. In many cases, the best path is a hybrid approach: Expand your existing website to allow agent-compatible access and checkout while building key capabilities for agent access via APIs or MCP servers. ... Agents work best with standardized checkouts instead of needing to dodge bot blockers and captchas while filling out forms via screen scraping. They need an entirely programmatic checkout process. That means you must move beyond brittle browser autofill and instead accept tokenized payments directly via API. These tokens can carry pre-authorized payment methods such as tokenized credit cards, digital wallets (e.g., Apple Pay and PayPal), stablecoins or on-chain assets and account-to-account transfers. When combined with identity tokens, these payment tokens allow agents to present a complete, scoped credential that you can inspect and charge instantly. Think Stripe Checkout but for AI.
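A programmatic checkout request of the kind described above might look like the following sketch. The payload shape, field names ("agent_identity", "tokenized_card", "scope"), and token values are all illustrative assumptions, not any real processor's API; the point is that the agent presents a scoped identity token plus a pre-authorized payment token in a single structured call, with no forms or captchas involved.

```python
import json

# Hypothetical shape of an agent checkout request: identity token plus
# tokenized payment method, inspectable and chargeable in one call.
def build_agent_checkout(cart_id: str, identity_token: str,
                         payment_token: str, amount_cents: int) -> str:
    payload = {
        "cart_id": cart_id,
        "agent_identity": identity_token,  # verifies who the agent acts for
        "payment": {
            "type": "tokenized_card",      # could also be a wallet or stablecoin
            "token": payment_token,        # pre-authorized, single-use
        },
        "amount_cents": amount_cents,
        "scope": "single_purchase",        # limits what the credential allows
    }
    return json.dumps(payload)

req = json.loads(build_agent_checkout("cart_42", "idt_abc", "tok_xyz", 1999))
print(req["payment"]["type"])  # tokenized_card
```

The merchant side would verify the identity token, check the scope, and charge the payment token directly, which is what lets it drop captchas and form-based flows for verified agents.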


AI agents alone can’t be trusted in verification

One of the biggest risks comes from what’s known as compounding errors. Even a very accurate AI system – for example, 95% – becomes far less reliable when it’s chained to a series of compounding and related decisions. By the fifth hypothetical step, accuracy would drop to 77% or less. Unlike human teams, these systems don’t raise flags or signal uncertainty. That’s what makes them so risky: when they fail, they tend to do so silently and exponentially. ... This opacity is particularly dangerous in the fight against fraud, which is only getting more advanced. In 2025, fraudsters aren’t using fake passports and bad Photoshop. They’re using AI-generated identities, videos, and documents that are nearly impossible to distinguish from the real thing. Tools like Google’s Veo 3 or open-source image generators allow anyone to produce high-quality synthetic content at scale. ... Responsible and effective use of AI means using multiple models to cross-check results to avoid the domino effect of one error feeding into the next. It means assigning human reviewers to the most sensitive or high-risk cases – especially when fraud tactics evolve faster than models can be retrained. And it means having clear escalation procedures and full audit trails that can stand up to regulatory scrutiny. This hybrid model offers the best of both worlds: the speed and scale of AI, combined with the judgment and flexibility of human experts. As fraud becomes more sophisticated, this balance will be essential. 


AI in the classroom is important for real-world skills, college professors say

The agents can flag unsupported claims in students’ writing, explain why evidence is needed, and recommend credible sources, Luke Behnke, vice president of product management at Grammarly, said in an interview. “Colleges recognize it’s their responsibility to prepare students for the workforce, and that now includes AI literacy,” Behnke said. Universities are also implementing AI in their own learning management systems and providing students and staff access to Google’s Gemini, Microsoft’s Copilot and OpenAI’s ChatGPT. ... Cuo asks students not to simply accept whatever results advanced genAI models spit out, as they may be riddled with factual errors and hallucinations. “Students need to select and read more by themselves to create something that people don’t recognize as an AI product,” Cuo said. Some professors are trying to mitigate AI use by altering coursework and assignments, while others prefer not to use it at all, said Paul Shovlin, an assistant professor of AI and digital rhetoric at Ohio University. But students have different requirements and use AI tools for personalized learning, collaboration, and writing, as well as for coursework workflow, Shovlin said. He stressed, however, that ethical considerations, rhetorical awareness, and transparency remain important in demonstrating appropriate use.


Automation Alert Sounds as Certificates Set to Expire Faster

Decreasing the validity time for a certificate offers multiple benefits. As previous certificate revocations have demonstrated, actually revoking every bad certificate in a timely manner, across the broad ecosystem, is a challenge. Having certificates simply expire more frequently helps address that. The CA/Browser Forum also expects an ancillary benefit of "increased consistency of quality, stability and availability of certificate lifecycle management components which enable automated issuance, replacement and rotation of certificates." While such automation won't fix every ill, the forum said that "it certainly helps." ... When it comes to getting the so-called cryptographic agility needed to manage both of those requirements, many organizations say they're not yet there. "While awareness is high, execution is lagging," says a new study from market researcher Omdia. "Many organizations know they need to act but lack clear roadmaps or the internal alignment to do so." ... For managing the much shorter certificate renewal timeframe, only 19% of surveyed organizations say they're "very prepared," with 40% saying they're somewhat prepared and another 40% saying they're not very prepared, and so far continue to rely on manual processes. "Historically, organizations have been able to get by with poor certificate hygiene because cryptography was largely static," said Tim Callan.


AI Data Centers Are Coming for Your Land, Water and Power

"Think of them as AI factories." But as data centers grow in size and number, often drastically changing the landscape around them, questions are looming: What are the impacts on the neighborhoods and towns where they're being built? Do they help the local economy or put a dangerous strain on the electric grid and the environment? ... As fast as the AI companies are moving, they want to be able to move even faster. Smith, in that Commerce Committee hearing, lamented that the US government needed to "streamline the federal permitting process to accelerate growth." ... Even as big tech companies invest heavily in AI, they also continue to promote their sustainability goals. Amazon, for example, aims to reach net-zero carbon emissions by 2040. Google has the same goal but states it plans to reach it 10 years earlier, by 2030. With AI's rapid advancement, experts no longer know if those climate goals are attainable, and carbon emissions are still rising. "Wanting to grow your AI at that speed and at the same time meet your climate goals are not compatible," Good says. For its Louisiana data center, Meta has "pledged to match its electricity use with 100% clean and renewable energy" and plans to "restore more water than it consumes," the Louisiana Economic Development statement reads.


Slow and Steady Security: Lessons from the Tortoise and the Hare

In security, it seems that we are constantly confronted by the next shiny object, item du jour, and/or overhyped topic. Along with this seems to come an endless supply of “experts” ready to instill fear in us around the “revolutionized threat landscape” and the “new reality” we apparently now find ourselves in and must come to terms with. Indeed, there is certainly no shortage of distractions in our field. Some of us are conscious of this near-constant tendency toward distraction. So how can we avoid the trap of running after every distraction that comes along? Or, to pose it another way, how can we appropriately invest our time and resources in areas where we are likely to see value and return on that investment? ... All successful security teams are governed by a solid security strategy. While the strategy can be adjusted from time to time as risks and threats evolve, it shouldn’t drift wildly and certainly not in an instant. If the newest thing demands radically altering the security strategy, it’s an indicator that it may be overblown. The good news is that a well-formed security strategy can be adapted to deal with just about anything new that arises in a steady and systematic way, provided that new thing is real.


IBM and Google say scalable quantum computers could arrive this decade

Most notable advances come from qubits built with superconducting circuits, as used in IBM and Google machines. These systems must operate near absolute zero and are notoriously hard to control. Other approaches use trapped ions, neutral atoms, or photons as qubits. While these approaches offer greater inherent stability, scaling up and integrating large numbers of qubits remains a formidable practical challenge. "The costs and technical challenges of trying to scale will probably show which are more practical," said Sebastian Weidt, chief executive at Universal Quantum, a startup developing trapped ions. Weidt emphasized that government support in the coming years could play a decisive role in determining which quantum technologies prove viable, ultimately limiting the field to a handful of companies capable of bringing a system to full scale. Widespread interest in quantum computing is attracting attention from both investors and government agencies. ... These next-generation technologies are still in their early stages, though proponents argue they could eventually surpass today's quantum machines. For now, industry leaders continue refining and scaling legacy architectures developed over years of lab research.


The 6 challenges your business will face in implementing MLSecOps

ML models are often “black boxes”, even to their creators, so there’s little visibility into how they arrive at answers. For security pros, this means limited ability to audit or verify behavior – traditionally a key aspect of cybersecurity. There are ways to work around the opacity of AI and ML systems: with Trusted Execution Environments (TEEs). These are secure enclaves in which organizations can test models repeatedly in a controlled ecosystem, creating attestation data. ... Models are not static and are shaped by the data they ingest. Thus, data poisoning is a constant threat for ML models that need to be retrained. Organizations must embed automated checks into the training process to enforce a continuously secure pipeline of data. Using information from the TEE and guidelines on how models should behave, AI and ML models can be assessed for integrity and accuracy each time they are given new information. ... Risk assessment frameworks that work for traditional software will not be applicable to the changeable nature of AI and ML programs. Traditional assessments fail to account for tradeoffs specific to ML, e.g., accuracy vs fairness, security vs explainability, or transparency vs efficiency. To navigate this difficulty, businesses must evaluate models on a case-by-case basis, looking to their mission, use case and context to weigh their risks.
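An automated pre-training check of the kind described above can be as simple as a gate that rejects batches failing schema or distribution tests before they reach the retraining pipeline. The field names, allowed labels, and 10-point drift threshold below are illustrative assumptions for a binary classifier, not a standard:

```python
# Illustrative pre-training gate against poisoned or malformed data.
EXPECTED_FIELDS = {"feature_a", "feature_b", "label"}
ALLOWED_LABELS = {0, 1}
MAX_LABEL_SHIFT = 0.10  # flag if positive-label rate moves >10 points

def check_batch(rows: list[dict], baseline_pos_rate: float) -> list[str]:
    """Return a list of issues; an empty list means the batch may proceed."""
    issues = []
    for i, row in enumerate(rows):
        if set(row) != EXPECTED_FIELDS:
            issues.append(f"row {i}: schema mismatch")
        elif row["label"] not in ALLOWED_LABELS:
            issues.append(f"row {i}: invalid label")
    labels = [r["label"] for r in rows if r.get("label") in ALLOWED_LABELS]
    if labels:
        pos_rate = sum(labels) / len(labels)
        if abs(pos_rate - baseline_pos_rate) > MAX_LABEL_SHIFT:
            issues.append("label distribution shift -- possible poisoning")
    return issues

batch = [{"feature_a": 1.0, "feature_b": 2.0, "label": 1}] * 9 + \
        [{"feature_a": 0.5, "feature_b": 1.5, "label": 0}]
print(check_batch(batch, baseline_pos_rate=0.5))
```

Real pipelines would add per-feature range and drift statistics and log every rejected batch, feeding the attestation record the TEE-based approach relies on.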

Daily Tech Digest - August 19, 2025


Quote for the day:

"A great person attracts great people and knows how to hold them together." -- Johann Wolfgang von Goethe



What happens when penetration testing goes virtual and gets an AI coach

Researchers from the University of Bari Aldo Moro propose using Cyber Digital Twins (CDTs) and generative AI to create realistic, interactive environments for cybersecurity education. Their framework simulates IT, OT, and IoT systems in a controlled virtual space and layers AI-driven feedback on top. The goal is to improve penetration testing skills and strengthen understanding of the full cyberattack lifecycle. At the center of the framework is the Red Team Knife (RTK), a toolkit that integrates common penetration testing tools like Nmap, theHarvester, sqlmap, and others. What makes RTK different is how it walks learners through the stages of the Cyber Kill Chain model. It prompts users to reflect on next steps, reevaluate earlier findings, and build a deeper understanding of how different phases connect. ... This setup reflects the non-linear nature of real-world penetration testing. Learners might start with a network scan, move on to exploitation, then loop back to refine reconnaissance based on new insights. RTK helps users navigate this process with suggestions that adapt to each situation. The research also connects this training approach to a broader concept called Cyber Social Security, which focuses on the intersection of human behavior, social factors, and cybersecurity. 


7 signs it’s time for a managed security service provider

When your SOC team is ignoring 300 daily alerts and manually triaging what should be automated, that’s your cue to consider an MSSP, says Toby Basalla, founder and principal data consultant at data consulting firm Synthelize. When confusion reigns, who in the SOC team knows which red flag actually means something? Plus, if you’re depending on one person to monitor traffic during off-hours, and that individual is out sick, what happens then? ... Organizations typically realize they need an MSSP when their internal team struggles to keep pace with alerts, incident response, or compliance requirements, says Ensar Seker, CISO at SOCRadar, where he specializes in threat intelligence, ransomware mitigation, and supply chain security. This vulnerability becomes particularly evident after a close call or audit finding, when gaps in visibility, threat detection, or 24/7 coverage become undeniable. ... Many smaller enterprises simply can’t afford the cost of a full-time cybersecurity staff, or even a single dedicated expert. This leaves such organizations particularly vulnerable to all types of attacks. An MSSP can significantly help such organizations by providing a full array of services, including 24/7 monitoring, threat detection, incident response, and access to a broad range of specialized security tools and expertise. “They bring economies of scale, proactive threat intelligence, and a deep understanding of best practices,” Young says.


Cyber Security Responsibilities of Roles Involved in Software Development

Building secure software is crucial, as vulnerable software is an easy target for cyber criminals to exploit. People, process, and technology all form part of the software supply chain, and it is important that each plays a role in securing it. While process and technology act as enablers, it is people who must buy in and adopt the mindset of ensuring security in every aspect of their routine work. ... This includes developers implementing secure coding techniques, security teams identifying vulnerabilities, and everyone involved staying updated on the latest threats and best practices to prevent potential security breaches. Ultimately, the root cause of a vulnerability in software comes down to people: someone somewhere missed something, and a security defect crept into the supply chain and surfaced as a vulnerability. It could be a missed requirement by the business analyst or a simple coding mistake by a developer. So everyone involved in software development, from gathering requirements to deploying the software in production, needs to bring a sense of cyber security to what they do. Even those involved in support and maintenance of software systems have a role in keeping the software secure.


Build Boringly Reliable AI Into Your DevOps

Observability for AI is different because “correctness” isn’t binary and inputs are messy. We focus on three pillars: live service metrics, evaluation metrics (task success, hallucination rate), and lineage. The first pillar looks like any microservice: we scrape metrics and trace request/response cycles. We prefer OpenTelemetry for traces because we can tag spans with prompt IDs, model routes, and experiment flags. The benefit is obvious when a perf spike happens and you can isolate it to “experiment=prompt_v17.” ... Costs don’t explode; they creep, one verbose chain-of-thought at a time. We price every inference the same way we price a SQL query: tokens in, tokens out, latency, and downstream work. For a customer-support deflection bot, we discovered that truncating history to the last 6 messages cut average tokens by 41% with no measurable drop in solved-rate over 30 days. That was an easy win. Harder wins come from selective routing: ship easy tasks to a small, fast model; escalate only when confidence is low. ... Data quality makes or breaks AI results. Before we debate model choices, we sanitize inputs, enforce schemas, and redact PII. You don’t want a customer’s credit card to become part of your “context.” We’ve had great results with a lightweight validation layer in the request path and daily batch checks on the source corpora.
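The history-truncation win described above can be sketched in a few lines: keep only the last N messages of the conversation and price every call as tokens in and tokens out. The 4-characters-per-token heuristic and the per-token price below are illustrative assumptions, not real tokenizer or vendor figures:

```python
PRICE_PER_1K_INPUT_TOKENS = 0.0005  # illustrative price, not a vendor rate
HISTORY_WINDOW = 6                   # the "last 6 messages" cutoff

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)    # rough heuristic, not a real tokenizer

def build_context(history: list[str]) -> tuple[list[str], float]:
    """Truncate history to the window and estimate the input-token cost."""
    window = history[-HISTORY_WINDOW:]
    tokens = sum(estimate_tokens(m) for m in window)
    cost = tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
    return window, cost

history = [f"message {i}: " + "x" * 200 for i in range(20)]
window, cost = build_context(history)
print(len(window))  # 6
```

Logging the estimated cost per request alongside the trace span is what turns creeping token spend into something a dashboard can catch.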


Why Training Won’t Solve the Citizen Developer Security Problem

In most organizations, security training is a core component of cybersecurity frameworks and often a compliance requirement. Helping employees recognize and respond to cyber threats significantly reduces human error, the leading cause of security breaches. That said, traditional security training for technically inclined IT staff and developer teams is already a formidable challenge. Rolling out training for citizen developers — employees with little to no formal IT or security background — is exponentially harder for several reasons ... It’s a well-known fact: security training has always struggled to deliver lasting behavioral change. For two decades, employees have been told, “Don’t click suspicious links in emails.” Yet, click rates on phishing emails remain stubbornly high. Why? Human error is persistent, so training alone is not enough. In response, businesses are layering technology — advanced email gateways, sandboxing, Endpoint Detection and Response (EDR), and real-time URL scanning — around users to compensate for their inevitable lapses in judgment. ... Unfortunately, traditional AppSec tools fall short for no-code apps, which aren’t built line by line and rely on proprietary logic inaccessible to standard code scans. Even with access, interpreting their risks demands specialized cybersecurity expertise, rendering traditional code-scanning tools ineffective.


6 signs of a dying digital transformation

“It’s a fundamental disconnect where the technology being implemented simply isn’t delivering the promised improvements to operations, customer experience, or competitive advantage.” This indicator, he notes, often reveals itself as a growing cynicism within the organization, with teams feeling like they’re simply “doing digital” for its own sake without a clear understanding of the “why” or seeing any real positive impact. ... When users aren’t interested or feel no need to use the transformation’s new tools or applications, it indicates a disconnect between the users, their goals, and actual business outcomes, says Aparna Achanta, IBM Consulting’s cybersecurity strategist and AI governance and transformation leader. To successfully address this issue, Achanta recommends aligning digital transformation with the overall business vision, making sure that the voices of end-users and customers are being heard. ... Strong business leadership, and a willingness to admit mistakes, are essential to digital transformation success, Hochman says. “Too often, enterprises run away from failure.” He notes that such moments are actually golden opportunities to break paradigms and try new approaches. “The more failures a company speaks openly about, the more innovation occurs.” ... “Adoption is the oxygen of transformation,” he says. 


Why Master Data Management Is Even More Important Now

There is a mindset shift that must happen to get people to buy into the cost and the overhead of managing the data in a way that's going to be usable, Thompson says. “It’s knowing how to match technology up with a set of business processes, internal culture, commitment to do things properly and tie [that] to a business outcome that makes sense,” he says. “[T]he level of maturity of some good companies is bad. They’re just bad at managing their data assets.” ... “[MDM] has very real business consequences, and I think that's the part that we can all do better is to start talking about the business outcome, because these business outcomes are so serious and so easy to understand that it shouldn't be hard to get business leaders behind it,” says Thompson. “But if you try to get business leaders behind MDM, it sounds like you want to undertake a science project with their help. It’s not about the MDM, it’s about the business outcome that you can get if you do a great job at MDM.” ... In older organizations, MDM maturity tends to be unevenly distributed. The core data tends to be fairly well organized and managed, but the rest isn’t. The age-old problem of data ownership and a reticence to share data doesn’t help. “The notion of data mesh [is] I’ll manage this piece, and you manage that piece. We’ll be disconnected but we can connect, and you can use it, but don’t mess with it. It’s mine,” says Landry.


How to Future-Proof Your Data and AI Strategy

The earlier you find a software bug, the less expensive it is to fix and the less negative customer impact it has – this is a basic principle of software development. And the value of a shift-left approach becomes even more apparent when applied to data privacy in the age of AI. If you use personal information to train models and realize later that you shouldn’t have, the only solution is to roll back the model, which also rolls back the value of the system and the competitive advantage it was intended to deliver. ... Companies need a scalable approach to determine where to go deep and where to move quickly. Prioritize based on impact by applying stricter controls where AI is high-risk or high-stakes, such as projects where AI is core to the functionality of new solutions or segments of the business. Apply lighter-touch governance where risk is low and build scalable policies that align governance intensity with business context, risk appetite, and innovation goals. ... Future-proofing your data and AI strategy is more than having the right tools and processes; it’s a mindset. If your approach isn’t designed for scalability and agility, it can quickly become a source of friction. A rigid, compliance-focused model makes even the best tools feel ineffective and can result in governance being seen as a bottleneck rather than a value driver.
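The risk-tiering approach above, stricter controls where AI is high-stakes and lighter-touch governance elsewhere, can be sketched as a simple policy mapping. The tier names, the two risk signals, and the control lists are illustrative assumptions, not a published framework:

```python
# Illustrative mapping from risk signals to governance intensity.
CONTROLS = {
    "high":   ["human review", "full audit trail", "model card", "privacy impact assessment"],
    "medium": ["sampled review", "audit trail"],
    "low":    ["self-attestation"],
}

def governance_tier(customer_facing: bool, uses_personal_data: bool) -> str:
    """Assign a governance tier from two example risk signals."""
    if customer_facing and uses_personal_data:
        return "high"
    if customer_facing or uses_personal_data:
        return "medium"
    return "low"

tier = governance_tier(customer_facing=True, uses_personal_data=True)
print(tier, CONTROLS[tier])
```

A real policy would weigh more signals (regulatory exposure, model autonomy, reversibility of decisions), but encoding the tiers at all is what keeps governance scalable rather than a per-project negotiation.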


The Unavoidable ‘SCREAM’: Why Enterprise Architecture Must Transform for the Organization of Tomorrow

In an era where every discussion, whether personal or organizational, is steeped in the pervasive influence of AI and data, one naturally questions the true state of Enterprise Architecture (EA) within most organizations today. Too often, we observe situational chaos and a predominantly reactive posture, where EA teams find themselves supporting hasty executive decisions in a culture of order-taking. Businesses, in turn, perceive Information Technology as slow to deliver, while IT teams, grappling with a perceived lack of business understanding, struggle to demonstrate timely value. This dynamic often leads to organizations becoming vendor-driven, with core architectural management often unaddressed. Despite this, there’s no doubt that the demand for Enterprise Architecture is surging. However, the existing challenges—from the sheer breadth of required skillsets and knowledge to the overwhelming abundance of frameworks to choose from—frequently plunge EA practices into moments of SCREAM: Situational Chaotic Realities of Enterprise Architecture Management. Yet among these challenges, there persists a profound desire for adaptive design and resilient enterprise architecture. Significant architectural efforts are indeed undertaken across organizations of all sizes. The equilibrium that every organization truly needs, however, often feels elusive.


Microsoft Morphs Fusion Developers To Full Stack Builders

Citizen development is a thorny subject; allowing business “laypersons” to impact the way software application code is structured, aligned and executed is an unpopular concept with command line purists who would prefer to keep the suits at arm’s length, if not further. ... The central argument from Silver and Cunningham is that it’s really tough to teach businesspeople to code and, equally tough to teach software engineers the principles of business operations. The Redmond pair suggest that Microsoft Power Platform will provide the “scaffolding” for full-stack teams to fuse (yes, okay, we’re not using that word anymore) their two previously quite separate working environments. ... To make full-stack development a reality inside any given organization, Microsoft has said that there will need to be a degree of initial investment into engineering systems and context. This, then, would be the scaffolding. Redmond suggests that new applications will emerge that are architected to support natural language development, augmentation and modification. With boundaries, safeguards and guardrails in place to oversee what AI agents can do when left in the hands of businesspeople, software systems will need to be engineered with enough meta-knowledge to understand the business context of the decisions that might be made without breaking other parts of the system. 

Daily Tech Digest - August 18, 2025


Quote for the day:

"The ladder of success is best climbed by stepping on the rungs of opportunity." -- Ayn Rand


Legacy IT Infrastructure: Not the Villain We Make It Out to Be

Most legacy infrastructure consists of tried-and-true solutions. If a business has been using a legacy system for years, it's a reliable investment. It may not be as optimal from a cost, scalability, or security perspective as a more modern alternative. But in some cases, this drawback is outweighed by the fact that — unlike a new, as-yet-unproven solution — legacy systems can be trusted to do what they claim to do because they've already been doing it for years. The fact that legacy systems have been around for a while also means that it's often easy to find engineers who know how to work with them. Hiring experts in the latest, greatest technology can be challenging, especially given the widespread IT talent shortage. But if a technology has been in widespread use for decades, IT departments don't need to look as hard to find staff qualified to support them. ... From a cost perspective, too, legacy systems have their benefits. Even if they are subject to technical debt or operational inefficiencies that increase costs, sticking with them may be a more financially sound move than undertaking a costly migration to an alternative system, which may itself present unforeseen cost drawbacks. ...  As for security, it's hard to argue that a system with inherent, incurable security flaws is worth keeping around. However, some IT systems can offer security benefits not available on more modern alternatives. 


Agentic AI promises a cybersecurity revolution — with asterisks

“If you want to remove or give agency to a platform tool to make decisions on your behalf, you have to gain a lot of trust in the system to make sure that it is acting in your best interest,” Seri says. “It can hallucinate, and you have to be vigilant in maintaining a chain of evidence between a conclusion that the system gave you and where it came from.” ... “Everyone’s creating MCP servers for their services to have AI interact with them. But an MCP at the end of the day is the same thing as an API. [Don’t make] all the same mistakes that people made when they started creating APIs ten years ago. All these authentication problems and tokens, everything that’s just API security.” ... CISOs need to immediately strap in and grapple with the implications of a technology that they do not always fully control, if for no other reason than their team members will likely turn to AI platforms to develop their security solutions. “Saying no doesn’t work. You have to say yes with guardrails,” says Mesta. At this still nascent stage of agentic AI, CISOs should ask questions, Riopel says. But he stresses that the main “question you should be asking is: How can I force multiply the output or the effectiveness of my team in a very short period of time? And by a short period of time, it’s not months; it should be days. That is the type of return that our customers, even in enterprise-type environments, are seeing.”


Zero Trust: A Strong Strategy for Secure Enterprise

As operational changes increasingly affect the financial and social health of digital enterprises, security is assuming a more prominent role in business discussions. Executive leadership is pivotal in ensuring enterprise security: it’s vital for business operations and security to be aligned and coordinated, and data governance is integral in coordinating the cross-functional activity needed to achieve that alignment. Executive buy-in coordinates and supports security initiatives, and executive sponsorship sets the tone and provides the resources necessary for program success. As a result, security professionals are increasingly represented in board seats and C-suite positions. In public acknowledgment of this responsibility, executive leadership is increasingly held accountable for security breaches, with some leaders being found personally liable for negligence. Today, enterprise security is the responsibility of multiple teams. IT infrastructure, IT enterprise, information security, product, and cloud teams work together in functional unity but require a sponsor for the security program. Zero trust security complements operations because its strict role definitions, process mapping, and monitoring practices make compliance more manageable and automatable. Whatever the region, the trend is toward increased reporting and compliance. As a result, data governance and security are closely intertwined.
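
Zero trust's strict role definition and monitoring can be sketched as a deny-by-default policy check that records every decision for audit. The policy table, roles, and resources below are illustrative, not any product's actual model:

```python
# Illustrative zero-trust access check: deny by default, allow only explicit
# role-to-resource grants, and record every decision for later audit.
POLICY = {
    ("billing-admin", "invoices"): {"read", "write"},
    ("auditor", "invoices"): {"read"},
}

audit_log = []

def is_allowed(role, resource, action):
    """Deny unless the (role, resource) pair explicitly grants the action."""
    allowed = action in POLICY.get((role, resource), set())
    audit_log.append((role, resource, action, allowed))  # every decision logged
    return allowed
```

Because every decision is both explicit and logged, compliance reporting becomes a query over the audit trail rather than a manual exercise.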


The Role of Open Source in Democratizing Data

Every organization uses a unique mix of tools, from mainstream platforms such as Salesforce to industry-specific applications that only a handful of companies use. Traditional vendors can't economically justify building connectors for niche tools that might only have 100 users globally. This is where open source fundamentally changes the game. The math that doesn't work for proprietary vendors, where each connector needs to generate significant revenue, becomes irrelevant when the users themselves are the builders. ... The truth about AI is that it isn’t about using the best LLMs or the most powerful GPUs: AI is only as good as the data it ingests. I've seen Fortune 500 companies with data locked in legacy ERPs from the 1990s, custom-built internal tools, and regional systems that no vendor supports. This data, often containing decades of business intelligence, remains trapped and unusable for AI training. Long-tail connectors change this equation entirely. When the community can build connectors for any system, no matter how obscure, decades of insights can be unlocked. This matters enormously for AI readiness. Training effective models requires real data context, not a selected subset from cloud-native systems adopted only in the last decade. Companies that can integrate their entire data estate, including legacy systems, gain massive advantages. More data fed into AI leads to better results.
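
The long-tail connector idea can be sketched as a minimal interface: anyone can wrap an obscure system by implementing a single extract() method that yields normalized records. The class name, method, and pipe-delimited export format below are illustrative, not any framework's actual API:

```python
# Hypothetical long-tail connector: wraps a legacy system whose only export
# is a pipe-delimited text dump, and yields normalized dict records.
class CSVExportConnector:
    def __init__(self, lines):
        self.lines = lines

    def extract(self):
        """Yield one dict per data row, keyed by the header row's fields."""
        header = None
        for line in self.lines:
            fields = [f.strip() for f in line.split("|")]
            if header is None:
                header = fields
                continue
            yield dict(zip(header, fields))

legacy_dump = ["id|customer|revenue", "1|Acme|1200", "2|Globex|800"]
records = list(CSVExportConnector(legacy_dump).extract())
```

Once a system's quirks are hidden behind this kind of interface, its data can flow into the same pipelines as data from mainstream platforms.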


7 Terrifying AI Risks That Could Change The World

Operating generative AI language models requires huge amounts of compute power. This is provided by vast data centers that burn through energy at rates comparable to small nations, creating harmful emissions and noise pollution. They also consume massive amounts of water at a time when water scarcity is an increasing concern. Those who dispute that AI’s environmental harm outweighs its benefits often argue that the damage will be offset by the efficiencies AI creates. ... The threat that AI poses to privacy is at the root of this one. With its ability to capture and process vast quantities of personal information, there’s no way to predict how much it might know about our lives in just a few short years. Employers increasingly monitoring and analyzing worker activity, the growing number of AI-enabled cameras on our devices and in our streets, vehicles, and homes, and police forces rolling out facial-recognition technology all raise anxiety that soon no corner will be safe from prying AIs. ... AI enables and accelerates the spread of misinformation, making it quicker and easier to disseminate, more convincing, and harder to detect: from deepfake videos of world leaders saying or doing things that never happened, to conspiracy theories flooding social media as stories and images designed to go viral and cause disruption.


Quality Mindset: Why Software Testing Starts at Planning

In many organizations, quality is still siloed, handed off to QA or engineering teams late in the process. But high-performing companies treat quality as a shared responsibility. The business, product, development, QA, release, and operations teams all collaborate to define what "good" looks like. This culture of shared ownership drives better business outcomes. It reduces rework, shortens release cycles, and improves time to market. More importantly, it fosters alignment between technical teams and business stakeholders, ensuring that software investments deliver measurable value. ... A strong quality strategy delivers measurable benefits across the entire enterprise. When teams focus on building quality into every stage of the development process, they spend less time fixing bugs and more time delivering innovation. This shift enables faster time to market and allows organizations to respond more quickly to changing customer needs. The impact goes far beyond the development team. Fewer defects lead to a better customer experience, resulting in higher satisfaction and improved retention. At the same time, a focus on quality reduces the total cost of ownership by minimizing rework, preventing incidents, and ensuring more predictable delivery cycles. Confident in their processes and tools, teams gain the agility to release more frequently without the fear of failure. 


Is “Service as Software” Going to Bring Down People Costs?

Tiwary, formerly of Barracuda Networks and now a venture principal and board member, described the phenomenon as “Service as Software” — a flip of the familiar SaaS acronym that points to a fundamental shift. Instead of hiring more humans to deliver incremental services, organizations are looking at whether AI can deliver those same services as software: infinitely scalable, lower cost, always on. ... Yes, “Service as Software” is a clever phrase, but Hoff bristles at the way “agentic AI” is invoked as if it’s already a settled, mature category. He reminds us that this isn’t some radical new direction — we’ve been on the automation journey for decades, from the codification of security to the rise of cloud-based SOC tooling. GenAI is an iteration, not a revolution. And with each iteration comes risk. Automation without full agency can create as many headaches as it solves. Hiring people who understand how to wield GenAI responsibly may actually increase costs — try finding someone who can wrangle KQL, no-code workflows, and privileged AI swarms without commanding a premium salary. ... The future of “Service as Software” won’t be defined by clever turns of phrase or venture funding announcements. It will be defined by the daily grind of adoption, iteration and timing. AI will replace people in some functions. 


Zero-Downtime Critical Cloud Infrastructure Upgrades at Scale

Performance testing is mandatory when your system handles critical traffic. The first step of every upgrade is to collect baseline performance data while running detailed stress tests that replicate actual workload scenarios. Testing should cover typical happy-path executions as well as edge cases, peak traffic conditions, and failure scenarios to detect performance bottlenecks. ... Every organization should create formal rollback procedures. A defined rollback approach must accompany all migration and upgrade operations, regardless of whether you ever expect to use it. Anything else creates a one-way door with no exit plan, which puts you at risk. Rollback procedures need proper documentation and validation, and should ideally undergo independent testing. ... Never add any additional improvements during upgrades or migrations – not even a single log line. This discipline might seem excessive, but it's crucial for maintaining clarity during troubleshooting. Migrate the system exactly as it is, then tackle improvements in a separate, subsequent deployment. ... Successfully implementing zero-downtime upgrades at scale takes more than technical skill: it requires systematic preparation, clear communication, and an experience-based understanding of potential issues.
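
The baseline-data step above pays off as a concrete go/no-go gate: compare the candidate deployment's latency distribution against the recorded baseline and roll back on regression. The p95 metric, nearest-rank method, and 10% tolerance below are assumptions for illustration:

```python
def p95(samples):
    """95th-percentile latency via the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(1, int(round(0.95 * len(ordered))))
    return ordered[rank - 1]

def should_roll_back(baseline_ms, candidate_ms, tolerance=1.10):
    """Trigger rollback when candidate p95 regresses more than 10% past baseline."""
    return p95(candidate_ms) > p95(baseline_ms) * tolerance
```

Automating the comparison keeps the rollback decision objective under pressure, instead of leaving it to judgment calls mid-incident.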


The Human Side of AI Governance: Using SCARF to Navigate Digital Transformation

Developed by David Rock in 2008, the SCARF model provides a comprehensive framework for understanding human social behavior through five critical domains that trigger either threat or reward responses in the brain. These domains encompass Status (our perceived importance relative to others), Certainty (our ability to predict future outcomes), Autonomy (our sense of control over events), Relatedness (our sense of safety and connection with others), and Fairness (our perception of equitable treatment). The significance of this framework lies in its neurological foundation. These five social domains activate the same neural pathways that govern our physical survival responses, which explains why perceived social threats can generate reactions as intense as those triggered by physical danger. ... As AI systems become embedded in daily workflows, governance frameworks must actively monitor and support the evolving human-AI relationships. Organizations can create mechanisms for publicly recognizing successful human-AI collaborations while implementing regular “performance reviews” that explain how AI decision-making evolves. Establish clear protocols for human override capabilities, foster a team identity that includes AI as a valued contributor, and conduct regular bias audits to ensure equitable AI performance across different user groups.


How security teams are putting AI to work right now

Security teams are used to drowning in alerts. Most are false positives, some are low risk, only a few matter. AI is helping to cut through this mess. Vendors have been building machine learning models to sort and score alerts. These tools learn over time which signals matter and which can be ignored. When tuned well, they can bring alert volumes down by more than half. That gives analysts more time to look into real threats. GenAI adds something new. Instead of just ranking alerts, some tools now summarize what happened and suggest next steps. One prompt might show an analyst what an attacker did, which systems were touched, and whether data was exfiltrated. This can save time, especially for newer analysts. ... “Humans are still an important part of the process. Analysts provide feedback to the AI so that it continues to improve, share environmental-specific insights, maintain continuous oversight, and handle things AI can’t deal with today,” said Tom Findling, CEO of Conifers. “CISOs should start by targeting areas that consume the most resources or carry the highest risk, while creating a feedback loop that lets analysts guide how the system evolves.” ... Entry-level analysts may no longer spend all day clicking through dashboards. Instead, they might focus on verifying AI suggestions and tuning the system.
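
The scoring-and-suppression approach described here can be sketched in a few lines; the weights, alert fields, and threshold below are illustrative stand-ins for what vendors' models learn from data, not any real product:

```python
# Hedged sketch of alert triage: score each alert, suppress low scores,
# rank the rest so analysts see the few that matter first.
def score_alert(alert):
    severity = {"low": 0.2, "medium": 0.5, "high": 0.9}[alert["severity"]]
    asset_weight = 1.0 if alert.get("critical_asset") else 0.4
    # Signals that historically resolved as false positives get discounted.
    fp_discount = 1.0 - alert.get("historical_fp_rate", 0.0)
    return severity * asset_weight * fp_discount

def triage(alerts, threshold=0.3):
    kept = [a for a in alerts if score_alert(a) >= threshold]
    return sorted(kept, key=score_alert, reverse=True)

sample_alerts = [
    {"id": 1, "severity": "high", "critical_asset": True, "historical_fp_rate": 0.1},
    {"id": 2, "severity": "low", "critical_asset": False, "historical_fp_rate": 0.8},
    {"id": 3, "severity": "medium", "critical_asset": True},
]
prioritized = triage(sample_alerts)
```

In a real deployment the weights would be learned and continuously tuned from analyst feedback, which is exactly the feedback loop the article describes.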

Daily Tech Digest - August 17, 2025


Quote for the day:

"Failure is the condiment that gives success its flavor." -- Truman Capote


The third leg of the stool: Technology’s role in M&A

The term “technical debt” wasn’t mainstream, making it tough to convey to lawyers, accountants and executives. Their languages aligned — business, finance, law — with shared specificity. But IT? We spoke a different dialect, full of jargon that obscured our business insights. This cultural divide explained technology’s historical exclusion from M&A. The gap was mine to bridge. Over time, I learned to translate, framing technical risks in terms of dollars, downtime and competitive edge. ... Overlap exists with legal and finance, but IT’s lens is unique: assessing how operations impact data and systems. Chaotic processes yield chaotic data; effective ones produce reliable insights. ... “Good decisions on bad data are bad decisions” (me, circa 2007). Data is an enterprise’s most valuable asset, yet often neglected. Poor data can cripple; great data accelerates growth. In M&A, I scrutinize quality, lifecycle management, governance, ownership and analysis. Companies are typically polarized: exemplary governance or barely functional. Data issues heavily influence deal pricing — more on that in a future post. ... Critical during M&A, as deals attract hackers — sometimes derailing them entirely. With AI-driven threats rising, robust postures are non-negotiable. This warrants its own article.


Navigating the issues that impact data center design today

In the last few years, design considerations have changed significantly. The adoption of high-performance computing (HPC) and artificial intelligence (AI) applications translates into greater power consumption and that requires a rethink of cooling and management. What’s more, it’s increasingly difficult to predict future capacity requirements. ... Modular data center infrastructure can help facilitate zone-based deployments. Many people think of modular data centers as those deployed in ISO shipping containers, but that is only one type. There are also skid-mounted systems and preconfigured enclosures. Preconfigured enclosures can be shells or self-contained units with built-in power, cooling, fire suppression, and physical security. ... Whether building out a new data center or expanding an existing one, organizations should choose sustainable materials. With smart choices, future data centers will be self-sufficient and carbon- and water-neutral and have minimal impact on the local environment.
Planning is key. These challenges have upped the ante for data center design planning. It’s no longer advisable to build out a simple shell with a raised floor and start adding infrastructure. Your facility must have the necessary power capacity, redundancy, and security to meet your business needs.


Mastering Microservices: Seven Uncommon Strategies for Streamlined Success

Containerization might seem like old news, but there are nuances that can significantly impact performance and scalability. Containers encapsulate your microservices, ensuring consistency across environments. Yet, not all container strategies are created equal. We’ve seen teams struggle when they cram too many processes into a single container. ... It’s said that you can’t manage what you can’t measure, and this couldn’t be truer for microservices. With multiple services running concurrently, effective logging and monitoring become crucial. Gone are the days of relying solely on traditional log files or single-instance monitors. We once faced a situation where a subtle bug in a service went undetected for weeks, causing memory leaks and gradually degrading performance. Our solution was to implement centralized logging and observability tools like Prometheus and Grafana. These tools allowed us to aggregate logs from various services and gain insights through real-time dashboards. ... Security is often like flossing—everyone knows it’s important, but many neglect it until there’s a problem. With microservices, security risks multiply. It’s crucial to secure inter-service communication, protect sensitive data, and ensure compliance with industry standards.
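
The shift from per-instance log files to centralized aggregation can be sketched with structured JSON logs; a per-service error rate is one of the simplest signals such aggregation enables. The service names and log fields below are illustrative:

```python
import json
from collections import Counter

# Minimal sketch of centralized log aggregation: structured JSON logs from
# many services are merged, and per-service error rates surface the outlier.
raw_logs = [
    '{"service": "checkout", "level": "ERROR", "msg": "timeout"}',
    '{"service": "checkout", "level": "INFO",  "msg": "ok"}',
    '{"service": "search",   "level": "INFO",  "msg": "ok"}',
    '{"service": "search",   "level": "INFO",  "msg": "ok"}',
]

def error_rates(lines):
    totals, errors = Counter(), Counter()
    for line in lines:
        entry = json.loads(line)
        totals[entry["service"]] += 1
        if entry["level"] == "ERROR":
            errors[entry["service"]] += 1
    return {svc: errors[svc] / totals[svc] for svc in totals}

rates = error_rates(raw_logs)
```

Tools like Prometheus and Grafana do this at scale with real-time dashboards; the point of the sketch is only that logs must be structured and centralized before any such analysis is possible.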


AI Security in the Cloud-Native DevSecOps Pipeline

Because reacting to threats is a lost cause when the attacks themselves are learning and adapting, a proactive stance is essential for survival. This is a mindset embraced by security leaders like Akash Agrawal, VP of DevOps & DevSecOps at LambdaTest, an AI-native software testing platform. He argues for a fundamental shift: “Security can no longer be bolted on at the end,” he explains. “AI allows us to move from reactive scanning to proactive prevention.” This approach means using AI not just to identify flaws in committed code, but to predict where the next one might emerge. ... But architectural flaws are not the only risk. AI’s drive for automation can also lead to more common security gaps like credential leakage, a problem that Nic Adams, co-founder and CEO of security startup 0rcus, sees growing. He points to AI-backed CI/CD tools that auto-generate infrastructure-as-code and inadvertently create “credential sprawl” by embedding long-lived API keys directly into configuration files. The actionable defense here is to assume AI will make mistakes and build a safety net around it. Teams must integrate real-time secret scanning directly into the pipeline and enforce a strict policy of using ephemeral, short-lived credentials that expire automatically. Beyond specific code vulnerabilities, there is a more strategic gap that AI introduces into the development process itself.
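
A minimal sketch of the in-pipeline secret scanning recommended above. The two regex patterns (one mimicking the AWS access-key-ID prefix, one catching hard-coded api_key assignments) are illustrative and far from a production ruleset:

```python
import re

# Sketch of a pipeline gate that blocks commits embedding long-lived keys
# in infrastructure-as-code. Patterns are illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
]

def find_secrets(text):
    """Return a human-readable finding for each line matching a secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits

iac_snippet = 'region = "us-east-1"\napi_key = "abcd1234abcd1234abcd1234"\n'
findings = find_secrets(iac_snippet)
```

Running a check like this on every commit, and failing the build on any finding, is how "assume AI will make mistakes" becomes an enforced policy rather than advice.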


Stop using AI for these 9 work tasks - here's why

Every time you give the AI some information, ask yourself how you would feel if it were posted to the company's public blog or wound up on the front page of your industry's trade journal. This concern also includes information that might be subject to disclosure regulations, such as HIPAA for health information or GDPR for personal data for folks operating in the EU. Regardless of what the AI companies tell you, it's best to simply assume that everything you feed into an AI is now grist for the model-training mill. Anything you feed in could later wind up in a response to somebody's prompt, somewhere else. ... Contracts are designed to be detailed and specific agreements on how two parties will interact. They are considered governing documents, which means that writing a bad contract is like writing bad code. Baaad things will happen. Do not ask AIs for help with contracts. They will make errors and omissions. They will make stuff up. Worse, they will do so while sounding authoritative, so you're more likely to use their advice. ... But when it comes time to ask for real advice that you plan on considering as you make major decisions, just don't. Let's step away from the liability risk issues and focus on common sense. First, if you're using something like ChatGPT for real advice, you have to know what to ask. If you're not trained in these professions, you might not know.


The Evolution of the DBA—More Than Just a Keeper of Databases

Automation has dramatically changed database administration. Routine tasks—such as performance tuning, index management, and backup scheduling—are increasingly handled by AI-driven database tools. Solutions such as Oracle Autonomous Database, Db2 AI for SQL, and Microsoft Azure SQL’s Intelligent Query Processing promise self-optimizing, self-healing databases. While this might sound like a threat to DBAs, it’s actually an opportunity. Instead of focusing on routine maintenance, DBAs can now shift their efforts toward higher-value tasks including data architecture, governance, and security. ... Organizations are no longer tied to a single database platform. With multi-cloud and hybrid cloud strategies becoming the norm, DBAs must manage data across on-premises systems, cloud-native databases, and hybrid architectures. The days of being a single-platform DBA (e.g., only working with one DBMS) are coming to an end. Instead, cross-platform expertise is now a necessity. Knowing how to optimize for multiple platforms and database systems—for example, AWS RDS, Google Cloud Spanner, Azure SQL, and on-prem Db2, Oracle, and PostgreSQL—is more and more a core part of the DBA’s job description.  ... With the explosion of data regulations and industry-specific mandates, compliance has become a primary concern for DBAs. 


The global challenge of achieving cyber resilience

The barriers to effective cybersecurity include familiar suspects such as budgetary and resource limitations, the increasing complexity of modern systems, and the challenge of keeping up with rapidly evolving cyber threats. However, topping the list of challenges for many organisations is the ongoing shortage of cybersecurity skills. A recent Cybersecurity Workforce Study from ISC2 found that, although the size of the global cybersecurity workforce increased to 5.5 million workers in 2023 (a rise of 9% over a single year), so did the gap between supply and demand, which rose by 13% over the same period. Unfortunately, it’s more than just a numbers gap. The study also found that the skills gap is an even greater concern, with respondents saying the lack of necessary skills was a bigger factor in making their organisations vulnerable. It’s clear the current approach is flawed. The grand plans that governments have for cybersecurity will require significant uplifts to security programs, including major improvements in developer upskilling, skills verification, and guardrails for artificial intelligence tools. Organisations also need to modernise their approach by implementing upskilling pathways that use deep data insights to provide the best possible skills verification. They need to manage and mitigate the inherent risks that developers with low security maturity bring to the table.


Social engineering becomes strategic threat as OT sector faces phishing, deepfakes, and AI deception risks

With the expanding IT/OT footprint, the attack surface is increasingly giving attackers additional opportunities to compromise targets by stealing credentials, impersonating trusted insiders, and moving laterally from one system to another inside the network. AI-driven phishing, voice cloning, and deepfake-enabled pretexting are lowering the barrier to entry, giving cyber adversaries powerful tools that can erode the reliability of human judgment across critical infrastructure installations. Microsoft security researchers warn that a single compromise, say via a contractor’s infected laptop, can reach previously isolated OT systems, turning one infected device into a breach gateway. While phishing and identity theft are now common access tools, the impact in OT environments is much worse. ... AI-driven deception is rapidly reshaping the social engineering landscape. Attackers are using voice cloning and deepfake technology to impersonate executives with unnerving accuracy. Qantas recently fell victim to a similar scheme, in which an AI-powered ‘vishing’ attack compromised the personal data of up to six million customers. These incidents highlight how artificial intelligence has lowered the barrier to convincing, high-impact fraud. Across OT environments, such as energy distribution or manufacturing plants, the impact of social engineering goes way beyond stolen funds or data.


When cloud growth outpaces control, waste follows

Access to data does not guarantee accountability. Many organizations have detailed cost reporting but continue to struggle with cloud waste. The issue here shifts from one of visibility towards one of proximity. Our data shows 59% of organizations have a FinOps team that does some or all cloud cost optimization tasks, yet in many cases, these teams still sit at the edge of delivery. So, while they can surface issues, they are often too removed from daily operations to intervene effectively. The most effective models integrate cost ownership into delivery itself. This means that engineering leads, platform teams and product owners have oversight to take action before inefficiencies take hold. As a result, when these roles are supported with relevant reporting and shared financial metrics, cost awareness becomes a natural part of the decision-making process. This makes it easier to adjust workloads, retire underutilized services, and optimize environments in-flight, rather than in hindsight. ... Control is easiest to build before complexity sets in. The longer organizations delay embedding structure into cloud governance, the harder it becomes to retrofit later. Inconsistent tagging, ambiguous ownership and manual reporting all take time to correct once they are entrenched.
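
One concrete form of "embedding structure before complexity sets in" is a tag-completeness check that flags resources with no clear cost owner. The required tag keys and inventory shape below are assumptions for illustration:

```python
# Governance sketch: flag cloud resources missing the tags that make cost
# ownership actionable. Tag keys and resource records are illustrative.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def untagged(resources):
    """Return ids of resources missing any required tag key."""
    flagged = []
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            flagged.append(res["id"])
    return flagged

inventory = [
    {"id": "vm-1", "tags": {"owner": "platform", "cost-center": "42", "environment": "prod"}},
    {"id": "vm-2", "tags": {"owner": "unknown"}},
]
flagged_ids = untagged(inventory)
```

Run automatically at provisioning time, a check like this stops ambiguous ownership from accumulating instead of trying to retrofit it later.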


The Growing Impact of Technical Solution Architecture in Software Engineering

Technical solution architects serve as the bridge between business objectives and technology implementation. Their role involves understanding organizational needs, designing scalable system architectures, and leading development teams to execute complex solutions efficiently. As companies transition to cloud-native applications and AI-powered automation, technical solution architects must design systems that are adaptable, secure, and optimized for performance. ... “Legacy systems, while functional, often become bottlenecks as organizations grow,” Bodapati, who is also a fellow at the Hackathon Raptors, explains. “By modernizing these systems, we ensure better performance, stronger security, and more streamlined operations—all essential for today’s data-driven enterprises.” ... With experts like Rama Krishna Prasad Bodapati leading the charge in system architecture and software engineering, businesses can ensure scalability, agility, and efficiency in their IT infrastructure. His expertise in full-stack development, cloud engineering, and enterprise software modernization continues to shape the future of digital transformation. “The future of software engineering isn’t just about building applications—it’s about building intelligent, adaptable, and high-performance ecosystems that drive business success,” Bodapati emphasizes.

Daily Tech Digest - August 16, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Digital Debt Is the New Technical Debt (And It’s Worse)

Digital debt doesn’t just slow down technology. It slows down business decision-making and strategic execution. Decision-Making Friction: Simple business questions require data from multiple systems. “What’s our customer lifetime value?” becomes a three-week research project because customer data lives in six different platforms with inconsistent definitions. Campaign Launch Complexity: Marketing campaigns that should take two weeks to launch require six weeks of coordination across platforms. Not because the campaign is complex, but because the digital infrastructure is fragmented. Customer Experience Inconsistency: Customers encounter different branding, messaging, and functionality depending on which digital touchpoint they use. Support teams can’t access complete customer histories because data is distributed across systems. Innovation Paralysis: New initiatives get delayed because teams spend time coordinating existing systems rather than building new capabilities. Digital debt creates a gravitational pull that keeps organizations focused on maintenance rather than innovation. ... Digital debt is more dangerous than technical debt because it’s harder to see and affects more stakeholders. Technical debt slows down development teams. Digital debt slows down entire organizations.
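
The "three-week research project" arises because the same customer value hides behind a different field name in each platform. A minimal sketch, with hypothetical schemas, of the normalization step that answering the lifetime-value question actually requires first:

```python
# Illustrative schema map: each platform's field names are translated onto
# one shared definition before any cross-system metric can be computed.
FIELD_MAP = {
    "crm":     {"cust_email": "email", "total_spend": "revenue"},
    "billing": {"email_address": "email", "lifetime_rev": "revenue"},
}

def normalize(system, record):
    """Rename a platform's fields onto the shared schema, dropping the rest."""
    return {FIELD_MAP[system][k]: v for k, v in record.items() if k in FIELD_MAP[system]}

rows = [
    normalize("crm", {"cust_email": "a@example.com", "total_spend": 120.0}),
    normalize("billing", {"email_address": "a@example.com", "lifetime_rev": 80.0}),
]

# Customer lifetime value is only computable once the records agree on names.
ltv = {}
for row in rows:
    ltv[row["email"]] = ltv.get(row["email"], 0.0) + row["revenue"]
```

With six platforms instead of two, and inconsistent definitions on top of inconsistent names, the mapping itself becomes the three-week project.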


Rising OT threats put critical infrastructure at risk

Attackers are exploiting a critical remote code execution (RCE) vulnerability in the Erlang programming language's Open Telecom Platform, widely used in OT networks and critical infrastructure. The flaw enables unauthenticated users to execute commands through SSH connection protocol messages that should be processed only after authentication. Researchers from Palo Alto Networks' Unit 42 said they have observed more than 3,300 exploitation attempts since May 1, with about 70% targeting OT networks across healthcare, agriculture, media and high-tech sectors. Experts urged affected organizations to patch immediately, calling it a top priority for any security team defending an OT network. The flaw, which has a CVSS score of 10, could enable an attacker to gain full control over a system and disrupt connected systems -- particularly worrisome in critical infrastructure. ... Despite its complex cryptography, the protocol contains design flaws that could enable attackers to bypass authentication and exploit outdated encryption standards. Researcher Tom Tervoort, a security specialist at Netherlands-based security company Secura, identified issues affecting at least seven different products, resulting in the issuing of three CVEs.


Why Tech Debt is Eating Your ROI (and How To Fix It)

Regardless of industry or specific AI efforts, these frustrations seem to boil down to the same culprit: AI initiatives continue to stumble over decades of accumulated tech debt. Part of the reason is that, despite the hype, most organizations use AI — let’s say, timidly. Fewer than half employ it for predictive maintenance or detecting network anomalies. Fewer than a third use it for root-cause analysis or intelligent ticket routing. Why such hesitation? Because implementing AI effectively means confronting all the messiness that came before. It means admitting our tech environments need a serious cleanup before adding another layer of complexity. Tech complexity has become a monster. This mess came from years of bolting on new systems without retiring old ones. Some IT professionals point to redundant applications as a major source of wasted budget, and others blame overprovisioning in the cloud — the digital equivalent of paying rent on empty apartments. ... IT teams admit something that, to me, is alarming: their infrastructure has grown so tangled they can no longer maintain basic security practices. Let that sink in. Companies with eight-figure tech budgets can’t reliably patch vulnerable systems or implement fundamental security controls. No one builds silos deliberately. Silos emerge from organizational boundaries, competing priorities, and the way we fund and manage projects.


Ready on paper, not in practice: The incident response gap in Australian organisations

The truth is, security teams often build their plans around assumptions rather than real-world threats and trends. That gap becomes painfully obvious during an actual incident, when organisations realise they aren't adequately prepared to respond. Recent findings of a Semperis study titled The State of Enterprise Cyber Crisis Readiness revealed a strong disconnect between organisations' perceived readiness to respond to a cyber crisis and their actual performance. The study also showed that cyber incident response plans are being implemented and regularly tested, but not broadly. In a real-world crisis, too many teams are still operating in silos. ... A robust, integrated, and well-practiced cyber crisis response plan is paramount for cyber and business resilience. After all, the faster you can respond and recover, the less severe the financial impact of a cyberattack will be. Organisations can increase their agility by conducting tabletop exercises that simulate attacks. By practicing incident response regularly and introducing a range of new scenarios of varying complexity, organisations can train for the real thing, which can often be unpredictable. Security teams can continually adapt their response plans based on the lessons learned during these exercises, and any new emerging cyber threats.


Quantum Threat Is Real: Act Now with Post Quantum Cryptography

Some of the common types of encryption we use today include RSA (Rivest-Shamir-Adleman), ECC (Elliptic Curve Cryptography), and DH (Diffie-Hellman key exchange). The first two are asymmetric encryption schemes; the third is a key-agreement protocol that complements them by establishing a shared secret over an insecure channel. RSA relies on the difficulty of factoring very large integers, and ECC on the hardness of the elliptic-curve discrete logarithm problem. These problems cannot be solved efficiently with traditional computing. A sufficiently large quantum computer running Shor’s algorithm, however, could solve them. ... Cybercriminals think long-term. They are well aware that quantum computing is still some time away. But that doesn’t stop them from stealing encrypted information. Why? They will store it securely until quantum computing becomes readily available; then they will decrypt it. The impending arrival of quantum computers has set the cat amongst the pigeons. ... Blockchain is not unhackable, but it is difficult to hack. A set of cryptographic algorithms keeps it secure, including SHA-256 (Secure Hash Algorithm 256-bit) and ECDSA (Elliptic Curve Digital Signature Algorithm). Today, cybercriminals might not attempt to target blockchains and steal crypto. But tomorrow, with the availability of a quantum computer, the crypto vault could be broken into without trouble. ... We keep saying that quantum computing and quantum-enabled threats are still some time away. And this is true. But when the technology is here, it will evolve and gain traction.
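
A toy RSA round-trip with deliberately tiny primes makes the dependence on factoring visible: anyone who factors n recovers the private key. Real keys use 2048-bit or larger moduli, which Shor's algorithm on a large quantum computer could factor; that is the threat post-quantum cryptography addresses:

```python
# Toy RSA, purely to show where the security lives. Never use tiny primes.
p, q = 61, 53
n = p * q                  # public modulus; secure only while factoring n is hard
phi = (p - 1) * (q - 1)    # derivable only by someone who knows p and q
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: the modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)   # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n) # decrypt with the private key (d, n)
```

Factoring n = 3233 back into 61 × 53 is instant here; with 2048-bit moduli it is infeasible classically, and that asymmetry is the entire security argument that quantum computing would break.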


Cultivating product thinking in your engineering team

The most common trap you’ll encounter is what’s called the “feature factory.” This is a development model where engineers are simply handed a list of features to build, without context. They’re measured on velocity and output, not on the value their work creates. This can be comfortable for some – it’s a clear path with measurable metrics – but it’s also a surefire way to kill innovation and engagement. ... First and foremost, you need to provide context, and you need to do so early and often. Don’t just hand a Jira ticket to an engineer. Before a sprint starts, take the time to walk through the “what,” the “why,” and the “who.” Explain the market research that led to this feature request, share customer feedback that highlights the problem, and introduce them to the personas you’re building for. A quick 15-minute session at the start of a sprint can make a world of difference. You should also give engineers a seat at the table. Invite them to meetings where product managers are discussing strategy and customer feedback. They don’t just need to hear the final decision; they need to be a part of the conversation that leads to it. When an engineer hears a customer’s frustration firsthand, they gain a level of empathy that a written user story can never provide. They’ll also bring a unique perspective to the table, challenging assumptions and offering technical solutions you may not have considered.


Adapting to New Cloud Security Challenges

While the essence of Non-Human Identities and their secret management is acknowledged, many organizations still grapple with the efficient implementation of these practices. Some stumble over an over-reliance on traditional security measures, failing to adopt newer, more effective strategies that incorporate NHI management. Others struggle with time and resource constraints, lacking efficient automation mechanisms – a crucial aspect of proficient NHI management. The disconnect between security and R&D teams often results in fractured efforts, leading to potential security gaps, breaches, and data leaks. ... As more organizations migrate to the cloud, and with the rise of machine identities and secret management, the future of cloud security has been redefined. It is no longer solely about protection from known threats but now involves proactive strategies to anticipate and mitigate potential future risks. This shift requires organizations to rethink their approach to cybersecurity, with a keen focus on NHIs and Secrets Security Management. It requires an integrated endeavor, involving CISOs, cybersecurity professionals, and R&D teams, along with the use of scalable and innovative platforms. Thought leaders in the data field continue to emphasize the importance of robust NHI management as vital to the future of cybersecurity, driving the message home for businesses of all sizes and across all industries.


Why IT Modernization Occurs at the Intersection of People and Data

A mandate for IT modernization doesn’t always mean the team has the complete expertise necessary to complete that mandate. It may take some time to arm the team with the correct knowledge to support modernization. Let’s take data analytics, for example. Many modern data analytics solutions, armed with AI, now allow teams to deliver natural language prompts that can retrieve the data necessary to inform strategic modernization initiatives without having to write expert-level SQL. While this lessens the need for writing scripts, IT leaders must still ensure their teams have the right expertise to construct the correct prompts. This could mean training on correct terms for presenting data and/or manipulating data, along with knowing in what circumstances to access that data. Having a well-informed and educated team will be especially important after modernization efforts are underway. ... One of the most important steps to IT modernization is arming your IT teams with a complete picture of the current IT infrastructure. It’s equivalent to giving them a full map before embarking on their modernization journey. In many situations, an ideal starting point is to ensure that any documentation, ER diagrams, and architectural diagrams are collected into a single repository and reviewed. Then, the IT teams use an observability solution that integrates with every part of the enterprise infrastructure to show each team how every part of it works together. 


Cyber Resilience Must Become The Third Pillar Of Security Strategy

For years, enterprise security has been built around two main pillars: prevention and detection. Firewalls, endpoint protection, and intrusion detection systems all aim to stop attackers before they do damage. But as threats grow more sophisticated, it’s clear that this isn’t enough. ... The shift to cloud computing has created dangerous assumptions. Many organizations believe that moving workloads to AWS, Azure, or Google Cloud means the provider “takes care of security.” ... Effective resilience starts with rethinking backup as more than a compliance checkbox. Immutable, air-gapped copies prevent attackers from tampering with recovery points. Built-in threat detection can spot ransomware or other malicious activity before it spreads. But technology alone isn’t enough. Mariappan urges leaders to identify the “minimum viable business” — the essential applications, accounts, and configurations required to function after an incident. Recovery strategies should be built around restoring these first to reduce downtime and financial impact. She also stresses the importance of limiting the blast radius. In a cloud context, that might mean segmenting workloads, isolating credentials, or designing architectures that prevent a single compromised account from jeopardizing an entire environment.


Breaking Systems to Build Better Ones: How AI is Reshaping Chaos Engineering

While AI dominates technical discussions across industries, Andrus maintains a pragmatic perspective on its role in system reliability. “If Skynet comes about tomorrow, it’s going to fail in three days. So I’m not worried about the AI apocalypse, because AI isn’t going to be able to build and maintain and run reliable systems.” The fundamental challenge lies in the nature of distributed systems versus AI capabilities. “A lot of the LLMs and a lot of what we talk about in the AI world is really non-deterministic, and when we’re talking about distributed systems, we care about it working correctly every time, not just most of the time.” However, Andrus sees valuable applications for AI in specific areas. AI excels at providing suggestions and guidance rather than making deterministic decisions. ... Despite its name, chaos engineering represents the opposite of chaotic approaches to system reliability. “Chaos engineering is a bit of a misnomer. You know, a lot of people think, Oh, we’re going to go cause chaos and see what happens, and it’s the opposite. We want to engineer the chaos out of our systems.” This systematic approach to understanding system behavior under stress provides the foundation for building more resilient infrastructure. As AI-generated code increases system complexity, the need for comprehensive reliability testing becomes even more critical.

Daily Tech Digest - August 15, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


DevSecOps 2.0: How Security-First DevOps Is Redefining Software Delivery

DevSecOps 2.0 is a true security-first revolution. This paradigm shift transforms software security into a proactive enabler, leveraging AI and policy-as-code to automate safeguards at scale. Security tools now blend seamlessly into developer workflows, and continuous compliance ensures real-time auditing. With ransomware, supply chain attacks, and other attacks on the rise, there is a need for a different approach to delivering resilient software. ... It marks a transformative approach to software development, where security is the foundation of the entire lifecycle. This evolution ensures proactive security that works to identify and neutralize threats early. ... AI-driven security is central to DevSecOps 2.0, which harnesses the power of artificial intelligence to transform security from a reactive process into a proactive defense strategy. By analyzing vast datasets, including security logs, network traffic, and code commit patterns, AI can detect subtle anomalies and predict potential threats before they materialize. This predictive capability enables teams to identify risks early, streamlining threat detection and facilitating automated remediation. For instance, AI can analyze commit patterns to predict code sections likely to contain vulnerabilities, allowing for targeted testing and prevention. 
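The commit-pattern idea can be illustrated with a toy heuristic. This is not a real model or any vendor's actual approach; the signals, weights, and function names below are invented solely to show how simple commit features could be turned into a review-priority score for targeted testing.

```python
# Toy illustration of commit-pattern risk scoring (hypothetical signals
# and weights, not a production vulnerability predictor).

def risk_score(lines_changed: int, touches_auth_code: bool, has_tests: bool) -> float:
    score = min(lines_changed / 500, 1.0)  # larger diffs carry more risk
    if touches_auth_code:
        score += 0.5   # security-sensitive area
    if not has_tests:
        score += 0.3   # untested changes are riskier
    return round(min(score, 1.0), 2)

# A large, untested change to auth code maxes out the scale.
assert risk_score(600, touches_auth_code=True, has_tests=False) == 1.0
# A small, tested change elsewhere scores low.
assert risk_score(50, touches_auth_code=False, has_tests=True) < 0.5
```

In practice the AI systems the article describes would learn such weights from historical vulnerability data rather than hard-coding them.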


What CIOs can do when AI boosts performance but kills motivation

“One of the clearest signs is copy-paste culture,” Anderson says. “When employees use AI output as-is, without questioning it or tailoring it to their audience, that’s a sign of disengagement. They’ve stopped thinking critically.” To prevent this, CIOs can take a closer look at how teams actually use AI. Honest feedback from employees can help, but there’s often a gap between what people say they use AI for and how they actually use it in practice, so trying to detect patterns of copy-paste usage can help improve workflows. CIOs should also pay attention to how AI affects roles, identities, and team dynamics. When experienced employees feel replaced, or when previously valued skills are bypassed, morale can quietly drop, even if productivity remains high on paper. “In one case, a senior knowledge expert, someone who used to be the go-to for tough questions, felt displaced when leadership started using AI to get direct answers,” Anderson says. “His motivation dropped because he felt his value was being replaced by a tool.” Over time, this expert started to use AI strategically, and saw it could reduce the ad-hoc noise and give him space for more strategic work. “That shift from threatened to empowered is something every leader needs to watch for and support,” he adds.


That ‘cheap’ open-source AI model is actually burning through your compute budget

The inefficiency is particularly pronounced for Large Reasoning Models (LRMs), which use extended “chains of thought” to solve complex problems. These models, designed to think through problems step-by-step, can consume thousands of tokens pondering simple questions that should require minimal computation. For basic knowledge questions like “What is the capital of Australia?” the study found that reasoning models spend “hundreds of tokens pondering simple knowledge questions” that could be answered in a single word. ... The research revealed stark differences between model providers. OpenAI’s models, particularly its o4-mini and newly released open-source gpt-oss variants, demonstrated exceptional token efficiency, especially for mathematical problems. The study found OpenAI models “stand out for extreme token efficiency in math problems,” using up to three times fewer tokens than other commercial models. ... The findings have immediate implications for enterprise AI adoption, where computing costs can scale rapidly with usage. Companies evaluating AI models often focus on accuracy benchmarks and per-token pricing, but may overlook the total computational requirements for real-world tasks. 
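The cost dynamic described here is easy to sketch. The prices and token counts below are made up for illustration, not real vendor figures; the point is simply that a lower per-token price can still yield a higher per-task cost when a model emits several times more tokens.

```python
# Hypothetical illustration: per-token price alone can hide total cost.
# All prices and token counts are invented, not real vendor figures.

def task_cost(tokens_out: int, price_per_1k: float) -> float:
    """Cost of one task given output tokens and price per 1,000 tokens."""
    return tokens_out / 1000 * price_per_1k

# A "cheap" reasoning model thinking in long chains emits far more tokens.
verbose_model = task_cost(tokens_out=3000, price_per_1k=0.60)    # $1.80
efficient_model = task_cost(tokens_out=900, price_per_1k=1.50)   # $1.35

assert efficient_model < verbose_model
```

This is why the study's focus on token efficiency, rather than list price, matters for enterprise cost projections.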


AI Agents and the data governance wild west

Today, anyone from an HR director to a marketing intern can quickly build and deploy an AI agent simply using Copilot Studio. This tool is designed to be accessible and quick, making it easy for anyone to play around with and launch a sophisticated agent in no time at all. But when these agents are created outside of the IT department, most users aren’t thinking about data classification or access controls, and they become part of a growing shadow IT problem. ... The problem is that most users will not be thinking like a developer with governance in mind when creating their own agents. Therefore, policies must be imposed to ensure that key security steps aren’t skipped in the rush to deploy a solution. A new layer of data governance must be considered, with steps that include configuring data boundaries, restricting who can access what data according to job role and sensitivity level, and clearly specifying which data resources the agent can pull from. AI agents should be built for purpose, using principles of least privilege. This will help avoid a marketing intern having access to the entire company’s HR file. Just like any other business-critical application, an agent needs to be adequately tested and ‘red-teamed’. Perform penetration testing to identify what data the agent can surface, to whom, and how accurate the data is.
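The least-privilege principle above can be sketched as a simple allow-list check. The agent names and data-source labels are hypothetical, and a real deployment would enforce this in the platform's access-control layer rather than application code; the sketch only shows the deny-by-default shape such a policy should take.

```python
# Minimal sketch of least-privilege scoping for AI agents.
# Agent names and data sources are hypothetical examples.

AGENT_SCOPES: dict[str, set[str]] = {
    "marketing-faq-bot": {"public_docs", "product_sheets"},
    "hr-benefits-bot": {"public_docs", "hr_policies"},
}

def can_access(agent: str, source: str) -> bool:
    # Deny by default: an unknown agent, or an undeclared source, is refused.
    return source in AGENT_SCOPES.get(agent, set())

assert can_access("marketing-faq-bot", "product_sheets")
assert not can_access("marketing-faq-bot", "hr_policies")  # the intern's bot
assert not can_access("unregistered-bot", "public_docs")   # never can reach HR files
```

Declaring scopes up front also gives red teams a concrete target: penetration testing then checks whether the agent can be coaxed into surfacing anything outside its declared set.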


Monitoring microservices: Best practices for robust systems

Collecting extensive amounts of telemetry data is most beneficial if you can combine, visualize and examine it successfully. A unified observability stack is paramount. By integrating tools like middleware that work together seamlessly, you create a holistic view of your microservices ecosystem. These unified tools ensure that all your telemetry information — logs, traces and metrics — is correlated and accessible from a single pane of glass, dramatically decreasing the mean time to detect (MTTD) and mean time to resolve (MTTR) problems. The power lies in seeing the whole picture, not just isolated points. ... Collecting information is good, but acting on it is better. Define meaningful service level objectives (SLOs) that reflect the expected performance and reliability of your services.  ... Monitoring microservices effectively is an ongoing journey that requires a commitment to standardization of data, using the right tools and a proactive mindset. By adopting standardized observability practices and a unified observability stack, continuously monitoring key metrics, setting meaningful SLOs and enabling enhanced root cause analysis, you can construct a strong and resilient microservices architecture that truly serves your business needs and delights your customers.
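An SLO only becomes actionable once it is translated into an error budget. The sketch below shows the standard arithmetic: a 99.9% availability target over a 30-day window leaves roughly 43 minutes of tolerable downtime, which teams can then spend on incidents or deliberate risk-taking.

```python
# Converting an availability SLO into an error budget (standard SRE math).

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed unavailability for a given SLO over the window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

budget = error_budget_minutes(0.999)
print(round(budget, 1))  # 43.2 minutes over 30 days
```

When monitoring shows the budget burning faster than the window elapses, that is the signal to slow releases and prioritize reliability work.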


How military leadership prepares veterans for cybersecurity success

After dealing with extremely high-pressure environments, in which making the wrong decision can cost lives, veterans and reservists have little trouble dealing with the kinds of risks found in the world of business, such as threats to revenue, brand value and jobs. What’s more, the time-critical mission mindset so essential within the military is highly relevant within cybersecurity, where attacks and breaches must be dealt with confidently, rapidly and calmly. In the armed forces, people often find themselves in situations so intense that Maslow’s hierarchy of needs is flipped on its head. You’re not aiming for self-actualization or more advanced goals, but simply trying to keep the team alive and maintain essential operations. ... Military experience, on the other hand, fosters unparalleled trust, honesty and integrity within teams. Armed forces personnel must communicate really difficult messages. Telling people that many of them may die within hours demands a harsh honesty, but it builds trust. Combine this with an ability to achieve shared goals, and military leaders inspire others to follow them regardless of the obstacles. So veterans bring blunt honesty, communication, and a mission focus to do what is needed to succeed. These are all characteristics that are essential in cybersecurity, where you have to call out critical risks that others might avoid discussing.


Reclaiming the Architect’s Role in the SDLC

Without an active architect guiding the design and implementation, systems can experience architectural drift, a term that describes the gradual divergence from the intended system design, leading to a fragmented and harder-to-manage system. In the absence of architectural oversight, development teams may optimize for individual tasks at the expense of the system’s overall performance, scalability, and maintainability. ... The architect is primarily accountable for the overall design and ensuring the system’s quality, performance, scalability, and adaptability to meet changing demands. However, relying on outdated practices, like manually written and updated design documents, is no longer effective. The modern software landscape, with multiple services, external resources, and off-the-shelf integrations, makes such documents stale almost as soon as they’re written. Consequently, architects must use automated tools to document and monitor live system architectures. These tools can help architects identify potential issues almost in real time, which allows them to proactively address problems and ensure design integrity throughout the development process. These tools are especially useful in the design stage, allowing architects to reclaim the role they once possessed and the responsibilities that come with it.


Is Vibe Coding Ready for Prime Time?

As the vibe coding ecosystem matures, AI coding platforms are rolling out safeguards like dev/prod separation, backups/rollback, single sign-on, and SOC 2 reporting, yet audit logging is still not uniform across tools. But until these enterprise-grade controls become standard, organizations must proactively build their own guardrails to ensure AI-generated code remains secure, scalable and trustworthy. This calls for a risk-based approach, one that adjusts oversight based on the likelihood and impact of potential risks. Not all use cases carry the same weight. Some are low-stakes and well-suited for experimentation, while others may introduce serious security, regulatory or operational risks. By focusing controls where they’re most needed, a risk-based approach helps protect critical systems while still enabling speed and innovation in safer contexts. ... To effectively manage the risks of vibe coding, teams need to ask targeted questions that reflect the unique challenges of AI-generated code. These questions help determine how much oversight is needed and whether vibe coding is appropriate for the task at hand. ... Vibe coding unlocks new ways of thinking for software development. However, it also shifts risk upstream. The speed of code generation doesn’t eliminate the need for review, control and accountability. In fact, it makes those even more important.


7 reasons the SOC is in crisis — and 5 steps to fix it

The problem is that our systems verify accounts, not actual people. Once an attacker assumes a user’s identity through social engineering, they can often operate within normal parameters for extended periods. Most detection systems aren’t sophisticated enough to recognise that John Doe’s account is being used by someone who isn’t actually John Doe. ... In large enterprises with organic system growth, different system owners, legacy environments, and shadow SaaS integrations, misconfigurations are inevitable. No vulnerability scanner will flag identity systems configured inconsistently across domains, cloud services with overly permissive access policies, or network segments that bypass security controls. These misconfigurations often provide attackers with the lateral movement opportunities they need once they’ve gained initial access through compromised credentials. Yet most organizations have no systematic approach to identifying and remediating these architectural weaknesses. ... External SOC providers offer round-the-clock monitoring and specialised expertise, but they lack the organizational context that makes detection effective. They don’t understand your business processes, can’t easily distinguish between legitimate and suspicious activities, and often lack the authority to take decisive action.


One Network: Cloud-Agnostic Service and Policy-Oriented Network Architecture

The goal of One Network is to enable uniform policies across services. To do so, we are looking to overcome the complexities of heterogeneous networking, different language runtimes, and the coexistence of monolith services and microservices. These complexities span multiple environments, including public, private, and multi-cloud setups. The idea behind One Network is to simplify the current state of affairs by asking, "Why do I need so many networks? Can I have one network?" ... One Network enables you to manage such a service by applying governance, orchestrating policy, and managing the small independent services. Each of these microservices is imagined as a service endpoint. This enables orchestrating and grouping these service endpoints without application developers needing to modify service implementation, so everything is done on a network. There are three ways to manage these service endpoints. The first is the classic model: you add a load balancer before a workload, such as a shopping cart service running in multiple regions, and that becomes your service endpoint. ... If you start with a flat network but want to create boundaries, you can segment by exposing only certain services and keeping others hidden.
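The segmentation idea at the end can be sketched as an endpoint registry where exposure is a policy attribute rather than a network property. The registry structure, field names, and services below are invented for illustration and are not One Network's actual API; they only show how a flat set of endpoints can present a narrower, policy-controlled surface.

```python
# Hypothetical sketch: a flat service-endpoint registry where policy,
# not network topology, decides what crosses the segment boundary.

endpoints = {
    "shopping-cart": {"regions": ["us-east", "eu-west"], "exposed": True},
    "inventory-db": {"regions": ["us-east"], "exposed": False},
    "payments": {"regions": ["us-east"], "exposed": True},
}

def visible_services(registry: dict) -> list[str]:
    # Only endpoints marked exposed are reachable from outside the segment;
    # everything else stays hidden without changing service code.
    return sorted(name for name, cfg in registry.items() if cfg["exposed"])

assert visible_services(endpoints) == ["payments", "shopping-cart"]
```

Because the boundary is a registry attribute, tightening it is a policy change applied on the network, consistent with the article's point that application developers need not modify service implementations.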