Showing posts with label data governance. Show all posts
Showing posts with label data governance. Show all posts

Daily Tech Digest - August 15, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


DevSecOps 2.0: How Security-First DevOps Is Redefining Software Delivery

DevSecOps 2.0 is a true security-first revolution. This paradigm shift transforms software security into a proactive enabler, leveraging AI and policy-as-code to automate safeguards at scale. Security tools now blend seamlessly into developer workflows, and continuous compliance ensures real-time auditing. With ransomware, supply chain attacks, and other attacks on the rise, there is a need for a different approach to delivering resilient software. ... It marks a transformative approach to software development, where security is the foundation of the entire lifecycle. This evolution ensures proactive security that works to identify and neutralize threats early. ... AI-driven security is central to DevSecOps 2.0, which harnesses the power of artificial intelligence to transform security from a reactive process into a proactive defense strategy. By analyzing vast datasets, including security logs, network traffic, and code commit patterns, AI can detect subtle anomalies and predict potential threats before they materialize. This predictive capability enables teams to identify risks early, streamlining threat detection and facilitating automated remediation. For instance, AI can analyze commit patterns to predict code sections likely to contain vulnerabilities, allowing for targeted testing and prevention. 


What CIOs can do when AI boosts performance but kills motivation

“One of the clearest signs is copy-paste culture,” Anderson says. “When employees use AI output as-is, without questioning it or tailoring it to their audience, that’s a sign of disengagement. They’ve stopped thinking critically.” To prevent this, CIOs can take a closer look at how teams actually use AI. Honest feedback from employees can help, but there’s often a gap between what people say they use AI for and how they actually use it in practice, so trying to detect patterns of copy-paste usage can help improve workflows. CIOs should also pay attention to how AI affects roles, identities, and team dynamics. When experienced employees feel replaced, or when previously valued skills are bypassed, morale can quietly drop, even if productivity remains high on paper. “In one case, a senior knowledge expert, someone who used to be the go-to for tough questions, felt displaced when leadership started using AI to get direct answers,” Anderson says. “His motivation dropped because he felt his value was being replaced by a tool.” Over time, this expert started to use AI strategically, and saw it could reduce the ad-hoc noise and give him space for more strategic work. “That shift from threatened to empowered is something every leader needs to watch for and support,” he adds.


That ‘cheap’ open-source AI model is actually burning through your compute budget

The inefficiency is particularly pronounced for Large Reasoning Models (LRMs), which use extended “chains of thought” to solve complex problems. These models, designed to think through problems step-by-step, can consume thousands of tokens pondering simple questions that should require minimal computation. For basic knowledge questions like “What is the capital of Australia?” the study found that reasoning models spend “hundreds of tokens pondering simple knowledge questions” that could be answered in a single word. ... The research revealed stark differences between model providers. OpenAI’s models, particularly its o4-mini and newly released open-source gpt-oss variants, demonstrated exceptional token efficiency, especially for mathematical problems. The study found OpenAI models “stand out for extreme token efficiency in math problems,” using up to three times fewer tokens than other commercial models. ... The findings have immediate implications for enterprise AI adoption, where computing costs can scale rapidly with usage. Companies evaluating AI models often focus on accuracy benchmarks and per-token pricing, but may overlook the total computational requirements for real-world tasks. 


AI Agents and the data governance wild west

Today, anyone from an HR director to a marketing intern can quickly build and deploy an AI agent simply using Copilot Studio. This tool is designed to be accessible and quick, making it easy for anyone to play around with and launch a sophisticated agent in no time at all. But when these agents are created outside of the IT department, most users aren’t thinking about data classification or access controls, and they become part of a growing shadow IT problem. ... The problem is that most users will not be thinking like a developer with governance in mind when creating their own agents. Therefore, policies must be imposed to ensure that key security steps aren’t skipped in the rush to deploy a solution. A new layer of data governance must be considered with steps that include configuring data boundaries, restricting who can access what data according to job role and sensitivity level, and clearly specifying which data resources the agent can pull from. AI agents should be built for purpose, using principles of least privilege. This will help avoid a marketing intern having access to the entire company’s HR file. Just like any other business-critical application, it needs to be adequately tested and ‘red-teamed’. Perform penetration testing to identify what data the agent can surface, to who, and how accurate the data is.


Monitoring microservices: Best practices for robust systems

Collecting extensive amounts of telemetry data is most beneficial if you can combine, visualize and examine it successfully. A unified observability stack is paramount. By integrating tools like middleware that work together seamlessly, you create a holistic view of your microservices ecosystem. These unified tools ensure that all your telemetry information — logs, traces and metrics — is correlated and accessible from a single pane of glass, dramatically decreasing the mean time to detect (MTTD) and mean time to resolve (MTTR) problems. The energy lies in seeing the whole photograph, no longer just remote points. ... Collecting information is good, but acting on it is better. Define significant service level objectives (SLOs) that replicate the predicted performance and reliability of your offerings.  ... Monitoring microservices effectively is an ongoing journey that requires a commitment to standardization of data, using the right tools and a proactive mindset. By utilizing standardized observability practices, adapting a unified observability stack, continuously monitoring key metrics, placing meaningful SLOs and allowing enhanced root cause analysis, you may construct a strong and resilient microservices structure that truly serves your business desires and delights your customers. 


How military leadership prepares veterans for cybersecurity success

After dealing with extremely high-pressure environments, in which making the wrong decision can cost lives, veterans and reservists have little trouble dealing with the kinds of risks found in the world of business, such as threats to revenue, brand value and jobs. What’s more, the time-critical mission mindset so essential within the military is highly relevant within cybersecurity, where attacks and breaches must be dealt with confidently, rapidly and calmly. In the armed forces, people often find themselves in situations so intense that Maslow’s hierarchy of needs is flipped on its head. You’re not aiming for self-actualization or more advanced goals, but simply trying to keep the team alive and maintain essential operations. ... Military experience, on the other hand, fosters unparalleled trust, honesty and integrity within teams. Armed forces personnel must communicate really difficult messages. Telling people that many of them may die within hours demands a harsh honesty, but it builds trust. Combine this with an ability to achieve shared goals, and military leaders inspire others to follow them regardless of the obstacles. So veterans bring blunt honesty, communication, and a mission focus to do what is needed to succeed. These are all characteristics that are essential in cybersecurity, where you have to call out critical risks that others might avoid discussing.


Reclaiming the Architect’s Role in the SDLC

Without an active architect guiding the design and implementation, systems can experience architectural drift, a term that describes the gradual divergence from the intended system design, leading to a fragmented and harder-to-manage system. In the absence of architectural oversight, development teams may optimize for individual tasks at the expense of the system’s overall performance, scalability, and maintainability. ... The architect is primarily accountable for the overall design and ensuring the system’s quality, performance, scalability, and adaptability to meet changing demands. However, relying on outdated practices, like manually written and updated design documents, is no longer effective. The modern software landscape, with multiple services, external resources, and off-the-shelf integrations, makes such documents stale almost as soon as they’re written. Consequently, architects must use automated tools to document and monitor live system architectures. These tools can help architects identify potential issues almost in real time, which allows them to proactively address problems and ensure design integrity throughout the development process. These tools are especially useful in the design stage, allowing architects to reclaim the role they once possessed and the responsibilities that come with it.


Is Vibe Coding Ready for Prime Time?

As the vibe coding ecosystem matures, AI coding platforms are rolling out safeguards like dev/prod separation, backups/rollback, single sign-on, and SOC 2 reporting, yet audit logging is still not uniform across tools. But until these enterprise-grade controls become standard, organizations must proactively build their own guardrails to ensure AI-generated code remains secure, scalable and trustworthy. This calls for a risk-based approach, one that adjusts oversight based on the likelihood and impact of potential risks. Not all use cases carry the same weight. Some are low-stakes and well-suited for experimentation, while others may introduce serious security, regulatory or operational risks. By focusing controls where they’re most needed, a risk-based approach helps protect critical systems while still enabling speed and innovation in safer contexts. ... To effectively manage the risks of vibe coding, teams need to ask targeted questions that reflect the unique challenges of AI-generated code. These questions help determine how much oversight is needed and whether vibe coding is appropriate for the task at hand. ... Vibe coding unlocks new ways of thinking for software development. However, it also shifts risk upstream. The speed of code generation doesn’t eliminate the need for review, control and accountability. In fact, it makes those even more important.


7 reasons the SOC is in crisis — and 5 steps to fix it

The problem is that our systems verify accounts, not actual people. Once an attacker assumes a user’s identity through social engineering, they can often operate within normal parameters for extended periods. Most detection systems aren’t sophisticated enough to recognise that John Doe’s account is being used by someone who isn’t actually John Doe. ... In large enterprises with organic system growth, different system owners, legacy environments, and shadow SaaS integrations, misconfigurations are inevitable. No vulnerability scanner will flag identity systems configured inconsistently across domains, cloud services with overly permissive access policies, or network segments that bypass security controls. These misconfigurations often provide attackers with the lateral movement opportunities they need once they’ve gained initial access through compromised credentials. Yet most organizations have no systematic approach to identifying and remediating these architectural weaknesses. ... External SOC providers offer round-the-clock monitoring and specialised expertise, but they lack the organizational context that makes detection effective. They don’t understand your business processes, can’t easily distinguish between legitimate and suspicious activities, and often lack the authority to take decisive action.


One Network: Cloud-Agnostic Service and Policy-Oriented Network Architecture

The goal of One Network is to enable uniform policies across services. To do so, we are looking to overcome the complexities of heterogeneous networking, different language runtimes, and the coexistence of monolith services and microservices. These complexities span multiple environments, including public, private, and multi-cloud setups. The idea behind One Network is to simplify the current state of affairs by asking, "Why do I need so many networks? Can I have one network?" ... One Network enables you to manage such a service by applying governance, orchestrating policy, and managing the small independent services. Each of these microservices is imagined as a service endpoint. This enables orchestrating and grouping these service endpoints without application developers needing to modify service implementation, so everything is done on a network. There are three ways to manage these service endpoints. The first is the classic model: you add a load balancer before a workload, such as a shopping cart service running in multiple regions, and that becomes your service endpoint. ... If you start with a flat network but want to create boundaries, you can segment by exposing only certain services and keeping others hidden. 

Daily Tech Digest - August 14, 2025


Quote for the day:

"Act as if what you do makes a difference. It does." -- William James


What happens the day after superintelligence?

As context, artificial superintelligence (ASI) refers to systems that can outthink humans on most fronts, from planning and reasoning to problem-solving, strategic thinking and raw creativity. These systems will solve complex problems in a fraction of a second that might take the smartest human experts days, weeks or even years to work through. ... So ask yourself, honestly, how will humans act in this new reality? Will we reflexively seek advice from our AI assistants as we navigate every little challenge we encounter? Or worse, will we learn to trust our AI assistants more than our own thoughts and instincts? ... Imagine walking down the street in your town. You see a coworker heading towards you. You can’t remember his name, but your AI assistant does. It detects your hesitation and whispers the coworker’s name into your ears. The AI also recommends that you ask the coworker about his wife, who had surgery a few weeks ago. The coworker appreciates the sentiment, then asks you about your recent promotion, likely at the advice of his own AI. Is this human empowerment, or a loss of human agency? ... Many experts believe that body-worn AI assistants will make us feel more powerful and capable, but that’s not the only way this could go. These same technologies could make us feel less confident in ourselves and less impactful in our lives.


Confidential Computing: A Solution to the Uncertainty of Using the Public Cloud

Confidential computing is a way to ensure that no external party can look at your data and business logic while it is executed. It looks to secure Data in Use. When you now add to that the already established way to secure Data at Rest and Data in Transit it can be ensured that most likely no external party can access secured data running in a confidential computing environment wherever that may be. ... To be able to execute services in the cloud the company needs to be sure that the data and the business logic cannot be accessed or changed from third parties especially by the system administrator of that cloud provider. It needs to be protected. Or better, it needs to be executed in the Trusted Compute Base (TCB) of the company. This is the environment where specific security standards are set to restrict all possible access to data and business logic. ... Here attestation is used to verify that a confidential environment (instance) is securely running in the public cloud and it can be trusted to implement all the security standards necessary. Only after successful attestation the TCB is then extended into the Public cloud to incorporate the attested instances. One basic requirement of attestation is that the attestation service is located independently of the infrastructure where the instance is running. 


Open Banking's Next Phase: AI, Inclusion and Collaboration

Think of open banking as the backbone for secure, event-driven automation: a bill gets paid, and a savings allocation triggers instantly across multiple platforms. The future lies in secure, permissioned coordination across data silos, and when applied to finance, it unlocks new, high-margin services grounded in trust, automation and personalisation. ... By building modular systems that handle hierarchy, fee setup, reconciliation and compliance – all in one cohesive platform – we can unlock new revenue opportunities. ... Regulators must ensure they are stepping up efforts to sustain progress and support fintech innovation whilst also meeting their aim to keep customers safe. Work must also be done to boost public awareness of the value of open banking. Many consumers are unaware of the financial opportunities open banking offers and some remain wary of sharing their data with unknown third parties. ... Rather than duplicating efforts or competing head-to-head, institutions and fintechs should focus on co-developing shared infrastructure. When core functions like fee management, operational controls and compliance processes are unified in a central platform, fintechs can innovate on customer experience, while banks provide the stability, trust and reach. 


Data centers are eating the economy — and we’re not even using them

Building new data centers is the easy solution, but it’s neither sustainable nor efficient. As I’ve witnessed firsthand in developing compute orchestration platforms, the real problem isn’t capacity. It’s allocation and optimization. There’s already an abundant supply sitting idle across thousands of data centers worldwide. The challenge lies in efficiently connecting this scattered, underutilized capacity with demand. ... The solution isn’t more centralized infrastructure. It’s smarter orchestration of existing resources. Modern software can aggregate idle compute from data centers, enterprise servers, and even consumer devices into unified, on-demand compute pools. ... The technology to orchestrate distributed compute already exists. Some network models already demonstrate how software can abstract away the complexity of managing resources across multiple providers and locations. Docker containers and modern orchestration tools make workload portability seamless. The missing piece is just the industry’s willingness to embrace a fundamentally different approach. Companies need to recognize that most servers are idle 70%-85% of the time. It’s not a hardware problem requiring more infrastructure. 


How an AI-Based 'Pen Tester' Became a Top Bug Hunter on HackerOne

While GenAI tools can be extremely effective at finding potential vulnerabilities, XBOW's team found they were't very good at validating the findings. The trick to making a successful AI-driven pen tester, Dolan-Gavitt explained, was to use something other than an LLM to verify the vulnerabilities. In this case of XBOW, researchers used a deterministic validation approach. "Potentially, maybe in a couple years down the road, we'll be able to actually use large language models out of the box to verify vulnerabilities," he said. "But for today, and for the rest of this talk, I want to propose and argue for a different way, which is essentially non-AI, deterministic code to validate vulnerabilities." But AI still plays an integral role with XBOW's pen tester. Dolan-Gavitt said the technology uses a capture-the-flag (CTF) approach in which "canaries" are placed in the source code and XBOW sends AI agents after them to see if they can access them. For example, he said, if researchers want to find a remote code execution (RCE) flaw or an arbitrary file read vulnerability, they can plant canaries on the server's file system and set the agents loose. ... Dolan-Gavitt cautioned that AI-powered pen testers are not panacea. XBOW still sees some false positives because some vulnerabilities, like business logic flaws, are difficult to validate automatically.


Data Governance Maturity Models and Assessments: 2025 Guide

Data governance maturity frameworks help organizations assess their data governance capabilities and guide their evolution toward optimal data management. To implement a data governance or data management maturity framework (a “model”) it is important to learn what data governance maturity is, explore how and why it should be assessed, discover various maturity models and their features, and understand the common challenges associated with using maturity models. Data governance maturity refers to the level of sophistication and effectiveness with which an organization manages its data governance processes. It encompasses the extent to which an organization has implemented, institutionalized, and optimized its data governance practices. A mature data governance framework ensures that the organization can support its business objectives with accurate, trusted, and accessible data. Maturity in data governance is typically assessed through various models that measure different aspects of data management such as data quality and compliance and examine processes for managing data’s context (metadata) and its security. Maturity models provide a structured way to evaluate where an organization stands and how it can improve for a given function.


Open-source flow monitoring with SENSOR: Benefits and trade-offs

Most flow monitoring setups rely on embedded flow meters that are locked to a vendor and require powerful, expensive devices. SENSOR shows it’s possible to build a flexible and scalable alternative using only open tools and commodity hardware. It also allows operators to monitor internal traffic more comprehensively, not just what crosses the network border. ... For a large network, that can make troubleshooting and oversight more complex. “Something like this is fine for small networks,” David explains, “but it certainly complicates troubleshooting and oversight on larger networks.” David also sees potential for SENSOR to expand beyond historical analysis by adding real-time alerting. “The paper doesn’t describe whether the flow collectors can trigger alarms for anomalies like rapidly spiking UDP traffic, which could indicate a DDoS attack in progress. Adding real-time triggers like this would be a valuable enhancement that makes SENSOR more operationally useful for network teams.” ... “Finally, the approach is fragile. It relies on precise bridge and firewall configurations to push traffic through the RouterOS stack, which makes it sensitive to updates, misconfigurations, or hardware changes. 


Network Segmentation Strategies for Hybrid Environments

It's not a simple feat to implement network segmentation. Network managers must address network architectural issues, obtain tools and methodologies, review and enact security policies, practices and protocols, and -- in many cases -- overcome political obstacles. ... The goal of network segmentation is to place the most mission-critical and sensitive resources and systems under comprehensive security for a finite ecosystem of users. From a business standpoint, it's equally critical to understand the business value of each network asset and to gain support from users and management before segmenting. ... Divide the network segments logically into security segments based on workload, whether on premises, cloud-based or within an extranet. For example, if the Engineering department requires secure access to its product configuration system, only that team would have access to the network segment that contains the Engineering product configuration system. ... A third prong of segmented network security enforcement in hybrid environments is user identity management. Identity and access management (IAM) technology identifies and tracks users at a granular level based on their authorization credentials in on-premises networks but not on the cloud. 


Convergence of AI and cybersecurity has truly transformed the CISO’s role

The most significant impact of AI in security at present is in automation and predictive analysis. Automation especially when enhanced with AI, such as integrating models like Copilot Security with tools like Microsoft Sentinel allows organisations to monitor thousands of indicators of compromise in milliseconds and receive instant assessments. ... The convergence of AI and cybersecurity has truly transformed the CISO’s role, especially post-pandemic when user locations and systems have become unpredictable. Traditionally, CISOs operated primarily as reactive defenders responding to alerts and attacks as they arose. Now, with AI-driven predictive analysis, we’re moving into a much more proactive space. CISOs are becoming strategic risk managers, able to anticipate threats and respond with advanced tools. ... Achieving real-time threat detection in the cloud through AI requires the integration of several foundational pillars that work in concert to address the complexity and speed of modern digital environments. At the heart of this approach is the adoption of a Zero Trust Architecture: rather than assuming implicit trust based on network perimeters, this model treats every access request whether to data, applications, or infrastructure as potentially hostile, enforcing strict verification and comprehensive compliance controls. 


Initial Access Brokers Selling Bundles, Privileges and More

"By the time a threat actor logs in using the access and privileged credentials bought from a broker, a lot of the heavy lifting has already been done for them. Therefore, it's not about if you're exposed, but whether you can respond before the intrusion escalates." More than one attacker may use any given initial access, either because the broker sells it to multiple customers, or because a customer uses the access for one purpose - say, to steal data - then sells it on to someone else, who perhaps monetizes their purchase by further ransacking data and unleashing ransomware. "Organizations that unwittingly have their network access posted for sale on initial access broker forums have already been victimized once, and they are on their way to being victimized once again when the buyer attacks," the report says. ... "Access brokers often create new local or domain accounts, sometimes with elevated privileges, to maintain persistence or allow easier access for buyers," says a recent report from cybersecurity firm Kela. For detecting such activity, "unexpected new user accounts are a major red flag." So too is "unusual login activity" to legitimate accounts that traces to never-before-seen IP addresses, or repeat attempts that only belatedly succeed, Kela said. "Watch for legitimate accounts doing unusual actions or accessing resources they normally don't - these can be signs of account takeover."

Daily Tech Digest - August 13, 2025


Quote for the day:

“You don’t lead by pointing and telling people some place to go. You lead by going to that place and making a case.” -- Ken Kesey


9 things CISOs need know about the dark web

There’s a growing emphasis on scalability and professionalization, with aggressive promotion and recruitment for ransomware-as-a-service (RaaS) operations. This includes lucrative affiliate programs to attract technically skilled partners and tiered access enabling affiliates to pay for premium tools, zero-day exploits or access to pre-compromised networks. It’s fragmenting into specialized communities that include credential marketplaces, exploit exchanges for zero-days, malware kits, and access to compromised systems, and forums for fraud tools. Initial access brokers (IABs) are thriving, selling entry points into corporate environments, which are then monetized by ransomware affiliates or data extortion groups. Ransomware leak sites showcase attackers’ successes, publishing sample files, threats of full data dumps as well as names and stolen data of victim organizations that refuse to pay. ... While DDoS-for-hire services have existed for years, their scale and popularity are growing. “Many offer free trial tiers, with some offering full-scale attacks with no daily limits, dozens of attack types, and even significant 1 Tbps-level output for a few thousand dollars,” Richard Hummel, cybersecurity researcher and threat intelligence director at Netscout, says. The operations are becoming more professional and many platforms mimic legitimate e-commerce sites displaying user reviews, seller ratings, and dispute resolution systems to build trust among illicit actors.


CMMC Compliance: Far More Than Just an IT Issue

For many years, companies working with the US Department of Defense (DoD) treated regulatory mandates including the Cybersecurity Maturity Model Certification (CMMC) as a matter best left to the IT department. The prevailing belief was that installing the right software and patching vulnerabilities would suffice. Yet, reality tells a different story. Increasingly, audits and assessments reveal that when compliance is seen narrowly as an IT responsibility, significant gaps emerge. In today’s business environment, managing controlled unclassified information (CUI) and federal contract information (FCI) is a shared responsibility across various departments – from human resources and manufacturing to legal and finance. ... For CMMC compliance, there needs to be continuous assurance involving regularly monitoring systems, testing controls and adapting security protocols whenever necessary. ... Businesses are having to rethink much of their approach to security because of CMMC requirements. Rather than treating it as something to be handed off to the IT department, organizations must now commit to a comprehensive, company-wide strategy. Integrating thorough physical security, ongoing training, updated internal policies and steps for continuous assurance mean companies can build a resilient framework that meets today’s regulatory demands and prepares them to rise to challenges on the horizon.


Beyond Burnout: Three Ways to Reduce Frustration in the SOC

For years, we’ve heard how cybersecurity leaders need to get “business smart” and better understand business operations. That is mostly happening, but it’s backwards. What we need is for business leaders to learn cybersecurity, and even further, recognize it as essential to their survival. Security cannot be viewed as some cost center tucked away in a corner; it’s the backbone of your entire operation. It’s also part of an organization’s cyber insurance – the internal insurance. Simply put, cybersecurity is the business, and you absolutely cannot sell without it. ... SOCs face a deluge of alerts, threats, and data that no human team can feasibly process without burning out. While many security professionals remain wary of artificial intelligence, thoughtfully embracing AI offers a path toward sustainable security operations. This isn’t about replacing analysts with technology. It’s about empowering them to do the job they actually signed up for. AI can dramatically reduce toil by automating repetitive tasks, provide rapid insights from vast amounts of data, and help educate junior staff. Instead of spending hours manually reviewing documents, analysts can leverage AI to extract key insights in minutes, allowing them to apply their expertise where it matters most. This shift from mundane processing to meaningful analysis can dramatically improve job satisfaction.


7 legal considerations for mitigating risk in AI implementation

AI systems often rely on large volumes of data, including sensitive personal, financial and business information. Compliance with data privacy laws is critical, as regulations such as the European Union’s General Data Protection Regulation, the California Consumer Privacy Act and other emerging state laws impose strict requirements on the collection, processing, storage and sharing of personal data. ... AI systems can inadvertently perpetuate or amplify biases present in training data, leading to unfair or discriminatory outcomes. This risk is present in any sector, from hiring and promotions to customer engagement and product recommendations. ... The legal framework surrounding AI is evolving rapidly. In the U.S., multiple federal agencies, including the Federal Trade Commission and Equal Employment Opportunity Commission, have signaled they will apply existing laws to AI use cases. AI-specific state laws, including in California and Utah, have taken effect in the last year. ... AI projects involve unique intellectual property questions related to data ownership and IP rights in AI-generated works. ... AI systems can introduce new cybersecurity vulnerabilities, including risks related to data integrity, model manipulation and adversarial attacks. Organizations must prioritize cybersecurity to protect AI assets and maintain trust.


Forrester’s Keys To Taming ‘Jekyll and Hyde’ Disruptive Tech

“Disruptive technologies are a double-edged sword for environmental sustainability, offering both crucial enablers and significant challenges,” explained the 15-page report written by Abhijit Sunil, Paul Miller, Craig Le Clair, Renee Taylor-Huot, Michele Pelino, with Amy DeMartine, Danielle Chittem, and Peter Harrison. “On the positive side,” it continued, “technology innovations accelerate energy and resource efficiency, aid in climate adaptation and risk mitigation, monitor crucial sustainability metrics, and even help in environmental conservation.” “However,” it added, “the necessary compute power, volume of waste, types of materials needed, and scale of implementing these technologies can offset their benefits.” ... “To meet sustainability goals with automation and AI,” he told TechNewsWorld, “one of our recommendations is to develop proofs of concept for ‘stewardship agents’ and explore emerging robotics focused on sustainability.” When planning AI operations, Franklin Manchester, a principal global industry advisor at SAS, an analytics and artificial intelligence software company in Cary, N.C., cautioned, “Not every nut needs to be cracked with a sledgehammer.” “Start with good processes — think lean process mapping, for example — and deploy AI where it makes sense to do so,” he told TechNewsWorld.


5 Key Benefits of Data Governance

Data governance processes establish data ethics, a code of behavior providing a trustworthy business climate and compliance with regulatory requirements. The IAPP calculates that 79% of the world’s population is now protected under privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). This statistic highlights the importance of governance frameworks for risk management and customer trust. ... Data governance frameworks recognize data governance roles and responsibilities and streamline processes so that corporate-wide communications can improve. This systematic approach sets up businesses to be more agile, increasing the “freedom to innovate, invest, or hunker down and focus internally,” says O’Neal. For example, Freddie Mac developed a solid data strategy that streamlined data governance communications and later had the level of buy-in for the next iteration. ... With a complete picture of business activities, challenges, and opportunities, data governance creates the flexibility to respond quickly to changing needs. This allows for better self-service business intelligence, where business users can gather multi-structured data from various sources and convert it into actionable intelligence.


Architecture Lessons from Two Digital Transformations

The prevailing mindset was that of “Don’t touch what isn’t broken”. This approach, though seemingly practical, reflected a deeper inertia, rooted in a cash-strapped culture and leadership priorities that often leaned towards prestige over progress. Over the years, the organization had acquired others in an attempt to grow its customer base. These mergers and acquisitions lead to inheritance of a lot more legacy estate. The mess burgeoned to an extent that they needed a transformation, not now, but yesterday! That is exactly where the Enterprise Architecture practice comes into picture. Strategically, a green field approach was suggested. A brand-new system from scratch, that has modern data centers for the infrastructure, cloud platforms for the applications, plug and play architecture or composable architecture as it is better known, for technology, unified yet diversified multi-branding under one umbrella and the whole works. Where things slowly started taking a downhill turn is when they decided to “outsource” the entire development of this new and shiny platform to a vendor. The reasoning was that the organization did not want to diversify from being a banking institution and turn into an IT heavy organization. They sought experienced engineering teams who could hit the ground running and deliver in 2 years flat.


Cloud security in multi-tenant environments

The most useful security strategy in a multi-tenant cloud environment comes from cultivating a security-first culture. It is important to educate the team on the intricacies of the cloud security system, implementing stringent password and authentication policies, thereby promoting secure practices for development. Security teams and company executives may reduce the possible effects of breaches and remain ready for changing threats with the support of event simulations, tabletop exercises, and regular training. ... As we navigate the evolving landscape of enterprise cloud computing, multi-tenant environments will undoubtedly remain a cornerstone of modern IT infrastructure. However, the path forward demands more than just technological adaptation – it requires a fundamental shift in how we approach security in shared spaces. Organizations must embrace a comprehensive defense-in-depth strategy that transcends traditional boundaries, encompassing everything from robust infrastructure hardening to sophisticated application security and meticulous user governance. The future of cloud computing need not present a binary choice between efficiency and security. ... By placing security at the heart of multi-tenant operations, organizations can fully harness the transformative power of cloud technology while protecting their most critical assets 


This Big Data Lesson Applies to AI

Bill Schmarzo was one of the most vocal supporters of the idea that there were no silver bullets, and that successful business transformation was the result of careful planning and a lot of hard work. A decade ago, the “Dean of Big Data” let this publication in on secret recipe he would use to guide his clients. He called it the SAM test, and it allowed business leaders to gauge the viability of new IT projects through three lenses.First, is the new project strategic? That is, will it make a big difference for the company? If it won’t, why are you investing lots of money? Second, is the proposed project actionable? You might be able to get some insight with the new tech, but can your business actually do anything with it? Third, is the project material? The new project might technically be feasible, but if the costs outweigh the benefits, then it’s a failure. Schmarzo, who is currently working as Dell’s Customer AI and Data Innovation Strategist, was also a big proponent of the importance of data governance and data management. The same data governance and data management bugaboos that doomed so many big data projects are, not surprisingly, raising their ugly little heads in the age of AI. Which brings us to the current AI hype wave. We’re told that trillions of dollars are on the line with large language models, that we’re on the cusp of a technological transformation the likes of which we have never seen. 


Sovereign cloud and digital public infrastructure: Building India’s AI backbone

India’s Digital Public Infrastructure (DPI) is an open, interoperable platform that powers essential services like identity and payments. It comprises foundational systems that are accessible, secure, and support seamless integration. In practice, this has taken shape as the famous “India Stack.” ... India’s digital economy is on an exciting trajectory. A large slice of that will be AI-driven services like smart agriculture, precision health, financial inclusion, and more. But to fully capitalize on this opportunity, we need both rich data and trusted compute. DPI provides vast amounts of structured data (financial records, IDs, health info) and access channels. Combining that with a sovereign cloud means we can turn data into insight on Indian soil. Indian regulators now view data itself as a strategic asset and fuel for AI. AI pilots (e.g., local-language advisory bots) are already being built on top of DPI platforms (UPI, ONDC, etc.) to deliver inclusive services. And the government has even subsidized thousands of GPUs for researchers. But all this computing and data must be hosted securely. If our AI models and sensitive datasets live on foreign soil, we remain vulnerable to geopolitical shifts and export controls. ... Now, policy is catching up with sovereignty. In 2023, the new Digital Personal Data Protection (DPDP) Act formally mandated local storage for sensitive personal data. 

Daily Tech Digest - July 11, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you them feel." -- Mary Kay Ash


Throwing AI at Developers Won’t Fix Their Problems

Organizations are spending too much time, money and energy focusing on the tools themselves. “Should we use OpenAI or Anthropic? Copilot or Cursor?” We see two broad patterns for how organizations approach AI tool adoption. The first is that leadership has a relationship with a certain vendor or just a personal preference, so they pick a tool and mandate it. This can work, but you’ll often get poor results — not because the tool is bad, but because the market is moving too fast for centralized teams to keep up. ... The second model, which generally works much better, is to allow early adopters to try new tools and find what works. This gives developers autonomy to improve their own workflows and reduces the need for a central team to test every new tool exhaustively. Comparing the tools by features or technology is less important every day. You’ll waste a lot of energy debating minor differences that won’t matter next year. Instead, focus on what problem you want to solve. Are you trying to improve testing? Code review? Documentation? Incident response? Figure out the goal first. Then see if an AI tool (or any tool) actually helps. If you don’t, you’ll just make DevEx worse: You’ll have a landscape of 100 tools nobody knows how to use, and you’ll deliver no real value.


Anatomy of a Scattered Spider attack: A growing ransomware threat evolves

Scattered Spider began its attack against the unnamed organization’s public-facing Oracle Cloud authentication portal, targeting its chief financial officer. Using personal details, such as the CFO’s date of birth and the last four digits of their Social Security number obtained from public sources and previous breaches, Scattered Spider impersonated the CFO in a call to the company’s help desk, tricking help desk staff into resetting the CFO’s registered device and credentials. ... The cybercriminals extracted more than 1,400 secrets by taking advantage of compromised admin accounts tied to the target’s CyberArk password vault and likely an automated script. Scattered Spider granted administrator roles to compromised user accounts before using tools, including ngrok, to maintain access on compromised virtual machines. ... Scattered Spider’s operations have become more aggressive and compressed. “Within hours of initial compromise — often via social engineering — they escalate privileges, move laterally, establish persistence, and begin reconnaissance across both cloud and on-prem environments,” Beek explained. “This speed and fluidity represent a significant escalation in operational maturity.” ... Defending effectively against Scattered Spider involves tackling both human and technical vulnerabilities, ReliaQuest researchers noted.


Data governance: The contract layer that makes agentic systems possible

Today, AI has changed everything. Lineage, access enforcement and cataloging must operate in real time and cover vastly more data types and sources. Models consume data continuously and make decisions instantly, raising the stakes for mistakes or gaps in oversight. What used to be a once-a-week check is now an always-on discipline. This transformation has turned data governance from a checklist into a living system that protects quality and trust at scale. ... One of the biggest misconceptions is that governance slows down innovation. In reality, good governance speeds it up. By clarifying ownership, policies and data quality from the start, teams avoid spending precious time reconciling mismatches and can focus on delivering AI that works as intended. A clear governance framework reduces unnecessary data copies, lowers regulatory risk and prevents AI from producing unpredictable results. Getting this right also requires a culture shift. Producers and consumers alike need to see themselves as co-stewards of shared data products. ... Enterprises deploying agentic AI cannot leave governance behind. These systems run continuously, make autonomous decisions and rely on accurate context to stay relevant. Governance must move from passive checks to an active, embedded foundation within both architecture and culture.


How CIOs Are Navigating Today’s Hyper Volatility

“When it comes to changing dynamics, [such as] AI and driving innovation, there are several things that people like me are dealing with right now. There is an impact on how you hire people, staffing, how to structure your organization,” says Johar. “There is an impact on risk. I’m also responsible within my organization for managing the risk of data, privacy and security, and AI is bringing a new dimension to that risk. It’s an opportunity, but it's also a risk. How you structure your organization, how you manage risk, how you drive transformation -- these things are all connected.” ... “[CIOs] are emerging as transformation leaders, so they need to understand how to navigate the culture change of an organization, the change in people in an organization. They must know how to tell stories so they can get the organization on board,” says Danielle Phaneuf, a partner, PwC cloud and digital strategy operating model leader. “Their mindset is different, so they're embracing the transformation with a product model that allows them to move faster [and] allows them to think long term. They’re building these new muscles around change leadership and engaging the business early, co-creating solutions, not thinking they must solve everything on their own, and doing that in an agile way.”


What Is AI Agent Washing And Why Is It A Risk To Businesses?

You’ve heard of greenwashing and AI-washing? Well, now it seems that the hype-merchants and bandwagon-jumpers with technology to sell have come up with a new (and perhaps predictably inevitable) scam. Analysts at Gartner say unscrupulous vendors are increasingly engaging in "agent washing" and say that out of “thousands” of supposedly agentic AI products tested, only 130 truly lived up to the claim. ... So, what’s the scam? Well, according to the report, agent washing involves passing off existing automation technology, including LLM-powered chatbots and robotic process automation, as agentic, when in reality it lacks those capabilities. ... Tools that claim to be agentic because they orchestrate and pull together multiple AI systems, such as marketing automation platforms and workflow automation tools, are stretching the term, too, unless they are also capable of autonomously coordinating the usage of those tools for long-term planning and decision-making. A few more hypothetical examples: While an AI chatbot-based system can write emails on command, an agentic system might write emails, identify the best recipients for marketing purposes, send the emails out, monitor responses, and then generate follow-up emails, tailored to individual responders.


Agentic AI Architecture Framework for Enterprises

The critical decision point lies in understanding when predictability and control take precedence versus when flexibility and autonomous decision-making deliver greater value. This understanding leads to a fundamental principle: start with the simplest effective solution, adding complexity only when clear business value justifies the additional operational overhead and risk. ... Enterprise deployment of agentic AI creates an inherent tension between AI autonomy and organizational governance requirements. Our Analysis of successful MVPs and on-going production implementations across multiple industries reveals three distinct architectural tiers, each representing different trade-offs between capability and control while anticipating emerging regulatory frameworks like the EU AI Act and others coming. These tiers form a systematic maturity progression, so organizations can build competency and stakeholder trust incrementally before advancing to more sophisticated implementations. ... Our three-tier progression manifests differently across industries, reflecting unique regulatory environments, risk tolerances, customer expectations and operational requirements. Understanding these industry-specific approaches enables organizations to tailor their implementation strategies while maintaining systematic capability development.


Rewriting the rules of enterprise architecture with AI agents

In enterprise architecture, agentic AI systems can be deployed as digital “co-architects”, process optimizers, compliance monitors and scenario planners — each acting with a degree of independence and intelligence previously impossible. So why agentic AI and simulations for governance…and why now? Governance in enterprise architecture is about ensuring that IT systems, processes and data align with business goals, comply with regulations and adapt to change. ... These methods are increasingly inadequate in the face of real-time business dynamics. Agentic AI introduces a new composability model that is achievable: Governance that is continuous, adaptive and proactive. Agentic systems can monitor the enterprise landscape, simulate the impact of changes, enforce policies autonomously and even resolve conflicts or escalate issues when necessary. This results in governance that is both more robust and more responsive to business needs. Gartner’s research reinforces the impact of agency and simulations on enterprise architecture’s future. According to its Enterprise Architecture Services Predictions for 2025, 55% of EA teams will act as coordinators of autonomous governance automation by 2028 and shift from a direct oversight role to that of model curation and certification, agent simulations and oversight, and business outcome alignment with machine-led governance.


With tools like Alpha and Coherence, we’re turning risk management from reactive to real-time

Those days when it was more of a very reactive and process-heavy system, where you had to follow a set of dilutive processes all the time and react to risks being observed in the system, and then you had a standard operating procedure to deal with it step by step. Those days are behind us. That scenario was there for a number of decades. But with AI and intelligent-led solution capabilities transforming the landscape, it has become proactive and extremely real-time. So what we propose, we always have lived by our Digital Knowledge Operations framework. The three words in it: digital, knowledge, and operations. Digital makes you proactive because you’re building solutions not for today but for the future. You rely on knowledge, and you transform your operations. That’s our philosophy that unlocks this proactive ability of capturing the possibilities of risk in real time. That drove us to build something like Alpha. It’s essentially a very strong and effective transaction monitoring framework and tool that can detect a whole lot of false alerts with over 75% to 80% accuracy. Now, in risk management, what happens is that a lot of operational bandwidth, effort, and talent capability is lost in assessing all of these false positives that are generated because of risk management procedures. Most of them can be taken care of by a combination of machine learning, artificial intelligence, and some sort of robotics.


Banking on Better Data: Why Financial Institutions Need an Agile Cloud Strategy

The urgency to migrate to the cloud is particularly pronounced in the banking sector, where legacy institutions are under mounting pressure to keep pace with digital-native competitors. These agile challengers can roll out new features in a matter of weeks, while traditional banks remain constrained by older mainframes. It is clear that the risk of standing still is no longer theoretical. Earlier this year, over 1.2 million UK customers experienced banking outages on pay day, a critical moment for both individuals and businesses. Several major retail banks reported widespread issues, including login failures and prolonged delays in customer service. Far from being one-off glitches, these disruptions point to a broader pattern of structural fragility rooted in outdated technology. Unlike legacy systems, cloud-native platforms are engineered for adaptability, resilience, and real-time performance, which are traits that traditional banking environments have been struggling to deliver. These failures weren’t just accidents; they were foreseeable outcomes of prolonged underinvestment in modernization. This reinforced a critical truth for traditional banks, which is that cloud transformation is no longer a future aspiration, but an immediate requirement to safeguard customer trust and remain viable in a rapidly evolving market.


Why knowledge is the ultimate weapon in the Information Age

To turn AI into an asset rather than a liability, organisations must rethink their approach to knowledge management. At its core, knowledge management is a learning cycle centred on people, with technology acting as a force multiplier, not a substitute for judgment. The objective is to establish a virtuous loop in which data is collected, validated, and transformed into actionable insight. The tighter and more disciplined this cycle, the higher the quality of the resulting knowledge. In practice, this means treating AI as just another tool in the toolkit. ... In an age of information warfare, perception is the battleground. To stay ahead, decision-makers must be trained not just in AI tools but in understanding their strengths, limitations, and potential biases, including their own. The ability to critically assess AI-generated content is essential, not optional. More than static planning, modern organisations need situational awareness and strategic agility, embedding AI within a human-centric knowledge strategy. We can shift the balance in the information war by curating trusted sources, rigorously verifying content, and sustaining a culture of learning. This new knowledge ecosystem embraces uncertainty, leverages AI wisely, and keeps cognitive bias in control, wielding knowledge as a disciplined and secure strategic asset.

Daily Tech Digest - June 25, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein



Why data observability is the missing layer of modern networking

You might hear people use these terms interchangeably, but they’re not the same thing. Visibility is about what you can see – dashboard statistics, logs, uptime numbers, bandwidth figures, the raw data that tells you what’s happening across your network. Observability, on the other hand, is about what that data actually means. It’s the ability to interpret, analyse, and act on those insights. It’s not just about seeing a traffic spike but instead understanding why it happened. It’s not just spotting a latency issue, but knowing which apps are affected and where the bottleneck sits. ... Today, connectivity needs to be smart, agile, and scalable. It’s about building infrastructure that supports cloud, remote work, and everything in between. Whether you’re adding a new site, onboarding a remote team, or launching a cloud-hosted app, your network should be able to scale and respond at speed. Then there’s security, a non-negotiable layer that protects your entire ecosystem. Great security isn’t about throwing up walls, it’s about creating confidence. That means deploying zero trust principles, segmenting access, detecting threats in real time, and encrypting data, without making users lives harder. ... Finally, we come to observability. Arguably the most unappreciated of the three but quickly becoming essential. 


6 Key Security Risks in LLMs: A Platform Engineer’s Guide

Prompt injection is the AI-era equivalent of SQL injection. Attackers craft malicious inputs to manipulate an LLM, bypass safeguards or extract sensitive data. These attacks range from simple jailbreak prompts that override safety rules to more advanced exploits that influence backend systems. ... Model extraction attacks allow adversaries to systematically query an LLM to reconstruct its knowledge base or training data, essentially cloning its capabilities. These attacks often rely on automated scripts submitting millions of queries to map the model’s responses. One common technique, model inversion, involves strategically structured inputs that extract sensitive or proprietary information embedded in the model. Attackers may also use repeated, incremental queries with slight variations to amass a dataset that mimics the original training data. ... On the output side, an LLM might inadvertently reveal private information embedded in its dataset or previously entered user data. A common risk scenario involves users unknowingly submitting financial records or passwords into an AI-powered chatbot, which could then store, retrieve or expose this data unpredictably. With cloud-based LLMs, the risk extends further. Data from one organization could surface in another’s responses.


Adopting Agentic AI: Ethical Governance, Business Impact, Talent Demand, and Data Security

Agentic AI introduces a spectrum of ethical challenges that demand proactive governance. Given its capacity for independent decision-making, there is a heightened need for transparent, accountable, and ethically driven AI models. Ethical governance in Agentic AI revolves around establishing robust policies that govern decision logic, bias mitigation, and accountability. Organizations leveraging Agentic AI must prioritize fairness, inclusivity, and regulatory compliance to avoid unintended consequences. ... The integration of Agentic AI into business ecosystems promises not just automation but strategic enhancement of decision-making. These AI agents are designed to process real-time data, predict market shifts, and autonomously execute decisions that would traditionally require human intervention. In sectors such as finance, healthcare, and manufacturing, Agentic AI is optimizing supply chains, enhancing predictive analytics, and streamlining operations with unparalleled accuracy. ... One of the major concerns surrounding Agentic AI is data security. Autonomous decision-making systems require vast amounts of real-time data to function effectively, raising questions about data privacy, ownership, and cybersecurity. Cyber threats aimed at exploiting autonomous decision-making could have severe consequences, especially in sectors like finance and healthcare.


Unveiling Supply Chain Transformation: IIoT and Digital Twins

Digital twins and IIoTs are evolving technologies that are transforming the digital landscape of supply chain transformation. The IIoT aims to connect to actual physical sensors and actuators. On the other hand, DTs are replica copies that virtually represent the physical components. The DTs are invaluable for testing and simulating design parameters instead of disrupting production elements. ... Contrary to generic IoT, which is more oriented towards consumers, the IIoT enables the communication and interconnection between different machines, industrial devices, and sensors within a supply chain management ecosystem with the aim of business optimization and efficiency. The incubation of IIoT in supply chain management systems aims to enable real-time monitoring and analysis of industrial environments, including manufacturing, logistics management, and supply chain. It boosts efforts to increase productivity, cut downtime, and facilitate information and accurate decision-making. ... A supply chain equipped with IIoT will be a main ingredient in boosting real-time monitoring and enabling informed decision-making. Every stage of the supply chain ecosystem will have the impact of IIoT, like automated inventory management, health monitoring of goods and their tracking, analytics, and real-time response to meet the current marketplace. 


The state of cloud security

An important complicating factor in all this is that customers don’t always know what’s happening in cloud data centers. At the same time, De Jong acknowledges that on-premises environments have the same problem. “There’s a spectrum of issues, and a lot of overlap,” he says, something Wesley Swartelé agrees with: “You have to align many things between on-prem and cloud.” Andre Honders points to a specific aspect of the cloud: “You can be in a shared environment with ten other customers. This means you have to deal with different visions and techniques that do not exist on-premises.” This is certainly the case. There are plenty of worst case scenarios to consider in the public cloud. ... However, a major bottleneck remains the lack of qualified personnel. We hear this all the time when it comes to security. And in other IT fields too, as it happens, meaning one could draw a society-wide conclusion. Nevertheless, staff shortages are perhaps more acute in this sector. Erik de Jong sees society as a whole having similar problems, at any rate. “This is not an IT problem. Just ask painters. In every company, a small proportion of the workforce does most of the work.” Wesley Swartelé agrees it is a challenge for organizations in this industry to find the right people. “Finding a good IT professional with the right mindset is difficult.


As AI reshapes the enterprise, security architecture can’t afford to lag behind

Technology works both ways – it enables the attacker and the smart defender. Cybercriminals are already capitalising on its potential, using open source AI models like DeepSeek and Grok to automate reconnaissance, craft sophisticated phishing campaigns, and produce deepfakes that can convincingly impersonate executives or business partners. What makes this especially dangerous is that these tools don’t just improve the quality of attacks; they multiply their volume. That’s why enterprises need to go beyond reactive defenses and start embedding AI-aware policies into their core security fabric. It starts with applying Zero Trust to AI interactions, limiting access based on user roles, input/output restrictions, and verified behaviour. ... As attackers deploy AI to craft polymorphic malware and mimic legitimate user behaviour, traditional defenses struggle to keep up. AI is now a critical part of the enterprise security toolkit, helping CISOs and security teams move from reactive to proactive threat defense. It enables rapid anomaly detection, surfaces hidden risks earlier in the kill chain, and supports real-time incident response by isolating threats before they can spread. But AI alone isn’t enough. Security leaders must strengthen data privacy and security by implementing full-spectrum DLP, encryption, and input monitoring to protect sensitive data from exposure, especially as AI interacts with live systems. 


Identity Is the New Perimeter: Why Proofing and Verification Are Business Imperatives

Digital innovation, growing cyber threats, regulatory pressure, and rising consumer expectations all drive the need for strong identity proofing and verification. Here is why it is more important than ever:Combatting Fraud and Identity Theft: Criminals use stolen identities to open accounts, secure loans, or gain unauthorized access. Identity proofing is the first defense against impersonation and financial loss. Enabling Secure Digital Access: As more services – from banking to healthcare – go digital, strong remote verification ensures secure access and builds trust in online transactions. Regulatory Compliance: Laws such as KYC, AML, GDPR, HIPAA, and CIPA require identity verification to protect consumers and prevent misuse. Compliance is especially critical in finance, healthcare, and government sectors. Preventing Account Takeover (ATO): Even legitimate accounts are at risk. Continuous verification at key moments (e.g., password resets, high-risk actions) helps prevent unauthorized access via stolen credentials or SIM swapping. Enabling Zero Trust Security: Zero Trust assumes no inherent trust in users or devices. Continuous identity verification is central to enforcing this model, especially in remote or hybrid work environments. 


Why should companies or organizations convert to FIDO security keys?

FIDO security keys significantly reduce the risk of phishing, credential theft, and brute-force attacks. Because they don’t rely on shared secrets like passwords, they can’t be reused or intercepted. Their phishing-resistant protocol ensures authentication is only completed with the correct web origin. FIDO security keys also address insider threats and endpoint vulnerabilities by requiring physical presence, further enhancing protection, especially in high-security environments such as healthcare or public administration. ... In principle, any organization that prioritizes a secure IT infrastructure stands to benefit from adopting FIDO-based multi-factor authentication. Whether it’s a small business protecting customer data or a global enterprise managing complex access structures, FIDO security keys provide a robust, phishing-resistant alternative to passwords. That said, sectors with heightened regulatory requirements, such as healthcare, finance, public administration, and critical infrastructure, have particularly strong incentives to adopt strong authentication. In these fields, the risk of breaches is not only costly but can also have legal and operational consequences. FIDO security keys are also ideal for restricted environments, such as manufacturing floors or emergency rooms, where smartphones may not be permitted. 


Data Warehouse vs. Data Lakehouse

Data warehouses and data lakehouses have emerged as two prominent adversaries in the data storage and analytics markets, each with advantages and disadvantages. The primary difference between these two data storage platforms is that while the data warehouse is capable of handling only structured and semi-structured data, the data lakehouse can store unlimited amounts of both structured and unstructured data – and without any limitations. ... Traditional data warehouses have long supported all types of business professionals in their data storage and analytics endeavors. This approach involves ingesting structured data into a centralized repository, with a focus on warehouse integration and business intelligence reporting. Enter the data lakehouse approach, which is vastly superior for deep-dive data analysis. The lakehouse has successfully blended characteristics of the data warehouse and the data lake to create a scalable and unrestricted solution. The key benefit of this approach is that it enables data scientists to quickly extract insights from raw data with advanced AI tools. ... Although a data warehouse supports BI use cases and provides a “single source of truth” for analytics and reporting purposes, it can also become difficult to manage as new data sources emerge. The data lakehouse has redefined how global businesses store and process data. 


AI or Data Governance? Gartner Says You Need Both

Data and analytics leaders, such as chief data officers, or CDOs, and chief data and analytics officers, or CDAOs, play a significant role in driving their organizations' data and analytics, D&A, successes, which are necessary to show business value from AI projects. Gartner predicts that by 2028, 80% of gen AI business apps will be developed on existing data management platforms. Their analysts say, "This is the best time to be in data and analytics," and CDAOs need to embrace the AI opportunity eyed by others in the C-suite, or they will be absorbed into other technical functions. With high D&A ambitions and AI pilots becoming increasingly ubiquitous, focus is shifting toward consistent execution and scaling. But D&A leaders are overwhelmed with their routine data management tasks and need a new AI strategy. ... "We've never been good at governance, and now AI demands that we be even faster, which means you have to take more risks and be prepared to fail. We have to accept two things: Data will never be fully governed. Secondly, attempting to fully govern data before delivering AI is just not realistic. We need a more practical solution like trust models," Zaidi said. He said trust models provide a trust rating for data assets by examining their value, lineage and risk. They offer up-to-date information on data trustworthiness and are crucial for fostering confidence. 

Daily Tech Digest - June 20, 2025


Quote for the day:

"Everything you’ve ever wanted is on the other side of fear." -- George Addair



Encryption Backdoors: The Security Practitioners’ View

On the one hand, “What if such access could deliver the means to stop crime, aid public safety and stop child exploitation?” But on the other hand, “The idea of someone being able to look into all private conversations, all the data connected to an individual, feels exposing and vulnerable in unimaginable ways.” As a security practitioner he has both moral and practical concerns. “Even if lawful access isn’t the same as mass surveillance, it would be difficult to distinguish between ‘good’ and ‘bad’ users without analyzing them all.” Morally, it is a reversal of the presumption of innocence and means no-one can have any guaranteed privacy. Professionally he says, “Once the encryption can be broken, once there is a backdoor allowing someone to access data, trust in that vendor will lessen due to the threat to security and privacy introducing another attack vector into the equation.” It is this latter point that is the focus for most security practitioners. “From a practitioner’s standpoint,” says Rob T Lee, chief of research at SANS Institute and founder at Harbingers, “we’ve seen time and again that once a vulnerability exists, it doesn’t stay in the hands of the ‘good guys’ for long. It becomes a target. And once it’s exploited, the damage isn’t theoretical. It affects real people, real businesses, and critical infrastructure.”


Visa CISO Subra Kumaraswamy on Never Allowing Cyber Complacency

Kumaraswamy is always thinking about talent and technology in cybersecurity. Talent is a perennial concern in the industry, and Visa is looking to grow its own. The Visa Payments Learning Program, launched in 2023, aims to help close the skills gap in cyber through training and certification. “We are offering this to all of the employees. We’re offering it to our partners, like the banks, our customers,” says Kumaraswamy. Right now, Visa leverages approximately 115 different technologies in cyber, and Kumaraswamy is constantly evaluating where to go next. “How do I [get to] the 116th, 117th, 181th?” he asks. ”That needs to be added because every layer counts.” Of course, GenAI is a part of that equation. Thus far, Kumaraswamy and his team are exploring more than 80 different GenAI initiatives within cyber. “We’ve already taken about three to four of those initiatives … to the entire company. That includes the what we call a ‘shift left’ process within Visa. It is now enabled with agentic AI. It’s reducing the time to find bugs in the code. It is also helping reduce the time to investigate incidents,” he shares. Visa is also taking its best practices in cybersecurity and sharing them with their customers. “We can think of this as value-added services to the mid-size banks, the credit unions, who don’t have the scale of Visa,” says Kumaraswamy.


Agentic AI in automotive retail: Creating always-on sales teams

To function effectively, digital agents need memory. This is where memory modules come into play. These components store key facts about ongoing interactions, such as the customer’s vehicle preferences, budget, and previous questions. For instance, if a returning visitor had previously shown interest in SUVs under a specific price range, the memory module allows the AI to recall that detail. Instead of restarting the conversation, the agent can pick up where it left off, offering an experience that feels personalised and informed. Memory modules are critical for maintaining consistency across long or repeated interactions. Without them, agentic AI would struggle to replicate the attentive service provided by a human salesperson who remembers returning customers. ... Despite the intelligence of agentic AI, there are scenarios where human involvement is still needed. Whether due to complex financing questions or emotional decision-making, some buyers prefer speaking to a person before finalizing their decision. A well-designed agentic system should recognize when it has reached the limits of its capabilities. In such moments, it should facilitate a handover to a human representative. This includes summarizing the conversation so far, alerting the sales team in real-time, and scheduling a follow-up if required.


Multicloud explained: Why it pays to diversify your cloud strategy

If your cloud provider were to suffer a massive and prolonged outage, that would have major repercussions on your business. While that’s pretty unlikely if you go with one of the hyperscalers, it’s possible with a more specialized vendor. And even with the big players, you may discover annoyances, performance problems, unanticipated charges, or other issues that might cause you to rethink your relationship. Using services from multiple vendors makes it easier to end a relationship that feels like it’s gone stale without you having to retool your entire infrastructure. It can be a great means to determine which cloud providers are best for which workloads. And it can’t hurt as a negotiating tactic when contracts expire or when you’re considering adding new cloud services. ... If you add more cloud resources by adding services from a different vendor, you’ll need to put in extra effort to get the two clouds to play nicely together, a process that can range from “annoying” to “impossible.” Even after bridging the divide, there’s administrative overhead involved—it’ll be harder to keep tabs on data protection and privacy, for instance, and you’ll need to track cloud usage and the associated costs for multiple vendors. Network bandwidth. Many vendors make it cheap and easy to move data to and within their cloud, but might make you pay a premium to export it. 


Decentralized Architecture Needs More Than Autonomy

Decentralized architecture isn’t just a matter of system design - it’s a question of how decisions get made, by whom, and under what conditions. In theory, decentralization empowers teams. In practice, it often exposes a hidden weakness: decision-making doesn’t scale easily. We started to feel the cracks as our teams expanded quickly and our organizational landscape became more complex. As teams multiplied, architectural alignment started to suffer - not because people didn’t care, but because they didn’t know how or when to engage in architectural decision-making. ... The shift from control to trust requires more than mindset - it needs practice. We leaned into a lightweight but powerful toolset to make decentralized decision-making work in real teams. Chief among them is the Architectural Decision Record (ADR). ADRs are often misunderstood as documentation artifacts. But in practice, they are confidence-building tools. They bring visibility to architectural thinking, reinforce accountability, and help teams make informed, trusted decisions - without relying on central authority. ... Decentralized architecture works best when decisions don’t happen in isolation. Even with good individual practices - like ADRs and advice-seeking - teams still need shared spaces to build trust and context across the organization. That’s where Architecture Advice Forums come in.


4 new studies about agentic AI from the MIT Initiative on the Digital Economy

In their study, Aral and Ju found that human-AI pairs excelled at some tasks and underperformed human-human pairs on others. Humans paired with AI were better at creating text but worse at creating images, though campaigns from both groups performed equally well when deployed in real ads on social media site X. Looking beyond performance, the researchers found that the actual process of how people worked changed when they were paired with AI . Communication (as measured by messages sent between partners) increased for human-AI pairs, with less time spent on editing text and more time spent on generating text and visuals. Human-AI pairs sent far fewer social messages, such as those typically intended to build rapport. “The human-AI teams focused more on the task at hand and, understandably, spent less time socializing, talking about emotions, and so on,” Ju said. “You don’t have to do that with agents, which leads directly to performance and productivity improvements.” As a final part of the study, the researchers varied the assigned personality of the AI agents using the Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The AI personality pairing experiments revealed that programming AI personalities to complement human personalities greatly enhanced collaboration. 


DevOps Backup: Top Reasons for DevOps and Management

Depending on the industry, you may need to comply with different security protocols, acts, certifications, and standards. If your company operates in a highly regulated industry, like healthcare, technology, financial services, pharmaceuticals, manufacturing, or energy, those security and compliance regulations and protocols can be even more strict. Thus, to meet the compliance stringent security requirements, your organization needs to implement security measures, like role-based access controls, encryption, ransomware protection measures — just to name RTOs and RPOs, risk-assessment plans, and other compliance best practices… And, of course, a backup and disaster recovery plan is one of them, too. It ensures that the company will be able to restore its critical data fast, guaranteeing the data availability, accessibility, security, and confidentiality of your data. ... Another issue that is closely related to compliance is data retention. Some compliance regulations require organizations to keep their data for a long time. As an example, we can mention NIST’s requirements from its Security and Privacy Controls for Information Systems and Organizations: “… Storing audit records on separate systems or components applies to initial generation as well as backup or long-term storage of audit records…”


How AI can save us from our 'infinite' workdays, according to Microsoft

Activity is not the same as progress. What good is work if it's just busy work and not tackling the right tasks or goals? Here, Microsoft advises adopting the Pareto Principle, which postulates that 20% of the work should deliver 80% of the outcomes. And how does this involve AI? Use AI agents to handle low-value tasks, such as status meetings, routine reports, and administrative churn. That frees up employees to focus on deeper tasks that require the human touch. For this, Microsoft suggested watching the leadership keynote from the Microsoft 365 Community Conference on Building the Future Firm. ... Instead of using an org chart to delineate roles and responsibilities, turn to a work chart. A work chart is driven more by outcome, in which teams are organized around a specific goal. Here, you can use AI to fill in some of the gaps, again freeing up employees for more in-depth work. ... Finally, Microsoft pointed to a new breed of professionals known as agent bosses. They handle the infinite workday not by putting in more hours but by working smarter. One example cited in the report is Alex Farach, a researcher at Microsoft. Instead of getting swamped in manual work, Farach uses a trio of AI agents to act as his assistants. One collects daily research. The second runs statistical analysis. And the third drafts briefs to tie all the data together.
 

Data Governance and AI Governance: Where Do They Intersect?

AIG and DG share common responsibilities in guiding data as a product that AI systems create and consume, despite their differences. Both governance programs evaluate data integration, quality, security, privacy, and accessibility. For instance, both governance frameworks need to ensure quality information meets business needs. If a major retailer discovered their AI-powered product recommendation engine was suggesting irrelevant items to customers, then DG and AIG would want the issue resolved. However, either approach or a combination could be best to solving the problem. Determining the right governance response requires analyzing the root issue. ... DG and AIG provide different approaches; which works best depends on the problem. Take the example, above, of the inaccurate pricing information to a customer in response to a query. The data governance team audits the product data pipeline and finds inconsistent data standards and missing attributes feeding into the AI model. However, the AI governance team also identifies opportunities to enhance the recommendation algorithm’s logic for weighting customer preferences. The retailer could resolve the data quality issues through DG while AIG improved the AI model’s mechanics by taking a collaborative approach with both data governance and AI governance perspectives. 


Deepfake Rebellion: When Employees Become Targets

Surviving and mitigating such an attack requires moving beyond purely technological solutions. While AI detection tools can help, the first and most critical line of defense lies in empowering the human factor. A resilient organization builds its bulwarks on human risk management and security awareness training, specifically tailored to counter the mental manipulation inherent in deepfake attacks. Rapidly deploy trained ambassadors. These are not IT security personnel, but respected peers from diverse departments trained to coach workshops. ... Leadership must address employees first, acknowledge the incident, express understanding of the distress caused, and unequivocally state the deepfake is under investigation. Silence breeds speculation and distrust. There should be channels for employees to voice concerns, ask questions, and access support without fear of retribution. This helps to mitigate panic and rebuild a sense of community. Ensure a unified public response, coordinating Comms, Legal, and HR. ... The antidote to synthetic mistrust is authentic trust, built through consistent leadership, transparent communication, and demonstrable commitment to shared values. The goal is to create an environment where verification habits are second nature. It’s about discerning malicious fabrication from human error or disagreement.