Daily Tech Digest - June 05, 2025


Quote for the day:

"The greatest accomplishment is not in never falling, but in rising again after you fall." -- Vince Lombardi


Your Recovery Timelines Are a Lie: Why They Fall Apart

Teams assume they can pull snapshots from S3 or recover databases from a backup tool. What they don’t account for is the reconfiguration time required to stitch everything back together. ... RTOs need to be redefined through the lens of operational reality and validated through regular, full-system DR rehearsals. This is where IaC and automation come in. By codifying all layers of your infrastructure — not just compute and storage, but IAM, networking, observability and external dependencies, too — you gain the ability to version, test and rehearse your recovery plans. Tools like Terraform, Helm, OpenTofu and Crossplane allow you to build immutable blueprints of your infrastructure, which can be automatically redeployed in disaster scenarios. But codification alone isn’t enough. Continuous testing is critical. Just as CI/CD pipelines validate application changes, DR validation pipelines should simulate failover scenarios, verify dependency restoration and track real mean time to recovery (MTTR) metrics over time. ... It’s also time to stop relying on aspirational RTOs and instead measure actual MTTR. It’s what matters when things go wrong, indicating how long it really takes to go from incident to resolution. Unlike RTOs, which are often set arbitrarily, MTTR is a tangible, trackable indicator of resilience.
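The point about measuring actual MTTR rather than trusting an aspirational RTO lends itself to a small calculation. A minimal sketch in Python; the incident timestamps and the four-hour RTO below are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (detected, resolved) timestamps.
incidents = [
    (datetime(2025, 3, 1, 2, 15), datetime(2025, 3, 1, 9, 40)),
    (datetime(2025, 4, 12, 14, 0), datetime(2025, 4, 13, 1, 30)),
    (datetime(2025, 5, 20, 8, 5), datetime(2025, 5, 20, 12, 50)),
]

def mean_time_to_recovery(records):
    """Average elapsed time from detection to resolution."""
    total = sum(((end - start) for start, end in records), timedelta())
    return total / len(records)

mttr = mean_time_to_recovery(incidents)
rto = timedelta(hours=4)  # assumed aspirational recovery time objective

print(f"Measured MTTR: {mttr}")
if mttr > rto:
    print("MTTR exceeds the stated RTO: the objective is aspirational, not real.")
```

Tracking this number after every DR rehearsal, rather than once a year, is what turns the RTO from a hope into a measurement.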


The Dawn of Unified DataOps—From Fragmentation to Transformation

Data management has traditionally been the responsibility of IT, creating a disconnect between this function and the business departments that own and understand the data’s value. This separation has resulted in limited access to unified data across the organization, including the tools and processes to leverage it outside of IT. ... Organizations looking to embrace DataOps and transform their approach to data must start by creating agile DataOps teams that leverage software-oriented methodologies; investing in data management solutions that leverage DataOps and data mesh concepts; investing in scalable automation and integration; and cultivating a data-driven culture. Much like agile software teams, it’s critical to include product management, domain experts, test engineers, and data engineers. Approach delivery iteratively, incrementally delivering MVPs, testing, and improving capabilities and quality. ... Technology alone won’t solve data challenges. Truly transformative DataOps strategies align with unified teams that pair business users and subject matter experts with DataOps professionals, forming a culture where collaboration, accessibility, and transparency are at the core of decision making.


Redefining Cyber Value: Why Business Impact Should Lead the Security Conversation

A BVA brings clarity to that timeline. It identifies the exposures most likely to prolong an incident and estimates the cost of that delay based on both your industry and organizational profile. It also helps evaluate the return on preemptive controls. For example, IBM found that companies that deploy effective automation and AI-based remediation see breach costs drop by as much as $2.2 million. Some organizations hesitate to act when the value isn't clearly defined. That delay has a cost. A BVA should include a "cost of doing nothing" model that estimates the monthly loss a company takes on by leaving exposures unaddressed. We've found that for a large enterprise, that cost can exceed half a million dollars. ... There's no question about how well security teams are doing the work. The issue is that traditional metrics don't always show what their work means. Patch counts and tool coverage aren't what boards care about. They want to know what's actually being protected. A BVA helps connect the dots – showing how day-to-day security efforts help the business avoid losses, save time, and stay more resilient. It also makes hard conversations easier. Whether it's justifying a budget, walking the board through risk, or answering questions from insurers, a BVA gives security leaders something solid to point to.
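The "cost of doing nothing" model can be sketched as a simple expected-loss sum. All dollar figures and probabilities below are invented for illustration, not taken from any BVA methodology:

```python
def monthly_exposure_cost(exposures):
    """Expected monthly loss: estimated breach cost x monthly likelihood, summed."""
    return sum(cost * probability for cost, probability in exposures)

# (estimated breach cost in USD, estimated probability of exploitation per month)
open_exposures = [
    (4_000_000, 0.05),  # unpatched internet-facing service
    (1_500_000, 0.10),  # stale admin credentials
    (2_500_000, 0.08),  # unmonitored third-party integration
]

cost = monthly_exposure_cost(open_exposures)
print(f"Estimated monthly cost of doing nothing: ${cost:,.0f}")
```

Even with these made-up inputs, the monthly figure lands above the half-million-dollar mark cited above, which is the kind of number that makes inaction visible to a board.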


Fake REAL IDs Have Already Arrived: Here’s How to Protect Your Business

When the REAL ID Act of 2005 was introduced, it promised to strengthen national security by setting higher standards for state-issued IDs, especially when it came to air travel, access to federal buildings, and more. Since then, the roll-out of the REAL ID program has faced delays, but with an impending enforcement deadline, many are questioning if REAL IDs deliver the level of security intended. ... While the original aim was to prevent another 9/11-style attack, over 20 years later, the focus has shifted to protecting against identity theft and illegal immigration. The final deadline to get your REAL ID is now May 7th, 2025, owing in part to differing opinions and adoption rates state-by-state, which have dragged enforcement on for two decades. ... The delays and staggered adoption have given bad actors the chance to create templates for fraudulent REAL IDs. Businesses may incorrectly assume that an ID bearing a REAL ID star symbol is more likely to be legitimate, but as our data proves, this is not the case. REAL IDs can be faked just as easily as any other identity document, putting the onus on businesses to implement robust ID verification methods to ensure they don’t fall victim to ID fraud. ... AI-powered identity verification is one of the only ways to combat the increasing use of AI-powered criminal tools.


How this 'FinOps for AI' certification can help you tackle surging AI costs

To really adopt AI into your enterprise, we're talking about costs that are orders of magnitude greater. Companies are turning to FinOps for help dealing with this. FinOps, a portmanteau of Finance and DevOps, combines financial management and collaborative, agile IT operations into a discipline to manage costs. It started as a way to get a handle on cloud pricing. FinOps' first job is to optimize cloud spending and align cloud costs with business objectives. ... Today, they're adding AI spending to their concerns. According to the FinOps Foundation, 63% of FinOps practitioners are already being asked to manage AI costs, a number expected to rise as AI innovation continues to surge. Mismanagement of these costs can not only erode business value but also stifle innovation. "FinOps teams are being asked to manage accelerating AI spend to allocate its cost, forecast its growth, and ultimately show its value back to the business," said Storment. "But the speed and complexity of the data make this a moving target, and cost overruns in AI can slow innovation when not well managed." Besides, Storment added, C-level executives are asking that painful question: "You're using this AI service and spending too much. Do you know what it's for?" 


Tackling Business Loneliness

Leaders who intentionally reach out to their employees do more than combat loneliness; they directly influence performance and business success. "To lead effectively, you need to lead with care. Because care creates connection. Connection fuels commitment. And commitment drives results. It's in those moments of real connection that collective brilliance is unlocked," she concludes. ... But it's not just women, with many men facing isolation in the workplace too, especially where a culture of 'put up and shut up' is frequently seen. Reflected in the high prevalence of suicide in the UK construction industry, it is essential that toxic cultures are dismantled and all employees feel valued and part of the team. "Whether they work on site or remotely, full time or part time, building an inclusive culture helps to ensure people do not experience prolonged loneliness or lack of connection. When we prioritise inclusion, everyone benefits," Allen concludes. ... Providing a safe, non-judgemental space for employees to discuss loneliness, things that are troubling them, and ways to manage any negative feelings is crucial. "This could be with a trusted line manager or colleague, but objective support from professional therapists and counsellors should also be accessible to prevent loneliness from manifesting into more serious issues," she emphasises. 


Revolutionizing Software Development: Agile, Shift-Left, and Cybersecurity Integration

While shift-left may cost more resources in the short term, in most cases, the long-term savings more than make up for the initial investment. Bugs discovered after a product release can cost up to 640 times more than those caught during development. In addition, late detection can increase the risk of fines from security breaches, as well as damage to a brand’s trust. Automation tools are the primary answer to these concerns and are at the core of what makes shift-left possible. The popular tech industry mantra, “automate everything,” continues to apply. Static analysis, dynamic analysis, and software composition analysis tools scan for known vulnerabilities and common bugs, producing instant feedback as code is first merged into development branches. ... Shift-left balances speed with quality. Performing regular checks on code as it is written reduces the likelihood that significant defects and vulnerabilities will surface after a release. Once software is out in the wild, the cost to fix issues is much higher and requires extensively more work than catching them in the early phases. Despite the advantages of shift-left, navigating the required cultural change can be a challenge. As such, it’s crucial for developers to be set up for success with effective tools and proper guidance.
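The instant-feedback idea can be illustrated with a toy static check. Real pipelines rely on full SAST/DAST/SCA tooling; the three rules and the code snippet being scanned below are invented for the example:

```python
import re

# Toy static check: flag known-dangerous patterns before code is merged.
# Real pipelines use dedicated analysis tools; these rules are illustrative.
RULES = [
    (re.compile(r"\beval\("), "use of eval(): possible code injection"),
    (re.compile(r"password\s*=\s*['\"]"), "hardcoded credential"),
    (re.compile(r"verify\s*=\s*False"), "TLS verification disabled"),
]

def scan(source):
    """Return (line number, message) for every rule hit in the source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

snippet = 'password = "hunter2"\nresp = fetch(url, verify=False)\n'
for lineno, message in scan(snippet):
    print(f"line {lineno}: {message}")
```

Wired into a merge gate, even a check this simple delivers the shift-left payoff: the author sees the finding seconds after writing the line, not months after release.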


Feeling Reassured by Your Cybersecurity Measures?

Organizations must pursue a data-driven approach that embraces comprehensive NHI management. This approach, combined with robust Secrets Security Management, can ensure that none of your non-human identities become security weak points. Remember, feeling reassured about your cybersecurity measures is not just about having security systems in place, but also about knowing how to manage them effectively. Effective NHI management will be a cornerstone in instilling peace of mind and enhancing security confidence. With these insights into the strategic importance of NHI management in promoting cybersecurity confidence, organizations can take a step closer to feeling reassured by their cybersecurity measures. ... Imagine a simple key, one that turns tumblers in the lock mechanism but isn’t alone in doing so. There are other keys that fit the same lock, and they all have the power to unlock the same door. This is similar to an NHI and its associated secret. There are numerous NHIs that could access the same system or part of a system, granted via their unique ‘Secret’. Now, here’s where it gets a little complex. ... Just as a busy airport needs security checkpoints to screen passengers and verify their credentials, a robust NHI management system is needed to accurately identify and manage all NHIs. 


How to Capitalize on Software Defined Storage, Securely and Compliantly

Because it fundamentally transforms data infrastructure, SDS is critical for technology executives to understand and capitalize on. It not only provides substantial cost savings and predictability while reducing the staff time required to manage physical hardware; SDS also makes companies much more agile and flexible in their business operations. For example, launching new initiatives or products that can start small and quickly scale is much easier with SDS. As a result, SDS does not just impact IT; it is a critical function across the enterprise. Software-defined storage in the cloud has brought major operational and cost benefits for enterprises. First, subscription business models enable buyers to make much more cost-conscious decisions and avoid wasting resources. ... In addition, software-defined storage has transformed technology management frameworks. SDS has enabled a move to agile DevOps, which includes real-time analytics resulting in faster iteration, less downtime and more efficient resource allocation. With real-time dashboards and alerts, organizations can now track KPIs such as uptime and performance and react instantly. IT management can be more proactive by increasing storage or resource capacity when needed, rather than waiting for a crash to react.


The habits that set future-ready IT leaders apart

Constructive discomfort is the impetus to continuous learning, adaptability, agility, and anti-fragility. The concept of anti-fragile means designed for change. How do we build anti-fragile humans so they are unbreakable and prepared for tomorrow’s world, whatever it brings? We have these fault-tolerant designs where I can unplug a server and the system adapts and you don’t even know it. We want to create that same anti-fragility and fault tolerance in the human beings we train. We’re living in this ever-changing, accelerating VUCA [volatile, uncertain, complex, ambiguous] world, and there are two responses when you are presented with the unknown or the unexpected: You can freeze and be fearful and have it overcome you, or you can improvise, adapt, and overcome it by being a continuous learner and continuous adapter. I think resiliency in human beings is driven by this constructive discomfort, which creates a path to being continuous learners and continuous adapters. ... Strategic competence is knowing what hill to take, tactical competence is knowing how to take that hill safely, and technical competence is rolling up your sleeves and helping along the way. The leaders I admire have all three. The person who doesn’t have technical competence may set forth an objective and even chart the path to get there, but then they go have coffee. That leader is probably not going to do well. 

Daily Tech Digest - June 04, 2025


Quote for the day:

"Thinking should become your capital asset, no matter whatever ups and downs you come across in your life." -- Dr. APJ Kalam


Rethinking governance in a decentralized identity world

“Security leaders can take three discrete actions to improve identity and access management across a complex, distributed environment, starting with low hanging fruit before maturing the processes,” Karen Walsh, CEO of Allegro Solutions, told Help Net Security. The first step, Walsh said, is to implement SSO across all standard accounts. “The same way they limit the attack surface by segmenting networks, they can use SSO to consolidate identity management.” Next, security teams should give employees a password manager for both business and personal use, something many organizations overlook despite the risks. “Compromised and weak passwords are a primary attack vector, but too many organizations fail to give their employees a way to improve their password hygiene. Then, they should allow the password manager plugin on all corporate approved browsers. ...” ... The third action is often the most technically demanding: linking human user accounts to machine identities. “They should assign a human user account and identity to all machine identities, including IoT, RPA, and network devices,” Walsh explained. “This provides an additional level of insight into and monitoring over how these typically unmanaged assets behave on networks to mitigate risks from attackers exploiting vulnerabilities.”
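Walsh's third step, pairing every machine identity with an accountable human identity, can be sketched as a simple inventory audit. The asset records below are hypothetical:

```python
# Hypothetical machine-identity inventory: IoT, RPA, and network devices,
# each expected to map to a human owner for monitoring and accountability.
machine_identities = [
    {"id": "iot-sensor-042", "type": "IoT", "owner": "j.doe"},
    {"id": "rpa-invoice-bot", "type": "RPA", "owner": "a.smith"},
    {"id": "edge-router-7", "type": "network", "owner": None},
]

def unowned(identities):
    """Return IDs of machine identities with no assigned human owner."""
    return [m["id"] for m in identities if not m.get("owner")]

for orphan in unowned(machine_identities):
    print(f"UNOWNED: {orphan} needs a human identity before it can be monitored")
```

Running a check like this on a schedule turns "typically unmanaged assets" into a shrinking, reportable list.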


A Chief AI Officer Won’t Fix Your AI Problems

Rather than creating an isolated AI leadership role, forward-thinking companies are integrating AI into existing C-suite domains. In my experience working with large enterprises, this approach leads to better alignment, faster adoption, and clearer accountability. CTOs, for example, have long driven AI adoption by ensuring it supports broader digital transformation efforts. Companies like Microsoft and Amazon have taken this route by embedding AI leadership within their technology teams. ... Industries that are slower to adopt AI often face unique challenges that make implementation more complex. Many operate with deeply entrenched legacy systems, strict regulatory requirements, or a more cautious approach to adopting new technologies.  ... The push to appoint a Chief AI Officer often reflects deeper organizational challenges, such as poor cross-functional collaboration, a lack of clarity in digital transformation strategy, or resistance to change. These issues aren’t solved by adding another executive to the leadership team. What is truly needed is a cultural shift—one that promotes AI literacy across the organization, empowers existing leaders to incorporate AI into their strategies, and encourages collaboration between technical and business teams to drive adoption where it matters.


Akamai Addresses DNS Security and Compliance Challenges with Industry-First DNS Posture Management

“DNS security often flies under the radar, but it’s vital in keeping businesses secure and running smoothly,” said Sean Lyons, SVP and General Manager, Infrastructure Security Solutions & Services, Akamai. “For many organisations, the challenge isn’t setting up DNS — it’s knowing whether all their systems are actually properly configured and secured. Those organisations really need a simple way to see what’s happening across their DNS environment to take action quickly. That’s the problem we’re solving with DNS Posture Management. Security practitioners get a clear, unified view that helps them identify priority issues early, stay compliant, and keep their networks performing at their best.” Domains often show known high-risk vulnerabilities or misconfigurations. These weaknesses could impact DNS uptime and resolution reliability while increasing exposure to serious threats such as unauthorised SSL/TLS certificate issuance, DNS spoofing, and cache poisoning. This could embolden threat actors to abuse a company’s DNS to create fake websites that imitate the organisation’s brand for purposes like fraud, data theft, and phishing. Other vulnerabilities allow attackers to bring DNS down entirely, causing network outages for the business and its customers.
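A posture check of the kind described can be sketched offline. The zone summary structure and the three rules below are illustrative assumptions, not Akamai's actual checks:

```python
# Toy DNS posture audit over an exported zone summary. A real product
# queries live DNS; this offline structure is invented for illustration.
zone = {
    "example.com": {
        "dnssec": False,
        "caa": [],                  # no CAA record: any CA may issue certs
        "ns": ["ns1.example.com"],  # single NS: availability risk
    },
}

def audit(zones):
    findings = []
    for domain, cfg in zones.items():
        if not cfg.get("dnssec"):
            findings.append((domain, "DNSSEC disabled: spoofing and cache-poisoning exposure"))
        if not cfg.get("caa"):
            findings.append((domain, "no CAA record: unauthorised certificate issuance possible"))
        if len(cfg.get("ns", [])) < 2:
            findings.append((domain, "fewer than two NS records: resolution availability risk"))
    return findings

for domain, issue in audit(zone):
    print(f"{domain}: {issue}")
```

Each finding maps to a threat named above: missing CAA enables unauthorised SSL/TLS issuance, missing DNSSEC eases spoofing and cache poisoning, and a lone nameserver risks taking resolution down entirely.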


Lightspeed: Photonic networking in data centers

Using photonics is seen as a potential way to alleviate this. By transmitting information using photons, vendors say they can make big efficiency and performance gains. The use of photonics in data centers is not new - DCD profiled Google’s Mission Apollo, which saw optical switches introduced to the search giant’s data centers, in 2023 - but interest in the technology has ramped up in recent months, with several vendors raising funds to develop their own particular flavors of photonics. ... Regan, a photonics industry veteran who was brought on board by the Oriole founders to help bring their vision to life, believes this radical approach to redesigning data center networks is required to realize the promise of photonics. “If you want to get the real benefits, you have to get rid of electronic packet switching completely,” he argues. “Google introduced its switches in a bunch of its data centers - they’re very slow but they allow you to reconfigure a network based on demands, and sit alongside electronic packet switching. ... These drawbacks include “complexity, cost, and compatibility concerns,” Lewis said, adding: “With further research and development, there may be possibilities for photonic components to replace electronics in the future; however, for now, electric components remain the status quo.”


Employees with AI Skills Enjoy Increased Job Security

Frankel said companies that proactively invest in training and reskilling their teams will certainly fare better than those that lollygag. "If you're working in IT, I think the key is to focus on diving in and learning how to leverage new tech to your benefit and tie your efforts to the company's goals," he said. Kausik Chaudhuri, CIO at Lemongrass, added that many organizations are partnering with online learning platforms to deliver targeted courses, while also building internal academies for continuous learning. "Training is tailored to specific job functions, ensuring IT, analytics, and operations teams can effectively manage and optimize AI-driven processes," he explained. Additionally, companies are promoting cross-functional collaboration, encouraging both technical and non-technical teams to build AI literacy. ... For soft skills, adaptability, problem-solving, cross-functional communication, ethical awareness, and change management are essential as AI reshapes business processes. "This shift is pushing IT professionals to be both technically proficient and strategically adaptable," Chaudhuri said. Frankel noted that there's a lot of experimentation going on as organizations grapple with the potential and pitfalls of AI integration. "While AI will get better, I think a lot of places are realizing that AI tools alone won't get them where they need to go," he said.


Lessons learned from the trojanized KeePass incident

All fake KeePass installation packages were signed with a valid digital signature, so they didn’t trigger any alarming warnings in Windows. The five newly discovered distributions had certificates issued by four different software companies. The legitimate KeePass is signed with a different certificate, but few people bother to check what the Publisher line says in Windows warnings. ... Distributors of password-stealing malware indiscriminately target any unsuspecting user. The criminals analyze any passwords, financial data, or other valuable information they manage to steal, sort it into categories, and sell whatever is needed to other cybercriminals for their underground operations. Ransomware operators will buy credentials for corporate networks, scammers will purchase personal data and bank card numbers, and spammers will acquire login details for social media or gaming accounts. That’s why the business model for stealer distributors is to grab anything they can get their hands on and use all kinds of lures to spread their malware. Trojans can be hidden inside any type of software — from games and password managers to specialized applications for accountants or architects.
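Since a valid-looking signature from an unfamiliar publisher is exactly what the fake installers relied on, one practical countermeasure is verifying a download's SHA-256 digest against the value published on the project's official site, independent of the Windows signature prompt. A minimal sketch using a well-known test vector in place of a real installer:

```python
import hashlib

def verify_installer(data, published_sha256):
    """Compare a downloaded file's SHA-256 with the vendor-published value."""
    return hashlib.sha256(data).hexdigest() == published_sha256.lower()

# Stand-in example: the known SHA-256 digest of the bytes b"abc".
good = verify_installer(
    b"abc",
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
)
print("checksum matches" if good else "MISMATCH: do not run this installer")
```

This only helps if the published checksum itself comes from the genuine site, so it complements, rather than replaces, checking the Publisher line in the signature dialog.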


Do you trust AI? Here’s why half of users don’t

Jason Hardy, CTO at Hitachi Vantara, called the trust gap “The AI Paradox.” As AI grows more advanced, its reliability can drop. He warned that without quality training data and strong safeguards, such as protocols for verifying outputs, AI systems risk producing inaccurate results. “A key part of understanding the increasing prevalence of AI hallucinations lies in being able to trace the system’s behavior back to the original training data, making data quality and context paramount to avoid a ‘hallucination domino’ effect,” Hardy said in an email reply to Computerworld. AI models often struggle with multi-step, technical problems, where small errors can snowball into major inaccuracies — a growing issue in newer systems, according to Hardy. With original training data running low, models now rely on new, often lower-quality sources. Treating all data as equally valuable worsens the problem, making it harder to trace and fix AI hallucinations. As global AI development accelerates, inconsistent data quality standards pose a major challenge. While some systems prioritize cost, others recognize that strong quality control is key to reducing errors and hallucinations long-term, he said. 


Curves Ahead: The Promises and Perils of AI in Mobile App Development

AI-based development tools also increase risks stemming from dependency chain opacity in mobile applications. Blind spots in the software supply chain will increase as AI agents and coding assistants are tasked with autonomously selecting and integrating dependencies. Since AI simultaneously pulls code from multiple sources, traditional methods of dependency tracking will prove insufficient. ... The developer trend of intuitive "vibe coding" may take package hallucinations into serious bad trip territory. The term refers to developers using casual AI prompts to generally describe a desired mobile app outcome; the AI tool then generates code to achieve it. Counter to the common wisdom of zero trust, vibe coding tends to lean heavily on trust; developers very often copy and paste code results without any manual review checks. Any hallucinated packages that get carried over can become easy entry points for threat actors. ... While some predict that agentic AI will disrupt the mobile application landscape by ultimately replacing traditional apps, other modes of disruption seem more immediate. For instance, researchers recently discovered an indirect prompt injection flaw in GitLab's built-in AI assistant Duo. This could allow attackers to steal source code or inject untrusted HTML into Duo's responses and direct users to malicious websites.
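One mitigation for hallucinated packages is vetting AI-suggested dependencies against a curated allowlist, such as the index of an internal package mirror, before anything is installed. The package names and allowlist below are invented for the example:

```python
# Curated allowlist standing in for an internal mirror's approved index.
ALLOWLIST = {"requests", "numpy", "pydantic", "httpx"}

# Dependencies proposed by an AI assistant; the last name is hallucinated.
ai_suggested = ["requests", "httpx", "secure-auth-toolz"]

def vet(packages, allowlist):
    """Return the suggested packages that are not in the approved index."""
    return [p for p in packages if p not in allowlist]

for name in vet(ai_suggested, ALLOWLIST):
    print(f"REJECT: {name} is not in the approved index (possible hallucinated package)")
```

A gate like this restores a zero-trust step that vibe coding skips: nothing the model names gets installed until a human or a policy has approved it.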


CockroachDB’s distributed vector indexing tackles the looming AI data explosion

The Cockroach Labs engineering team had to solve multiple problems simultaneously: uniform efficiency at massive scale, self-balancing indexes and maintaining accuracy while underlying data changes rapidly. Kimball explained that the C-SPANN algorithm solves this by creating a hierarchy of partitions for vectors in a very high multi-dimensional space. ... The coming wave of AI-driven workloads creates what Kimball terms “operational big data”—a fundamentally different challenge from traditional big data analytics. While conventional big data focuses on batch processing large datasets for insights, operational big data demands real-time performance at massive scale for mission-critical applications. “When you really think about the implications of agentic AI, it’s just a lot more activity hitting APIs and ultimately causing throughput requirements for the underlying databases,” Kimball explained. ... Implementing generic query plans in distributed systems presents unique challenges that single-node databases don’t face. CockroachDB must ensure that cached plans remain optimal across geographically distributed nodes with varying latencies. “In distributed SQL, the generic query plans, they’re kind of a slightly heavier lift, because now you’re talking about a potentially geo-distributed set of nodes with different latencies,” Kimball explained.
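The idea of a hierarchy of partitions can be illustrated with a toy two-level scheme: bucket vectors under the nearest of a few centroids, then search only the closest buckets. This is not the C-SPANN algorithm, just a sketch of the general partition-and-probe approach it belongs to:

```python
import math
import random

def dist(a, b):
    return math.dist(a, b)

random.seed(0)
vectors = [[random.random() for _ in range(4)] for _ in range(200)]

# Level 1: pick a few centroids and bucket every vector under the nearest one.
centroids = vectors[:4]
partitions = {i: [] for i in range(len(centroids))}
for v in vectors:
    nearest = min(range(len(centroids)), key=lambda i: dist(v, centroids[i]))
    partitions[nearest].append(v)

def search(query, n_probe=2):
    """Scan only the n_probe partitions whose centroids are closest."""
    order = sorted(range(len(centroids)), key=lambda i: dist(query, centroids[i]))
    candidates = [v for i in order[:n_probe] for v in partitions[i]]
    return min(candidates, key=lambda v: dist(query, v))

query = [0.5] * 4
approx = search(query)
exact = min(vectors, key=lambda v: dist(query, v))
print("approximate neighbour distance:", round(dist(query, approx), 3))
print("exact neighbour distance:      ", round(dist(query, exact), 3))
```

The trade-off the sketch exposes is the real one: probing fewer partitions cuts work per query but risks missing the true neighbour, and keeping those partitions balanced while data changes rapidly is exactly the problem the production algorithm has to solve.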


Burnout: Combatting the growing burden on IT teams

From preventing breaches to troubleshooting system failures, IT teams are the unsung heroes in many organisations, ensuring business continuity, day and night. However, the relentless pace of requests and the sprawl of endpoints to manage, combined with the increasing variety of IT demands, has led to unprecedented levels of burnout. ... IT professionals, particularly those in high-alert environments such as network operations centres (NOC) and security operations centres (SOC), face an almost never-ending deluge of alerts and notifications. Today, IT workers can only respond to roughly 85% of the tickets they receive daily, leaving critical alerts at risk of being overlooked. The pressure to sift through numerous alerts also slows down decision-making processes, erodes wider-business confidence, and leads to IT teams feeling helpless and unsupported. This vicious cycle can be incredibly difficult to break, contributing to high levels of burnout and consequently high employee turnover rates. ... Navigating complex compliance challenges: The regulatory landscape is evolving rapidly, placing additional pressure on IT teams. Managing these changes is no easy task, especially as many businesses are riddled with outdated legacy systems making compliance seem daunting. With new frameworks such as DORA and NIS2 coming into effect, 80% of CISOs report that compliance regulations are negatively impacting their mental health.
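When teams can only work roughly 85% of daily tickets, severity-based triage decides what falls through. A minimal priority-queue sketch; the alerts and the capacity figure are invented:

```python
import heapq

# (severity, description): lower severity number = more urgent.
alerts = [
    (3, "disk 80% full on build server"),
    (1, "possible credential stuffing on VPN gateway"),
    (2, "backup job failed overnight"),
    (1, "EDR flagged lateral movement"),
]

heap = list(alerts)
heapq.heapify(heap)

capacity = 3  # the team can only handle 3 of today's 4 alerts
handled = [heapq.heappop(heap)[1] for _ in range(capacity)]
dropped = [msg for _, msg in heap]

print("handled:", handled)
print("at risk of being overlooked:", dropped)
```

The sketch makes the burnout dynamic concrete: without triage, what gets dropped is arbitrary; with it, at least the unworked residue is the lowest-severity work rather than a critical alert.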

Daily Tech Digest - June 03, 2025


Quote for the day:

"Keep your fears to yourself, but share your courage with others." -- Robert Louis Stevenson


Is it Time to Accept that the Current Role of the CISO Has Failed?

First of all, it was never conceived as a true C-level role. It probably originated in the minds of some organisation consultants, but it never developed any true C-level weight. Even if it may hurt some, it is my opinion that it was very rarely given to people with true C-level potential. Second, it was almost always given to technologists by trade or background, although the underlying matter is unequivocally cross-functional and always has been: You cannot be successful around identity and access management, for example, without the involvement of HR and business units, and the ability to reach credibly towards them. ... It has aggregated a mixed set of responsibilities and accountabilities without building up the right organisational and managerial momentum, and many CISOs are simply being set up to fail: The role has simply become too complex to carry for the profile of the people it attracts. To break this spiral, the logic is now to split the role, stripping off the managerial layers it has accumulated over the years and refocusing the role of the CISO on its native technical content so that it can lead effectively and efficiently at that level, while at the same time bringing up a CSO role able to reach across business, IT and support functions to take charge of the corporate complexity that cybersecurity is now amalgamating in large firms.


How to Fortify Your Business’s Online Infrastructure Against Downtime

The first step to protecting your online infrastructure against downtime is to assess just how much downtime risk is viable for your business. Understanding how much downtime you can realistically afford is important for developing a sound IT strategy. Your viable downtime limit will define your tolerance to risk and allow you to direct your resources toward keeping your systems running optimally as far as possible. The average accepted downtime rate for a website is just 0.05%. That means your systems should experience uptime at least 99.95% of the time. If you have a low risk tolerance – say, for instance, if you rely on an ecommerce platform to generate revenue – investing in IT continuity technology is essential for keeping downtime minimal. ... The first step to safeguarding your organization against cyberattacks is to regularly audit your network security measures. This helps to spot vulnerabilities and address them, ensuring your IT systems are always protected against continuously advancing threats. Begin by creating a map of your existing network infrastructure, including all of its user access points, hardware, and software. This map will allow you to keep track of your infrastructure and quickly identify unauthorized changes and additions.
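The 99.95% uptime target translates directly into a concrete downtime budget:

```python
def downtime_budget(uptime_pct, period_hours):
    """Allowed downtime, in minutes, for a given uptime percentage."""
    return period_hours * 60 * (1 - uptime_pct / 100)

per_month = downtime_budget(99.95, 30 * 24)   # 30-day month
per_year = downtime_budget(99.95, 365 * 24)

print(f"99.95% uptime allows {per_month:.1f} min/month "
      f"and {per_year:.1f} min/year of downtime")
```

About 21.6 minutes a month is the entire budget, which is why a single unpracticed failover can blow through a year's allowance in one afternoon.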


Private cloud still matters—but it doesn’t matter most

Large enterprises will maintain significant on-premises footprints for the foreseeable future, for all the reasons we’ve discussed. The enterprise IT landscape in 2025 is undeniably hybrid and likely always will be. But it’s equally undeniable that the center of gravity for innovation has shifted. When a new opportunity emerges—say, deploying a breakthrough AI model or scaling a customer-facing app to millions of users overnight—companies aren’t spinning up a new on-premises cluster to meet the moment. They’re tapping the virtually unlimited resources of AWS, Azure, Google, or edge networks like Cloudflare. They’re doing so because cloud offers experimentation without hardware procurement, and success isn’t gated by how many servers you happen to own. Private clouds excel at running the known and steady. Public clouds excel at unleashing the unknown and extraordinary. As we reach a cloud/on-prem equilibrium, this division of labor is becoming clearer. The day-to-day workloads that keep the business running may happily live in a familiar private cloud enclave. But the industry-defining projects, the ones leaders hope will define the business’s future, gravitate to infrastructure that can stretch to any size, in any region, at a moment’s notice. 


Why Generative AI Needs Architecture, Not Just APIs

The root of the problem often lies in treating gen AI as an add-on to legacy systems rather than embedding it into core operations. This leads to inconsistent implementation, unclear ownership and limited returns. To deliver meaningful outcomes, organizations must start by identifying areas where gen AI can enhance decisions, such as customer engagement, service workflows and regulatory compliance. ... When the focus is only on launching siloed applications, organizations may move fast initially, but they end up with systems that are difficult to scale, integrate or adapt. That's where architecture-centric thinking becomes critical. A strong architectural foundation built on modularity, interoperability and scalability ensures that future applications don't just add features but add lasting value. This means building platforms that support change, not just one-off projects. It's also about fostering collaboration between business and IT, so decisions can be made with both speed and stability in mind. ... The "situational layer cake" architecture enables enterprises to build applications in distinct layers, such as enterprisewide, division-specific and implementation layers, facilitating a balance between reusability and customization. This structure allows the creation of reusable components that can be tailored to specific business contexts without redundant coding, streamlining operations and reducing complexity.
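The layer-cake idea can be sketched in a few lines. This is a hypothetical illustration (the settings and layer names are invented, not any vendor's schema): configuration defined enterprise-wide is specialized per division and per implementation without copying the shared layer, with the most specific layer winning lookups.

```python
from collections import ChainMap

# Hypothetical "situational layer cake": shared enterprise settings,
# specialized by a division, then by one implementation. Later layers
# take precedence without duplicating the layers beneath them.
enterprise_layer = {"audit_logging": True, "retention_days": 365, "locale": "en-US"}
division_layer = {"retention_days": 2555}       # e.g. a regulated division
implementation_layer = {"locale": "de-DE"}      # one country's deployment

# ChainMap resolves each lookup from the most specific layer downward.
effective = ChainMap(implementation_layer, division_layer, enterprise_layer)

print(effective["audit_logging"])   # inherited from the enterprise layer
print(effective["retention_days"])  # overridden by the division layer
print(effective["locale"])          # overridden by the implementation layer
```

The point is structural: a division customizes one value and inherits everything else, which is what lets reusable components be tailored "without redundant coding."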


Scattered Spider: Understanding Help Desk Scams and How to Defend Your Organization

The goal of a help desk scam is to get the help desk operator to reset the credentials and/or MFA used to access an account so the attacker can take control of it. They'll use a variety of backstories and tactics to get that done, but most of the time it's as simple as saying "I've got a new phone, can you remove my existing MFA and allow me to enroll a new one?" From there, the attacker is sent an MFA reset link via email or SMS. Usually, this would be sent to, for example, a number on file — but at this point, the attacker has already established trust and bypassed the help desk process to a degree. So asking "Can you send it to this email address" or "I've actually got a new number too, can you send it to…" gets this sent directly to the attacker. ... But, help desks are a target for a reason. They're "helpful" by nature. This is usually reflected in how they're operated and how their performance is measured — delays won't help you to hit those SLAs! Ultimately, a process only works if employees are willing to adhere to it — and can't be socially engineered to break it. Help desks that are removed from day-to-day operations are also inherently susceptible to attacks where employees are impersonated. But, the attacks we're experiencing at the moment should give security stakeholders plenty of ammunition as to why help desk reforms are vital to securing the business.


Banking on intelligence: How AI is powering the next evolution of financial services

With constantly evolving regulations, financial institutions need stringent compliance measures to avoid penalties and disruptions. AI steps in as a powerful ally, automating compliance tasks to slash manual workloads and boost reporting accuracy. AI agents digest regulatory data, churn out compliance reports, and handle KYC/AML validations—cutting errors while speeding up the process. While implementing the changes, financial institutions must comply with data localisation mandates and ensure AI solutions are hosted within India. To mitigate data privacy risks, personally identifiable information (PII) is anonymised, and AI is deployed within Virtual Private Cloud environments. AI systems automate document verification, ensuring consistent validation and improving audit readiness. ... An AI-enabled underwriting workbench is an immensely helpful tool for streamlining documentation and offering a single-window interface. GenAI further enhances credit assessments by analysing alternative data—like transaction history, social media, and employment records—offering a comprehensive view of an applicant’s financial health. This enables banks to make inclusive, risk-aware lending decisions. Agentic AI further accelerates the process by automating tasks like application assessments and borrower information verifications, enabling near-instant loan decisions with minimal human intervention.


Why the end of Google as we know it could be your biggest opportunity yet

Now, before you think I'm writing Google's obituary, let me be clear. Like I've said before, I'm confident they'll figure it out, even if that means changing their business model. That said, if your business depends on Google in any way, whether it's your business profile, reviews, SEO, or products like Ad Manager to drive traffic, you need to pay attention to what's happening. ... The Department of Justice and several states are suing Google's parent company, Alphabet, arguing that its exclusive deals with companies like Apple are anticompetitive and potentially monopolistic. Basically, Google is paying billions to be the default search engine on Apple devices, effectively shutting out any real competition. The ruling in this case could break up their reported $20 billion-a-year agreement. ... Long story short, the way people discover, research, and choose businesses is changing one AI update at a time, but it's essential to note that people are still searching, just not in the same places they used to. That nuance is critical to understanding your next move. As more users turn to AI tools like ChatGPT and Perplexity for answers, traditional search engines are no longer the only gateway to your business. This shift in behavior over time will result in less traffic to your product or service. 


How global collaboration is hitting cybercriminals where it hurts

Collaboration and intelligence sharing is at the heart of our approach to tackling the threat within the NCA, and we enjoy relationships with partners across the public and private sector both nationally and internationally. We’re united and motivated, in many ways, by a common mission. Some of these are formalised law enforcement relationships that we have had for a long time – for example, I was the NCA’s embed to the FBI in Washington DC for a number of years. But, it is not just limited to the US – the NCA is lucky to enjoy brilliant relationships with the ‘five eyes’ countries and partners across Europe and beyond in the fight against cybercrime. ... In the NCA, we are predominantly focused on financially motivated cybercrime, with ransomware as a main area of focus given the significant threat it poses to the UK. We recognise that some cybercrime groups have connections to the Russian State, but assess that these types of deep-rooted relationships are likely to be the exception as opposed to the norm. When targeting the cybercrime threat, we have been focused on associating cost and risk to the threat actors who seek to cause harm to us and our allies, and we achieve this in a number of different ways. The NCA-led disruption of LockBit in 2024 was successful in undermining trust between members of the group, as well as any trust that victims might have had in LockBit keeping their word.


Future-Proofing AI: Repeating Mistakes or Learning From the Past?

Are the enterprises rushing to deploy new open source AI projects taking the necessary security measures to isolate them from the rest of their infrastructure? Or are they disregarding recent open source security history and trusting them by default? Alarmingly, there are also reports that China-, North Korea- and Russia-based cybercriminal groups are actively targeting both physical and AI infrastructure while leveraging AI-generated malware to exploit vulnerabilities more efficiently. ... Next-generation AI infrastructure cannot be beholden to performance penalties that arise from using today’s solutions to create true, secure, multitenant environments. By combining the best aspects of bare-metal performance with container-like deployment models, organizations can build systems that deliver both speed and convenience. ... We cannot build a solid future if we ignore the wisdom of the past. The foundations of computing security, resource management and operational efficiency were laid decades ago by pioneers who had to make every CPU cycle and memory byte count. Their lessons are more relevant now than ever as we build systems that consume unprecedented computational resources. The organizations that will endure in the AI era won’t necessarily be those with the largest infrastructure investments or the trendiest technology stacks.


Eight ways storage IT pros can evolve in the age of analytics and AI

Large organizations are spending millions of dollars annually on data storage, backups, and disaster recovery. On balance, there’s nothing wrong with that since data is the center of everything today – but not all data should be treated the same. Using cost modeling tools, the storage manager can enter actual storage costs to project new storage costs and usable capacity upfront, based on data growth rates. These costs must factor in backups and disaster recovery, which can run to three times base storage spending, and should compare on-premises versus cloud models. An unstructured data management system that indexes all data across all storage can supply metrics on data volumes, costs, and predicted costs, and then model plans for moving less-active data to lower-cost archival storage, such as in the cloud. ... Storage teams must mitigate ransomware risks associated with file data. One way to do this is by implementing hybrid tiering strategies that offload infrequently accessed (cold) files to immutable cloud storage, which reduces the active attack surface by as much as 70 or 80 percent. Immutable storage ensures that once data is written, it cannot be altered or deleted, providing a robust defense against ransomware attempts to encrypt or corrupt files.
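The cost-modeling exercise described above can be sketched in a few lines. This is a hypothetical model with illustrative prices and the article's assumptions baked in (backup/DR overhead at roughly 3x base spend, 30% annual data growth, 70% of data cold):

```python
# Hypothetical storage cost model: actual costs in, projected spend out,
# with backup/DR overhead applied. All prices are illustrative.

def projected_annual_cost(tb_stored: float, price_per_tb: float,
                          growth_rate: float, dr_multiplier: float = 3.0) -> float:
    """Next year's storage spend, including backup/DR overhead."""
    future_tb = tb_stored * (1 + growth_rate)
    return future_tb * price_per_tb * (1 + dr_multiplier)

# Compare keeping everything hot vs. tiering 70% cold data to cheap archive.
hot_only = projected_annual_cost(500, price_per_tb=250, growth_rate=0.30)
tiered = (projected_annual_cost(500 * 0.30, price_per_tb=250, growth_rate=0.30)
          + projected_annual_cost(500 * 0.70, price_per_tb=30, growth_rate=0.30))

print(f"all-hot: ${hot_only:,.0f}")
print(f"tiered:  ${tiered:,.0f}")
```

Even with made-up prices, the shape of the comparison is the point: the DR multiplier dominates the bill, so moving cold data to a cheaper tier cuts the multiplied cost, not just the base cost.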

Daily Tech Digest - June 02, 2025


Quote for the day:

"The best way to predict the future is to create it." -- Peter Drucker


Doing nothing is still doing something

Here's the uncomfortable truth: doing nothing is still doing something – and very often, it's the wrong thing. We saw this play out at the start of the year when Donald Trump's likely return to the White House and the prospect of fresh tariffs sent ripples through global markets. Investors froze, and while the tariffs have been shelved (for now), the real damage had already been done – not to portfolios, but to behaviour. This is decision paralysis in action. And in my experience, it's most acute among entrepreneurs and high-net-worth individuals post-exit, many of whom are navigating wealth independently for the first time. It's human nature to crave certainty, especially when it comes to money, but if you're waiting for a time when everything is calm, clear, and safe before investing or making a financial decision, I've got bad news – that day is never going to arrive. Markets move, the political climate is noisy, the global economy is always in flux. If you're frozen by fear, your money isn't standing still – it's slipping backwards. ... Entrepreneurs are used to taking calculated risks, but when it comes to managing post-exit wealth or personal finances, many find themselves out of their depth. A little knowledge can be a dangerous thing – and half-understanding the tax system, the economy, or the markets can lead to costly mistakes.


The Future of Agile Isn’t ‘agile’

One reason is that agilists introduced too many conflicting and divergent approaches that fragmented the market. “Agile” meant so many things to different people that hiring managers could never predict what they were getting when a candidate’s resume indicated s/he was “experienced in agile development.” Another reason organizations failed to generate value with “agile” was that too many agile approaches focused on changing practices or culture while ignoring the larger delivery system in which the practices operate, reinforcing a culture that is resistant to change. This shouldn’t be a surprise to people following our industry, as my colleague and LeadingAgile CEO Mike Cottmeyer has been talking about why agile fails for over a decade, such as his Agile 2014 presentation, Why is Agile Failing in Large Enterprises… …and what you can do about it. The final reason that led “agile” to its current state of disfavor is that early in the agile movement there was too much money to be made in training and certifications. The industry’s focus on certifications had the effect over time of misaligning the goals of the methodology / training companies and their customers. “Train everyone. Launch trains” may be a short-term success pattern for a methodology purveyor, but it is ultimately unsustainable because the training and practices are too disconnected from tangible results senior executives need to compete and win in the market.


CIOs get serious about closing the skills gap — mainly from within

Staffing and talent issues are affecting CIOs’ ability to double down on strategic and innovation objectives, according to 54% of this year’s respondents. As a result, closing the skills gap has become a huge priority. “What’s driving it in some CIOs’ minds is tied back to their AI deployments,” says Mark Moccia, a vice president research director at Forrester. “They’re under a lot of cost pressure … to get the most out of AI deployments” to increase operational efficiencies and lower costs, he says. “It’s driving more of a need to close the skills gap and find people who have deployed AI successfully.” AI, generative AI, and cybersecurity top the list of skills gaps preventing organizations from achieving objectives, according to an April Gartner report. Nine out of 10 organizations have adopted or plan to adopt skills-based talent growth to address those challenges. ... The best approach, Karnati says, is developing talent from within. “We’re equipping our existing teams with the space, tools, and support needed to explore genAI through practical application, including rapid prototyping, internal hackathons, and proof-of-concept sprints,” Karnati says. “These aren’t just technical exercises — they’re structured opportunities for cross-functional learning, where engineers, product leads, and domain experts collaborate to test real use cases.”


The Critical Quantum Timeline: Where Are We Now And Where Are We Heading?

Technically, the term is fault-tolerant quantum computing. The qubits that quantum computers use to process data have to be kept in a delicate state – sometimes frozen to temperatures very close to absolute zero – in order to stay stable and not “decohere”. Keeping them in this state for longer periods of time requires large amounts of energy but is necessary for more complex calculations. Recent research by Google, among others, is pointing the way towards developing more robust and resilient quantum methods. ... One of the most exciting prospects ahead of us involves applying quantum computing to AI. Firstly, many AI algorithms involve solving the types of problems that quantum computers excel at, such as optimization problems. Secondly, with its ability to more accurately simulate and model the physical world, it will generate huge amounts of synthetic data. ... Looking beyond the next two decades, quantum computing will be changing the world in ways we can’t even imagine yet, just as the leap to transistors and microchips enabled the digital world and the internet of today. It will tackle currently impossible problems, help us create fantastic new materials with amazing properties and medicines that affect our bodies in new ways, and help us tackle huge problems like climate change and cleaning the oceans.


6 hard truths security pros must learn to live with

Every technological leap will be used against you - Information technology is a discipline built largely on rapid advances. Some of these technological leaps can help improve your ability to secure the enterprise. But every last one of them brings new challenges from a security perspective, not the least of which is how they will be used to attack your systems, networks, and data. ... No matter how good you are, your organization will be victimized - This is a hard one to swallow, but if we take the “five stages of grief” approach to cybersecurity, it’s better to reach the “acceptance” level than to remain in denial because much of what happens is simply out of your control. A global survey of 1,309 IT and security professionals found that 79% of organizations suffered a cyberattack within the past 12 months, up from 68% just a year ago, according to cybersecurity vendor Netwrix’s Hybrid Security Trends Report. ... Breach blame will fall on you — and the fallout could include personal liability - As if getting victimized by a security breach isn’t enough, new Securities and Exchange Commission (SEC) rules put CISOs in the crosshairs for potential criminal prosecution. The new rules, which went into effect in 2023, require publicly listed companies to report any material cybersecurity incident within four business days.


Are you an A(I)ction man?

Whilst AI-generated action figures individually have a small impact - a drop in the ocean you could say - trends like this exemplify how easy it is to use AI en masse, and collectively create an ocean of demand. Seeing the number of individuals, even those with knowledge of AI’s substantial resource consumption, partaking in the creation of these avatars, makes me wonder if we need greater awareness of the collective impact of GenAI. Now, I want to take a moment to clarify this is not a criticism of those producing AI-generated content, or of anyone who has taken part in the ‘action figure’ trend. I’ve certainly had many goes with DALL-E for fun, and taken part in various trends in my time, but the volume of these recent images caught my attention. Many of the conversations I had at Connect New York a few weeks ago addressed sustainability and the need for industry collaboration, but perhaps we should also be instilling more awareness from an end-user point of view. After all, ChatGPT, according to the Washington Post, consumes 39.8 million kWh per day. I’d be fascinated to see the full picture of power and water consumption from the AI-generated action figures. Whilst it will only account for a tiny fraction of overall demand, these drops can have a tendency to accumulate. 


The MVP Dilemma: Scale Now or Scale Later?

Teams often have few concrete requirements about scalability. The business may not be a reliable source of information but, as we noted above, they do have a business case that has implicit scalability needs. It’s easy for teams to focus on functional needs, early on, and ignore these implicit scaling requirements. They may hope that scaling won’t be a problem or that they can solve the problem by throwing more computing resources at it. They have a legitimate concern about overbuilding and increasing costs, but hoping that scaling problems won't happen is not a good scaling strategy. Teams need to consider scaling from the start. ... The MVP often has implicit scalability requirements, such as "in order for this idea to be successful we need to recruit ten thousand new customers". Asking the right questions and engaging in collaborative dialogue can often uncover these. Often these relate to success criteria for the MVP experiment. ... Some people see asynchronous communication as another scaling panacea because it allows work to proceed independently of the task that initiated the work. The theory is that the main task can do other things while work is happening in the background. So long as the initiating task does not, at some point, need the results of the asynchronous task to proceed, asynchronous processing can help a system to scale. 
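The asynchronous-communication point above can be shown in miniature. A minimal sketch (the function names are invented for illustration): the initiating task kicks off a slow job, keeps doing useful work, and only blocks when it actually needs the result.

```python
import asyncio

async def slow_report() -> str:
    """Stands in for a slow downstream call (e.g. a report or batch job)."""
    await asyncio.sleep(0.1)
    return "report ready"

async def main() -> None:
    # Kick off the background work without waiting for it.
    pending = asyncio.create_task(slow_report())
    print("handling other requests...")  # main task is not blocked
    result = await pending               # block only when the result is needed
    print(result)

asyncio.run(main())
```

This also illustrates the caveat in the text: if the main task awaits the result immediately, the asynchrony buys nothing; the scaling benefit comes from the useful work done between kickoff and await.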


Data Integrity: What It Is and Why It Matters

By contrast, data quality builds on methods for confirming the integrity of the data and also considers the data’s uniqueness, timeliness, accuracy, and consistency. Data is considered “high quality” when it ranks high in all these areas based on the assessment of data analysts. High-quality data is considered trustworthy and reliable for its intended applications based on the organization’s data validation rules. The benefits of data integrity and data quality are distinct, despite some overlap. Data integrity allows a business to recover quickly and completely in the event of a system failure, prevent unauthorized access to or modification of the data, and support the company’s compliance efforts. By confirming the quality of their data, businesses improve the efficiency of their data operations, increase the value of their data, and enhance collaboration and decision-making. Data quality efforts also help companies reduce their costs, enhance employee productivity, and establish closer relationships with their customers. Implementing a data integrity strategy begins with identifying the sources of potential data corruption in your organization. These include human error, system malfunctions, unauthorized access, failure to validate and test, and lack of governance. A data integrity plan operates at both the database level and business level.
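One common building block of an integrity plan is a checksum stored alongside the data, recomputed later to detect corruption or unauthorized modification. A minimal sketch (the record format is invented; real systems layer this with access controls and validation):

```python
import hashlib

def checksum(record: bytes) -> str:
    """SHA-256 digest of a record, stored alongside the data."""
    return hashlib.sha256(record).hexdigest()

record = b"account=42;balance=100.00"
stored_digest = checksum(record)

# Later: verify the record before trusting it.
tampered = b"account=42;balance=999.00"
print(checksum(record) == stored_digest)    # intact record verifies
print(checksum(tampered) == stored_digest)  # modified record fails the check
```

A checksum mismatch tells you the data changed, not why; that distinction is exactly where the integrity plan's database-level and business-level layers come in.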


Backup-as-a-service explained: Your guide to cloud data protection

With BaaS, enterprises have quick, easy access to their data. Providers store multiple copies of backups in different locations so that data can be recovered when lost due to outages, failures or accidental deletion. BaaS also features geographic distribution and automatic failover, when data handling is automatically moved to a different server or system in the event of an incident to ensure that it is safe and readily available. ... With BaaS, the provider uses its own cloud infrastructure and expertise to handle the entire backup and restoration process. Enterprises simply connect to the backup engine, set their preferences and the platform handles file transfer, encryption and maintenance. Automation is the engine that drives BaaS, helping ensure that data is continuously backed up without slowing down network performance or interrupting day-to-day work. Enterprises first select the data they need backed up — whether it be simple files or complex apps — backup frequency and data retention times. ... Enterprises shouldn’t just jump right into BaaS — proper preparation is critical. Firstly, it is important to define a backup policy that identifies the organization’s critical data that must be backed up. This policy should also include backup frequency, storage location and how long copies should be retained.
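The preparation steps above (what to back up, how often, where, and for how long) amount to a backup policy that can be written down as data. A hypothetical sketch (the field names are illustrative, not any specific BaaS provider's schema):

```python
# Hypothetical backup policy: critical data identified, with frequency,
# retention, and geographically distributed storage locations.
backup_policy = {
    "datasets": [
        {"name": "customer-db", "tier": "critical", "frequency": "hourly", "retention_days": 90},
        {"name": "file-shares", "tier": "standard", "frequency": "daily", "retention_days": 30},
        {"name": "build-logs", "tier": "low", "frequency": "weekly", "retention_days": 7},
    ],
    "storage_locations": ["eu-west-1", "eu-central-1"],  # geographic distribution
    "failover": "automatic",
}

# Simple sanity check: every critical dataset must be backed up at least hourly.
for ds in backup_policy["datasets"]:
    if ds["tier"] == "critical":
        assert ds["frequency"] == "hourly", f"{ds['name']} backed up too rarely"
print("policy validated")
```

Keeping the policy machine-readable makes it reviewable and testable, which matters because the policy, not the provider's defaults, is what determines whether the right data is recoverable.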


CISO 3.0: Leading AI governance and security in the boardroom

AI is expanding the CISO’s required skillset beyond cybersecurity to include fluency in data science, machine learning fundamentals, and understanding how to evaluate AI models – not just technically, but from a governance and risk perspective. Understanding how AI works and how to use it responsibly is essential. Fortunately, AI has also evolved how we train our teams. For example, adaptive learning platforms that personalize content and simulate real-world scenarios are assisting in closing the skills gap more effectively. Ultimately, to become successful in the AI space, both CISOs and their teams will need to grasp how AI models are trained, the data they rely on, and the risks they may introduce. CISOs should always prioritize accountability and transparency. Red flags to look out for include a lack of explainability or insufficient auditing capabilities, both of which leave companies vulnerable. It’s important to understand how a tool handles sensitive data, and whether it has proven success in similar environments. Beyond that, it’s also vital to evaluate how well the tool aligns with your governance model, that it can be audited, and that it integrates well into your existing systems. Lastly, overpromising capabilities or providing an unclear roadmap for support are signs to proceed with caution.

Daily Tech Digest - June 01, 2025


Quote for the day:

"You are never too old to set another goal or to dream a new dream." -- C.S. Lewis


A wake-up call for real cloud ROI

To make cloud spending work for you, the first step is to stop, assess, and plan. Do not assume the cloud will save money automatically. Establish a meticulous strategy that matches workloads to the right environments, considering both current and future needs. Take the time to analyze which applications genuinely benefit from the public cloud versus alternative options. This is essential for achieving real savings and optimal performance. ... Enterprises should rigorously review their existing usage, streamline environments, and identify optimization opportunities. Invest in cloud management platforms that can automate the discovery of inefficiencies, recommend continuous improvements, and forecast future spending patterns with greater accuracy. Optimization isn’t a one-time exercise—it must be an ongoing process, with automation and accountability as central themes. Enterprises are facing mounting pressure to justify their escalating cloud spend and recapture true business value from their investments. Without decisive action, waste will continue to erode any promised benefits. ... In the end, cloud’s potential for delivering economic and business value is real, but only for organizations willing to put in the planning, discipline, and governance that cloud demands. 


Why IT-OT convergence is a gamechanger for cybersecurity

The combination of IT and OT is a powerful one. It promises real-time visibility into industrial systems, predictive maintenance that limits downtime and data-driven decision making that gives everything from supply chain efficiency to energy usage a boost. When IT systems communicate directly with OT devices, businesses gain a unified view of operations – leading to faster problem solving, fewer breakdowns, smarter automation and better resource planning. This convergence also supports cost reduction through more accurate forecasting, optimised maintenance and the elimination of redundant technologies. And with seamless collaboration, IT and OT teams can now innovate together, breaking down silos that once slowed progress. Cybersecurity maturity is another major win. OT systems, often built without security in mind, can benefit from established IT protections like centralised monitoring, zero-trust architectures and strong access controls. Concurrently, this integration lays the foundation for Industry 4.0 – where smart factories, autonomous systems and AI-driven insights thrive on seamless IT-OT collaboration. ... The convergence of IT and OT isn’t just a tech upgrade – it’s a transformation of how we operate, secure and grow in our interconnected world. But this new frontier demands a new playbook that combines industrial knowhow with cybersecurity discipline.


How To Measure AI Efficiency and Productivity Gains

Measuring AI efficiency is a little like a "chicken or the egg" discussion, says Tim Gaus, smart manufacturing business leader at Deloitte Consulting. "A prerequisite for AI adoption is access to quality data, but data is also needed to show the adoption’s success," he advises in an online interview. ... The challenge in measuring AI efficiency depends on the type of AI and how it's ultimately used, Gaus says. Manufacturers, for example, have long used AI for predictive maintenance and quality control. "This can be easier to measure, since you can simply look at changes in breakdown or product defect frequencies," he notes. "However, for more complex AI use cases -- including using GenAI to train workers or serve as a form of knowledge retention -- it can be harder to nail down impact metrics and how they can be obtained." ... Measuring any emerging technology's impact on efficiency and productivity often takes time, but impacts are always among the top priorities for business leaders when evaluating any new technology, says Dan Spurling, senior vice president of product management at multi-cloud data platform provider Teradata. "Businesses should continue to use proven frameworks for measurement rather than create net-new frameworks," he advises in an online interview. 


The discipline we never trained for: Why spiritual quotient is the missing link in leadership

Spiritual Quotient (SQ) is the intelligence that governs how we lead from within. Unlike IQ or EQ, SQ is not about skill—it is about state. It reflects a leader’s ability to operate from deep alignment with their values, to stay centred amid volatility and to make decisions rooted in clarity rather than compulsion. It shows up in moments when the metrics don’t tell the full story, when stakeholders pull in conflicting directions. When the team is watching not just what you decide, but who you are while deciding it. It’s not about belief systems or spirituality in a religious sense; it’s about coherence between who you are, what you value, and how you lead. At its core, SQ is composed of several interwoven capacities: deep self-awareness, alignment with purpose, the ability to remain still and present amid volatility, moral discernment when the right path isn’t obvious, and the maturity to lead beyond ego. ... The workplace in 2025 is not just hybrid—it is holographic. Layers of culture, technology, generational values and business expectations now converge in real time. AI challenges what humans should do. Global disruptions challenge why businesses exist. Employees are no longer looking for charismatic heroes. They’re looking for leaders who are real, reflective and rooted.


Microsoft Confirms Password Deletion—Now Just 8 Weeks Away

The company’s solution is to first move autofill and then any form of password management to Edge. “Your saved passwords (but not your generated password history) and addresses are securely synced to your Microsoft account, and you can continue to access them and enjoy seamless autofill functionality with Microsoft Edge.” Microsoft has added an Authenticator splash screen with a “Turn on Edge” button as its ongoing campaign to switch users to its own browser continues. It’s not just with passwords, of course: there are the endless warnings and nags within Windows and even pointers within security advisories to switch to Edge for safety and security. ... Microsoft wants users to delete passwords once that’s done, so no legacy vulnerability remains, albeit Google has not gone quite that far as yet. You do need to remove SMS 2FA though, and use an app or key-based code at a minimum. ... Notwithstanding these Authenticator changes, Microsoft users should use this as a prompt to delete passwords and replace them with passkeys, per the Windows maker’s advice. This is especially true given increasing reports of two-factor authentication (2FA) bypasses that are increasingly rendering basic forms of 2FA redundant.


Sustainable cyber risk management emerges as industrial imperative as manufacturers face mounting threats

The ability of a business to adjust, absorb, and continue operating under pressure is becoming a performance metric in and of itself. It is measured not only in uptime or safety statistics. It’s not a technical checkbox; it’s a strategic commitment that is becoming the new baseline for industrial trust and continuity. At the heart of this change lies security by design. Organizations are working to integrate security into OT environments, working their way up from system architecture to vendor procurement and lifecycle management, rather than adding protections along the way and after deployment. ... The path is made more difficult by the acute lack of OT cyber skills, which could be overcome by employing specialists and establishing long-term pipelines through internal reskilling, knowledge transfer procedures, and partnerships with universities. Building sustainable industrial cyber risk management can be made more organized using the ISA/IEC 62443 industrial cybersecurity standards. Thanks to these widely recognized models, cyber defense is now a continuous, sustainable discipline rather than an after-the-fact response; they also allow industries to link risk mitigation to real industrial processes, guarantee system interoperability, and measure progress against common benchmarks.


Design Sprint vs Design Thinking: When to Use Each Framework for Maximum Impact

The Design Sprint is a structured five-day process created by Jake Knapp during his time at Google Ventures. It condenses months of work into a single workweek, allowing teams to rapidly solve challenges, create prototypes, and test ideas with real users to get clear data and insights before committing to a full-scale development effort. Unlike the more flexible Design Thinking approach, a Design Sprint follows a precise schedule with specific activities allocated to each day ...
The Design Sprint operates on the principle of "together alone" – team members work collaboratively during discussions and decision-making, but do individual work during ideation phases to ensure diverse thinking and prevent groupthink. ... Design Thinking is well-suited for broadly exploring problem spaces, particularly when the challenge is complex, ill-defined, or requires extensive user research. It excels at uncovering unmet needs and generating innovative solutions for "wicked problems" that don't have obvious answers. The Design Sprint works best when there's a specific, well-defined challenge that needs rapid resolution. It's particularly effective when a team needs to validate a concept quickly, align stakeholders around a direction, or break through decision paralysis.


Broadcom’s VMware Financial Model Is ‘Ethically Flawed’: European Report

Some of the biggest issues facing VMware cloud partners and customers in Europe include the company increasing prices after Broadcom axed VMware’s former perpetual licenses and pay-as-you-go monthly pricing models. Another big issue was VMware cutting its product portfolio from thousands of offerings to just a few large bundles that are only available via subscription with a multi-year minimum commitment. “The current VMware licensing model appears to rely on practices that breach EU competition regulations which, in addition to imposing harm on its customers and the European cloud ecosystem, creates a material risk for the company,” said the ECCO in its report. “Their shareholders should investigate and challenge the legality of such model.” Additionally, the ECCO said Broadcom recently made changes to its partnership program that forced partners to choose between either being a cloud service provider or a reseller. “It is common in Europe for CSP to play both [service provider and reseller] roles, thus these new requirements are a further harmful restriction on European cloud service providers’ ability to compete and serve European customers,” the ECCO report said.


Protecting Supply Chains from AI-Driven Risks in Manufacturing

Cybercriminals are notorious for exploiting AI and have set their sights on supply chains. Supply chain attacks are surging, with current analyses indicating a 70% likelihood of cybersecurity incidents stemming from supplier vulnerabilities. Additionally, Gartner projects that by the end of 2025, nearly half of all global organizations will have faced software supply chain attacks. Attackers manipulate data inputs to mislead algorithms, disrupt operations or steal proprietary information. Hackers targeting AI-enabled inventory systems can compromise demand forecasting, causing significant production disruptions and financial losses. ... Continuous validation of AI-generated data and forecasts ensures that AI systems remain reliable and accurate. The “black-box” nature of most AI products, where internal processes remain hidden, demands innovative auditing approaches to guarantee reliable outputs. Organizations should implement continuous data validation, scenario-based testing and expert human review to mitigate the risks of bias and inaccuracies. While black-box methods like functional testing offer some evaluation, they are inherently limited compared to audits of transparent systems, highlighting the importance of open AI development.
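The continuous-validation idea above can be sketched as a simple statistical guard on forecast outputs. This is a minimal illustration, not a product API: the `validate_forecast` helper, the threshold, and the sample numbers are all assumptions for the sketch.

```python
import statistics

def validate_forecast(history, forecast, k=3.0):
    """Flag forecast points that deviate more than k standard deviations
    from the historical mean -- a crude guard against poisoned inputs
    skewing an AI demand-forecasting system."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return [
        (i, value) for i, value in enumerate(forecast)
        if abs(value - mean) > k * stdev
    ]

# Stable demand history; one forecast point is wildly off and gets flagged
# for scenario-based testing or expert human review.
history = [100, 104, 98, 101, 97, 103, 99, 102]
suspect = validate_forecast(history, [101, 99, 480, 100])
# suspect == [(2, 480)]
```

In practice such checks would run continuously against live model outputs, with flagged points routed to human reviewers rather than acted on automatically.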


What's the State of AI Costs in 2025?

This year's report revealed that 44% of respondents plan to invest in improving AI explainability. Their goals are to increase accountability and transparency in AI systems as well as to clarify how decisions are made so that AI models are more understandable to users. Juxtaposed with uncertainty around ROI, this statistic signals further disparity between organizations' usage of AI and accurate understanding of it. ... Of the companies that use third-party platforms, over 90% reported high awareness of AI-driven revenue. That awareness empowers them to confidently compare revenue and cost, leading to very reliable ROI calculations. Conversely, companies that don't have a formal cost-tracking system have much less confidence that they can correctly determine the ROI of their AI initiatives. ... Even the best-planned AI projects can become unexpectedly expensive if organizations lack effective cost governance. This report highlights the need for companies to not merely track AI spend but optimize it via real-time visibility, cost attribution, and useful insights. Cloud-based AI tools account for almost two-thirds of AI budgets, so cloud cost optimization is essential if companies want to stop overspending. Cost is more than a metric; it's the most strategic measure of whether AI growth is sustainable. As companies implement better cost management practices and tools, they will be able to scale AI in a fiscally responsible way, confidently measure ROI, and prevent financial waste.
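The ROI comparison the report describes is straightforward once costs are actually attributed. A minimal sketch, assuming illustrative numbers and category names (the `ai_roi` helper is not from the report), with cloud AI tools at roughly two-thirds of spend as the report notes:

```python
def ai_roi(revenue_attributed, total_cost):
    """ROI as a fraction: (AI-driven revenue - cost) / cost."""
    if total_cost <= 0:
        raise ValueError("cost must be positive")
    return (revenue_attributed - total_cost) / total_cost

# Cost attribution by category; cloud-based AI tools dominate the budget.
costs = {"cloud_ai_tools": 640_000, "on_prem_compute": 210_000, "staff_tooling": 150_000}
total = sum(costs.values())      # 1,000,000
roi = ai_roi(1_450_000, total)   # 0.45 -> a 45% return
```

Without the cost-attribution step (the `costs` breakdown), the `total` input to the ROI calculation is a guess, which is exactly the low-confidence situation the report describes for companies lacking formal cost tracking.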

Daily Tech Digest - May 29, 2025


Quote for the day:

"All progress takes place outside the comfort zone." -- Michael John Bobak


What Are Deepfakes? Everything to Know About These AI Image and Video Forgeries

Deepfakes rely on deep learning, a branch of AI that mimics how humans recognize patterns. These AI models analyze thousands of images and videos of a person, learning their facial expressions, movements and voice patterns. Then, using generative adversarial networks, AI creates a realistic simulation of that person in new content. GANs are made up of two neural networks where one creates content (the generator), and the other tries to spot if it's fake (the discriminator). The number of images or frames needed to create a convincing deepfake depends on the quality and length of the final output. For a single deepfake image, as few as five to 10 clear photos of the person's face may be enough. ... While tech-savvy people might be more vigilant about spotting deepfakes, regular folks need to be more cautious. I asked John Sohrawardi, a computing and information sciences Ph.D. student leading the DeFake Project, about common ways to recognize a deepfake. He advised people to look at the mouth to see if the teeth are garbled. "Is the video more blurry around the mouth? Does it feel like they're talking about something very exciting but act monotonous? That's one of the giveaways of lazier deepfakes." ... "Too often, the focus is on how to protect yourself, but we need to shift the conversation to the responsibility of those who create and distribute harmful content," Dorota Mani tells CNET.


Beyond GenAI: Why Agentic AI Was the Real Conversation at RSA 2025

In contrast to GenAI, which primarily focuses on the divergence of information, generating new content based on specific instructions, SynthAI developments emphasize the convergence of information, presenting less but more pertinent content by synthesizing available data. SynthAI will enhance the quality and speed of decision-making, potentially making decisions autonomously. The most evident application lies in summarizing large volumes of information that humans would be unable to thoroughly examine and comprehend independently. SynthAI’s true value will be in aiding humans to make more informed decisions efficiently. ... Trust in AI also needs to evolve. This isn’t a surprise: AI, like all technologies, is going through the hype cycle, and just as cloud and automation suffered from trust issues in the early stages of maturity, AI is following a very similar pattern. It will be some time before trust and confidence are in balance with AI. ... Agentic AI encompasses tools that can understand objectives, make decisions, and act. These tools streamline processes, automate tasks, and provide intelligent insights to aid in quick decision making. In use cases involving repetitive processes, a call center for example, agentic AI can deliver significant value.


The Privacy Challenges of Emerging Personalized AI Services

The nature of the search business will change substantially in this world of personalized AI services. It will evolve from a service for end users to an input into an AI service for end users. In particular, search will become a component of chatbots and AI agents, rather than the stand-alone service it is today. This merger has already happened to some degree. OpenAI has offered a search service as part of its ChatGPT deployment since last October. Google launched AI Overview in May of last year. AI Overview returns a summary of its search results generated by Google’s Gemini AI model at the top of its search results. When a user asks a question to ChatGPT, the chatbot will sometimes search the internet and provide a summary of its search results in its answer. ... The best way forward would not be to invent a sector-specific privacy regime for AI services, although this could be made to work in the same way that the US has chosen to put financial, educational, and health information under the control of dedicated industry privacy regulators. It might be a good approach if policymakers were also willing to establish a digital regulator for advanced AI chatbots and AI agents, which will be at the heart of an emerging AI services industry. But that prospect seems remote in today’s political climate, which seems to prioritize untrammeled innovation over protective regulation.


What CISOs can learn from the frontlines of fintech cybersecurity

For Shetty, the idea that innovation competes with security is a false choice. “They go hand in hand,” she says. User trust is central to her approach. “That’s the most valuable currency,” she explains. Lose it, and it’s hard to get back. That’s why transparency, privacy, and security are built into every step of her team’s work, not added at the end. ... Supply chain attacks remain one of her biggest concerns. Many organizations still assume they’re too small to be a target. That’s a dangerous mindset. Shetty points to many recent examples where attackers reached big companies by going through smaller suppliers. “It’s not enough to monitor your vendors. You also have to hold them accountable,” she says. Her team helps clients assess vendor cyber hygiene and risk scores, and encourages them to consider that when choosing suppliers. “It’s about making smart choices early, not reacting after the fact.” Vendor security needs to be an active process. Static questionnaires and one-off audits are not enough. “You need continuous monitoring. Your supply chain isn’t standing still, and neither are attackers.” ... The speed of change is what worries her most. Threats evolve quickly. The amount of data to protect grows every day. At the same time, regulators and customers expect high standards, and they should.


Tech optimism collides with public skepticism over FRT, AI in policing

Despite the growing alarm, some tech executives like OpenAI’s Sam Altman have recently reversed course, downplaying the need for regulation after previously warning of AI’s risks. This inconsistency, coupled with massive federal contracts and opaque deployment practices, erodes public trust in both corporate actors and government regulators. What’s striking is how bipartisan the concern has become. According to the Pew survey, only 17 percent of Americans believe AI will have a positive impact on the U.S. over the next two decades, while 51 percent express more concern than excitement about its expanding role. These numbers represent a significant shift from earlier years and a rare area of consensus between liberal and conservative constituencies. ... Bias in law enforcement AI systems is not simply a product of technical error; it reflects systemic underrepresentation and skewed priorities in AI design. According to the Pew survey, only 44 percent of AI experts believe women’s perspectives are adequately accounted for in AI development. The numbers drop even further for racial and ethnic minorities. Just 27 percent and 25 percent say the perspectives of Black and Hispanic communities, respectively, are well represented in AI systems.


6 rising malware trends every security pro should know

Infostealers steal browser cookies, VPN credentials, MFA (multi-factor authentication) tokens, crypto wallet data, and more. Cybercriminals sell the data that infostealers grab through dark web markets, giving attackers easy access to corporate systems. “This shift commoditizes initial access, enabling nation-state goals through simple transactions rather than complex attacks,” says Ben McCarthy, lead cyber security engineer at Immersive. ... Threat actors are systematically compromising the software supply chain by embedding malicious code within legitimate development tools, libraries, and frameworks that organizations use to build applications. “These supply chain attacks exploit the trust between developers and package repositories,” Immersive’s McCarthy tells CSO. “Malicious packages often mimic legitimate ones while running harmful code, evading standard code reviews.” ... “There’s been a notable uptick in the use of cloud-based services and remote management platforms as part of ransomware toolchains,” says Jamie Moles, senior technical marketing manager at network detection and response provider ExtraHop. “This aligns with a broader trend: Rather than relying solely on traditional malware payloads, adversaries are increasingly shifting toward abusing trusted platforms and ‘living-off-the-land’ techniques.”


How Constructive Criticism Can Improve IT Team Performance

Constructive criticism can be an excellent instrument for growth, both individually and at the team level, says Edward Tian, CEO of AI detection service provider GPTZero. "Many times, and with IT teams in particular, work is very independent," he observes in an email interview. "IT workers may not frequently collaborate with one another or get input on what they're doing," Tian states. ... When using constructive criticism, focus on seeking improvement with the poor result, Chowning advises, and use empathy when soliciting ideas on how to improve. It's important to ask questions, listen, seek to understand, acknowledge any difficulties or constraints, and solicit improvement ideas. ... With any IT team there are two key aspects of constructive criticism: creating the expectation and opportunity for performance improvement, and -- often overlooked -- instilling recognition in the team that performance is monitored and has implications, Chowning says. ... The biggest mistake IT leaders make is treating feedback as a one-way directive rather than a dynamic conversation, Avelange observes. "Too many IT leaders still operate in a command-and-control mindset, dictating what needs to change rather than co-creating solutions with their teams."


How AI will transform your Windows web browser

Google isn’t the only one sticking AI everywhere imaginable, of course. Microsoft Edge already has plenty of AI integration — including a Copilot icon on the toolbar. Click that, and you’ll get a Copilot sidebar where you can talk about the current web page. But the integration runs deeper than most people think, with more coming yet: Copilot in Edge now has access to Copilot Vision, which means you can share your current web view with the AI model and chat about what you see with your voice. This is already here — today. Following Microsoft’s Build 2025 developers’ conference, the company is starting to test a Copilot box right on Edge’s New Tab page. Rather than a traditional Bing search box in that area, you’ll soon see a Copilot prompt box so you can ask a question or perform a search with Copilot — not Bing. It looks like Microsoft is calling this “Copilot Mode” for Edge. And it’s not just a transformed New Tab page complete with suggested prompts and a Copilot box, either: Microsoft is also experimenting with “Context Clues,” which will let Copilot take into account your browser history and preferences when answering questions. It’s worth noting that Copilot Mode is an optional and experimental feature. ... Even the less AI-obsessed browsers of Mozilla Firefox and Brave are now quietly embracing AI in an interesting way.


No, MCP Hasn’t Killed RAG — in Fact, They’re Complementary

Just as agentic systems are all the rage this year, so is MCP. But MCP is sometimes talked about as if it’s a replacement for RAG. So let’s review the definitions. In his “Is RAG dead yet?” post, Kiela defined RAG as follows: “In simple terms, RAG extends a language model’s knowledge base by retrieving relevant information from data sources that a language model was not trained on and injecting it into the model’s context.” As for MCP (and the middle letter stands for “context”), according to Anthropic’s documentation, it “provides a standardized way to connect AI models to different data sources and tools.” That’s the same definition, isn’t it? Not according to Kiela. In his post, he argued that MCP complements RAG and other AI tools: “MCP simplifies agent integrations with RAG systems (and other tools).” In our conversation, Kiela added further (ahem) context. He explained that MCP is a communication protocol — akin to REST or SOAP for APIs — based on JSON-RPC. It enables different components, like a retriever and a generator, to speak the same language. MCP doesn’t perform retrieval itself, he noted, it’s just the channel through which components interact. “So I would say that if you have a vector database and then you make that available through MCP, and then you let the language model use it through MCP, that is RAG,” he continued.
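Kiela's distinction can be sketched in a few lines: the protocol layer only carries the request, while retrieval and context injection happen in the components on either side of it. The tiny corpus, the keyword "retriever," and the `mcp_server` dispatcher below are simplified illustrations of a JSON-RPC-style exchange, not Anthropic's actual MCP schema.

```python
import json

# Toy "retriever" component: keyword search over a tiny corpus. In a real
# RAG system this would be a vector database behind an embedding model.
CORPUS = {
    "doc1": "RAG injects retrieved passages into the model's context.",
    "doc2": "MCP is a JSON-RPC protocol for connecting models to tools.",
}

def retrieve(query):
    words = query.lower().split()
    return [text for text in CORPUS.values()
            if any(w in text.lower() for w in words)]

def mcp_server(request_json):
    """Minimal JSON-RPC-style dispatcher. The protocol does no retrieval
    itself; it is only the channel through which the retriever is invoked."""
    req = json.loads(request_json)
    if req["method"] == "tools/call" and req["params"]["name"] == "search":
        hits = retrieve(req["params"]["arguments"]["query"])
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": hits})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "error": "unknown method"})

# A "model" doing RAG over the protocol: call the retriever, then inject
# the hits into the prompt context -- retrieval + injection is the RAG part.
request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                      "params": {"name": "search",
                                 "arguments": {"query": "JSON-RPC protocol"}}})
hits = json.loads(mcp_server(request))["result"]
prompt = "Context:\n" + "\n".join(hits) + "\nQuestion: What is MCP?"
```

This mirrors Kiela's point: swap the keyword search for a vector database exposed over the same channel and "that is RAG" -- the protocol is the plumbing, not the technique.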


AI didn’t kill Stack Overflow

Stack Overflow’s most revolutionary aspect was its reputation system. That is what elevated it above the crowd. The brilliance of the rep game allowed Stack Overflow to absorb all the other user-driven sites for developers and more or less kill them off. On Stack Overflow, users earned reputation points and badges for asking good questions and providing helpful answers. In the beginning, what was considered a good question or answer was not predetermined; it was a natural byproduct of actual programmers upvoting some exchanges and not others. ... For Stack Overflow, the new model, along with highly subjective ideas of “quality,” opened the gates to a kind of Stanford Prison Experiment. Rather than encouraging a wide range of interactions and behaviors, moderators earned reputation by culling interactions they deemed irrelevant. Suddenly, Stack Overflow wasn’t a place to go and feel like you were part of a long-lived developer culture. Instead, it became an arena where you had to prove yourself over and over again. ... Whether the culture of helping each other will survive in this new age of LLMs is a real question. Is human helping still necessary? Or can it all be reduced to inputs and outputs? Maybe there’s a new role for humans in generating accurate data that feeds the LLMs. Maybe we’ll evolve into gardeners of these vast new tracts of synthetic data.
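The mechanics described above can be sketched as a tiny tally over community vote events. The point values and the "Nice Answer" badge threshold are illustrative assumptions, not Stack Overflow's exact schedule:

```python
from collections import defaultdict

# Illustrative point values and badge threshold -- not Stack Overflow's
# actual reward schedule.
REP = {"question_upvote": 5, "answer_upvote": 10, "accepted_answer": 15}
NICE_ANSWER_THRESHOLD = 10  # answer upvotes needed for a "Nice Answer" badge

def tally(events):
    """events: iterable of (user, action) tuples produced by community voting.
    Reputation emerges from other users' votes, not from any fixed notion
    of what a 'good' question or answer is."""
    rep = defaultdict(int)
    answer_upvotes = defaultdict(int)
    badges = defaultdict(list)
    for user, action in events:
        rep[user] += REP[action]
        if action == "answer_upvote":
            answer_upvotes[user] += 1
            if answer_upvotes[user] == NICE_ANSWER_THRESHOLD:
                badges[user].append("Nice Answer")
    return dict(rep), dict(badges)

rep, badges = tally([("alice", "answer_upvote"), ("alice", "accepted_answer"),
                     ("bob", "question_upvote")])
# rep["alice"] == 25, rep["bob"] == 5
```

The point of the sketch is the emergent part: nothing in the tally defines quality; it simply aggregates whatever the crowd chose to upvote.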