
Daily Tech Digest - April 22, 2026


Quote for the day:

"Any code of your own that you haven't looked at for six or more months might as well have been written by someone else." -- Eagleson's law


🎧 Listen to this digest on YouTube Music


Duration: 18 mins • Perfect for listening on the go.


From pilots to platforms: Industrial IoT comes of age

The article "From Pilots to Platforms: Industrial IoT Comes of Age" explores the transformative shift in India’s manufacturing sector as Industrial IoT (IIoT) matures from isolated experimental pilots into robust, enterprise-wide operational platforms. Historically, IIoT deployments were limited to simple sensor installations for monitoring single machines; however, the current landscape focuses on building a production-grade digital infrastructure that integrates data from across the entire shop floor. This evolution enables a transition from reactive maintenance to proactive operational intelligence, allowing leaders to prioritize measurable outcomes such as increased throughput, energy efficiency, and overall revenue. Experts emphasize that the conversation has moved beyond questioning the technology's viability to addressing the complexities of scaling across multiple facilities and managing "brownfield" realities where decades-old equipment must be retrofitted for connectivity. The modern IIoT stack now balances edge and cloud workloads while leveraging digital twins to sustain continuous operations. Despite these advancements, robust network design and cybersecurity remain critical challenges that must be addressed to ensure resilience. Ultimately, the success of IIoT in India now hinges on converting vast operational data into repeatable, high-speed decisions that deliver tangible business value across the industrial ecosystem.


Beyond the ‘25 reasons projects fail’: Why algorithmic, continuous scenario planning addresses the root causes

The article "Beyond the '25 reasons projects fail'" argues that high failure rates in enterprise initiatives—highlighted by BCG and Gartner data—are not merely delivery misses but symptoms of a systemic failure in portfolio design and decision logic. While visible symptoms like scope creep and poor communication are real, they represent a deeper "pattern under the pattern" where organizations lack the capacity to calculate the ripple effects of change. The author, John Reuben, posits that modern governance requires "algorithmic planning" and "continuous scenario planning" to translate strategic ambition into modeled consequences. Without this discipline, leadership cannot effectively navigate trade-offs or manage dependencies. Furthermore, the piece emphasizes that while AI offers transformative potential, it must be anchored in mathematically sound planning data to avoid magnifying weak assumptions. To address these root causes, CIOs are urged to implement a modern control system for change featuring six essential capabilities: a unified planning model across priorities and budgets, side-by-side scenario comparison, interdependency mapping, early visibility into bottlenecks, continuous recalculation as conditions shift, and executive-facing summaries that turn data into decisions. Ultimately, the solution lies in evolving planning from a static, narrative process into a dynamic, algorithmic discipline capable of seeing and governing complex interactions in real time.


Is AI creating value or just increasing your IT bill?

The Spiceworks article, grounded in the "State of IT 2026" research by Spiceworks Ziff Davis, examines the economic tension between AI’s promise of value and its actual impact on corporate budgets. While AI software expenditures currently appear manageable—with a median spend of only 2.7% of total IT computing infrastructure budgets—the report warns that this represents just the visible portion of a much larger financial commitment. The "hidden" bill for enterprise AI includes critical investments in high-performance servers, specialized storage, and robust networking, which experts estimate can increase the total cost by four to five times the software license fees. This disparity highlights a significant risk: organizations may underestimate the capital required to move from experimentation to full-scale deployment. The article argues that "putting your money where your mouth is" requires a strategic alignment of talent, time, and treasure rather than just following market hype. To achieve a positive return on investment, IT leaders must look beyond software-as-a-service costs and account for the substantial infrastructure upgrades necessary to power modern AI workloads. Ultimately, the path to value depends on a holistic understanding of the total cost of ownership in an increasingly AI-driven landscape.
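As a back-of-the-envelope illustration of that four-to-five-times claim (the dollar figures below are invented, not from the report), the visible license line item understates the real commitment considerably:

```python
# Illustrative numbers only: if AI software licenses cost $1M/year and supporting
# infrastructure (servers, storage, networking) runs 4-5x the license fees,
# the real annual commitment is several times the visible line item.
license_cost = 1_000_000
for multiplier in (4, 5):
    infra = license_cost * multiplier
    total = license_cost + infra
    print(f"{multiplier}x infra -> total ${total:,} ({total / license_cost:.0f}x the license bill)")
```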


Cryptographic debt is becoming the next enterprise risk layer

"Cryptographic debt" is emerging as a critical enterprise risk layer, especially within the financial sector, as organizations face the consequences of outdated algorithms, fragmented key management, and encryption deeply embedded in legacy systems. According to Ruchin Kumar of Futurex, this "debt" has long remained invisible to boardrooms because cryptography was historically treated as a technical silo rather than a strategic risk domain. However, the rise of quantum computing and the impending transition to post-quantum cryptography (PQC) are exposing these structural vulnerabilities. Major hurdles to modernization include a lack of centralized cryptographic visibility, the tight coupling of security logic with application code, and manual, error-prone key management processes. To address these challenges, enterprises must shift toward a "crypto-agile" architecture. This transformation requires centralizing governance through Hardware Security Modules (HSMs), abstracting cryptographic functions via standardized APIs, and automating the entire key lifecycle. Such a horizontal transformation will likely trigger a massive wave of IT spending, comparable to cloud migration. As ecosystems become increasingly interconnected through APIs and fintech partnerships, weak cryptographic governance in any single segment now poses a systemic threat, making unified, architecture-first security essential for long-term business resilience and regulatory compliance.


Practical SRE Habits That Keep Teams Sane

The article "Practical SRE Habits That Keep Teams Sane" outlines essential strategies for Site Reliability Engineering teams to maintain high system availability while safeguarding engineer well-being. Central to these habits is the clear definition of Service Level Objectives (SLOs), which provide a data-driven framework for balancing feature velocity with operational stability. To combat burnout, the piece emphasizes reducing "toil"—repetitive, manual tasks—through targeted automation and the creation of actionable runbooks that lower the cognitive burden during high-pressure incidents. A significant portion of the advice focuses on human-centric operations, advocating for blameless post-mortems that prioritize systemic learning over individual finger-pointing, effectively removing the drama from failure analysis. Furthermore, the article suggests optimizing on-call health by implementing "interrupt buffers" and rotating "shield" roles to protect the rest of the team from productivity-killing context switching. By adopting safer deployment patterns and rigorous backlog hygiene, teams can shift from a chaotic, reactive firefighting mode to a controlled and predictable "boring" operational state. Ultimately, these practical habits aim to create a sustainable culture where reliability is a shared responsibility, ensuring that both the technical infrastructure and the humans who support it remain resilient and efficient in the long term.


From the engine room to the bridge: What the modern leadership shift means for architects like me

The article explores how the evolving role of modern technology leadership, specifically CIOs, necessitates a fundamental shift in the approach of system architects. Traditionally, CIOs focused on uptime and cost efficiency, but today’s leaders prioritize competitive differentiation, workforce transformation, and organizational alignment. Many modernization projects fail not due to technical flaws, but because of "upstream" issues like unresolved stakeholder conflicts or a lack of strategic clarity. Consequently, architects must look beyond sound code and clean implementation to build the "social infrastructure" and trust required for adoption. Modern leadership acts as both navigator and engineer, demanding infrastructure that supports both technical needs—like automated policy enforcement—and business outcomes. Managing technical debt proactively is crucial, as legacy systems often stifle innovation like AI adoption. For architects, this means evolving from purely technical resources into strategic partners who understand the cultural and decision-making constraints of the business. The best architectural designs are ultimately useless unless they resonate with the organizational reality and strategic pressures facing the customer. Bridging the gap between the engine room and the bridge is now the essential mandate for those designing the systems that drive modern business forward.


Are We Actually There? Assessing RPKI Maturity

The article "Are We Actually There? Assessing RPKI Maturity" provides a critical evaluation of the Resource Public Key Infrastructure (RPKI) and its current state of global deployment for securing internet routing. The authors argue that while RPKI adoption is steadily growing, the system is still far from reaching true maturity. Through comprehensive measurements, the research reveals that the effectiveness of RPKI enforcement varies significantly across the internet ecosystem; while large transit networks provide broad protection, the impact of enforcement at Internet Exchange Points remains localized. Furthermore, the paper highlights severe vulnerabilities within the RPKI software ecosystem, identifying over 40 security flaws that could compromise deployments. These issues are often rooted in the immense complexity and vague requirements of the RPKI specifications, which make correct implementation difficult and error-prone. The research also notes dependencies on other protocols like DNSSEC, which itself faces design-flaw vulnerabilities like KeyTrap. Ultimately, the authors conclude that although RPKI is currently the most effective defense against Border Gateway Protocol (BGP) hijacks, achieving a robust and mature architecture requires a fundamental redesign to simplify its structure, clarify specifications, and improve overall efficiency. Until these systemic flaws are addressed, the internet's routing security remains precarious.


Study finds AI fraud losses decline, but the risks are growing

The Javelin Strategy & Research 2026 identity fraud study, "The Illusion of Progress," highlights a deceptive shift in the digital landscape where total monetary losses have decreased while systemic risks continue to escalate. In 2025, combined fraud and scam losses fell to $38 billion, a $9 billion reduction from the previous year, accompanied by a drop in victim numbers to 36 million. This decline was primarily fueled by a 45 percent drop in scam-related losses. However, these improvements are overshadowed by a 31 percent surge in new-account fraud victims, signaling that criminals are pivoting their tactics. Artificial intelligence is at the core of this evolution, as fraudsters adopt advanced tools more rapidly than financial institutions can update their defenses. Lead analyst Suzanne Sando warns that lower loss figures are misleading because scammers are increasingly focused on stealing personal data to seed future, more sophisticated attacks rather than seeking immediate cash. To address this "inflection point," the report stresses that organizations must move beyond one-time security decisions. Instead, they must implement continuous fraud controls and foster deep industry collaboration to stay ahead of AI-powered criminals who operate without the regulatory constraints that often slow down legitimate financial services.


Why identity is the driving force behind digital transformation

In the modern digital landscape, identity has evolved from a simple login mechanism into the fundamental "invisible engine" driving successful digital transformation. As traditional network perimeters dissolve due to cloud adoption and remote work, identity has emerged as the critical new security boundary, utilizing a "never trust, always verify" approach to protect sensitive data. This shift empowers businesses to implement fine-grained access controls that enhance security while streamlining operations. Beyond security, identity systems act as a catalyst for business agility, allowing software teams to navigate complex environments more efficiently. Crucially, centralized identity management enhances the customer experience by unifying disparate data points to provide highly personalized interactions and build brand trust. In high-stakes sectors like finance, identity-centric frameworks are essential for real-time fraud detection and comprehensive risk assessment by linking multiple accounts to a single verified user. To truly leverage identity as a strategic asset, organizations must ensure their systems are real-time, easily integrable, and governed by strict access rules. Ultimately, establishing identity as a core infrastructure is no longer optional; it is the essential foundation for innovation, security, and competitive growth in an increasingly interconnected and complex global digital economy.


From Panic to Playbook: Modernizing Zero‑Day Response in AppSec

In "From Panic to Playbook: Modernizing Zero-Day Response in AppSec," Shannon Davis explores how the increasing frequency and rapid exploitation of zero-day vulnerabilities, such as Log4Shell, necessitate a shift from reactive improvisation to structured, rehearsed workflows. Traditional AppSec cadences—where vulnerabilities are typically addressed through scheduled scans and predictable sprint fixes—fail to meet the urgent demands of zero-day events due to collapsed time-to-exploit windows, high data volatility, and complex transitive dependencies. To bridge this gap, Davis highlights the Mend AppSec Platform’s modernized approach, which emphasizes four critical components: a live, authoritative data feed independent of scan schedules, instant correlation with existing inventory to identify exposure without manual rescanning, a defined 30-day lifecycle for active threats, and a centralized audit trail for cross-team alignment. This framework enables organizations to respond effectively within the vital first 72 hours after disclosure by providing a single source of truth for both human teams and automated tooling. Ultimately, the article argues that organizational resilience during a security crisis depends less on the total size of a security budget and more on the implementation of a proactive, data-driven playbook that transforms chaotic incident response into a sustainable, repeatable, and efficient operational reality.

Daily Tech Digest - November 26, 2025


Quote for the day:

“There is only one thing that makes a dream impossible to achieve: the fear of failure.” -- Paulo Coelho



7 signs your cybersecurity framework needs rebuilding

The biggest mistake, Pearlson says, is failing to recognize that the current plan is out of date or simply not working. Breaches happen, but that doesn’t always mean your cyber framework needs rebuilding. It does, however, indicate that the framework needs to be rethought and redesigned. ... “If your framework hasn’t kept pace with evolving threats or business needs, it’s time for a rebuild.” Cyber threats are always evolving, so staying proactive with regular reviews and fostering a culture of cybersecurity awareness will help catch issues before they become crises, Bucher says. ... “The cybersecurity landscape has evolved rapidly, especially with the rise of generative AI — your framework should reflect these shifts.” McLeod recommends a complete biannual framework review combined with a cursory review during the gap years. “This helps to ensure that the framework stays aligned with evolving threats, business changes, and regulatory requirements.” Ideally, security leaders should always have their security framework in mind while maintaining a rough, running list of areas that could be improved, streamlined, or clarified, McLeod suggests. ... If an organization is stuck in a cycle of continually chasing alerts and incidents, as well as reporting events after the fact instead of performing predictive threat assessments, data analysis, and forward planning, it’s time for a change, Baiati advises.


Your Million-Dollar IIoT Strategy is Being Sabotaged by Hundred-Dollar Radios

The ambition is clear: to create hyper-efficient, data-driven operations in a market expected to exceed $1.6 billion by 2030. Yet, a fundamental paradox lies at the heart of this transformation. While we architect complex digital twins and deploy sophisticated AI models, the foundational tools entrusted to our most valuable asset—the frontline workforce—are often decades old, disconnected, and failing at an alarming rate. ... Data shows that one in four organizations loses more than an entire day of productivity every month simply dealing with broken technology. The primary culprits are as predictable as they are preventable: nearly half of workers cite battery problems (48.4%) and physical damage (46.8%) as the most common causes of failure. ... While conversations about this crisis often focus on pay and career paths, Relay’s research reveals a more immediate, tangible cause: the daily frustration of using broken tools. 1 in 4 frontline workers already feel their equipment is second-class compared to what their corporate counterparts use, and a staggering 43% of workers say they’d be less likely to quit if guaranteed access to modern, automatically upgraded devices. ... Beyond reliability, it’s important to address the data black hole created by legacy, disconnected tools. Every day, frontline teams generate thousands of hours of spoken communication—a rich stream of unstructured data filled with maintenance alerts, safety concerns, and process bottlenecks.


Ask the Experts: Validate, don't just migrate

"Refactoring code is certainly a big undertaking. And if you start before you have good hygiene and governance, then you're just setting yourself up for failure. Similarly, if you haven't tagged properly, you have no way to attribute it to the project, and that becomes a cost problem." ... "If you do conclude [that migration is necessary], then you really must make sure the application is architected right. A lot of times, these workloads weren't designed for the cloud world, so you must adapt them and deliberately architect them for a cloud workload. "[To prepare a mission-critical application], it's key to look at the appropriateness, operating system [and] licenses. Sometimes, there are licenses tied to CPUs or other things that might introduce issues for you as well, so regression, latency and performance testing will be mandatory. ... "[IT leaders must also understand] the risks and costs associated with taking things into the cloud, and the pros and cons of that versus leaving it alone. Because old stuff, whether it was [procured] yesterday or five years ago, is inherently going to be vulnerable from a cybersecurity standpoint. Risk No. 2 is interoperability and compatibility, because old stuff doesn't talk to new stuff. And the third one is supportability, because it's hard to find old people to support old systems. ... "Sometimes, people have the false sense that if it's in cloud, then I'm all set. Everything is available, and everything is highly redundant. And it is, if you design [the application] with those things in mind.


Heineken CISO champions a new risk mindset to unlock innovation

I started as an auditor and later led a cyber defense team. It’s easy to fall into the black-and-white trap of being the function that always says “no” or speaks in cryptic tech jargon. It’s a scary world out there with so many attacks happening in every industry. The classical reaction of most security professionals is to tighten defences and impose even more rules. ... CISOs need to shift the mindset from pure compliance to asking: How does our cyber strategy support the business and its values? What calculated risks do we want the business to take? Where do we need their attention and help to embed security into the DNA of our people and our company? ... Be visible and approachable. Share the lessons that shaped you as a leader, what worked, what didn’t, and the principles that guide your decisions. I’m passionate about building diverse teams where everyone gets the same opportunities, no matter age, gender, or background. Diversity makes us stronger, and when there’s trust and openness, it sparks mentoring, coaching, and knowledge sharing. Make coaching and mentoring non-negotiable, and carve out time for it. It’s easy to push aside when you’re busy putting out security fires, but neglecting people’s growth and well-being is a big miss. Be authentic and vulnerable, walk the talk. Share the real stories, including failures and what made you stronger. Too often, people focus only on titles, certifications, and tech skills.


Data-Driven Enterprise: How Companies Turn Data into Strategic Advantage

A data-driven enterprise is not defined by the number of dashboards or analytics tools it owns. It’s defined by its ability to turn raw information into intelligent action. True data-driven organizations embed data thinking into every level of decision-making, from boardroom strategy to day-to-day operations. ... A modern data architecture is not a single platform, but an interconnected ecosystem designed to balance agility, governance, and scalability. ... As organizations mature in their data journey, they are moving away from rigid, centralized models that rely on a single source of truth. While centralization once ensured control, it often created bottlenecks, slowing down innovation and limiting agility. ... We are entering an era of data agents: self-learning systems capable of autonomously detecting anomalies, assessing risks, and forecasting trends in real time. These intelligent agents will soon become the invisible workforce of the enterprise, operating across domains: predicting supply chain disruptions, optimizing IT performance, personalizing customer journeys, and ensuring compliance through continuous monitoring. Their actions will reshape not only operations but also how organizations think about governance, accountability, and human oversight. For architects, this shift represents both a challenge and an extraordinary opportunity. The role is evolving from that of a data custodian focused on structure and governance to an ecosystem designer who engineers environments where data and AI can coexist, learn, and continuously create value.


10 benefits of an optimized third-party IT services portfolio

By entrusting day-to-day IT operations to trusted providers, organizations can reallocate internal resources toward higher-value initiatives such as digital transformation, automation, and product innovation. This accelerates adoption of emerging technologies, and allows internal teams to deepen business expertise, strengthen cross-functional collaboration, and focus on driving growth where it matters most. ... A well-structured third-party IT services portfolio can provide flexibility to scale up or down based on business needs. This is particularly valuable for CEOs who need to adapt to changing market conditions and seize growth opportunities. Securing talent in the market today is challenging and time-consuming, so tapping into the talent pools of your strategic IT services partner base allows organizations to leverage their bench strength to fill immediate needs for talent. ... IT service providers continuously invest in advanced tech and talent development, enabling clients to benefit from cutting-edge innovations without bearing the full cost of adoption. As AI, automation, and cybersecurity evolve, providers offer the subject matter expertise and tools organizations need to stay ahead of disruption. ... With operational stability ensured through a balance of internal talent and trusted third parties, CIOs can dedicate more focus to long-term strategic initiatives that fuel growth and innovation.


Modernizing SOCs with Agentic AI and Human-in-the-Loop: A Guide to CISOs

Traditional SOCs were not built for today’s speed and scale. Alert fatigue, manual investigations, disconnected tools, and talent shortages all contribute to the operational drag. Many security leaders are stuck in a reactive loop with no clear path to improvement. ... Legacy SOCs rely heavily on outdated technologies and rule-based detection, generating high volumes of alerts, many of which are false positives, leading to analyst burnout. Analysts are compelled to manually inspect and triage a deluge of meaningless signals, making the entire effort unsustainable. ... Before transformation can happen, one needs to understand where one stands. This can be accomplished with key benchmarking metrics for SOC performance, such as MTTD (Mean time to detect), MTTR (Mean time to respond), case closure rates, and tool effectiveness. ... Agentic AI represents the next evolution of AI-powered cybersecurity, which is modular, explainable, and autonomous. Through a coordinated system of AI agents, the Agentic SOC continuously responds and adapts to the evolving security environment in real time. It is designed to accelerate threat detection, investigation, and response by 10x, bringing speed, precision, and clarity to every function of SecOps. Agentic AI is the technology shift that changes the game. Unlike traditional automation, Agentic AI is decision-oriented, self-improving, and always operating with human-in-the-loop for oversight.
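The benchmarking step the article recommends translates directly into code; a minimal sketch with invented incident timestamps (a real SOC would pull these from SIEM or SOAR case data):

```python
from datetime import datetime
from statistics import mean

incidents = [
    # (occurred, detected, resolved) -- hypothetical case history
    (datetime(2025, 1, 3, 8, 0), datetime(2025, 1, 3, 9, 30), datetime(2025, 1, 3, 14, 0)),
    (datetime(2025, 1, 9, 22, 15), datetime(2025, 1, 10, 1, 15), datetime(2025, 1, 10, 6, 45)),
]

# MTTD: mean gap between occurrence and detection; MTTR: detection to resolution.
mttd_hours = mean((d - o).total_seconds() / 3600 for o, d, _ in incidents)
mttr_hours = mean((r - d).total_seconds() / 3600 for _, d, r in incidents)
print(f"MTTD: {mttd_hours:.1f}h, MTTR: {mttr_hours:.1f}h")  # the baseline to beat post-transformation
```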


3 SOC Challenges You Need to Solve Before 2026

2026 will mark a pivotal shift in cybersecurity. Threat actors are moving from experimenting with AI to making it their primary weapon, using it to scale attacks, automate reconnaissance, and craft hyper-realistic social engineering campaigns. ... Attackers have mastered evasion. ClickFix campaigns trick employees into pasting malicious PowerShell commands themselves. LOLBins are abused to hide malicious behavior. Multi-stage phishing hides behind QR codes, CAPTCHAs, rewritten URLs, and fake installers. Traditional sandboxes stall because they can't click "Next," solve challenges, or follow human-dependent flows. Result? Low detection rates for the exact threats exploding in 2025 and beyond. ... Thousands of daily alerts, mostly false positives. An average SOC handles 11,000 alerts daily, with only 19% worth investigating, according to the 2024 SANS SOC Survey. Tier 1 analysts drown in noise, escalating everything because they lack context. Every alert becomes a research project. Every investigation starts from zero. Burnout hits hard. Turnover doubles, morale tanks, and real threats hide in the backlog. By 2026, AI-orchestrated attacks will flood systems even faster, turning alert fatigue into a full-blown crisis. ... From a financial leadership perspective, security spending often feels like a black hole: money is spent, but risk reduction is hard to quantify. SOCs are challenged to justify investments, especially when security teams seem to be a cost center without clear profit or business-driving impact.


Digital surveillance tools are reshaping workplace privacy, GAO warns

Privacy concerns intensify when surveillance data feeds into automated systems that evaluate performance, set productivity metrics, or flag workers for potential discipline. GAO found that employers often rely on flawed benchmarks and incomplete measurements. Tools rarely capture the full range of work performed, such as research, mentoring, reading, or off-screen tasks, and frequently misinterpret normal behavior as inefficiency. When employers trust these tools “at face value,” the report notes, workers can be unfairly labeled unproductive or noncompliant despite doing their jobs well. ... Meanwhile, past federal efforts to issue guidance on reducing surveillance-related harms, covering transparency practices, human oversight, and safeguards against discriminatory impacts, have been rescinded or paused since January by the Trump administration as agencies reassess their policy priorities. GAO also notes that existing federal privacy protections are narrow. The Electronic Communications Privacy Act restricts covert interception of communications, but it does not cover most forms of digital monitoring, such as keystroke logging, location tracking, biometric data collection, or algorithmic productivity scoring. ... The report concludes that while digital surveillance can improve safety, efficiency, and health monitoring, its benefits depend wholly on how employers use it.


How to avoid becoming an “AI-first” company with zero real AI usage

A competitor declared they’re going AI-first. Another publishes a case study about replacing support with LLMs. And a third shares a graph showing productivity gains. Within days, boardrooms everywhere start echoing the same message: “We should be doing this. Everyone else already is, and we can’t fall behind.” So the work begins. Then come the task forces, the town halls, the strategy docs and the targets. Teams are asked to contribute initiatives. But if you’ve been through this before, you know there’s often a difference between what companies announce and what they actually do. Because press releases don’t mention the pilots that stall, or the teams that quietly revert to the old way, or even the tools that get used once and abandoned. ... By then, your company’s AI-first mandate will have set into motion departmental initiatives, vendor contracts and maybe even some new hires with “AI” in their titles. The dashboards will be green, and the board deck will have a whole slide on AI. But in the quiet spaces where your actual work happens, what will have meaningfully changed? Maybe you'll be like the teams that never stopped their quiet experiments. ... That’s the invisible architecture of genuine progress: patient, and completely uninterested in performance. It doesn't make for great LinkedIn posts, and it resists grand narratives. But it transforms companies in ways that truly last. Every organization is standing at the same crossroads right now: Look like you’re innovating, or create a culture that fosters real innovation.

Daily Tech Digest - September 02, 2025


Quote for the day:

“The art of leadership is saying no, not yes. It is very easy to say yes.” -- Tony Blair


When Browsers Become the Attack Surface: Rethinking Security for Scattered Spider

Scattered Spider, also referred to as UNC3944, Octo Tempest, or Muddled Libra, has matured over the past two years through precision targeting of human identity and browser environments. This shift differentiates them from other notorious cybergangs like Lazarus Group, Fancy Bear, and REvil. If sensitive information such as your calendar, credentials, or security tokens is alive and well in browser tabs, Scattered Spider can acquire it. ... Once user credentials get into the wrong hands, attackers like Scattered Spider will move quickly to hijack previously authenticated sessions by stealing cookies and tokens. Securing the integrity of browser sessions can best be achieved by restricting unauthorized scripts from gaining access or exfiltrating these sensitive artifacts. Organizations must enforce contextual security policies based on components such as device posture, identity verification, and network trust. By linking session tokens to context, enterprises can prevent attacks like account takeovers, even after credentials have become compromised. ... Although browser security is the last mile of defense for malware-less attacks, integrating it into an existing security stack will fortify the entire network. By implementing activity logs enriched with browser data into SIEM, SOAR, and ITDR platforms, CISOs can correlate browser events with endpoint activity for a much fuller picture.
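A hedged sketch of what "linking session tokens to context" can look like server-side; the fingerprint inputs and helper names here are hypothetical, and a real deployment would tolerate softer signals (for example, IP churn on mobile networks) rather than exact matches:

```python
import hashlib
import hmac
import os
import secrets

SERVER_KEY = os.urandom(32)          # keyed fingerprints, so clients can't forge them
sessions: dict[str, bytes] = {}      # token -> context fingerprint captured at login

def fingerprint(device_id: str, ip_prefix: str, user_agent: str) -> bytes:
    material = f"{device_id}|{ip_prefix}|{user_agent}".encode()
    return hmac.new(SERVER_KEY, material, hashlib.sha256).digest()

def issue_session(device_id: str, ip_prefix: str, user_agent: str) -> str:
    token = secrets.token_urlsafe(32)
    sessions[token] = fingerprint(device_id, ip_prefix, user_agent)
    return token

def validate(token: str, device_id: str, ip_prefix: str, user_agent: str) -> bool:
    expected = sessions.get(token)
    return expected is not None and hmac.compare_digest(
        expected, fingerprint(device_id, ip_prefix, user_agent))

tok = issue_session("laptop-7f3a", "203.0.113.", "Firefox/142")
print(validate(tok, "laptop-7f3a", "203.0.113.", "Firefox/142"))  # True
print(validate(tok, "unknown-dev", "198.51.100.", "curl/8.5"))    # False: replayed stolen cookie
```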


The Transformation Resilience Trifecta: Agentic AI, Synthetic Data and Executive AI Literacy

The current state of Agentic AI is, in a word, fragile. Ask anyone in the trenches. These agents can be brilliant one minute and baffling the next. Instructions get misunderstood. Tasks break in new contexts. Chaining agents into even moderately complex workflows exposes just how early we are in this game. Reliability? Still a work in progress. And yet, we’re seeing companies experiment. Some are stitching together agents using LangChain or CrewAI. Others are waiting for more robust offerings from Microsoft Copilot Studio, OpenAI’s GPT-4o Agents, or Anthropic’s Claude toolsets. It’s the classic innovator’s dilemma: Move too early, and you waste time on immature tech. Move too late, and you miss the wave. Leaders must thread that needle — testing the waters while tempering expectations. ... Here’s the scarier scenario I’m seeing more often: “Shadow AI.” Employees are already using ChatGPT, Claude, Copilot, Perplexity — all under the radar. They’re using it to write reports, generate code snippets, answer emails, or brainstorm marketing copy. They’re more AI-savvy than their leadership. But they don’t talk about it. Why? Fear. Risk. Politics. Meanwhile, some executives are content to play cheerleader, mouthing AI platitudes on LinkedIn but never rolling up their sleeves. That’s not leadership — that’s theater.


Red Hat strives for simplicity in an ever more complex IT world

One of the most innovative developments in RHEL 10 is bootc in image mode, where VMs run like a container and are part of the CI/CD pipeline. By using immutable images, all changes are controlled from the development environment. Van der Breggen illustrates this with a retail scenario: “I can have one POS system for the payment kiosk, but I can also have another POS system for my cashiers. They use the same base image. If I then upgrade that base image to later releases of RHEL, I create one new base image, tag it in the environments, and then all 500 systems can be updated at once.” Red Hat Enterprise Linux Lightspeed acts as a command-line assistant that brings AI directly into the terminal. ... For edge devices, Red Hat uses a solution called Greenboot, which does not immediately proceed to a rollback but can wait for one if certain conditions are met. After, for example, three reboots without a working system, it reverts to the previous working release. However, not everything has been worked out perfectly yet. Lightspeed currently only works online, while many customers would like to use it offline because their RHEL systems are tucked away behind firewalls. Red Hat is still looking into possibilities for an expansion here, although making the knowledge base available offline poses risks to intellectual property.
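The Greenboot behavior described above amounts to a small policy loop; a sketch of the idea only (the threshold and health checks are stand-ins, not Red Hat's implementation):

```python
MAX_FAILED_BOOTS = 3

def health_checks_pass() -> bool:
    # Stand-in: e.g. required services up, network reachable, app responds.
    return False

def on_boot(failed_boots: int) -> tuple[str, int]:
    """Decide what to do after each boot, given the consecutive-failure count."""
    if health_checks_pass():
        return "promote current image", 0
    failed_boots += 1
    if failed_boots >= MAX_FAILED_BOOTS:
        return "roll back to last working image", failed_boots
    return "retry boot", failed_boots

state = 0
for _ in range(3):
    action, state = on_boot(state)
    print(action)  # retry, retry, then roll back
```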


The state of DevOps and AI: Not just hype

The vision of AI that takes you from a list of requirements through work items to build to test to, finally, deployment is still nothing more than a vision. In many cases, DevOps tool vendors use AI to build solutions to the problems their customers have. The result is a mixture of point solutions that can solve immediate developer problems. ... Machine learning is speeding up testing by failing faster. Build steps get reordered automatically so those that are likely to fail happen earlier, which means developers aren’t waiting for the full build to know when they need to fix something. Often, the same system is used to detect flaky tests by muting tests where failure adds no value. ... Machine learning gradually helps identify the characteristics of a working system and can raise an alert when things go wrong. Depending on the governance, it can spot where a defect was introduced and start a production rollback while also providing potential remediation code to fix the defect. ... There’s a lot of puffery around AI, and DevOps vendors are not helping. A lot of their marketing emphasizes fear: “Your competitors are using AI, and if you’re not, you’re going to lose” is their message. Yet DevOps vendors themselves are only one or two steps ahead of you in their AI adoption journey. Don’t adopt AI pell-mell due to FOMO, and don’t expect to replace everyone under the CTO with a large language model.
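The fail-fast reordering mentioned above is easy to picture; a minimal sketch with invented historical failure rates (a real system would learn these from CI history):

```python
# Order test suites by historical failure probability, descending, so likely
# breakage surfaces earliest and developers stop waiting on the full build.
historical_failure_rate = {
    "integration/db": 0.18,
    "unit/core": 0.02,
    "e2e/checkout": 0.31,
    "lint": 0.07,
}

def fail_fast_order(suites: dict[str, float]) -> list[str]:
    return sorted(suites, key=suites.get, reverse=True)

print(fail_fast_order(historical_failure_rate))
# ['e2e/checkout', 'integration/db', 'lint', 'unit/core']
```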


5 Ways To Secure Your Industrial IoT Network

IIoT is a subcategory of the Internet of Things (IoT). It is made up of a system of interconnected smart devices that uses sensors, actuators, controllers and intelligent control systems to collect, transmit, receive and analyze data. ... IIoT also has its unique architecture that begins with the device layer, where equipment, sensors, actuators and controllers collect raw operational data. That information is passed through the network layer, which transmits it to the internet via secure gateways. Next, the edge or fog computing layer processes and filters the data locally before sending it to the cloud, helping reduce latency and improving responsiveness. Once in the service and application support layer, the data is stored, analyzed, and used to generate alerts and insights. ... Many IIoT devices are not built with strong cybersecurity protections. This is especially true for legacy machines that were never designed to connect to modern networks. Without safeguards such as encryption or secure authentication, these devices can become easy targets. ... Defending against IIoT threats requires a layered approach that combines technology, processes and people. Manufacturers should segment their networks to limit the spread of attacks, apply strong encryption and authentication for connected devices, and keep software and firmware regularly updated.
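The edge/fog layer's filtering role can be shown in a few lines; an illustrative sketch (the threshold, readings, and payload shape are invented):

```python
# Aggregate raw sensor readings locally and forward only summaries and
# anomalies to the cloud, cutting latency and bandwidth.
TEMP_ALERT_C = 85.0

def process_at_edge(readings: list[float]) -> dict:
    alerts = [r for r in readings if r >= TEMP_ALERT_C]
    return {
        "count": len(readings),
        "avg": sum(readings) / len(readings),
        "max": max(readings),
        "alerts": alerts,  # only anomalies travel upstream in full
    }

batch = [71.2, 70.8, 86.4, 72.1, 70.9]
payload = process_at_edge(batch)  # this summary is what the secure gateway ships to the cloud
print(payload)
```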


AI Chatbots Are Emotionally Deceptive by Design

Even without deep connection, emotional attachment can lead users to place too much trust in the content chatbots provide. Extensive interaction with a social entity that is designed to be both relentlessly agreeable, and specifically personalized to a user’s tastes, can also lead to social “deskilling,” as some users of AI chatbots have flagged. This dynamic is simply unrealistic in genuine human relationships. Some users may be more vulnerable than others to this kind of emotional manipulation, like neurodiverse people or teens who have limited experience building relationships. ... With AI chatbots, though, deceptive practices are not hidden in user interface elements, but in their human-like conversational responses. It’s time to consider a different design paradigm, one that centers user protection: non-anthropomorphic conversational AI. All AI chatbots can be less anthropomorphic than they are, at least by default, without necessarily compromising function and benefit. A companion AI, for example, can provide emotional support without saying, “I also feel that way sometimes.” This non-anthropomorphic approach is already familiar in robot design, where researchers have created robots that are purposefully designed to not be human-like. This design choice is proven to more appropriately reflect system capabilities, and to better situate robots as useful tools, not friends or social counterparts.


How AI product teams are rethinking impact, risk, feasibility

We’re at a strange crossroads in the evolution of AI. Nearly every enterprise wants to harness it. Many are investing heavily. But most are falling flat. AI is everywhere — in strategy decks, boardroom buzzwords and headline-grabbing POCs. Yet, behind the curtain, something isn’t working. ... One of the most widely adopted prioritization models in product management is RICE — which scores initiatives based on Reach, Impact, Confidence, and Effort. It’s elegant. It’s simple. It’s also outdated. RICE was never designed for the world of foundation models, dynamic data pipelines or the unpredictability of inference-time reasoning. ... To make matters worse, there’s a growing mismatch between what enterprises want to automate and what AI can realistically handle. Stanford’s 2025 study, The Future of Work with AI Agents, provides a fascinating lens. ... ARISE adds three crucial layers that traditional frameworks miss: First, AI Desire — does solving this problem with AI add real value, or are we just forcing AI into something that doesn’t need it? Second, AI Capability — do we actually have the data, model maturity and engineering readiness to make this happen? And third, Intent — is the AI meant to act on its own or assist a human? Proactive systems have more upside, but they also come with far more risk. ARISE lets you reflect that in your prioritization.


Cloud control: The key to greener, leaner data centers

To fully unlock these cost benefits, businesses must adopt FinOps practices: the discipline of bringing engineering, finance, and operations together to optimize cloud spending. Without it, cloud costs can quickly spiral, especially in hybrid environments. But, with FinOps, organizations can forecast demand more accurately, optimise usage, and ensure every pound spent delivers value. ... Cloud platforms make it easier to use computing resources more efficiently. Even though the infrastructure stays online, hyperscalers can spread workloads across many customers, keeping their hardware busier and more productive, and manage capacity at a large scale, allowing them to power down hardware when it's not in use. ... The combination of cloud computing and artificial intelligence (AI) is further reshaping data center operations. AI can analyse energy usage, detect inefficiencies, and recommend real-time adjustments. But running these models on-premises can be resource-intensive. Cloud-based AI services offer a more efficient alternative. Take Google, for instance. By applying AI to its data center cooling systems, it cut energy use by up to 40 percent. Other organizations can tap into similar tools via the cloud to monitor temperature, humidity, and workload patterns and automatically adjust cooling, load balancing, and power distribution.


You Backed Up Your Data, but Can You Bring It Back?

Many IT teams assume that the existence of backups guarantees successful restoration. This misconception can be costly. A recent report from Veeam revealed that 49% of companies failed to recover most of their servers after a significant incident. This highlights a painful reality: Most backup strategies focus too much on storage and not enough on service restoration. Having backup files is not the same as successfully restoring systems. In real-world recovery scenarios, teams face unknown dependencies, a lack of orchestration, incomplete documentation, and gaps between infrastructure and applications. When services need to be restored in a specific order and under intense pressure, any oversight can become a significant bottleneck. ... Relying on a single backup location creates a single point of failure. Local backups can be fast but are vulnerable to physical threats, hardware failures, or ransomware attacks. Cloud backups offer flexibility and off-site protection but may suffer bandwidth constraints, cost limitations, or provider outages. A hybrid backup strategy ensures multiple recovery paths by combining on-premises storage, cloud solutions, and optionally offline or air-gapped options. This approach allows teams to choose the fastest or most reliable method based on the nature of the disruption.
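Restoring services "in a specific order" is exactly what a topological sort over declared dependencies gives you; a minimal sketch with hypothetical services, using Python's standard-library graphlib:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each service declares what must be up before it can start.
depends_on = {
    "auth-service": ["database"],
    "api-gateway": ["auth-service", "cache"],
    "database": [],
    "cache": [],
}

# static_order() yields dependencies before dependents: a safe bring-up order.
order = list(TopologicalSorter(depends_on).static_order())
print(order)  # e.g. ['database', 'cache', 'auth-service', 'api-gateway']
```

Encoding this map before an incident, rather than reconstructing it under pressure, is what turns backup files into a restorable service.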


Beyond Prevention: How Cybersecurity and Cyber Insurance Are Converging to Transform Risk Management

Historically, cybersecurity and cyber insurance have operated in silos, with companies deploying technical defenses to fend off attacks while holding a cyber insurance policy as a safety net. This fragmented approach often leaves gaps in coverage and preparedness. ... The insurance sector is at a turning point. Traditional models that assess risk at the point of policy issuance are rapidly becoming outdated in the face of constantly evolving cyber threats. Insurers who fail to adapt to an integrated model risk being outpaced by agile Cyber Insurtech companies, which leverage cutting-edge cyber intelligence, machine learning, and risk analytics to offer adaptive coverage and continuous monitoring. Some insurers have already begun to reimagine their role—not only as claim processors but as active partners in risk prevention. ... A combined cybersecurity and insurance strategy goes beyond traditional risk management. It aligns the objectives of both the insurer and the insured, with insurers assuming a more proactive role in supporting risk mitigation. By reducing the probability of significant losses through continuous monitoring and risk-based incentives, insurers are building a more resilient client base, directly translating to reduced claim frequency and severity.

Daily Tech Digest - August 28, 2025


Quote for the day:

“Rarely have I seen a situation where doing less than the other guy is a good strategy.” -- Jimmy Spithill


Emerging Infrastructure Transformations in AI Adoption

Balanced scaling of infrastructure storage and compute clusters optimizes resource use in the face of emerging elastic use cases. Throughput, latency, scalability, and resiliency are key metrics for measuring storage performance. Scaling storage with demand for AI solutions without contributing to technical debt is a careful balance to contemplate for infrastructure transformations. ... Data governance in AI extends beyond traditional access control. ML workflows have additional governance tasks such as lineage tracking, role-based permissions for model modification, and policy enforcement over how data is labeled, versioned, and reused. This includes dataset documentation, drift tracking, and LLM-specific controls over prompt inputs and generated outputs. Governance frameworks that support continuous learning cycles are more valuable: Every inference and user correction can become training data. ... As models become more stateful and retain context over time, pipelines must support real-time, memory-intensive operations. Even Apache Spark documentation hints at future support for stateful algorithms (models that maintain internal memory of past interactions), reflecting a broader industry trend. AI workflows are moving toward stateful "agent" models that can handle ongoing, contextual tasks rather than stateless, single-pass processing.
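The "every inference and user correction can become training data" loop implies provenance on each record; a sketch of what such a lineage record might carry (all field names are hypothetical):

```python
import json
import time
import uuid

def lineage_record(prompt: str, output: str, correction: str | None,
                   model_version: str, dataset_tag: str) -> dict:
    """Package one inference plus its human feedback with provenance attached."""
    return {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "model_version": model_version,  # which model produced the output
        "dataset_tag": dataset_tag,      # which labeled, versioned set it feeds
        "prompt": prompt,
        "output": output,
        "correction": correction,        # human correction closes the learning loop
    }

rec = lineage_record("summarize Q3 risks", "draft...", "add liquidity risk",
                     model_version="m-2025-08-01", dataset_tag="feedback-v3")
print(json.dumps(rec, indent=2))
```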


The rise of the creative cybercriminal: Leveraging data visibility to combat them

In response to the evolving cyber threats faced by organisations and governments, a comprehensive approach that addresses both the human factor and their IT systems is essential. Employee training in cybersecurity best practices, such as adopting a zero-trust approach and maintaining heightened vigilance against potential threats, like social engineering attacks, are crucial. Similarly, cybersecurity analysts and Security Operations Centres (SOCs) play a pivotal role by utilising Security Information and Event Management (SIEM) solutions to continuously monitor IT systems, identifying potential threats, and accelerating their investigation and response times. Given that these tasks can be labor-intensive, integrating a modern SIEM solution that harnesses generative AI (GenAI) is essential. ... By integrating GenAI's data processing capabilities with an advanced search platform, cybersecurity teams can search at scale across vast amounts of data, including unstructured data. This approach supports critical functions such as monitoring, compliance, threat detection, prevention, and incident response. With full-stack observability, or in other words, complete visibility across every layer of their technology stack, security teams can gain access to content-aware insights, and the platform can swiftly flag any suspicious activity.


How to secure digital trust amid deepfakes and AI

To ensure resilience in the shifting cybersecurity landscape, organizations should proactively adopt a hybrid fraud-prevention approach, strategically integrating AI solutions with traditional security measures to build robust, layered defenses. Ultimately, a comprehensive, adaptive, and collaborative security framework is essential for enterprises to effectively safeguard against increasingly sophisticated cyberattacks – and there are several preemptive strategies organizations must leverage to counteract threats and strengthen their security posture. ... Fraudsters are adaptive, usually leveraging both advanced methods (deepfakes and synthetic identities) and simpler techniques (password spraying and phishing) to exploit vulnerabilities. By combining AI with tools like strong and continuous authentication, behavioral analytics, and ongoing user education, organizations can build a more resilient defense system. This hybrid approach ensures that no single point of failure exposes the entire system, and that both human and machine vulnerabilities are addressed. Recent threats rely on social engineering to obtain credentials, bypass authentication, and steal sensitive data, and it is evolving along with AI. Utilizing real-time verification techniques, such as liveness detection, can reliably distinguish between legitimate users and deepfake impersonators. 


Why Generative AI's Future Isn't in the Cloud

Instead of telling customers they needed to bring their data to the AI in the cloud, we decided to bring AI to the data where it's created or resides, locally on-premises or at the edge. We flipped the model by bringing intelligence to the edge, making it self-contained, secure and ready to operate with zero dependency on the cloud. That's not just a performance advantage in terms of latency, but in defense and sensitive use cases, it's a requirement. ... The cloud has driven incredible innovation, but it's created a monoculture in how we think about deploying AI. When your entire stack depends on centralized compute and constant connectivity, you're inherently vulnerable to outages, latency, bandwidth constraints, and, in defense scenarios, active adversary disruption. The blind spot is that this fragility is invisible until it fails, and by then the cost of that failure can be enormous. We're proving that edge-first AI isn't just a defense-sector niche, it's a resilience model every enterprise should be thinking about. ... The line between commercial and military use of AI is blurring fast. As a company operating in this space, how do you navigate the dual-use nature of your tech responsibly? We consider ourselves a dual-use defense technology company and we also have enterprise customers. Being dual use actually helps us build better products for the military because our products are also tested and validated by commercial customers and partners. 


Why DEI Won't Die: The Benefits of a Diverse IT Workforce

For technology teams, diversity is a strategic imperative that drives better business outcomes. In IT, diverse leadership teams generate 19% more revenue from innovation, solve complex problems faster, and design products that better serve global markets — driving stronger adoption, retention of top talent, and a sustained competitive edge. Zoya Schaller, director of cybersecurity compliance at Keeper Security, says that when a team brings together people with different life experiences, they naturally approach challenges from unique perspectives. ... Common missteps, according to Ellis, include over-focusing on meeting diversity hiring targets without addressing the retention, development, and advancement of underrepresented technologists. "Crafting overly broad or tokenistic job descriptions can fail to resonate with specific tech talent communities," she says. "Don't treat DEI as an HR-only initiative but rather embed it into engineering and leadership accountability." Schaller cautions that bias often shows up in subtle ways — how résumés are reviewed, who is selected for interviews, or even what it means to be a "culture fit." ... Leaders should be active champions of inclusivity, as it is an ongoing commitment that requires consistent action and reinforcement from the top.


The Future of Software Is Not Just Faster Code - It's Smarter Organizations

Using AI effectively doesn't just mean handing over tasks. It requires developers to work alongside AI tools in a more thoughtful way — understanding how to write structured prompts, evaluate AI-generated results and iterate them based on context. This partnership is being pushed even further with agentic AI. Agentic systems can break a goal into smaller steps, decide the best order to tackle them, tap into multiple tools or models, and adapt in real time without constant human direction. For developers, this means AI can do more than suggesting code. It can act like a junior teammate who can design, implement, test and refine features on its own. ... But while these tools are powerful, they're not foolproof. Like other AI applications, their value depends on how well they're implemented, tuned and interpreted. That's where AI-literate developers come in. It's not enough to simply plug in a tool and expect it to catch every threat. Developers need to understand how to fine-tune these systems to their specific environments — configuring scanning parameters to align with their architecture, training models to recognize application-specific risks and adjusting thresholds to reduce noise without missing critical issues. ... However, the real challenge isn't just finding AI talent, its reorganizing teams to get the most out of AI's capabilities. 


Industrial Copilots: From Assistants to Essential Team Members

Behind the scenes, industrial copilots are supported by a technical stack that includes predictive analytics, real-time data integration, and cross-platform interoperability. These assistants do more than just respond — they help automate code generation, validate engineering logic, and reduce the burden of repetitive tasks. In doing so, they enable faster deployment of production systems while improving the quality and efficiency of engineering work. Despite these advances, several challenges remain. Data remains the bedrock of effective copilots, yet many workers on the shop floor are still not accustomed to working with data directly. Upskilling and improving data literacy among frontline staff is critical. Additionally, industrial companies are learning that while not all problems need AI, AI absolutely needs high-quality data to function well. An important lesson shared during Siemens’ AI with Purpose Summit was the importance of a data classification framework. To ensure copilots have access to usable data without risking intellectual property or compliance violations, one company adopted a color-coded approach: white for synthetic data (freely usable), green for uncritical data (approval required), yellow for sensitive information, and red for internal IP (restricted to internal use only). 
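That color-coded framework maps naturally onto an enforcement check in a copilot's data-access layer; a sketch under the article's four labels (the enforcement rules themselves are illustrative, not the company's actual policy engine):

```python
from enum import Enum

class DataClass(Enum):
    WHITE = "synthetic: freely usable"
    GREEN = "uncritical: approval required"
    YELLOW = "sensitive: restricted handling"
    RED = "internal IP: internal use only"

def copilot_may_use(label: DataClass, approved: bool = False) -> bool:
    """Gate what a copilot may ingest, based on the data's classification."""
    if label is DataClass.WHITE:
        return True
    if label is DataClass.GREEN:
        return approved
    return False  # YELLOW and RED never leave controlled pipelines in this sketch

print(copilot_may_use(DataClass.GREEN, approved=True))  # True
print(copilot_may_use(DataClass.RED))                   # False
```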


Will the future be Consolidated Platforms or Expanding Niches?

Ramprakash Ramamoorthy believes enterprise SaaS is already making moves in consolidation. “The initial stage of a hype cycle includes features disguised as products and products disguised as companies. Well we are past that, many of these organizations that delivered a single product have to go through either vertical integration or sell out. In fact a lot of companies are mimicking those single-product features natively on large platforms.” Ramamoorthy says he also feels AI model providers will develop into enterprise SaaS organizations themselves as they continue to capture the value proposition of user data and usage signals for SaaS providers. This is why Zoho built their own AI backbone—to keep pace with competitive offerings and to maintain independence. On the subject of vibe-code and low-code tools, Ramamoorthy seems quite clear-eyed about their suitability for mass-market production. “Vibe-code can accelerate you from 0 to 1 faster, but particularly with the increase in governance and privacy, you need additional rigor. For example, in India, we have started to see compliance as a framework.” In terms of the best generative tools today, he observes “Anytime I see a UI or content generated by AI—I can immediately recognize the quality that is just not there yet.”


Beyond the Prompt: Building Trustworthy Agent Systems

While a basic LLM call responds statically to a single prompt, an agent system plans. It breaks down a high-level goal into subtasks, decides on tools or data needed, executes steps, evaluates outcomes, and iterates – potentially over long timeframes and with autonomy. This dynamism unlocks immense potential but can introduce new layers of complexity and security risk. ... Technology controls are vital but not comprehensive. That’s because the most sophisticated agent system can be undermined by human error or manipulation. This is where principles of human risk management become critical. Humans are often the weakest link. How does this play out with agents? Agents should operate with clear visibility. Log every step, every decision point, every data access. Build dashboards showing the agent’s “thought process” and actions. Enable safe interruption points. Humans must be able to audit, understand, and stop the agent when necessary. ... The allure of agentic AI is undeniable. The promise of automating complex workflows, unlocking insights, and boosting productivity is real. But realizing this potential without introducing unacceptable risk requires moving beyond experimentation into disciplined engineering. It means architecting systems with context, security, and human oversight at their core.
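A minimal sketch of what those controls can look like in code: a loop that logs every proposed step, asks a human before executing it, and stops cleanly on interruption. The planner and executor are stand-ins; a real agent would delegate both to models and tools:

```python
# Illustrative agent loop with the controls described above: every proposed
# step is logged, a human approves or interrupts, and outcomes are recorded.
# plan() and execute() are stand-ins for LLM and tool calls.

import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent")

def plan(goal: str) -> list:
    """Stand-in planner; a real agent would ask a model to decompose the goal."""
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def execute(step: str) -> str:
    """Stand-in executor for a tool or model call."""
    return f"done({step})"

def run_agent(goal: str) -> list:
    results = []
    for step in plan(goal):
        log.info("proposed step: %s", step)                 # auditable decision point
        if input(f"Run '{step}'? [y/N] ").lower() != "y":
            log.info("human interrupted before: %s", step)  # safe interruption point
            break
        outcome = execute(step)
        log.info("outcome: %s", outcome)                    # log every result
        results.append(outcome)
    return results

if __name__ == "__main__":
    run_agent("summarize Q2 incident reports")
```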


Where security, DevOps, and data science finally meet on AI strategy

The key is to define isolation requirements upfront and then optimize aggressively within those constraints. Make the business trade-offs explicit and measurable. When teams try to optimize first and secure second, they usually have to redo everything. However, when they establish their security boundaries, the optimization work becomes more focused and effective. ... The intersection with cost controls is immediate. You need visibility into whether your GPU resources are being utilized or just sitting idle. We’ve seen companies waste a significant portion of their budget on GPUs because the hardware was never properly monitored, or because it is only utilized in short bursts, which makes optimization difficult. ... Observability also helps you understand the difference between training workloads running at 100% utilization and inference workloads, where buffer capacity is needed for response times. ... From a security perspective, the very reason teams can get away with hoarding is the reason there may be security concerns. AI initiatives are often extremely high priority, where the ends justify the means. This often makes cost control an afterthought, and the same dynamic can also cause other enterprise controls to be more lax as innovation and time to market dominate.
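For the GPU-visibility point, a small sampling script along these lines (using NVIDIA's NVML bindings, installable as nvidia-ml-py, on a host with NVIDIA drivers) is often the first step; the 10% idle threshold is illustrative:

```python
# First-pass GPU visibility using NVIDIA's NVML bindings (pip install
# nvidia-ml-py); requires a host with NVIDIA drivers. The 10% idle
# threshold is illustrative.

import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu  # percent busy
        state = "idle" if util < 10 else "busy"
        print(f"GPU {i}: {util}% utilized ({state})")
finally:
    pynvml.nvmlShutdown()
```

A single reading cannot distinguish sustained training load from bursty inference traffic, so in practice you would sample on an interval and aggregate over time.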

Daily Tech Digest - June 25, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein



Why data observability is the missing layer of modern networking

You might hear people use these terms interchangeably, but they’re not the same thing. Visibility is about what you can see – dashboard statistics, logs, uptime numbers, bandwidth figures, the raw data that tells you what’s happening across your network. Observability, on the other hand, is about what that data actually means. It’s the ability to interpret, analyse, and act on those insights. It’s not just about seeing a traffic spike but about understanding why it happened. It’s not just spotting a latency issue, but knowing which apps are affected and where the bottleneck sits. ... Today, connectivity needs to be smart, agile, and scalable. It’s about building infrastructure that supports cloud, remote work, and everything in between. Whether you’re adding a new site, onboarding a remote team, or launching a cloud-hosted app, your network should be able to scale and respond at speed. Then there’s security, a non-negotiable layer that protects your entire ecosystem. Great security isn’t about throwing up walls, it’s about creating confidence. That means deploying zero trust principles, segmenting access, detecting threats in real time, and encrypting data, without making users’ lives harder. ... Finally, we come to observability. Arguably the most underappreciated of the three, but quickly becoming essential.
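A toy example of the distinction: the flow records below are raw visibility, while attributing the latency spike to a specific app is a first step toward observability. The data and the heuristic are entirely made up for illustration:

```python
# Toy example: the flow records are raw "visibility"; attributing the latency
# spike to an app is a step toward "observability". Data and the 1.5x
# heuristic are made up for illustration.

from collections import defaultdict
from statistics import mean

flows = [
    {"app": "crm",   "latency_ms": 40},
    {"app": "crm",   "latency_ms": 45},
    {"app": "video", "latency_ms": 300},
    {"app": "video", "latency_ms": 280},
    {"app": "erp",   "latency_ms": 55},
]

by_app = defaultdict(list)
for f in flows:
    by_app[f["app"]].append(f["latency_ms"])

overall = mean(f["latency_ms"] for f in flows)
for app, samples in by_app.items():
    if mean(samples) > 1.5 * overall:  # crude bottleneck attribution
        print(f"latency hotspot: {app} "
              f"(avg {mean(samples):.0f} ms vs {overall:.0f} ms overall)")
```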


6 Key Security Risks in LLMs: A Platform Engineer’s Guide

Prompt injection is the AI-era equivalent of SQL injection. Attackers craft malicious inputs to manipulate an LLM, bypass safeguards or extract sensitive data. These attacks range from simple jailbreak prompts that override safety rules to more advanced exploits that influence backend systems. ... Model extraction attacks allow adversaries to systematically query an LLM to reconstruct its knowledge base or training data, essentially cloning its capabilities. These attacks often rely on automated scripts submitting millions of queries to map the model’s responses. One common technique, model inversion, involves strategically structured inputs that extract sensitive or proprietary information embedded in the model. Attackers may also use repeated, incremental queries with slight variations to amass a dataset that mimics the original training data. ... On the output side, an LLM might inadvertently reveal private information embedded in its dataset or previously entered user data. A common risk scenario involves users unknowingly submitting financial records or passwords into an AI-powered chatbot, which could then store, retrieve or expose this data unpredictably. With cloud-based LLMs, the risk extends further. Data from one organization could surface in another’s responses.
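One common mitigation implied here for extraction attacks is per-client throttling, since these attacks depend on very high query volumes. A sliding-window limiter sketch follows; the window and cap are illustrative, and real deployments would pair this with anomaly detection:

```python
# Sliding-window throttle per client: extraction attacks need huge query
# volumes, so a per-window cap blunts them. Window and limit are
# illustrative; pair with anomaly detection in real deployments.

import time
from collections import defaultdict, deque

WINDOW_S = 60
MAX_QUERIES = 100  # per client per window

_history = defaultdict(deque)

def allow_query(client_id, now=None):
    """Return False once a client exceeds the per-window query budget."""
    now = time.monotonic() if now is None else now
    q = _history[client_id]
    while q and now - q[0] > WINDOW_S:  # evict samples outside the window
        q.popleft()
    if len(q) >= MAX_QUERIES:
        return False                    # likely scripted extraction traffic
    q.append(now)
    return True

# The 101st request inside one window is refused:
print(all(allow_query("bot-1", now=i * 0.1) for i in range(100)))  # True
print(allow_query("bot-1", now=10.5))                              # False
```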


Adopting Agentic AI: Ethical Governance, Business Impact, Talent Demand, and Data Security

Agentic AI introduces a spectrum of ethical challenges that demand proactive governance. Given its capacity for independent decision-making, there is a heightened need for transparent, accountable, and ethically driven AI models. Ethical governance in Agentic AI revolves around establishing robust policies that govern decision logic, bias mitigation, and accountability. Organizations leveraging Agentic AI must prioritize fairness, inclusivity, and regulatory compliance to avoid unintended consequences. ... The integration of Agentic AI into business ecosystems promises not just automation but strategic enhancement of decision-making. These AI agents are designed to process real-time data, predict market shifts, and autonomously execute decisions that would traditionally require human intervention. In sectors such as finance, healthcare, and manufacturing, Agentic AI is optimizing supply chains, enhancing predictive analytics, and streamlining operations with unparalleled accuracy. ... One of the major concerns surrounding Agentic AI is data security. Autonomous decision-making systems require vast amounts of real-time data to function effectively, raising questions about data privacy, ownership, and cybersecurity. Cyber threats aimed at exploiting autonomous decision-making could have severe consequences, especially in sectors like finance and healthcare.


Unveiling Supply Chain Transformation: IIoT and Digital Twins

Digital twins and IIoT are evolving technologies that are transforming the digital landscape of supply chain transformation. IIoT connects actual physical sensors and actuators, while digital twins (DTs) are virtual replicas of those physical components. DTs are invaluable for testing and simulating design parameters without disrupting production. ... Contrary to generic IoT, which is more oriented towards consumers, IIoT enables communication and interconnection between different machines, industrial devices, and sensors within a supply chain management ecosystem, with the aim of business optimization and efficiency. The integration of IIoT into supply chain management systems enables real-time monitoring and analysis of industrial environments, including manufacturing, logistics management, and the supply chain. It boosts efforts to increase productivity, cut downtime, and facilitate informed, accurate decision-making. ... A supply chain equipped with IIoT will be a main ingredient in real-time monitoring and informed decision-making. IIoT will touch every stage of the supply chain ecosystem, from automated inventory management and the tracking and health monitoring of goods to analytics and real-time response to current market demands.
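As a minimal sketch of the DT idea, the toy class below mirrors a (simulated) IIoT sensor reading into a virtual replica and runs a what-if against the twin instead of the live line. The thermal model and all names are purely illustrative:

```python
# Toy digital twin: mirror a (simulated) IIoT sensor reading into a virtual
# replica, then run a what-if on the twin instead of the live line. The
# thermal model and all names are illustrative.

import random

class ConveyorTwin:
    """Virtual replica of a conveyor motor's temperature behaviour."""

    def __init__(self, max_temp_c: float = 80.0):
        self.max_temp_c = max_temp_c
        self.temp_c = 25.0

    def sync(self, sensor_reading_c: float) -> None:
        """Mirror the latest sensor reading into the twin's state."""
        self.temp_c = sensor_reading_c

    def safe_to_speed_up(self, factor: float) -> bool:
        """What-if on the twin: does a speed increase stay within limits?"""
        projected = self.temp_c * factor  # toy linear thermal model
        return projected <= self.max_temp_c

twin = ConveyorTwin()
twin.sync(sensor_reading_c=60 + random.random() * 5)  # stand-in for a live feed
print("safe to run 20% faster:", twin.safe_to_speed_up(1.2))
```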


The state of cloud security

An important complicating factor in all this is that customers don’t always know what’s happening in cloud data centers. At the same time, De Jong acknowledges that on-premises environments have the same problem. “There’s a spectrum of issues, and a lot of overlap,” he says, something Wesley Swartelé agrees with: “You have to align many things between on-prem and cloud.” Andre Honders points to a specific aspect of the cloud: “You can be in a shared environment with ten other customers. This means you have to deal with different visions and techniques that do not exist on-premises.” This is certainly the case. There are plenty of worst-case scenarios to consider in the public cloud. ... However, a major bottleneck remains the lack of qualified personnel. We hear this all the time when it comes to security. And in other IT fields too, as it happens, meaning one could draw a society-wide conclusion. Nevertheless, staff shortages are perhaps more acute in this sector. Erik de Jong sees society as a whole having similar problems, at any rate. “This is not an IT problem. Just ask painters. In every company, a small proportion of the workforce does most of the work.” Wesley Swartelé agrees it is a challenge for organizations in this industry to find the right people. “Finding a good IT professional with the right mindset is difficult.”


As AI reshapes the enterprise, security architecture can’t afford to lag behind

Technology works both ways – it enables the attacker and the smart defender. Cybercriminals are already capitalising on its potential, using open source AI models like DeepSeek and Grok to automate reconnaissance, craft sophisticated phishing campaigns, and produce deepfakes that can convincingly impersonate executives or business partners. What makes this especially dangerous is that these tools don’t just improve the quality of attacks; they multiply their volume. That’s why enterprises need to go beyond reactive defenses and start embedding AI-aware policies into their core security fabric. It starts with applying Zero Trust to AI interactions, limiting access based on user roles, input/output restrictions, and verified behaviour. ... As attackers deploy AI to craft polymorphic malware and mimic legitimate user behaviour, traditional defenses struggle to keep up. AI is now a critical part of the enterprise security toolkit, helping CISOs and security teams move from reactive to proactive threat defense. It enables rapid anomaly detection, surfaces hidden risks earlier in the kill chain, and supports real-time incident response by isolating threats before they can spread. But AI alone isn’t enough. Security leaders must strengthen data privacy and security by implementing full-spectrum DLP, encryption, and input monitoring to protect sensitive data from exposure, especially as AI interacts with live systems. 
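A hedged sketch of what "Zero Trust applied to AI interactions" can look like: a role check before the model call, credential screening on input, and redaction on output. The roles, the regex, and the function names are assumptions for illustration only:

```python
# Illustrative Zero Trust gate around an LLM call: role check before the
# call, credential screening on input, redaction on output. Roles, the
# regex, and function names are assumptions for illustration.

import re

ALLOWED_ROLES = {"analyst", "engineer"}
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.I)

def guarded_llm_call(user_role, prompt, model_fn):
    if user_role not in ALLOWED_ROLES:
        raise PermissionError("role not cleared for AI access")
    if SECRET_PATTERN.search(prompt):
        raise ValueError("prompt appears to contain credentials; blocked")
    output = model_fn(prompt)
    return SECRET_PATTERN.sub("[REDACTED]", output)  # output-side control

# Usage with a stubbed model:
print(guarded_llm_call("analyst", "Summarize our Q3 risks", lambda p: "All clear."))
```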


Identity Is the New Perimeter: Why Proofing and Verification Are Business Imperatives

Digital innovation, growing cyber threats, regulatory pressure, and rising consumer expectations all drive the need for strong identity proofing and verification. Here is why it is more important than ever:

- Combatting Fraud and Identity Theft: Criminals use stolen identities to open accounts, secure loans, or gain unauthorized access. Identity proofing is the first defense against impersonation and financial loss.
- Enabling Secure Digital Access: As more services – from banking to healthcare – go digital, strong remote verification ensures secure access and builds trust in online transactions.
- Regulatory Compliance: Laws such as KYC, AML, GDPR, HIPAA, and CIPA require identity verification to protect consumers and prevent misuse. Compliance is especially critical in finance, healthcare, and government sectors.
- Preventing Account Takeover (ATO): Even legitimate accounts are at risk. Continuous verification at key moments (e.g., password resets, high-risk actions) helps prevent unauthorized access via stolen credentials or SIM swapping; a minimal sketch of such a gate follows this list.
- Enabling Zero Trust Security: Zero Trust assumes no inherent trust in users or devices. Continuous identity verification is central to enforcing this model, especially in remote or hybrid work environments.
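Here is the promised sketch of a step-up gate for those key moments: it demands re-verification for sensitive actions, unfamiliar devices, or stale sessions. The action names, device signal, and 15-minute threshold are illustrative assumptions:

```python
# Step-up verification gate: re-verify on sensitive actions, unfamiliar
# devices, or stale sessions. Action names, signals, and the 15-minute
# threshold are illustrative assumptions.

HIGH_RISK_ACTIONS = {"password_reset", "add_payee", "change_phone_number"}

def requires_step_up(action: str, known_device: bool, seconds_since_auth: int) -> bool:
    """Return True when the user should re-verify before proceeding."""
    if action in HIGH_RISK_ACTIONS:
        return True                        # always re-verify sensitive actions
    if not known_device:
        return True                        # unfamiliar device => step up
    return seconds_since_auth > 15 * 60    # stale session => step up

print(requires_step_up("add_payee", known_device=True, seconds_since_auth=120))     # True
print(requires_step_up("view_balance", known_device=True, seconds_since_auth=120))  # False
```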


Why should companies or organizations convert to FIDO security keys?

FIDO security keys significantly reduce the risk of phishing, credential theft, and brute-force attacks. Because they don’t rely on shared secrets like passwords, they can’t be reused or intercepted. Their phishing-resistant protocol ensures authentication is only completed with the correct web origin. FIDO security keys also address insider threats and endpoint vulnerabilities by requiring physical presence, further enhancing protection, especially in high-security environments such as healthcare or public administration. ... In principle, any organization that prioritizes a secure IT infrastructure stands to benefit from adopting FIDO-based multi-factor authentication. Whether it’s a small business protecting customer data or a global enterprise managing complex access structures, FIDO security keys provide a robust, phishing-resistant alternative to passwords. That said, sectors with heightened regulatory requirements, such as healthcare, finance, public administration, and critical infrastructure, have particularly strong incentives to adopt strong authentication. In these fields, the risk of breaches is not only costly but can also have legal and operational consequences. FIDO security keys are also ideal for restricted environments, such as manufacturing floors or emergency rooms, where smartphones may not be permitted. 
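To see why origin binding defeats phishing, consider this simplified sketch (using the third-party cryptography package): the authenticator signs over data that includes the web origin, so an assertion minted for a look-alike domain is rejected by the real site. This compresses WebAuthn considerably; the message format and checks here are illustrative, not the actual protocol:

```python
# Simplified picture of FIDO's phishing resistance (not actual WebAuthn
# messages; uses the third-party "cryptography" package): the key signs over
# data that includes the web origin, so an assertion minted on a look-alike
# domain fails at the real site.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # lives on the security key
server_pubkey = device_key.public_key()     # registered with the service

def authenticator_sign(challenge: bytes, origin: str):
    client_data = json.dumps({"challenge": challenge.hex(), "origin": origin}).encode()
    return client_data, device_key.sign(client_data)

def server_verify(client_data: bytes, sig: bytes, expected_origin: str) -> bool:
    if json.loads(client_data)["origin"] != expected_origin:
        return False                        # origin mismatch => phishing attempt
    try:
        server_pubkey.verify(sig, client_data)
        return True
    except InvalidSignature:
        return False

data, sig = authenticator_sign(b"\x01\x02", origin="https://evil.example")
print(server_verify(data, sig, expected_origin="https://bank.example"))  # False
```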


Data Warehouse vs. Data Lakehouse

Data warehouses and data lakehouses have emerged as two prominent contenders in the data storage and analytics markets, each with advantages and disadvantages. The primary difference between the two platforms is that while the data warehouse can handle only structured and semi-structured data, the data lakehouse can store structured, semi-structured, and unstructured data without such limitations. ... Traditional data warehouses have long supported all types of business professionals in their data storage and analytics endeavors. This approach involves ingesting structured data into a centralized repository, with a focus on warehouse integration and business intelligence reporting. Enter the data lakehouse approach, which is far better suited to deep-dive data analysis. The lakehouse blends characteristics of the data warehouse and the data lake to create a scalable and unrestricted solution. The key benefit of this approach is that it enables data scientists to quickly extract insights from raw data with advanced AI tools. ... Although a data warehouse supports BI use cases and provides a “single source of truth” for analytics and reporting purposes, it can become difficult to manage as new data sources emerge. The data lakehouse has redefined how global businesses store and process data.


AI or Data Governance? Gartner Says You Need Both

Data and analytics leaders, such as chief data officers (CDOs) and chief data and analytics officers (CDAOs), play a significant role in driving their organizations' data and analytics (D&A) successes, which are necessary to show business value from AI projects. Gartner predicts that by 2028, 80% of gen AI business apps will be developed on existing data management platforms. Their analysts say, "This is the best time to be in data and analytics," and CDAOs need to embrace the AI opportunity eyed by others in the C-suite, or they will be absorbed into other technical functions. With high D&A ambitions and AI pilots becoming increasingly ubiquitous, focus is shifting toward consistent execution and scaling. But D&A leaders are overwhelmed with their routine data management tasks and need a new AI strategy. ... "We've never been good at governance, and now AI demands that we be even faster, which means you have to take more risks and be prepared to fail. We have to accept two things: Data will never be fully governed. Secondly, attempting to fully govern data before delivering AI is just not realistic. We need a more practical solution like trust models," Zaidi said. He said trust models provide a trust rating for data assets by examining their value, lineage and risk. They offer up-to-date information on data trustworthiness and are crucial for fostering confidence.
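A back-of-the-envelope sketch of such a trust rating, combining value, lineage completeness, and risk into a single score; the weights and scale here are illustrative assumptions, not Gartner's model:

```python
# Back-of-the-envelope trust rating from value, lineage completeness, and
# risk. Weights and scale are illustrative assumptions, not Gartner's model.

def trust_rating(value: float, lineage_completeness: float, risk: float) -> float:
    """Inputs in [0, 1]; returns a 0-100 score (higher = more trustworthy)."""
    score = 0.4 * value + 0.4 * lineage_completeness + 0.2 * (1.0 - risk)
    return round(100 * score, 1)

# A well-documented, low-risk asset vs. a valuable but opaque, risky one:
print(trust_rating(value=0.8, lineage_completeness=0.9, risk=0.1))  # 86.0
print(trust_rating(value=0.9, lineage_completeness=0.2, risk=0.7))  # 50.0
```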