
Daily Tech Digest - September 02, 2025


Quote for the day:

“The art of leadership is saying no, not yes. It is very easy to say yes.” -- Tony Blair


When Browsers Become the Attack Surface: Rethinking Security for Scattered Spider

Scattered Spider, also referred to as UNC3944, Octo Tempest, or Muddled Libra, has matured over the past two years through precision targeting of human identity and browser environments. This shift differentiates them from other notorious cybergangs like Lazarus Group, Fancy Bear, and REvil. If sensitive information such as your calendar, credentials, or security tokens lives in browser tabs, Scattered Spider can acquire it. ... Once user credentials get into the wrong hands, attackers like Scattered Spider will move quickly to hijack previously authenticated sessions by stealing cookies and tokens. The integrity of browser sessions is best secured by preventing unauthorized scripts from accessing or exfiltrating these sensitive artifacts. Organizations must enforce contextual security policies based on components such as device posture, identity verification, and network trust. By linking session tokens to context, enterprises can prevent attacks like account takeovers, even after credentials have been compromised. ... Although browser security is the last mile of defense for malware-less attacks, integrating it into an existing security stack will fortify the entire network. By feeding activity logs enriched with browser data into SIEM, SOAR, and ITDR platforms, CISOs can correlate browser events with endpoint activity for a much fuller picture.
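
The idea of binding a session token to its original context can be shown in a minimal sketch. The field names and checks below (device fingerprint, trusted network range, posture flag) are illustrative assumptions, not any particular product's API; a real implementation would live in the identity provider or a browser security layer.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class SessionContext:
    """Context captured when the session token was first issued."""
    device_fingerprint: str
    allowed_network: str      # e.g. "10.0.0.0/8"
    device_compliant: bool    # posture check result at issuance

def session_is_valid(token_ctx: SessionContext,
                     current_fingerprint: str,
                     current_ip: str,
                     currently_compliant: bool) -> bool:
    """Reject a presented token when the surrounding context no longer matches."""
    if current_fingerprint != token_ctx.device_fingerprint:
        return False   # token replayed from another device
    if ipaddress.ip_address(current_ip) not in ipaddress.ip_network(token_ctx.allowed_network):
        return False   # outside the trusted network range
    if not (token_ctx.device_compliant and currently_compliant):
        return False   # device posture has degraded
    return True

# Example: a stolen cookie replayed from a new device and IP fails the check.
ctx = SessionContext("fp-1234", "10.0.0.0/8", True)
print(session_is_valid(ctx, "fp-1234", "10.2.3.4", True))     # True
print(session_is_valid(ctx, "fp-9999", "203.0.113.7", True))  # False
```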


The Transformation Resilience Trifecta: Agentic AI, Synthetic Data and Executive AI Literacy

The current state of Agentic AI is, in a word, fragile. Ask anyone in the trenches. These agents can be brilliant one minute and baffling the next. Instructions get misunderstood. Tasks break in new contexts. Chaining agents into even moderately complex workflows exposes just how early we are in this game. Reliability? Still a work in progress. And yet, we’re seeing companies experiment. Some are stitching together agents using LangChain or CrewAI. Others are waiting for more robust offerings from Microsoft Copilot Studio, OpenAI’s GPT-4o Agents, or Anthropic’s Claude toolsets. It’s the classic innovator’s dilemma: Move too early, and you waste time on immature tech. Move too late, and you miss the wave. Leaders must thread that needle — testing the waters while tempering expectations. ... Here’s the scarier scenario I’m seeing more often: “Shadow AI.” Employees are already using ChatGPT, Claude, Copilot, Perplexity — all under the radar. They’re using it to write reports, generate code snippets, answer emails, or brainstorm marketing copy. They’re more AI-savvy than their leadership. But they don’t talk about it. Why? Fear. Risk. Politics. Meanwhile, some executives are content to play cheerleader, mouthing AI platitudes on LinkedIn but never rolling up their sleeves. That’s not leadership — that’s theater.


Red Hat strives for simplicity in an ever more complex IT world

One of the most innovative developments in RHEL 10 is bootc in image mode, where VMs run like containers and are part of the CI/CD pipeline. By using immutable images, all changes are controlled from the development environment. Van der Breggen illustrates this with a retail scenario: “I can have one POS system for the payment kiosk, but I can also have another POS system for my cashiers. They use the same base image. If I then upgrade that base image to later releases of RHEL, I create one new base image, tag it in the environments, and then all 500 systems can be updated at once.” Red Hat Enterprise Linux Lightspeed acts as a command-line assistant that brings AI directly into the terminal. ... For edge devices, Red Hat uses a solution called Greenboot, which does not immediately roll back but waits until certain conditions are met. After, for example, three reboots without a working system, it reverts to the previous working release. However, not everything has been worked out perfectly yet. Lightspeed currently only works online, while many customers would like to use it offline because their RHEL systems are tucked away behind firewalls. Red Hat is still looking into possibilities for an expansion here, although making the knowledge base available offline poses risks to intellectual property.
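
Greenboot itself works with OS-level health-check scripts and a boot counter; the sketch below only illustrates the "roll back after three failed boots" behaviour described above, with invented function names rather than Greenboot's actual mechanism.

```python
MAX_FAILED_BOOTS = 3   # threshold described in the article

def health_check(system_ok: bool) -> bool:
    """Stand-in for the required health-check scripts run after each boot."""
    return system_ok

def boot_cycle(failed_boots: int, system_ok: bool) -> tuple[str, int]:
    """Return the action for this boot and the updated failure counter."""
    if health_check(system_ok):
        return "keep current release", 0          # a healthy boot resets the counter
    failed_boots += 1
    if failed_boots >= MAX_FAILED_BOOTS:
        return "roll back to previous working release", 0
    return "reboot and retry", failed_boots

# A new image that never comes up healthy is rolled back on the third attempt.
counter = 0
for attempt in range(1, 4):
    action, counter = boot_cycle(counter, system_ok=False)
    print(f"boot {attempt}: {action}")
```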


The state of DevOps and AI: Not just hype

The vision of AI that takes you from a list of requirements through work items to build to test to, finally, deployment is still nothing more than a vision. In many cases, DevOps tool vendors use AI to build solutions to the problems their customers have. The result is a mixture of point solutions that can solve immediate developer problems. ... Machine learning is speeding up testing by failing faster. Build steps get reordered automatically so those that are likely to fail happen earlier, which means developers aren’t waiting for the full build to know when they need to fix something. Often, the same system is used to detect flaky tests by muting tests where failure adds no value. ... Machine learning gradually helps identify the characteristics of a working system and can raise an alert when things go wrong. Depending on the governance, it can spot where a defect was introduced and start a production rollback while also providing potential remediation code to fix the defect. ... There’s a lot of puffery around AI, and DevOps vendors are not helping. A lot of their marketing emphasizes fear: “Your competitors are using AI, and if you’re not, you’re going to lose” is their message. Yet DevOps vendors themselves are only one or two steps ahead of you in their AI adoption journey. Don’t adopt AI pell-mell due to FOMO, and don’t expect to replace everyone under the CTO with a large language model.
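
The "fail faster" idea of reordering build steps by historical failure likelihood and muting flaky tests can be expressed in a few lines. The history data, the flaky flag, and the ordering rule below are invented for illustration only.

```python
# Historical pass/fail records per build step (illustrative data).
history = {
    "lint":        {"runs": 200, "failures": 40},
    "unit_tests":  {"runs": 200, "failures": 10},
    "integration": {"runs": 200, "failures": 25},
    "ui_tests":    {"runs": 200, "failures": 90, "flaky": True},  # fails at random
}

def failure_rate(step: str) -> float:
    h = history[step]
    return h["failures"] / h["runs"]

# Mute steps marked flaky: their failures add no information.
active_steps = [s for s, h in history.items() if not h.get("flaky")]

# Run the most failure-prone steps first so developers hear bad news sooner.
ordered = sorted(active_steps, key=failure_rate, reverse=True)
print(ordered)   # ['lint', 'integration', 'unit_tests']
```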


5 Ways To Secure Your Industrial IoT Network

IIoT is a subcategory of the Internet of Things (IoT). It is made up of a system of interconnected smart devices that uses sensors, actuators, controllers and intelligent control systems to collect, transmit, receive and analyze data. ... IIoT also has its unique architecture that begins with the device layer, where equipment, sensors, actuators and controllers collect raw operational data. That information is passed through the network layer, which transmits it to the internet via secure gateways. Next, the edge or fog computing layer processes and filters the data locally before sending it to the cloud, helping reduce latency and improving responsiveness. Once in the service and application support layer, the data is stored, analyzed, and used to generate alerts and insights. ... Many IIoT devices are not built with strong cybersecurity protections. This is especially true for legacy machines that were never designed to connect to modern networks. Without safeguards such as encryption or secure authentication, these devices can become easy targets. ... Defending against IIoT threats requires a layered approach that combines technology, processes and people. Manufacturers should segment their networks to limit the spread of attacks, apply strong encryption and authentication for connected devices, and keep software and firmware regularly updated.
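
The role of the edge/fog layer, filtering and aggregating raw readings locally so only a small payload reaches the cloud, can be shown with a short sketch. The validity range, alert threshold, and payload shape are invented for illustration.

```python
from statistics import mean

def edge_process(readings: list[float], alert_threshold: float) -> dict:
    """Filter obviously bad sensor values, aggregate the rest, and flag anomalies
    locally instead of streaming every raw sample to the cloud."""
    valid = [r for r in readings if 0.0 <= r <= 200.0]      # drop sensor glitches
    summary = {
        "count": len(valid),
        "avg": round(mean(valid), 2) if valid else None,
        "max": max(valid) if valid else None,
    }
    summary["alert"] = bool(valid) and summary["max"] >= alert_threshold
    return summary   # only this small payload leaves the site

# A batch of raw vibration samples becomes one compact message to the service layer.
print(edge_process([12.1, 13.0, 250.0, 14.2, 95.5], alert_threshold=90.0))
```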


AI Chatbots Are Emotionally Deceptive by Design

Even without deep connection, emotional attachment can lead users to place too much trust in the content chatbots provide. Extensive interaction with a social entity that is designed to be both relentlessly agreeable and specifically personalized to a user’s tastes can also lead to social “deskilling,” as some users of AI chatbots have flagged. This dynamic is simply unrealistic in genuine human relationships. Some users may be more vulnerable than others to this kind of emotional manipulation, like neurodiverse people or teens who have limited experience building relationships. ... With AI chatbots, though, deceptive practices are not hidden in user interface elements, but in their human-like conversational responses. It’s time to consider a different design paradigm, one that centers user protection: non-anthropomorphic conversational AI. All AI chatbots can be less anthropomorphic than they are, at least by default, without necessarily compromising function and benefit. A companion AI, for example, can provide emotional support without saying, “I also feel that way sometimes.” This non-anthropomorphic approach is already familiar in robot design, where researchers have created robots that are purposefully designed to not be human-like. This design choice has been shown to more appropriately reflect system capabilities and to better situate robots as useful tools, not friends or social counterparts.


How AI product teams are rethinking impact, risk, feasibility

We’re at a strange crossroads in the evolution of AI. Nearly every enterprise wants to harness it. Many are investing heavily. But most are falling flat. AI is everywhere — in strategy decks, boardroom buzzwords and headline-grabbing POCs. Yet, behind the curtain, something isn’t working. ... One of the most widely adopted prioritization models in product management is RICE — which scores initiatives based on Reach, Impact, Confidence, and Effort. It’s elegant. It’s simple. It’s also outdated. RICE was never designed for the world of foundation models, dynamic data pipelines or the unpredictability of inference-time reasoning. ... To make matters worse, there’s a growing mismatch between what enterprises want to automate and what AI can realistically handle. Stanford’s 2025 study, The Future of Work with AI Agents, provides a fascinating lens. ... ARISE adds three crucial layers that traditional frameworks miss: First, AI Desire — does solving this problem with AI add real value, or are we just forcing AI into something that doesn’t need it? Second, AI Capability — do we actually have the data, model maturity and engineering readiness to make this happen? And third, Intent — is the AI meant to act on its own or assist a human? Proactive systems have more upside, but they also come with far more risk. ARISE lets you reflect that in your prioritization.
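
RICE itself is a simple formula: (Reach × Impact × Confidence) / Effort. The excerpt does not say how ARISE's extra layers are combined numerically, so the adjustment below is a purely hypothetical illustration of discounting a RICE score by AI desire, AI capability, and autonomy risk.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Classic RICE prioritization: (Reach * Impact * Confidence) / Effort."""
    return (reach * impact * confidence) / effort

def arise_style_score(rice: float, ai_desire: float, ai_capability: float,
                      autonomy_risk: float) -> float:
    """Hypothetical adjustment: discount the RICE score when AI adds little value,
    when data/model readiness is weak, or when the intended autonomy carries more
    risk. All three inputs are assumed to be in [0, 1]."""
    return rice * ai_desire * ai_capability * (1.0 - autonomy_risk)

base = rice_score(reach=5000, impact=2.0, confidence=0.8, effort=4)
print(base)   # 2000.0
print(arise_style_score(base, ai_desire=0.9, ai_capability=0.5, autonomy_risk=0.4))  # 540.0
```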


Cloud control: The key to greener, leaner data centers

To fully unlock these cost benefits, businesses must adopt FinOps practices: the discipline of bringing engineering, finance, and operations together to optimize cloud spending. Without it, cloud costs can quickly spiral, especially in hybrid environments. But with FinOps, organizations can forecast demand more accurately, optimise usage, and ensure every pound spent delivers value. ... Cloud platforms make it easier to use computing resources more efficiently. Even though the infrastructure stays online, hyperscalers can spread workloads across many customers, keeping their hardware busier and more productive, and manage capacity at a scale that allows them to power down hardware when it's not in use. ... The combination of cloud computing and artificial intelligence (AI) is further reshaping data center operations. AI can analyse energy usage, detect inefficiencies, and recommend real-time adjustments. But running these models on-premises can be resource-intensive. Cloud-based AI services offer a more efficient alternative. Take Google, for instance. By applying AI to its data center cooling systems, it cut energy use by up to 40 percent. Other organizations can tap into similar tools via the cloud to monitor temperature, humidity, and workload patterns and automatically adjust cooling, load balancing, and power distribution.


You Backed Up Your Data, but Can You Bring It Back?

Many IT teams assume that the existence of backups guarantees successful restoration. This misconception can be costly. A recent report from Veeam revealed that 49% of companies failed to recover most of their servers after a significant incident. This highlights a painful reality: Most backup strategies focus too much on storage and not enough on service restoration. Having backup files is not the same as successfully restoring systems. In real-world recovery scenarios, teams face unknown dependencies, a lack of orchestration, incomplete documentation, and gaps between infrastructure and applications. When services need to be restored in a specific order and under intense pressure, any oversight can become a significant bottleneck. ... Relying on a single backup location creates a single point of failure. Local backups can be fast but are vulnerable to physical threats, hardware failures, or ransomware attacks. Cloud backups offer flexibility and off-site protection but may suffer bandwidth constraints, cost limitations, or provider outages. A hybrid backup strategy ensures multiple recovery paths by combining on-premises storage, cloud solutions, and optionally offline or air-gapped options. This approach allows teams to choose the fastest or most reliable method based on the nature of the disruption.
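
The point about restoring services "in a specific order and under intense pressure" becomes concrete once dependencies are written down as a graph; Python's standard-library graphlib can then derive a restore order. The service names and topology below are illustrative only.

```python
from graphlib import TopologicalSorter

# Each service lists the services it depends on (illustrative topology).
depends_on = {
    "database": set(),
    "identity": {"database"},
    "api":      {"database", "identity"},
    "web":      {"api"},
    "reports":  {"database"},
}

# A restore runbook derived from the graph, not from memory under pressure.
order = list(TopologicalSorter(depends_on).static_order())
print(order)   # e.g. ['database', 'identity', 'reports', 'api', 'web']
```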


Beyond Prevention: How Cybersecurity and Cyber Insurance Are Converging to Transform Risk Management

Historically, cybersecurity and cyber insurance have operated in silos, with companies deploying technical defenses to fend off attacks while holding a cyber insurance policy as a safety net. This fragmented approach often leaves gaps in coverage and preparedness. ... The insurance sector is at a turning point. Traditional models that assess risk at the point of policy issuance are rapidly becoming outdated in the face of constantly evolving cyber threats. Insurers who fail to adapt to an integrated model risk being outpaced by agile Cyber Insurtech companies, which leverage cutting-edge cyber intelligence, machine learning, and risk analytics to offer adaptive coverage and continuous monitoring. Some insurers have already begun to reimagine their role—not only as claim processors but as active partners in risk prevention. ... A combined cybersecurity and insurance strategy goes beyond traditional risk management. It aligns the objectives of both the insurer and the insured, with insurers assuming a more proactive role in supporting risk mitigation. By reducing the probability of significant losses through continuous monitoring and risk-based incentives, insurers are building a more resilient client base, directly translating to reduced claim frequency and severity.

Daily Tech Digest - August 28, 2025


Quote for the day:

“Rarely have I seen a situation where doing less than the other guy is a good strategy.” -- Jimmy Spithill


Emerging Infrastructure Transformations in AI Adoption

Balanced scaling of infrastructure storage and compute clusters optimizes resource use in the face of emerging elastic use cases. Throughput, latency, scalability, and resiliency are key metrics for measuring storage performance. Scaling storage with demand for AI solutions without contributing to technical debt is a careful balance to contemplate for infrastructure transformations. ... Data governance in AI extends beyond traditional access control. ML workflows have additional governance tasks such as lineage tracking, role-based permissions for model modification, and policy enforcement over how data is labeled, versioned, and reused. This includes dataset documentation, drift tracking, and LLM-specific controls over prompt inputs and generated outputs. Governance frameworks that support continuous learning cycles are more valuable: Every inference and user correction can become training data. ... As models become more stateful and retain context over time, pipelines must support real-time, memory-intensive operations. Even Apache Spark documentation hints at future support for stateful algorithms (models that maintain internal memory of past interactions), reflecting a broader industry trend. AI workflows are moving toward stateful "agent" models that can handle ongoing, contextual tasks rather than stateless, single-pass processing.


The rise of the creative cybercriminal: Leveraging data visibility to combat them

In response to the evolving cyber threats faced by organisations and governments, a comprehensive approach that addresses both the human factor and their IT systems is essential. Employee training in cybersecurity best practices, such as adopting a zero-trust approach and maintaining heightened vigilance against potential threats, like social engineering attacks, are crucial. Similarly, cybersecurity analysts and Security Operations Centres (SOCs) play a pivotal role by utilising Security Information and Event Management (SIEM) solutions to continuously monitor IT systems, identifying potential threats, and accelerating their investigation and response times. Given that these tasks can be labor-intensive, integrating a modern SIEM solution that harnesses generative AI (GenAI) is essential. ... By integrating GenAI's data processing capabilities with an advanced search platform, cybersecurity teams can search at scale across vast amounts of data, including unstructured data. This approach supports critical functions such as monitoring, compliance, threat detection, prevention, and incident response. With full-stack observability, or in other words, complete visibility across every layer of their technology stack, security teams can gain access to content-aware insights, and the platform can swiftly flag any suspicious activity.


How to secure digital trust amid deepfakes and AI

To ensure resilience in the shifting cybersecurity landscape, organizations should proactively adopt a hybrid fraud-prevention approach, strategically integrating AI solutions with traditional security measures to build robust, layered defenses. Ultimately, a comprehensive, adaptive, and collaborative security framework is essential for enterprises to effectively safeguard against increasingly sophisticated cyberattacks – and there are several preemptive strategies organizations must leverage to counteract threats and strengthen their security posture. ... Fraudsters are adaptive, usually leveraging both advanced methods (deepfakes and synthetic identities) and simpler techniques (password spraying and phishing) to exploit vulnerabilities. By combining AI with tools like strong and continuous authentication, behavioral analytics, and ongoing user education, organizations can build a more resilient defense system. This hybrid approach ensures that no single point of failure exposes the entire system, and that both human and machine vulnerabilities are addressed. Recent threats rely on social engineering to obtain credentials, bypass authentication, and steal sensitive data, and these tactics are evolving along with AI. Utilizing real-time verification techniques, such as liveness detection, can reliably distinguish between legitimate users and deepfake impersonators.


Why Generative AI's Future Isn't in the Cloud

Instead of telling customers they needed to bring their data to the AI in the cloud, we decided to bring AI to the data where it's created or resides, locally on-premises or at the edge. We flipped the model by bringing intelligence to the edge, making it self-contained, secure and ready to operate with zero dependency on the cloud. That's not just a performance advantage in terms of latency, but in defense and sensitive use cases, it's a requirement. ... The cloud has driven incredible innovation, but it's created a monoculture in how we think about deploying AI. When your entire stack depends on centralized compute and constant connectivity, you're inherently vulnerable to outages, latency, bandwidth constraints, and, in defense scenarios, active adversary disruption. The blind spot is that this fragility is invisible until it fails, and by then the cost of that failure can be enormous. We're proving that edge-first AI isn't just a defense-sector niche, it's a resilience model every enterprise should be thinking about. ... The line between commercial and military use of AI is blurring fast. As a company operating in this space, how do you navigate the dual-use nature of your tech responsibly? We consider ourselves a dual-use defense technology company and we also have enterprise customers. Being dual use actually helps us build better products for the military because our products are also tested and validated by commercial customers and partners. 


Why DEI Won't Die: The Benefits of a Diverse IT Workforce

For technology teams, diversity is a strategic imperative that drives better business outcomes. In IT, diverse leadership teams generate 19% more revenue from innovation, solve complex problems faster, and design products that better serve global markets — driving stronger adoption, retention of top talent, and a sustained competitive edge. Zoya Schaller, director of cybersecurity compliance at Keeper Security, says that when a team brings together people with different life experiences, they naturally approach challenges from unique perspectives. ... Common missteps, according to Ellis, include over-focusing on meeting diversity hiring targets without addressing the retention, development, and advancement of underrepresented technologists. "Crafting overly broad or tokenistic job descriptions can fail to resonate with specific tech talent communities," she says. "Don't treat DEI as an HR-only initiative but rather embed it into engineering and leadership accountability." Schaller cautions that bias often shows up in subtle ways — how résumés are reviewed, who is selected for interviews, or even what it means to be a "culture fit." ... Leaders should be active champions of inclusivity, as it is an ongoing commitment that requires consistent action and reinforcement from the top.


The Future of Software Is Not Just Faster Code - It's Smarter Organizations

Using AI effectively doesn't just mean handing over tasks. It requires developers to work alongside AI tools in a more thoughtful way — understanding how to write structured prompts, evaluate AI-generated results and iterate on them based on context. This partnership is being pushed even further with agentic AI. Agentic systems can break a goal into smaller steps, decide the best order to tackle them, tap into multiple tools or models, and adapt in real time without constant human direction. For developers, this means AI can do more than suggest code. It can act like a junior teammate who can design, implement, test and refine features on its own. ... But while these tools are powerful, they're not foolproof. Like other AI applications, their value depends on how well they're implemented, tuned and interpreted. That's where AI-literate developers come in. It's not enough to simply plug in a tool and expect it to catch every threat. Developers need to understand how to fine-tune these systems to their specific environments — configuring scanning parameters to align with their architecture, training models to recognize application-specific risks and adjusting thresholds to reduce noise without missing critical issues. ... However, the real challenge isn't just finding AI talent; it's reorganizing teams to get the most out of AI's capabilities.


Industrial Copilots: From Assistants to Essential Team Members

Behind the scenes, industrial copilots are supported by a technical stack that includes predictive analytics, real-time data integration, and cross-platform interoperability. These assistants do more than just respond — they help automate code generation, validate engineering logic, and reduce the burden of repetitive tasks. In doing so, they enable faster deployment of production systems while improving the quality and efficiency of engineering work. Despite these advances, several challenges remain. Data remains the bedrock of effective copilots, yet many workers on the shop floor are still not accustomed to working with data directly. Upskilling and improving data literacy among frontline staff is critical. Additionally, industrial companies are learning that while not all problems need AI, AI absolutely needs high-quality data to function well. An important lesson shared during Siemens’ AI with Purpose Summit was the importance of a data classification framework. To ensure copilots have access to usable data without risking intellectual property or compliance violations, one company adopted a color-coded approach: white for synthetic data (freely usable), green for uncritical data (approval required), yellow for sensitive information, and red for internal IP (restricted to internal use only). 
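
A classification scheme like the color-coded one above is easy to enforce in code once each dataset carries a label. The approval and internal-system rules below are assumptions added for illustration; the article only specifies the four colors and their broad meanings.

```python
from enum import Enum

class DataClass(Enum):
    WHITE = "synthetic, freely usable"
    GREEN = "uncritical, approval required"
    YELLOW = "sensitive"
    RED = "internal IP, internal use only"

def copilot_may_ingest(label: DataClass, approved: bool, internal_system: bool) -> bool:
    """Gate copilot training/retrieval data by classification (illustrative rules)."""
    if label is DataClass.WHITE:
        return True
    if label is DataClass.GREEN:
        return approved                      # needs an explicit sign-off
    if label is DataClass.YELLOW:
        return approved and internal_system  # assumed: sensitive data stays internal
    return internal_system                   # RED never leaves internal use

print(copilot_may_ingest(DataClass.GREEN, approved=False, internal_system=True))  # False
print(copilot_may_ingest(DataClass.RED, approved=True, internal_system=False))    # False
```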


Will the future be Consolidated Platforms or Expanding Niches?

Ramprakash Ramamoorthy believes enterprise SaaS is already making moves in consolidation. “The initial stage of a hype cycle includes features disguised as products and products disguised as companies. Well, we are past that; many of these organizations that delivered a single product have to go through either vertical integration or sell out. In fact, a lot of companies are mimicking those single-product features natively on large platforms.” Ramamoorthy says he also feels AI model providers will develop into enterprise SaaS organizations themselves as they continue to capture the value proposition of user data and usage signals for SaaS providers. This is why Zoho built their own AI backbone—to keep pace with competitive offerings and to maintain independence. On the subject of vibe-code and low-code tools, Ramamoorthy seems quite clear-eyed about their suitability for mass-market production. “Vibe-code can accelerate you from 0 to 1 faster, but particularly with the increase in governance and privacy, you need additional rigor. For example, in India, we have started to see compliance as a framework.” In terms of the best generative tools today, he observes, “Anytime I see a UI or content generated by AI—I can immediately recognize the quality that is just not there yet.”


Beyond the Prompt: Building Trustworthy Agent Systems

While a basic LLM call responds statically to a single prompt, an agent system plans. It breaks down a high-level goal into subtasks, decides on tools or data needed, executes steps, evaluates outcomes, and iterates – potentially over long timeframes and with autonomy. This dynamism unlocks immense potential but can introduce new layers of complexity and security risk. ... Technology controls are vital but not comprehensive. That’s because the most sophisticated agent system can be undermined by human error or manipulation. This is where principles of human risk management become critical. Humans are often the weakest link. How does this play out with agents? Agents should operate with clear visibility. Log every step, every decision point, every data access. Build dashboards showing the agent’s “thought process” and actions. Enable safe interruption points. Humans must be able to audit, understand, and stop the agent when necessary. ... The allure of agentic AI is undeniable. The promise of automating complex workflows, unlocking insights, and boosting productivity is real. But realizing this potential without introducing unacceptable risk requires moving beyond experimentation into disciplined engineering. It means architecting systems with context, security, and human oversight at their core.
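
A minimal sketch of "log every step, enable safe interruption points" follows. The planner, executor, and stop signal are stand-ins for whatever agent framework is actually in use; only the audit-and-interrupt pattern is the point.

```python
import json
import time

def run_agent(goal: str, plan_steps, execute, audit_log: list, stop_requested) -> None:
    """Run an agent plan with a full audit trail and a human interruption point."""
    for step in plan_steps(goal):
        if stop_requested():                              # safe interruption point
            audit_log.append({"ts": time.time(), "event": "stopped_by_human", "step": step})
            return
        result = execute(step)
        audit_log.append({"ts": time.time(), "event": "step_done",
                          "step": step, "result": result})

log: list = []
run_agent(
    goal="summarize yesterday's incident tickets",
    plan_steps=lambda g: ["fetch tickets", "cluster by system", "draft summary"],
    execute=lambda s: f"ok: {s}",
    audit_log=log,
    stop_requested=lambda: False,
)
print(json.dumps(log, indent=2))   # the dashboardable "thought process"
```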


Where security, DevOps, and data science finally meet on AI strategy

The key is to define isolation requirements upfront and then optimize aggressively within those constraints. Make the business trade-offs explicit and measurable. When teams try to optimize first and secure second, they usually have to redo everything. However, when they establish their security boundaries, the optimization work becomes more focused and effective. ... The intersection with cost controls is immediate. You need visibility into whether your GPU resources are being utilized or just sitting idle. We’ve seen companies waste a significant portion of their budget on GPUs because they’ve never been appropriately monitored or because they are only utilized for short bursts, which makes it complex to optimize. ... Observability also helps you understand the difference between training workloads running on 100% utilization and inference workloads, where buffer capacity is needed for response times. ... From a security perspective, the very reason teams can get away with hoarding is the reason there may be security concerns. AI initiatives are often extremely high priority, where the ends justify the means. This often makes cost control an afterthought, and the same dynamic can also cause other enterprise controls to be more lax as innovation and time to market dominate.
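
Spotting "GPUs just sitting idle" only needs utilization samples and a threshold. The numbers below are fabricated; in practice the samples would come from whatever monitoring stack is already in place.

```python
def idle_gpus(samples: dict[str, list[float]], threshold: float = 10.0,
              idle_fraction: float = 0.9) -> list[str]:
    """Flag GPUs whose utilization stayed under `threshold`% for most of the window."""
    flagged = []
    for gpu, util in samples.items():
        idle = sum(1 for u in util if u < threshold) / len(util)
        if idle >= idle_fraction:
            flagged.append(gpu)
    return flagged

# An hour of per-minute utilization readings (illustrative numbers).
samples = {
    "gpu-0": [85, 90, 88, 92] * 15,   # busy training job
    "gpu-1": [2, 0, 1, 3] * 15,       # reserved but idle
}
print(idle_gpus(samples))             # ['gpu-1']
```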

Daily Tech Digest - June 25, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein



Why data observability is the missing layer of modern networking

You might hear people use these terms interchangeably, but they’re not the same thing. Visibility is about what you can see – dashboard statistics, logs, uptime numbers, bandwidth figures, the raw data that tells you what’s happening across your network. Observability, on the other hand, is about what that data actually means. It’s the ability to interpret, analyse, and act on those insights. It’s not just about seeing a traffic spike but about understanding why it happened. It’s not just spotting a latency issue, but knowing which apps are affected and where the bottleneck sits. ... Today, connectivity needs to be smart, agile, and scalable. It’s about building infrastructure that supports cloud, remote work, and everything in between. Whether you’re adding a new site, onboarding a remote team, or launching a cloud-hosted app, your network should be able to scale and respond at speed. Then there’s security, a non-negotiable layer that protects your entire ecosystem. Great security isn’t about throwing up walls, it’s about creating confidence. That means deploying zero trust principles, segmenting access, detecting threats in real time, and encrypting data, without making users’ lives harder. ... Finally, we come to observability. Arguably the most unappreciated of the three but quickly becoming essential.


6 Key Security Risks in LLMs: A Platform Engineer’s Guide

Prompt injection is the AI-era equivalent of SQL injection. Attackers craft malicious inputs to manipulate an LLM, bypass safeguards or extract sensitive data. These attacks range from simple jailbreak prompts that override safety rules to more advanced exploits that influence backend systems. ... Model extraction attacks allow adversaries to systematically query an LLM to reconstruct its knowledge base or training data, essentially cloning its capabilities. These attacks often rely on automated scripts submitting millions of queries to map the model’s responses. One common technique, model inversion, involves strategically structured inputs that extract sensitive or proprietary information embedded in the model. Attackers may also use repeated, incremental queries with slight variations to amass a dataset that mimics the original training data. ... On the output side, an LLM might inadvertently reveal private information embedded in its dataset or previously entered user data. A common risk scenario involves users unknowingly submitting financial records or passwords into an AI-powered chatbot, which could then store, retrieve or expose this data unpredictably. With cloud-based LLMs, the risk extends further. Data from one organization could surface in another’s responses.
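
Because model extraction depends on very large volumes of slightly varied queries, one first-line control is simply watching per-client query rates. The sliding-window limiter below is a generic sketch with an arbitrary threshold, not a complete defense against extraction or inversion.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 100   # arbitrary illustrative threshold

_recent: dict[str, deque] = defaultdict(deque)

def allow_query(client_id: str, now: float | None = None) -> bool:
    """Sliding-window rate limit: flag clients hammering the model with queries."""
    now = time.time() if now is None else now
    q = _recent[client_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                          # forget queries outside the window
    if len(q) >= MAX_QUERIES_PER_WINDOW:
        return False                         # candidate extraction/scraping behaviour
    q.append(now)
    return True

# A scripted client exceeds the window limit and gets throttled.
print(all(allow_query("scraper", now=i * 0.1) for i in range(100)))  # True: first 100 pass
print(allow_query("scraper", now=10.1))                              # False: over the limit
```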


Adopting Agentic AI: Ethical Governance, Business Impact, Talent Demand, and Data Security

Agentic AI introduces a spectrum of ethical challenges that demand proactive governance. Given its capacity for independent decision-making, there is a heightened need for transparent, accountable, and ethically driven AI models. Ethical governance in Agentic AI revolves around establishing robust policies that govern decision logic, bias mitigation, and accountability. Organizations leveraging Agentic AI must prioritize fairness, inclusivity, and regulatory compliance to avoid unintended consequences. ... The integration of Agentic AI into business ecosystems promises not just automation but strategic enhancement of decision-making. These AI agents are designed to process real-time data, predict market shifts, and autonomously execute decisions that would traditionally require human intervention. In sectors such as finance, healthcare, and manufacturing, Agentic AI is optimizing supply chains, enhancing predictive analytics, and streamlining operations with unparalleled accuracy. ... One of the major concerns surrounding Agentic AI is data security. Autonomous decision-making systems require vast amounts of real-time data to function effectively, raising questions about data privacy, ownership, and cybersecurity. Cyber threats aimed at exploiting autonomous decision-making could have severe consequences, especially in sectors like finance and healthcare.


Unveiling Supply Chain Transformation: IIoT and Digital Twins

Digital twins (DTs) and the IIoT are evolving technologies that are transforming the digital landscape of supply chains. The IIoT connects to actual physical sensors and actuators. DTs, on the other hand, are virtual replicas of the physical components. DTs are invaluable for testing and simulating design parameters without disrupting production elements. ... Contrary to generic IoT, which is more oriented towards consumers, the IIoT enables communication and interconnection between different machines, industrial devices, and sensors within a supply chain management ecosystem, with the aim of business optimization and efficiency. Integrating IIoT into supply chain management systems enables real-time monitoring and analysis of industrial environments, including manufacturing, logistics, and the wider supply chain. It boosts efforts to increase productivity, cut downtime, and facilitate informed, accurate decision-making. ... A supply chain equipped with IIoT will be a main ingredient in boosting real-time monitoring and enabling informed decision-making. Every stage of the supply chain ecosystem will feel the impact of IIoT, from automated inventory management, health monitoring and tracking of goods, and analytics to real-time responses to current market demands.


The state of cloud security

An important complicating factor in all this is that customers don’t always know what’s happening in cloud data centers. At the same time, De Jong acknowledges that on-premises environments have the same problem. “There’s a spectrum of issues, and a lot of overlap,” he says, something Wesley Swartelé agrees with: “You have to align many things between on-prem and cloud.” Andre Honders points to a specific aspect of the cloud: “You can be in a shared environment with ten other customers. This means you have to deal with different visions and techniques that do not exist on-premises.” This is certainly the case. There are plenty of worst-case scenarios to consider in the public cloud. ... However, a major bottleneck remains the lack of qualified personnel. We hear this all the time when it comes to security. And in other IT fields too, as it happens, meaning one could draw a society-wide conclusion. Nevertheless, staff shortages are perhaps more acute in this sector. Erik de Jong sees society as a whole having similar problems, at any rate. “This is not an IT problem. Just ask painters. In every company, a small proportion of the workforce does most of the work.” Wesley Swartelé agrees it is a challenge for organizations in this industry to find the right people. “Finding a good IT professional with the right mindset is difficult.”


As AI reshapes the enterprise, security architecture can’t afford to lag behind

Technology works both ways – it enables the attacker and the smart defender. Cybercriminals are already capitalising on its potential, using open source AI models like DeepSeek and Grok to automate reconnaissance, craft sophisticated phishing campaigns, and produce deepfakes that can convincingly impersonate executives or business partners. What makes this especially dangerous is that these tools don’t just improve the quality of attacks; they multiply their volume. That’s why enterprises need to go beyond reactive defenses and start embedding AI-aware policies into their core security fabric. It starts with applying Zero Trust to AI interactions, limiting access based on user roles, input/output restrictions, and verified behaviour. ... As attackers deploy AI to craft polymorphic malware and mimic legitimate user behaviour, traditional defenses struggle to keep up. AI is now a critical part of the enterprise security toolkit, helping CISOs and security teams move from reactive to proactive threat defense. It enables rapid anomaly detection, surfaces hidden risks earlier in the kill chain, and supports real-time incident response by isolating threats before they can spread. But AI alone isn’t enough. Security leaders must strengthen data privacy and security by implementing full-spectrum DLP, encryption, and input monitoring to protect sensitive data from exposure, especially as AI interacts with live systems. 


Identity Is the New Perimeter: Why Proofing and Verification Are Business Imperatives

Digital innovation, growing cyber threats, regulatory pressure, and rising consumer expectations all drive the need for strong identity proofing and verification. Here is why it is more important than ever:
Combatting Fraud and Identity Theft: Criminals use stolen identities to open accounts, secure loans, or gain unauthorized access. Identity proofing is the first defense against impersonation and financial loss.
Enabling Secure Digital Access: As more services – from banking to healthcare – go digital, strong remote verification ensures secure access and builds trust in online transactions.
Regulatory Compliance: Laws such as KYC, AML, GDPR, HIPAA, and CIPA require identity verification to protect consumers and prevent misuse. Compliance is especially critical in finance, healthcare, and government sectors.
Preventing Account Takeover (ATO): Even legitimate accounts are at risk. Continuous verification at key moments (e.g., password resets, high-risk actions) helps prevent unauthorized access via stolen credentials or SIM swapping.
Enabling Zero Trust Security: Zero Trust assumes no inherent trust in users or devices. Continuous identity verification is central to enforcing this model, especially in remote or hybrid work environments.


Why should companies or organizations convert to FIDO security keys?

FIDO security keys significantly reduce the risk of phishing, credential theft, and brute-force attacks. Because they don’t rely on shared secrets like passwords, they can’t be reused or intercepted. Their phishing-resistant protocol ensures authentication is only completed with the correct web origin. FIDO security keys also address insider threats and endpoint vulnerabilities by requiring physical presence, further enhancing protection, especially in high-security environments such as healthcare or public administration. ... In principle, any organization that prioritizes a secure IT infrastructure stands to benefit from adopting FIDO-based multi-factor authentication. Whether it’s a small business protecting customer data or a global enterprise managing complex access structures, FIDO security keys provide a robust, phishing-resistant alternative to passwords. That said, sectors with heightened regulatory requirements, such as healthcare, finance, public administration, and critical infrastructure, have particularly strong incentives to adopt strong authentication. In these fields, the risk of breaches is not only costly but can also have legal and operational consequences. FIDO security keys are also ideal for restricted environments, such as manufacturing floors or emergency rooms, where smartphones may not be permitted. 


Data Warehouse vs. Data Lakehouse

Data warehouses and data lakehouses have emerged as two prominent contenders in the data storage and analytics markets, each with advantages and disadvantages. The primary difference between these two data storage platforms is that while the data warehouse is capable of handling only structured and semi-structured data, the data lakehouse can store both structured and unstructured data in unlimited amounts. ... Traditional data warehouses have long supported all types of business professionals in their data storage and analytics endeavors. This approach involves ingesting structured data into a centralized repository, with a focus on warehouse integration and business intelligence reporting. Enter the data lakehouse approach, which is vastly superior for deep-dive data analysis. The lakehouse has successfully blended characteristics of the data warehouse and the data lake to create a scalable and unrestricted solution. The key benefit of this approach is that it enables data scientists to quickly extract insights from raw data with advanced AI tools. ... Although a data warehouse supports BI use cases and provides a “single source of truth” for analytics and reporting purposes, it can also become difficult to manage as new data sources emerge. The data lakehouse has redefined how global businesses store and process data.


AI or Data Governance? Gartner Says You Need Both

Data and analytics leaders, such as chief data officers, or CDOs, and chief data and analytics officers, or CDAOs, play a significant role in driving their organizations' data and analytics (D&A) successes, which are necessary to show business value from AI projects. Gartner predicts that by 2028, 80% of gen AI business apps will be developed on existing data management platforms. Their analysts say, "This is the best time to be in data and analytics," and CDAOs need to embrace the AI opportunity eyed by others in the C-suite, or they will be absorbed into other technical functions. With high D&A ambitions and AI pilots becoming increasingly ubiquitous, focus is shifting toward consistent execution and scaling. But D&A leaders are overwhelmed with their routine data management tasks and need a new AI strategy. ... "We've never been good at governance, and now AI demands that we be even faster, which means you have to take more risks and be prepared to fail. We have to accept two things: Data will never be fully governed. Secondly, attempting to fully govern data before delivering AI is just not realistic. We need a more practical solution like trust models," Zaidi said. He said trust models provide a trust rating for data assets by examining their value, lineage and risk. They offer up-to-date information on data trustworthiness and are crucial for fostering confidence.
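
A trust model of the kind Zaidi describes can start as a simple score over value, lineage, and risk. The weights and the 0-to-1 scale below are invented for illustration; they are not Gartner's model.

```python
def trust_rating(value: float, lineage_coverage: float, risk: float) -> float:
    """Combine value, lineage coverage and risk (each in [0, 1]) into one trust score.
    Weights are illustrative assumptions, not a published formula."""
    return round(0.4 * value + 0.4 * lineage_coverage + 0.2 * (1.0 - risk), 2)

assets = {
    "customer_orders": trust_rating(value=0.9, lineage_coverage=0.8, risk=0.2),
    "scraped_web_text": trust_rating(value=0.5, lineage_coverage=0.1, risk=0.7),
}
# Feed the high-trust asset to the AI pilot first; keep governing the rest as you go.
print(assets)   # {'customer_orders': 0.84, 'scraped_web_text': 0.3}
```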

Daily Tech Digest - May 08, 2025


Quote for the day:

“Don’t fear failure. Fear being in the exact same place next year as you are today.” -- Unknown



Security Tools Alone Don't Protect You — Control Effectiveness Does

Buying more tools has long been considered the key to cybersecurity performance. Yet the facts tell a different story. According to the Gartner report, "misconfiguration of technical security controls is a leading cause for the continued success of attacks." Many organizations have impressive inventories of firewalls, endpoint solutions, identity tools, SIEMs, and other controls. Yet breaches continue because these tools are often misconfigured, poorly integrated, or disconnected from actual business risks. ... Moving toward true control effectiveness takes more than just a few technical tweaks. It requires a real shift - in mindset, in day-to-day practice, and in how teams across the organization work together. Success depends on stronger partnerships between security teams, asset owners, IT operations, and business leaders. Asset owners, in particular, bring critical knowledge to the table - how their systems are built, where the sensitive data lives, and which processes are too important to fail. Supporting this collaboration also means rethinking how we train teams. ... Making security controls truly effective demands a broader shift in how organizations think and work. Security optimization must be embedded into how systems are designed, operated, and maintained - not treated as a separate function.


APIs: From Tools to Business Growth Engines

Apart from earning revenue, APIs also offer other benefits, including providing value to customers, partners and internal stakeholders through seamless integration and improving response time. By integrating third-party services seamlessly, APIs allow businesses to offer feature-rich, convenient and highly personalized experiences. This helps improve the "stickiness" of the customer and reduces churn. ... As businesses adopt cloud solutions, develop mobile applications and transition to microservice architectures, APIs have become a critical foundation of technological innovation. But their widespread use presents significant security risks. Poorly secured APIs can become cyberattack entry points, potentially exposing sensitive data, granting unauthorized access or even leading to extensive network compromises. ... Managing the API life cycle using specialized tools and frameworks is also essential. This ensures a structured approach across the seven stages of the API life cycle: design, development, testing, deployment, API performance monitoring, maintenance and retirement. This approach maximizes their value while minimizing risks. "APIs should be scalable and versioned to prevent breaking changes, with clear documentation for adoption. Performance should be optimized through rate limiting, caching and load balancing ..." Musser said.


How to Slash Cloud Waste Without Annoying Developers

Waste in cloud spending is not necessarily due to negligence or a lack of resources; it’s often due to poor visibility and understanding of how to optimize costs and resource allocations. Ironically, Kubernetes and GitOps were designed to enable DevOps practices by providing building blocks to facilitate collaboration between operations teams and developers ... ScaleOps’ platform serves as an example of an option that abstracts and automates the process. It’s positioned not as a platform for analysis and visibility but for resource automation. ScaleOps automates decision-making by eliminating the need for manual analysis and intervention, helping resource management become a continuous optimization of the infrastructure map. Scaling decisions, such as determining how to vertically scale, horizontally scale, and schedule pods onto the cluster to maximize performance and cost savings, are then made in real time. This capability forms the core of the ScaleOps platform. Savings and scaling efficiency are achieved through real-time usage data and predictive algorithms that determine the correct amount of resources needed at the pod level at the right time. The platform is “fully context-aware,” automatically identifying whether a workload involves a MySQL database, a stateless HTTP server, or a critical Kafka broker, and incorporating this information into scaling decisions, Baron said.
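
In miniature, "real-time usage data and predictive algorithms that determine the correct amount of resources" can look like the sketch below: take a high percentile of observed usage and add headroom. This is a generic rightsizing illustration, not ScaleOps' actual algorithm; the percentile and headroom values are assumptions.

```python
def recommend_requests(cpu_samples_millicores: list[float],
                       headroom: float = 1.2) -> int:
    """Recommend a CPU request from observed usage: take a high percentile and
    add headroom so bursts still fit. Percentile and headroom are illustrative."""
    ordered = sorted(cpu_samples_millicores)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return int(p95 * headroom)

# A pod requesting 2000m but mostly using ~300m gets a far smaller recommendation.
usage = [250, 280, 310, 290, 330, 600, 270, 300, 320, 295]
print(recommend_requests(usage), "millicores")   # 396 millicores for this sample
```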


How to Prevent Your Security Tools from Turning into Exploits

Attackers don't need complex strategies when some security tools provide unrestricted access due to sloppy setups. Without proper input validation, APIs are at risk of being exploited, turning a vital defense mechanism into an attack vector. Bad actors can manipulate such APIs to execute malicious commands, seizing control over the tool and potentially spreading their reach across your infrastructure. Endpoint detection tools that log sensitive credentials in plain text worsen the problem by exposing pathways for privilege escalation and further compromise. ... If monitoring tools and critical production servers share the same network segment, a single compromised tool can give attackers free rein to move laterally and access sensitive systems. Isolating security tools into dedicated network zones is a best practice to prevent this, as proper segmentation reduces the scope of a breach and limits the attacker's ability to move laterally. Sandboxing adds another layer of security, too. ... Collaboration is key for zero trust to succeed. Security cannot be siloed within IT; developers, operations, and security teams must work together from the start. Automated security checks within CI/CD pipelines can catch vulnerabilities before deployment, such as when verbose logging is accidentally enabled on a production server. 


Fortifying Your Defenses: Ransomware Protection Strategies in the Age of Black Basta

What sets Black Basta apart is its disciplined methodology. Initial access is typically gained through phishing campaigns, vulnerable public-facing applications, compromised credentials or malicious software packages. Once inside, the group moves laterally through the network, escalates privileges, exfiltrates data and deploys ransomware at the most damaging points. Bottom line: Groups like Black Basta aren’t using zero-day exploits. They’re taking advantage of known gaps defenders too often leave open. ... Start with multi-factor authentication across remote access points and cloud applications. Audit user privileges regularly and apply the principle of least privilege. Consider passwordless authentication to eliminate commonly abused credentials. ... Unpatched internet-facing systems are among the most frequent entry points. Prioritize known exploited vulnerabilities, automate updates when possible and scan frequently. ... Secure VPNs with MFA. Where feasible, move to stronger architectures like virtual desktop infrastructure or zero trust network access, which assumes compromise is always a possibility. ... Phishing is still a top tactic. Go beyond spam filters. Use behavioral analysis tools and conduct regular training to help users spot suspicious emails. External email banners can provide a simple warning signal.


AI Emotional Dependency and the Quiet Erosion of Democratic Life

Byung-Chul Han’s The Expulsion of the Other is particularly instructive here. He argues that neoliberal societies are increasingly allergic to otherness: what is strange, challenging, or unfamiliar. Emotionally responsive AI companions embody this tendency. They reflect a sanitized version of the self, avoiding friction and reinforcing existing preferences. The user is never contradicted, never confronted. Over time, this may diminish one’s capacity for engaging with real difference; precisely the kind of engagement required for democracy to flourish. In addition, Han’s Psychopolitics offers a crucial lens through which to understand this transformation. He argues that power in the digital age no longer represses individuals but instead exploits their freedom, leading people to voluntarily submit to control through mechanisms of self-optimization, emotional exposure, and constant engagement. ... As behavioral psychologist BJ Fogg has shown, digital systems are designed to shape behavior. When these persuasive technologies take the form of emotionally intelligent agents, they begin to shape how we feel, what we believe, and whom we turn to for support. The result is a reconfiguration of subjectivity: users become emotionally aligned with machines, while withdrawing from the messy, imperfect human community.


From prompts to production: AI will soon write most code, reshape developer roles

While that timeline might sound bold, it points to a real shift in how software is built, with trends like vibe coding already taking off. Diego Lo Giudice, a vice president analyst at Forrester Research, said even senior developers are starting to leverage vibe coding as an additional tool. But he believes vibe coding and other AI-assisted development methods are currently aimed at “low-hanging fruit” that frees up devs and engineers for more important and creative tasks. ... Augmented coding tools can help brainstorm, prototype, build full features, and check code for errors or security holes using natural language processing — whether through real-time suggestions, interactive code editing, or full-stack guidance. The tools streamline coding, making them ideal for solo developers, fast prototyping, or collaborative workflows, according to Gartner. GenAI tools include prompt-to-application tools such as StackBlitz Bolt.new, GitHub Spark, and Lovable, as well as AI-augmented testing tools such as BlinqIO, Diffblue, IDERA, QualityKiosk Technologies and Qyrus. ... Developers find genAI tools most useful for tasks like boilerplate generation, code understanding, testing, documentation, and refactoring. But they also create risks around code quality, IP, bias, and the effort needed to guide and verify outputs, Gartner said in a report last month.


Navigating the Warehouse Technology Matrix: Integration Strategies and Automation Flexibility in the IIoT Era

Warehouses have evolved from cost centers to strategic differentiators that directly impact customer satisfaction and competitive advantages. This transformation has been driven by e-commerce growth, heightened consumer expectations, labor challenges, and rapid technological advancement. For many organizations, the resulting technology ecosystem resembles a patchwork of systems struggling to communicate effectively, creating what analysts term “analysis paralysis” where leaders become overwhelmed by options. ... Among warehouse complexity dimensions, MHE automation plays a pivotal role—and it is easy to determine where you are on the Maturity Model. Organizations at Level 5 in automation automatically reach Level 5 overall complexity due to the integration, orchestration and investment needed to take advantage of MHE operational efficiencies. ... An orchestration layer provides unified control for diverse automation equipment, optimizing tasks and simplifying integration. Put simply, it is a software layer that coordinates multiple “agents” in real time, ensuring they work together without clashing. By dynamically assigning and reassigning tasks based on current workloads and priorities, these platforms reduce downtime, enhance productivity, and streamline communication between otherwise siloed systems.


How AI-Powered OSINT is Revolutionizing Threat Detection and Intelligence Gathering

Police and intelligence officers have traditionally relied on tips, informants, and classified sources. In contrast, OSINT draws from the vast “digital public square,” including social media networks, public records, and forums. For example, even casual social media posts can signal planned riots or extremist recruitment efforts. India’s diverse linguistic and cultural landscape also means that important signals may appear in dozens of regional languages and scripts – a scale that outstrips human monitoring. OSINT platforms address this by incorporating multilingual analysis, automatically translating and interpreting content from Hindi, Tamil, Telugu, and more. In practice, an AI-driven system can flag a Tamil-language tweet with extremist rhetoric just as easily as an English Facebook post. ... Artificial intelligence is what turns raw OSINT data into strategic intelligence. Machine learning and natural language processing (NLP) allow systems to filter noise, detect patterns and make predictions. For instance, sentiment analysis algorithms can gauge public mood or support for extremist ideologies in real time. By tracking language trends and emotional tone across social media, AI can alert analysts to rising anger or unrest. In one recent case study, an AI-powered OSINT tool identified over 1,300 social media accounts spreading incendiary propaganda during Delhi protests.


How to Determine Whether a Cloud Service Delivers Real Value

The cost of cloud services varies widely, but so does the functionality they offer. This means an expensive service may be well worth the price — if the capabilities it offers deliver a great deal of value. On the other hand, some cloud services simply cost a lot without providing much in the way of value. For IT organizations, then, a primary challenge in selecting cloud services is figuring out how much value they generate relative to their cost. This is rarely straightforward because what is valuable to one team might be of little use to another. ... No one can predict how cloud service providers may change their pricing or features in the future, of course. But you can make reasonable predictions. For instance, there's an argument to be made (and I will make it) that as generative AI cloud services mature and AI adoption rates increase, cloud service providers will raise fees for AI services. Currently, most generative AI services appear to be operating at a steep financial loss — which is unsurprising because all of the GPUs powering AI services don't just pay for themselves. If cloud providers want to make money on genAI, they'll probably need to raise their rates sooner or later, potentially reducing the value that businesses leverage from generative AI.

Daily Tech Digest - April 14, 2025


Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher



The quiet data breach hiding in AI workflows

Prompt leaks happen when sensitive data, such as proprietary information, personal records, or internal communications, is unintentionally exposed through interactions with LLMs. These leaks can occur through both user inputs and model outputs. On the input side, the most common risk comes from employees. A developer might paste proprietary code into an AI tool to get debugging help. A salesperson might upload a contract to rewrite it in plain language. These prompts can contain names, internal systems info, financials, or even credentials. Once entered into a public LLM, that data is often logged, cached, or retained without the organization’s control. Even when companies adopt enterprise-grade LLMs, the risk doesn’t go away. Researchers found that many inputs posed some level of data leakage risk, including personal identifiers, financial data, and business-sensitive information. Output-based prompt leaks are even harder to detect. If an LLM is fine-tuned on confidential documents such as HR records or customer service transcripts, it might reproduce specific phrases, names, or private information when queried. This is known as data cross-contamination, and it can occur even in well-designed systems if access controls are loose or the training data was not properly scrubbed.
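One practical mitigation for input-side leaks is to scan prompts before they leave the organization. The sketch below is a minimal, assumption-laden illustration using regular expressions; the patterns and blocking policy are placeholders, and real data loss prevention controls go much further.

```python
# Minimal sketch of a pre-submission prompt scanner: before text is sent to an
# external LLM, flag obvious sensitive patterns. The patterns and blocking
# policy are illustrative assumptions; real DLP controls go much further.
import re

PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "credential_keyword": re.compile(r"(?i)\b(password|api[_ ]?key|secret)\b"),
}

def scan_prompt(prompt: str) -> list:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

if __name__ == "__main__":
    prompt = "Rewrite this contract for jane.doe@example.com, api_key=abc123"
    findings = scan_prompt(prompt)
    if findings:
        print("Hold for review before sending to the LLM:", findings)
```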


The Rise of Security Debt: Your Security IOUs Are Due

Despite measurable improvements, security debt — defined as flaws that remain unfixed for more than a year after discovery — continues to put enterprises at risk. Security debt impacts almost three-quarters (74.2%) of organizations, up from 71% in previous measurements. More frighteningly, half of all organizations suffer from critical security debt: a dangerous combination of high-severity, long-unresolved flaws. There's a reason it is described as critical debt: the longer a security flaw survives within an enterprise, the less likely it will be resolved. Today, more than a quarter (28%) of flaws remain open two years after discovery, and even after five years, 9% of flaws still linger in applications. ... Applications are only as secure as the code used to write them, and security flaws are a fact of life in every code base in the world. That being said, the origin of the code that is being used matters. Leveraging third-party code has become standard practice across the industry, which introduces added risks. ... organizations need the ability to correlate and contextualize findings in a single view to prioritize their backlog based on context. This allows companies to reduce the most risk with the least effort. Since the average time to fix flaws has increased dramatically, programs seeking to improve their security posture must focus on the findings that matter most in their specific context. 
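Measuring the problem is straightforward once findings carry discovery and fix dates. The sketch below shows one way to compute debt and critical debt from a findings export; the field names, severity scheme, and one-year threshold are illustrative assumptions rather than any scanner's actual schema.

```python
# Minimal sketch of measuring security debt from a findings export: flaws open
# for more than a year count as debt, and high-severity debt is treated as
# critical. Field names, severities, and the threshold are illustrative.
from datetime import date

findings = [
    {"id": "F-101", "severity": "high",   "discovered": date(2023, 2, 1),  "fixed": None},
    {"id": "F-102", "severity": "medium", "discovered": date(2024, 11, 5), "fixed": None},
    {"id": "F-103", "severity": "high",   "discovered": date(2022, 6, 20), "fixed": date(2023, 1, 15)},
]

def security_debt(findings, today, debt_days=365):
    open_flaws = [f for f in findings if f["fixed"] is None]
    debt = [f for f in open_flaws if (today - f["discovered"]).days > debt_days]
    critical = [f for f in debt if f["severity"] == "high"]
    return debt, critical

debt, critical = security_debt(findings, today=date(2025, 4, 14))
print(f"{len(debt)} security-debt item(s), {len(critical)} critical")  # 1 and 1
```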


How to Cut the Hidden Costs of IT Downtime

"Workers struggling with these problems waste productive time waiting for fixes," said Ryan MacDonald, CTO at Liquid Web. Businesses can reduce these costs by investing in proactive IT support, automating troubleshooting processes, and training workers on best practices to prevent repeat problems, he said. MacDonald explained that while tech failures are inevitable, companies often take a reactive rather than proactive approach to IT. Instead of addressing persistent issues at their root, organizations frequently apply short-term fixes, resulting in continuous inefficiencies and mounting expenses. ... Companies that fail to modernize their systems will continue to experience recurring IT problems that hinder productivity and increase operational costs. In addition to upgrading infrastructure, organizations must conduct regular IT audits to proactively identify inefficiencies before they escalate into major disruptions. MacDonald stressed the importance of continuous evaluation. "Regularly scheduled IT audits allow companies to find recurring inefficiencies and invest money into fixing them before they become costly disruption points," he said. Rather than waiting for issues to break, businesses should implement proactive IT strategies, which can save time, reduce financial losses, and improve overall system reliability.


A multicloud experiment in agentic AI: Lessons learned

At its core, an agentic AI system is a self-governing decision-making system. It uses AI to assign and execute tasks autonomously, responding to changing conditions while balancing cost, performance, resource availability, and other factors. I wanted to leverage multiple public cloud platforms harmoniously. The architecture would have to be flexible enough to balance cloud-specific features while achieving platform-agnostic consistency. ... challenges with interoperability, platform-specific nuances, and cost optimization remain. More work is needed to improve the viability of multicloud architectures. The big gotcha is that the cost was surprisingly high. The price of resource usage on public cloud providers, egress fees, and other expenses seemed to spring up unannounced. Using public clouds for agentic AI deployments may be too expensive for many organizations and push them to cheaper on-prem alternatives, including private clouds, managed services providers, and colocation providers. I can tell you firsthand that those platforms are more affordable in today’s market and provide many of the same services and tools. This experiment was a small but meaningful step toward realizing a future where cloud environments serve as dynamic, self-managing ecosystems.
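The cost surprises described here argue for modelling spend before deployment. The sketch below is a deliberately crude estimate of monthly egress and accelerator costs for a cross-cloud agent pipeline; the rates are placeholder assumptions, not any provider's pricing.

```python
# Back-of-the-envelope cost model of the kind this experiment argues for doing
# up front. The rates are placeholder assumptions, not any provider's pricing.

EGRESS_USD_PER_GB = 0.09   # assumed cross-cloud/internet egress rate
GPU_HOUR_USD = 2.50        # assumed accelerator instance rate

def monthly_estimate(egress_gb_per_day: float, gpu_hours_per_day: float) -> float:
    egress = egress_gb_per_day * 30 * EGRESS_USD_PER_GB
    compute = gpu_hours_per_day * 30 * GPU_HOUR_USD
    return egress + compute

# An agent pipeline moving 200 GB/day between clouds and using 40 GPU-hours/day:
print(f"~${monthly_estimate(200, 40):,.0f}/month before storage, API, and support costs")
```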


What boards want and don’t want to hear from cybersecurity leaders

A lack of clarity can lead to either oversharing technical details or not providing enough strategic context. Paul Connelly, former CISO turned board advisor, independent director and mentor, finds many CISOs focus too heavily on metrics while the board is looking for more strategic insights. The board doesn’t need to know the results of your phishing test, says Connelly. Boards are focused on risks the organization faces, strategies to address these risks, progress updates, obstacles to success, and whether they’re tackling the right things. “I coach CISOs to study their board — read their bios, understand their background, and understand the fiduciary responsibility of a board,” he says. The goal is to understand the make-up of the board and their priorities and channel their metrics into risk and threat analysis for the business. Using this information, CISOs can develop a story about their program aligned with the business. “That high-level story — supported by measurements — is what boards want to hear, not a bunch of metrics on malicious emails and critical patches or scary Chicken Little-type of threats,” Connelly tells CSO. It’s not a one-way interaction, however: many CISOs are engaging with boards that lack the appropriate skills and understanding to foster meaningful discussions on cyber threats. “Very few boards have any directors with true expertise in technology or cyber,” says Connelly.


The future of insurance is digital, intelligent, and customer-first

The Indian insurance sector is undergoing transformative changes, driven by insurtech innovations, personalised policies, and efficient claim settlements. Reliance General Insurance leads this evolution by integrating AI, data science, and automation to enhance customer experiences. According to Deloitte, 70% of Central European insurers have recently partnered with insurtech, with 74% expressing satisfaction, highlighting the global trend of technological collaboration. Emphasising innovation, speed, and customer-centric measures, the industry aims to demystify insurance, boost its adoption, and eliminate service hindrances, steering towards a technology-oriented future. ... Protecting our customers’ data is essential at Reliance General Insurance. To prevent misuse of customer information, the company employs a strong multi-layered security framework involving encryption, threat intelligence services, and real-time monitoring. To help mitigate these risks, we also offer cyber insurance products. ... Even as innovation drives progress, risk management remains paramount in the adoption of insurtech solutions. Seamlessly integrating new technologies is the objective, and Reliance General employs constant feedback monitoring to ensure new technologies meet security and regulatory standards.


Examining the business case for multi-million token LLMs

As enterprises weigh the costs of scaling infrastructure against potential gains in productivity and accuracy, the question remains: Are we unlocking new frontiers in AI reasoning, or simply stretching the limits of token memory without meaningful improvements? This article examines the technical and economic trade-offs, benchmarking challenges and evolving enterprise workflows shaping the future of large-context LLMs. ... Increasing the context window also helps the model better reference relevant details and reduces the likelihood of generating incorrect or fabricated information. A 2024 Stanford study found that 128K-token models reduced hallucination rates by 18% compared to RAG systems when analyzing merger agreements. However, early adopters have reported some challenges: JPMorgan Chase’s research demonstrates how models perform poorly on approximately 75% of their context, with performance on complex financial tasks collapsing to near-zero beyond 32K tokens. Models still broadly struggle with long-range recall, often prioritizing recent data over deeper insights. This raises questions: Does a 4-million-token window truly enhance reasoning, or is it just a costly expansion of memory? How much of this vast input does the model actually use? And do the benefits outweigh the rising computational costs?
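The economics behind that question can be sketched with simple arithmetic: compare the input-token cost of stuffing a multi-million-token context into every call against retrieving a small, relevant slice. The per-token price and request volumes below are placeholders, not any vendor's rates.

```python
# Rough arithmetic behind the "costly expansion of memory" question: input-token
# cost of a huge context on every request versus retrieving a small, relevant
# slice. Prices and volumes are placeholders, not any vendor's rates.

PRICE_PER_1K_INPUT_TOKENS = 0.0025  # assumed input price in USD

def daily_cost(context_tokens: int, requests_per_day: int) -> float:
    return context_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS * requests_per_day

full_context = daily_cost(4_000_000, 500)  # 4M-token window, 500 calls/day
retrieval    = daily_cost(8_000, 500)      # ~8K retrieved tokens per call

print(f"full context: ${full_context:,.0f}/day, retrieval-style: ${retrieval:,.2f}/day")
```

Under these assumed figures the full-context approach costs hundreds of times more per day, which is why the "how much of the window does the model actually use" question matters so much.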


IT compensation satisfaction at an all-time low

“We’re going through a leveling of the economy right now,” Sutton said, adding that during difficult business periods employees crave consistency and reliability. “There is a little bit of satisfaction and contentment with what is seen as a stable role.” Industry observers also said that although money is a critical factor in how appreciated employees feel, unhappiness with one’s IT role is often a result of other factors, such as changing job descriptions and a general lack of job security. “Compensation is not the only tool enterprises have to improve employee experience and satisfaction. Enterprises can make sure that their employees are focused on work that excites them and they can see the value of,” Forrester’s Mark said. “Provide ample opportunities for upskilling in line not just with the technology strategy, but also with employees’ career aspirations. Ensure that employees feel empowered and have autonomy over decisions which impact them, and of course manage work-life balance, demonstrating that organizations do not simply value the work outputs, but the employees themselves as unique individuals.” Matt Kimball, VP and principal analyst for Moor Insights and Strategy, agreed that employee sentiment goes well beyond salary and bonuses.


Amazon Gift Card Email Hooks Microsoft Credentials

The Cofense Phishing Defense Center (PDC) has recently identified a new credential phishing campaign that uses an email disguised as an Amazon e-gift card from the recipient’s employer. While the email appears to offer a substantial reward, its true purpose is to harvest Microsoft credentials from unsuspecting recipients. The combination of the large monetary value and the appearance of an email seemingly from their employer lures the recipient into a false sense of security that leaves them unaware of the dangers ahead. ... Once the recipient submits their email address, they will be redirected to a phishing page, as shown in Figure 3. The phishing page is well-disguised as a legitimate Microsoft login site, once again prompting the victim to input their credentials. Legitimate Microsoft Outlook login pages should be hosted on domains belonging to Microsoft (such as live.com or outlook.com), but as you can see in Figure 3, the domain for this site is officefilecenter[.]com, which was created less than a month before the time of analysis. Credential phishing emails such as these are a perfect example of the various ways that threat actors can exploit the emotions of the recipient. Whether it is the theme of the phish, the content within, or the time of year, threat actors will utilize anything they can to make sure you do not catch on until it’s too late.
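One of the simplest automated checks suggested by this analysis is verifying that a login page's host actually belongs to a Microsoft-owned domain. The sketch below does only that; the allowlist is an illustrative assumption, and real defenses would also weigh domain age, certificates, and reputation feeds.

```python
# Minimal sketch of one automated check suggested by this analysis: does a login
# page's host belong to a Microsoft-owned domain? The allowlist is an
# illustrative assumption; real defenses also weigh domain age, certificates,
# and reputation feeds.
from urllib.parse import urlparse

MICROSOFT_LOGIN_DOMAINS = {"live.com", "outlook.com", "microsoft.com", "microsoftonline.com"}

def looks_legitimate(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in MICROSOFT_LOGIN_DOMAINS)

for url in ("https://login.microsoftonline.com/common/oauth2",
            "https://officefilecenter.com/login"):
    print(url, "->", "ok" if looks_legitimate(url) else "SUSPICIOUS")
```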


Driving Sustainability Forward with IIoT: Smarter Processes for a Greener Future

AI-driven IIoT systems are transforming how industries manage raw materials, inventory, and human resources. In smart factories, AI forecasts demand, streamlines production schedules, and optimizes supply chains to reduce waste and emissions. For instance, AI calculates the exact quantity of materials needed for production, preventing overstocking and minimizing excess. It also enhances SIOP and logistics by consolidating shipments and selecting eco-friendly transportation routes, reducing the carbon footprint of global supply chains. Predictive maintenance, powered by AI, contributes by detecting equipment issues early, preventing breakdowns, extending lifespan and uptime while reducing defective outputs. ... IIoT is a key enabler of the circular economy, which focuses on recycling, reusing, and reducing waste. Automated systems allow manufacturers to recycle heat, water, and materials within their facilities, creating closed-loop processes. For example, excess heat from industrial ovens can be captured and repurposed for heating water or other facility needs. While sensors monitor production processes to optimize material usage and reduce scrap, product take-back programs are another cornerstone of the circular economy. 
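The "exact quantity of materials" idea reduces, at its simplest, to netting a demand forecast against stock on hand. The sketch below uses a naive moving-average forecast and made-up bill-of-material figures purely for illustration; real systems use far richer models fed by live IIoT data.

```python
# Minimal sketch of the overstocking-avoidance idea: order only the material a
# demand forecast requires, net of stock on hand. The moving-average forecast
# and bill-of-material figures are assumptions for illustration only.

def forecast_demand(recent_units):
    """Naive moving-average forecast of next period's unit demand."""
    return sum(recent_units) / len(recent_units)

def material_to_order(recent_units, kg_per_unit, stock_kg, safety_kg=0.0):
    needed = forecast_demand(recent_units) * kg_per_unit + safety_kg
    return max(0.0, needed - stock_kg)

# Three recent months of demand, 1.8 kg of aluminium per unit, 900 kg in stock:
print(f"order {material_to_order([480, 520, 500], 1.8, 900, safety_kg=50):.0f} kg")  # 50 kg
```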

Daily Tech Digest - March 01, 2025


Quote for the day:

"Your life does not get better by chance, it gets better by change." -- Jim Rohn


Two AI developer strategies: Hire engineers or let AI do the work

Philip Walsh, director analyst in Gartner’s software engineering practice, said that from his vantage point he sees “two contrasting signals: some leaders, like Marc Benioff at Salesforce, suggest they may not need as many engineers due to AI’s impact, while others — Alibaba being a prime example — are actively scaling their technical teams and specifically hiring for AI-oriented roles.” In practice, he said, Gartner believes AI is far more likely to expand the need for software engineering talent. “AI adoption in software development is early and uneven,” he said, “and most large enterprises are still early in deploying AI for software development — especially beyond pilots or small-scale trials.” Walsh noted that, while there is a lot of interest in AI-based coding assistants (Gartner sees roughly 80% of large enterprises piloting or deploying them), actual active usage among developers is often much lower. “Many organizations report usage rates of 30% or less among those who have access to these tools,” he said, adding that the most common tools are not yet generating sufficient productivity gains to generate cost savings or headcount reductions. He said, “current solutions often require strong human supervision to avoid errors or endless loops. Even as these technologies mature over the next two to three years, human expertise will remain critical.”


The Great AI shift: The rise of ‘services as software’

Today, AI is pushing the envelope by turning services built to be used by humans as ‘self-serve’ utilities into automatically running software solutions that execute autonomously—a paradigm shift the venture capital world, in particular, has termed ‘Services as Software’ ... The shift is already conspicuous across industries. AI tools like Harvey AI are transforming the legal and compliance sector by analysing case law and generating legal briefs, essentially replacing human research assistants. The customer support ecosystem that once required large human teams in call centres now handles significant query volumes daily with AI chatbots and virtual agents. ... The AI-driven shift brings into question the traditional notion of availing an ‘expert service’. Software development, legal, and financial services are all coveted industries where workers are considered ‘experts’ delivering specialised services. The human role will undergo tremendous redefinition and will require calibrated re-skilling. ... Businesses won't simply replace SaaS with AI-powered tools; they will build their processes and systems around these new capabilities. Instead of hiring marketing agencies, companies will use AI to generate dynamic marketing and advertising campaigns. Businesses will rely on AI-driven quality assurance and control instead of outsourcing software testing and QA.


Resilience, Observability and Unintended Consequences of Automation

Instead of thinking about replacing the work that humans do, it's about augmenting that work: how do we make it easier for us to do these kinds of jobs? That might be writing code, that might be deploying it, that might be tackling incidents when they come up. The fancy, nerdy academic jargon for this is joint cognitive systems. Rather than replacement, or functional allocation (another good nerdy academic term) where we say we'll give the computers this piece and the humans those pieces, how do we have a joint system where automation really supports the work of the humans in this complex system? And in particular, how do you allow them to troubleshoot it, to introspect it, to actually understand it? The very nerdy versions of this research lay out possible ways of thinking about what these computers can do to help us. ... We could go monolith to microservices, we could go pick your digital transformation. How long did that take you? And how much care did you put into that? Maybe some of it was too long or too bureaucratic or what have you, but I would argue that we tend to YOLO internal developer technology way faster and way looser than we do with the things that are perceived to actually make us money.


The Modern CDN Means Complex Decisions for Developers

“Developers should not have to be experts on how to scale an application; that should just be automatic. But equally, they should not have to be experts on where to serve an application to stay compliant with all these different patchworks of requirements; that should be more or less automatic,” Engates argues. “You should be able to flip a few switches and say ‘I need to be XYZ compliant in these countries,’ and the policy should then flow across that network and orchestrate where traffic is encrypted and where it’s served and where it’s delivered and what constraints are around it.” ... Along with the physical constraint of the speed of light and the rise of data protection and compliance regimes, Alexander also highlights the challenge of costs as something developers want modern CDNs to help them with. “Egress fees between clouds are one of the artificial barriers put in place,” he claims. That can be 10%, 20% or even 30% of overall cloud spend. “People can’t build the application that they want, they can’t optimize, because of some of these taxes that are added on moving data around.” Update patterns aren’t always straightforward either. Take a wiki like Fandom, where Fastly founder Artur Bergman was previously CTO.
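The "flip a few switches" model Engates describes amounts to a declarative policy consulted at request time. The sketch below is a hypothetical illustration of that idea, with made-up regime names and region lists; it is not how any particular CDN implements compliance-aware routing.

```python
# Hypothetical illustration of the "flip a few switches" idea: a declarative
# policy mapping compliance regimes to allowed serving regions, consulted per
# request. Regime names and region lists are made-up assumptions.

POLICY = {
    "gdpr": {"allowed_regions": {"eu-west", "eu-central"}, "encrypt_in_transit": True},
    "pdpa": {"allowed_regions": {"ap-southeast"}, "encrypt_in_transit": True},
}

def pick_region(user_regime, healthy_regions):
    """Return a healthy region permitted by the user's compliance regime, or None (fail closed)."""
    allowed = POLICY.get(user_regime, {}).get("allowed_regions", set())
    candidates = allowed & set(healthy_regions)
    return sorted(candidates)[0] if candidates else None

print(pick_region("gdpr", {"us-east", "eu-central"}))  # eu-central
print(pick_region("gdpr", {"us-east"}))                # None
```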


A Comprehensive Look at OSINT

Cybersecurity professionals within corporations rely on public data to identify emerging phishing campaigns, data breaches, or malicious activity targeting their brand. Investigative journalists and academic researchers turn to OSINT for fact-checking, identifying new leads, and gathering reliable support for their reporting or studies. ... Avoiding OSINT or downplaying its value can leave organizations unaware of threats and opportunities that are readily discoverable to others. By failing to gather open-source data, businesses and government agencies could remain in the dark about malicious activities, negative brand impersonations, or stolen credentials circulating on forums and dark web marketplaces. In the event of a security breach or public scandal, stakeholders may view the lack of proper OSINT measures as a failure of due diligence, eroding trust and tarnishing the organization’s image. ... The primary driver behind OSINT’s growth is the vast reservoir of information generated daily by digital platforms, databases, and news outlets. This public data can be invaluable for enhancing security, improving transparency, and making more informed decisions. Security professionals, for instance, can preemptively identify threats and vulnerabilities posted openly by malicious actors. 


OT/ICS cyber threats escalate as geopolitical conflicts intensify

A persistent lack of visibility into OT environments continues to obscure the full scale of these attacks. These insights come from Dragos’ 2025 OT/ICS Cybersecurity Report, its eighth annual Year in Review, which analyzes cyber threats to industrial organizations. ... VOLTZITE is arguably the most crucial threat group to track in critical infrastructure. Due to its dedicated focus on OT data, the group is a capable threat to ICS asset owners and operators. This group shares extensive technical overlaps with the Volt Typhoon threat group tracked by other organizations. It utilizes the same techniques as in previous years, setting up complex chains of network infrastructure to target, compromise, and steal sensitive OT-relevant data—GIS data, OT network diagrams, OT operating instructions, etc.—from victim ICS organizations. ... Increasing collaboration between hacktivist groups and state-backed cyber actors has led to a hybrid threat model where hacktivists amplify state objectives, either directly or through shared infrastructure and intelligence. State actors increasingly look to exploit hacktivist groups as proxies to conduct deniable cyber operations, allowing for more aggressive attacks with reduced attribution risks.


Leveraging AR & VR for Remote Maintenance in Industrial IoT

AR tools like Microsoft’s HoloLens 2 are enabling workers on-site to receive real-time guidance from experts located anywhere in the world. Using AR glasses or headsets, on-site personnel can share their view with remote technicians, who can then overlay instructions, schematics, or step-by-step troubleshooting guidance directly onto the worker’s field of vision. This allows maintenance teams to resolve issues faster and more accurately, without the need for travel, reducing downtime and operational costs. ... By using VR simulations, workers can familiarize themselves with equipment, troubleshoot issues, and practice responses to emergencies, all in a virtual setting. This hands-on experience builds confidence and competence, ultimately improving safety and efficiency when dealing with real equipment. As IIoT systems become more sophisticated, VR training can play a key role in ensuring that the workforce is well-prepared to handle advanced technologies without risking costly mistakes or accidents. ... In the future, we can expect even more seamless integration between AR/VR systems and IIoT platforms, where real-time data from sensors and machines is directly fed into the AR/VR environment, providing a comprehensive view of machine health, performance and issues. 


Just as DNA defines an organism’s identity, business continuity must be deeply embedded in every aspect of your organization. It is more than just a collection of emergency plans or procedures; it embodies a philosophy that ensures not only survival during disruptions, but long-term sustainability as well. ... An organization without continuity is like a tree without roots—fragile and vulnerable to the slightest shock. Continuity serves as an anchor, allowing organizations to navigate crises while staying aligned with their strategic goals. Any organization that aims to grow and thrive must take a proactive approach to continuity. Continuity strategies and initiatives can be seen as the roots of a tree, natural extensions that provide stability and sustain growth. ... It is essential that both leaders and team members possess the experience and skills needed to execute their work effectively. ... Thoroughly assess your key vulnerabilities. This involves two primary methods: a BIA, which analyzes the impacts of a disturbance over time to determine recovery priorities, resource requirements, and appropriate responses; and risk analysis, which identifies risks tied to prioritized activities and critical resources. Together, these two approaches offer a comprehensive understanding of your organization’s pain points.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat’

This phenomenon, a “compound physical-cyber threat,” where a cyberattack is intentionally launched around a heatwave or hurricane, for example, would have outsized and potentially devastating effects on businesses, communities, and entire economies, according to a 2024 study led by researchers at Johns Hopkins University. “Cyber-attacks are more disruptive when infrastructure components face stresses beyond normal operating conditions,” the study asserted. Businesses and their IT and risk management people would be wise to take notice, because both cyberattacks and weather-related disasters are increasing in frequency and in the cost they exact from their victims. ... Take what you learn from the risk assessment to develop a detailed plan that outlines the steps your organization intends to take to preserve cybersecurity, business continuity, and network connectivity during a crisis. Whether you’re a B2B or B2C organization, your customers, employees, suppliers and other stakeholders expect your business to be “always on,” 24/7/365. How will you keep the lights on, the lines of communications open, and your network insulated from cyberattack during a disaster? 


‘It Won’t Happen to Us:’ The Dangerous Mindset Minimizing Crisis Preparation

The main mistakes in crisis situations include companies staying silent and not releasing official statements from management, creating a vacuum of information and promoting the spread of rumors. ... First and foremost, companies should not underestimate the importance of communication, especially when things are not going well. During a crisis, many companies prefer to sit quietly and wait without informing or sharing anything about their measures and actions in connection with the crisis. This is the wrong approach. Silence gives competitors enough space to thrive and gain a market advantage. Meanwhile, journalists won’t stop working on hot stories. When you don’t share anything meaningful with them or your audience, they may collect and publish rumors and misinformation about your company. And the lack of comments creates the ground for negative interpretations. Therefore, transparency and efficiency are key principles of anti-crisis communication. If you are clear in your messages and give quick responses, it allows the company to control the information agenda. The surefire way to gain and maintain trust is to promptly and regularly inform your company’s investors during a crisis through your own channels.