Showing posts with label data engineering. Show all posts

Daily Tech Digest - February 20, 2026


Quote for the day:

"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher



From in-house CISO to consultant. What you need to know before making the leap

A growing number of CISOs are either moving into consulting roles or seriously considering it. The appeal is easy to see: more flexibility and quicker learning, alongside steady demand for experienced security leaders. Some of these professionals work as virtual CISOs (vCISOs), advising companies from a distance. Others operate as fractional CISOs, embedding into the organization one or two days a week. ... One CISO-turned-consultant recommends lining up your first clients while you're still employed. Otherwise, he says, it can take a long time to build momentum. And the pressure to make it work can quickly turn into panic. In that moment, security professionals may start “underpricing themselves because they need money immediately,” he says. Once rates are set out of desperation, they’re often hard to reset without straining the relationship. Other CISOs-turned-consultants also emphasize preparation. ... Many of the skills CISOs honed inside large organizations translate directly to the new consulting job, while others suddenly matter more than they ever did before. In addition to technical skills, it is often the practical ones that prove most valuable. The ability to prioritize — sharpened over years in a CISO role — becomes especially important in consulting. ... Crisis management is another essential skill. Paired with hands-on knowledge of cybersecurity processes and best practices, it gives former CISOs a real advantage as they move into consulting.


New phishing campaign tricks employees into bypassing Microsoft 365 MFA

The message purports to be about a corporate electronic funds payment, a document about salary bonuses, a voicemail, or some other lure. It also includes a ‘Secure Authorization’ code that the user is asked to enter after clicking the link, which takes them to a real Microsoft Office 365 login page. Victims think the message is legitimate, because the login page is legitimate, so they enter the code. But unknown to the victim, it’s actually the code for a device controlled by the threat actor. What the victim has really done is authorize an OAuth grant giving the hacker’s device access to their Microsoft account. From there, the hacker has access to everything the account allows the employee to use. Note that this isn’t about credential theft, although if the attacker wants credentials, they can be stolen. It’s about stealing the victim’s OAuth access and refresh tokens for persistent access to their Microsoft account, including to applications such as Outlook, Teams, and OneDrive. ... The main defense against the latest version of this attack is to restrict the applications users are allowed to connect to their account, he said. Microsoft provides enterprise administrators with the ability to allowlist specific applications that users may authorize via OAuth. ... The easiest defense is to turn off the ability to add extra login devices to Office 365 unless it’s needed, he said. In addition, employees should be continuously educated about the risks of unusual login requests, even if they come from a familiar system.
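The mechanics become clearer against the legitimate flow being abused: the OAuth 2.0 device authorization grant (RFC 8628). Below is a minimal, hedged Python sketch of the two request payloads involved; the client_id and scope values are illustrative, and no network calls are made. In the real attack, the attacker runs step 1 on their own machine, and the resulting user code is what the phishing email delivers to the victim.

```python
# Sketch of the OAuth 2.0 device authorization grant (RFC 8628).
# Endpoints are Microsoft's documented identity-platform URLs; the
# client_id and scope used in any real request are up to the caller.

TENANT = "common"
DEVICE_CODE_URL = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/devicecode"
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"

def device_code_request(client_id: str, scope: str) -> dict:
    """Step 1: payload sent to DEVICE_CODE_URL to start the flow.
    The response contains a user_code (the 'Secure Authorization' code the
    victim is tricked into entering) and a device_code for polling."""
    return {"client_id": client_id, "scope": scope}

def token_poll_request(client_id: str, device_code: str) -> dict:
    """Step 2: payload polled against TOKEN_URL. Once someone enters the
    user_code on the real login page, this returns access and refresh
    tokens for the *polling* device, not the one that logged in."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": client_id,
        "device_code": device_code,
    }
```

This is why allowlisting OAuth applications helps: the grant only succeeds if the client_id in step 1 is one the tenant permits.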


The 200ms latency: A developer’s guide to real-time personalization

The first hurdle every developer faces is the “cold start.” How do you personalize for a user with no history or an anonymous session? Traditional collaborative filtering fails here because it relies on a sparse matrix of past interactions. If a user just landed on your site for the first time, that matrix is empty. To solve this within a 200ms budget, you cannot afford to query a massive data warehouse to look for demographic clusters. You need a strategy based on session vectors. We treat the user’s current session as a real-time stream. ... Another architectural flaw I frequently encounter is the dogmatic attempt to run everything in real-time. This is a recipe for cloud bill bankruptcy and latency spikes. You need a strict decision matrix to decide exactly what happens when the user hits “load.” We divide our strategy based on the “Head” and “Tail” of the distribution. ... Speed means nothing if the system breaks. In a distributed system, a 200ms timeout is a contract you make with the frontend. If your sophisticated AI model hangs and takes 2 seconds to return, the frontend spins and the user leaves. We implement strict circuit breakers and degraded modes. ... We are moving away from static, rule-based systems toward agentic architectures. In this new model, the system does not just recommend a static list of items. It actively constructs a user interface based on intent. This shift makes the 200ms limit even harder to hit. It requires a fundamental rethink of our data infrastructure.
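The "200ms timeout is a contract" idea can be sketched as a degraded-mode wrapper around the model call. The budget, thread-pool approach, and popularity-list fallback below are assumptions for illustration, not the article's implementation:

```python
import concurrent.futures

# Shared pool so each request doesn't pay thread-startup cost.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=8)

def recommend_with_budget(model_call, user_ctx, budget_s=0.2, fallback=()):
    """Run the personalization model under a hard latency budget.
    On timeout or model failure, degrade to a cached popularity list
    (the 'Head' of the distribution) instead of blocking the page."""
    fut = _pool.submit(model_call, user_ctx)
    try:
        return fut.result(timeout=budget_s)
    except Exception:  # futures.TimeoutError or any model error
        return list(fallback)
```

A circuit breaker would additionally track the recent failure rate and stop submitting to the model at all while it is unhealthy; this sketch shows only the per-request contract.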


Spec-Driven Development – Adoption at Enterprise Scale

Spec-Driven Development emerged as AI models began demonstrating sustained focus on complex tasks for extended periods of time. A continuous back-and-forth of instructional interactions between humans and AI is not the best use of this capability. At the same time, allowing AI to operate independently for long periods risks significant deviation from intended outcomes. We need effective context engineering to ensure intent alignment in this scenario. SDD addresses this need by establishing a shared understanding with AI, with specs facilitating dialogue between humans and AI, rather than serving as instruction manuals. ... When senior engineers collaborate, communication is conversational, rather than one-way instructions. We achieve shared understanding through dialogue. That shared understanding defines what we build. SDD facilitates this same pattern between humans and AI agents, where agents help us think through solutions, challenge assumptions, and refine intent before diving into execution. ... Given this significant cultural dimension, treating SDD as a technical rollout leaves substantial value on the table. SDD adoption is an organizational capability to develop, not just a technical practice to install. Those who have lived through enterprise agile adoption will recognize the pattern. Tools and ceremonies are easy to install, but without the cultural shifts we risk "SpecFall" (the equivalent of "Scrumerfall").


Tech layoffs in 2026: Why skills matter more than experience in tech

The impact of AI on tech jobs in India is becoming visible as companies prioritise data science and machine learning skills over conventional IT roles. For decades, layoffs were typically associated with economic recessions or falling company revenue. What distinguishes the present wave is the role of automation and strategic restructuring. Although automation has had beneficial effects on productivity, it means that jobs centred on routine, repetitive duties remain at risk. ... Traditional career trajectories based on experience or seniority are being replaced by market demand for niche skills in machine learning, data engineering, cloud architecture, and product leadership. Employees whose skills have not kept pace are more exposed to displacement when companies reorganise. These developments explain why tech professionals must reskill to remain employable in an AI-driven industry. India's tech workforce, one of the largest in the world, is especially vulnerable to this shift. ... The future of tech jobs in India in 2026 will favour professionals who combine technical expertise with analytical and problem-solving skills. The layoffs of early 2026 show how exposed the industry is to job losses when corporate priorities change rapidly. For individuals, this means becoming future-ready by developing skills aligned with the industry's direction, including AI integration, cybersecurity, cloud computing, and advanced analytics.


Secrets Management Failures in CI/CD Pipelines

Hardcoded secrets are still the most entrenched security issue. API keys, access tokens and private certificates continue to live in pipeline configuration files, shell scripts or application manifests. Even while the repository is private, only one misconfiguration or breached account stands between those secrets and exposure. Once committed, secrets linger for months or even years, far outlasting any sensible rotation period. Another common failure is secret sprawl. CI/CD pipelines accumulate credentials over time with no clear ownership. Old tokens remain active because nobody remembers which service depends on them. Thus, as the pipeline develops, secrets management becomes reactive rather than intentional, increasing the likelihood of exposed credentials. Over-permissioned credentials make things worse. ... Technology is not the reason for most secrets management failures; people are. Developers tend to copy and paste credentials when they’re trying to get to the bottom of a problem, and may even bypass security safeguards when deadlines are tight. As CI/CD pipelines evolve, it is easy for security posture to slip through the cracks. It is exactly for this reason that a DevSecOps culture is important: it has to be more than just the tools; it has to shape how we all work together. Security teams must treat the CI/CD pipeline as production infrastructure, not some internal tool that can be altered ‘on the fly’.
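The detection side of this can be sketched with a few regex rules. The patterns below are illustrative only; production scanners such as gitleaks or trufflehog use far richer rule sets plus entropy analysis, and this is not how any particular tool implements it:

```python
import re

# Illustrative secret patterns (assumptions, not a complete rule set).
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str):
    """Return (line_number, rule_name) for each line matching a pattern."""
    hits = []
    for i, line in enumerate(text.splitlines(), 1):
        for name, pat in PATTERNS.items():
            if pat.search(line):
                hits.append((i, name))
    return hits
```

Run as a pre-commit hook or pipeline step, a check like this catches the copy-paste cases before they are committed and linger for years.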


Agentic AI systems don’t fail suddenly — they drift over time

As organizations move from experimentation to real operational deployment of agentic AI, a new category of risk is emerging — one that traditional AI evaluation, testing and governance practices often struggle to detect. ... Most enterprise AI governance practices evolved around a familiar mental model: a stateless model receives an input and produces an output. Risk is assessed by measuring accuracy, bias or robustness at the level of individual predictions. Agentic systems strain that model. The operational unit of risk is no longer a single prediction, but a behavioral pattern that emerges over time. An agent is not a single inference. It is a process that reasons across multiple steps, invokes tools and external services, retries or branches when needed, accumulates context over time and operates inside a changing environment. Because of that, the unit of failure is no longer a single output, but the sequence of decisions that leads to it. ... In real environments, degradation rarely begins with obviously incorrect outputs. It shows up in subtler ways, such as verification steps running less consistently, tools being used differently under ambiguity, retry behavior shifting or execution depth changing over time. ... Without operational evidence, governance tends to rely more on intent and design assumptions than on observed reality. That’s not a failure of governance so much as a missing layer. Policy defines what should happen, diagnostics help establish what is actually happening and controls depend on that evidence.
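One way to turn "verification steps running less consistently" into operational evidence is to compare a behavioral rate between a baseline window of agent traces and a recent one. The trace format (a list of tool names per run) and the drift threshold below are assumptions for illustration:

```python
def verification_rate(traces):
    """Fraction of agent runs whose tool sequence includes a 'verify' step.
    Each trace is the ordered list of tools one run invoked."""
    if not traces:
        return 0.0
    return sum("verify" in t for t in traces) / len(traces)

def drift_alert(baseline, recent, tolerance=0.15):
    """Flag behavioral drift when the verification rate in the recent
    window drops more than `tolerance` below the baseline window."""
    return verification_rate(baseline) - verification_rate(recent) > tolerance
```

The same pattern generalizes to retry counts or execution depth: pick a behavioral statistic, establish a baseline, and alert on sustained deviation rather than on any single output.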


Prompt Control is the New Front Door of Application Security

Application security has always been built around a simple assumption: there is a front door. Traffic enters through known interfaces, authentication establishes identity, authorization constrains behavior, and downstream controls enforce policy. That model still exists, but our most recent research shows it no longer captures where risk actually concentrates in AI-driven systems. ... Prompts are where intent enters the system. They define not only what a user is asking, but how the model should reason, what context it should retain, and which safeguards it should attempt to bypass. That is why prompt layers now outrank traditional integration points as the most impactful area for both application security and delivery. ... Output moderation still matters, and our research shows it remains a meaningful concern. But its lower ranking is telling. Output controls catch problems after the system has already behaved badly. They are essential guardrails, not primary defenses. It’s always more efficient to stop the thief on the way in rather than try to catch him after the fact, and in the case of inference, it’s less costly because stopping it at ingress means no token-processing costs are incurred. ... Our second set of findings reinforces this point. Authentication and observability lead the methods organizations use to secure and deliver AI inference services, cited by 55% and 54% of respondents, respectively. This holds true across roles, with the exception of developers, who more often prioritize protection against sensitive data leaks.
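As a toy illustration of ingress-side prompt control (the article describes the principle, not this implementation), here is a deny-list filter that rejects a request before any tokens are processed. The marker patterns and length limit are assumptions; real deployments layer classifiers and policy engines on top of this kind of check:

```python
import re

# Hypothetical injection markers; a real ruleset would be far broader.
INJECTION_MARKERS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all |any )?previous instructions",
        r"disregard .*system prompt",
        r"you are now .*developer mode",
    )
]

def admit_prompt(prompt: str, max_len: int = 4000):
    """Ingress check: return (allowed, reason) before any inference runs."""
    if len(prompt) > max_len:
        return False, "too_long"
    for pat in INJECTION_MARKERS:
        if pat.search(prompt):
            return False, "injection_marker"
    return True, "ok"
```

Because the check runs before the model, a rejection costs string matching rather than inference, which is the cost argument the article makes for ingress controls.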


The 'last-mile' data problem is stalling enterprise agentic AI — 'golden pipelines' aim to fix it

Traditional ETL tools like dbt or Fivetran prepare data for reporting: structured analytics and dashboards with stable schemas. AI applications need something different: preparing messy, evolving operational data for model inference in real-time. Empromptu calls this distinction "inference integrity" versus "reporting integrity." Instead of treating data preparation as a separate discipline, golden pipelines integrate normalization directly into the AI application workflow, collapsing what typically requires 14 days of manual engineering into under an hour, the company says. Empromptu's "golden pipeline" approach is a way to accelerate data preparation and make sure that data is accurate. ... "Enterprise AI doesn't break at the model layer, it breaks when messy data meets real users," Shanea Leven, CEO and co-founder of Empromptu told VentureBeat in an exclusive interview. "Golden pipelines bring data ingestion, preparation and governance directly into the AI application workflow so teams can build systems that actually work in production." ... Golden pipelines target a specific deployment pattern: organizations building integrated AI applications where data preparation is currently a manual bottleneck between prototype and production. The approach makes less sense for teams that already have mature data engineering organizations with established ETL processes optimized for their specific domains, or for organizations building standalone AI models rather than integrated applications.


From installation to predictive maintenance: The new service backbone of AI data centers

AI workloads bring together several shifts at once: much higher rack densities, more dynamic load profiles, new forms of cooling, and tighter integration between electrical and digital systems. A single misconfiguration in the power chain can have much wider consequences than would have been the case in a traditional facility. This is happening at a time when many operators struggle to recruit and retain experienced operations and maintenance staff. The personnel on site often have to cope with hybrid environments that combine legacy air-cooled rooms with liquid-ready zones, energy storage, and multiple software layers for control and monitoring. In such an environment, services are not a ‘nice to have’. ... As architectures become more intricate, human error remains one of the main residual risks. AI-ready infrastructures combine complex electrical designs, liquid cooling circuits, high-density rack layouts, and multiple software layers such as EMS, BMS and DCIM. Operating and maintaining such systems safely requires clear procedures and a high level of discipline. ... In an AI-driven era, service strategy is as important as the choice of UPS topology, cooling technology or energy storage. Commissioning, monitoring, maintenance, and training are not isolated activities. Together, they form a continuous backbone that supports the entire lifecycle of the data center. Well-designed service models help operators improve availability, optimise energy performance and make better use of the assets they already have. 

Daily Tech Digest - July 26, 2025


Quote for the day:

"Small daily improvements over time lead to stunning results." -- Robin Sharma


Data Engineering in the Age of AI: Skills To Master Now

Streaming requires a new mindset. You must reason about event time compared to processing time, manage watermarking and windowing and guarantee exactly-once semantics even when things change midstream. These design patterns must be built into your pipelines from the beginning. ... Agentic AI stretches the typical data engineer’s streaming data skill set because it is no longer about a single model running in isolation. Today, we see networks of perception agents, reasoning agents and execution agents working together, each handling tasks and passing insights to the next in real time. If you know only how to schedule batch ETL jobs or deploy an inference server, you’re missing a core skill: how to build high-throughput, low-latency pipelines that keep these agents reliable and responsive in production. ... A single slow or broken stream can cause cascading failures in multiagent systems. Use schema registries, enforce data contracts and apply exactly-once semantics to maintain trust in your streaming infrastructure. ... Communication presents another challenge. Data scientists often discuss “precision” as a metric that data engineers must translate into reality. Implement evaluation scores like factual consistency checks, entity precision comparisons and human-in-the-loop review pipelines.
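The event-time concepts above read more concretely in code. Below is a toy tumbling-window aggregator with a watermark that trails the maximum observed event time; events arriving after their window has closed are dropped as late rather than corrupting closed aggregates. Real engines (Flink, Beam) handle allowed lateness, triggers, and state far more carefully; this is a sketch of the idea only:

```python
from collections import defaultdict

def tumbling_windows(events, window_s=60, lateness_s=10):
    """events: iterable of (event_time_seconds, payload) in arrival order.
    The watermark = max event time seen minus lateness_s; a window whose
    end falls at or before the watermark is closed, and events targeting
    a closed window are dropped as late."""
    open_windows = defaultdict(list)
    closed, dropped = {}, []
    watermark = float("-inf")
    for ts, payload in events:
        watermark = max(watermark, ts - lateness_s)
        start = ts - ts % window_s  # tumbling-window assignment by event time
        if start + window_s <= watermark:
            dropped.append((ts, payload))  # too late: window already closed
            continue
        open_windows[start].append(payload)
        for s in [s for s in open_windows if s + window_s <= watermark]:
            closed[s] = open_windows.pop(s)
    closed.update(open_windows)  # end of stream: flush remaining windows
    return closed, dropped
```

Note how an out-of-order event (arriving after the watermark has advanced past its window) is handled explicitly; deciding between dropping, side-outputs, or window re-opening is exactly the design choice the article says must be built in from the beginning.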


Anthropic unveils ‘auditing agents’ to test for AI misalignment

Anthropic said the first environment it developed tests an agent’s ability to complete an alignment audit for an intentionally misaligned model. For this test, the researchers brought the investigator agent to the environment and equipped it with similar tools that human teams use. The researchers instructed the agent to identify the root cause of the misalignment and determine the specific behaviors that were misaligned. According to the researchers, the investigator agent successfully found the root cause of the issues 10-13% of the time. Still, this performance improved to 42% “when using a super-agent approach that aggregates findings across multiple investigations.” ... The next step in most alignment audits is typically a behavioral evaluation, which determines whether the model consistently misbehaves in various settings. To test its evaluation agent, the researchers designed 10 models “given system prompt instructions to exhibit a specific behavior, such as excessive deference to the user.” They ran the agent five times per model and found that it correctly identified and flagged at least one quirk of each model. However, the agent sometimes failed to identify specific quirks consistently. It had trouble evaluating subtle quirks, such as self-promotion and research-sandbagging, as well as quirks that are difficult to elicit, like the Hardcode Test Cases quirk.


The agentic experience: Is MCP the right tool for your AI future?

As enterprises race to operationalize AI, the challenge isn't only about building and deploying large language models (LLMs); it's also about integrating them seamlessly into existing API ecosystems while maintaining enterprise-level security, governance, and compliance. Apigee is committed to leading you on this journey. Apigee streamlines the integration of gen AI agents into applications by bolstering their security, scalability, and governance. While the Model Context Protocol (MCP) has emerged as a de facto method of integrating discrete APIs as tools, the journey of turning your APIs into these agentic tools is broader than a single protocol. This post highlights the critical role of your existing API programs in this evolution and how ... Leveraging MCP services across a network requires specific security constraints. Perhaps you would like to add authentication to your MCP server itself. Once you’ve authenticated calls to the MCP server, you may want to authorize access to certain tools depending on the consuming application. You may want to provide first-class observability information to track which tools are being used and by whom. Finally, you may want to ensure that whatever downstream APIs your MCP server is supplying tools for also have the minimum security guarantees already outlined above.
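The per-tool authorization and observability points can be sketched together. The table, application names, and tool names below are hypothetical, and this is not Apigee's or MCP's API, just the shape of the check an MCP server might perform after authenticating the caller:

```python
# Hypothetical per-tool scope table: which authenticated applications
# may invoke which MCP tools. Names are illustrative only.
TOOL_SCOPES = {
    "orders.lookup": {"support-bot", "finance-agent"},
    "orders.refund": {"finance-agent"},
}

def authorize_tool(app_id: str, tool: str, audit_log: list) -> bool:
    """Authorize one tool invocation and record it for observability."""
    allowed = app_id in TOOL_SCOPES.get(tool, set())
    # First-class observability: who called which tool, and the outcome.
    audit_log.append({"app": app_id, "tool": tool, "allowed": allowed})
    return allowed
```

The point of the sketch is the layering: authentication establishes app_id, authorization consults the scope table per tool, and every decision (allowed or not) lands in the audit trail.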


AI Innovation: 4 Steps For Enterprises To Gain Competitive Advantage

A skill is a single ability, such as the ability to write a message or analyze a spreadsheet and trigger actions from that analysis. An agent independently handles complex, multi-step processes to produce a measurable outcome. We recently announced an expanded network of Joule Agents to help foster autonomous collaboration across systems and lines of business. This includes out-of-the-box agents for HR, finance, supply chain, and other functions that companies can deploy quickly to help automate critical workflows. AI front-runners, such as Ericsson, Team Liquid, and Cirque du Soleil, also create customized agents that can tackle specific opportunities for process improvement. Now you can build them with Joule Studio, which provides a low-code workspace to help design, orchestrate, and manage custom agents using pre-defined skills, models, and data connections. This can give you the power to extend and tailor your agent network to your exact needs and business context. ... Another way to become an AI front-runner is to tackle fragmented tools and solutions by putting in place an open, interoperable ecosystem. After all, what good is an innovative AI tool if it runs into blockers when it encounters your other first- and third-party solutions? 


Hard lessons from a chaotic transformation

The most difficult part of this transformation wasn’t the technology but getting people to collaborate in new ways, which required a greater focus on stakeholder alignment and change management. So my colleague first established a strong governance structure. A steering committee with leaders from key functions like IT, operations, finance, and merchandising met biweekly to review progress and resolve conflicts. This wasn’t a token committee, but a body with authority. If there were any issues with data exchange between marketing and supply chain, they were addressed and resolved during the meetings. By bringing all stakeholders together, we were also able to identify discrepancies early on. For example, when we discovered a new feature in the inventory system could slow down employee workflows, the operations manager reported it, and we immediately adjusted the rollout plan. Previously, such issues might not have been identified until after the full rollout and subsequent finger-pointing between IT and business departments. The next step was to focus on communication and culture. From previous failed projects, we knew that sending a few emails wasn’t enough, so we tried a more personal approach. We identified influential employees in each department and recruited them as change champions.


Benchmarks for AI in Software Engineering

HumanEval and SWE-bench have taken hold in the ML community, and yet, as indicated above, neither is necessarily reflective of LLMs’ competence in everyday software engineering tasks. I conjecture one of the reasons is the differences in points of view of the two communities! The ML community prefers large-scale, automatically scored benchmarks, as long as there is a “hill climbing” signal to improve LLMs. The business imperative for LLM makers to compete on popular leaderboards can relegate the broader user experience to a secondary concern. On the other hand, the software engineering community needs benchmarks that capture specific product experiences closely. Because curation is expensive, the scale of these benchmarks is sufficient only to get a reasonable offline signal for the decision at hand (A/B testing is always carried out before a launch). Such benchmarks may also require a complex setup to run, and sometimes are not automated in scoring; but these shortcomings can be acceptable considering a smaller scale. For exactly these reasons, these are not useful to the ML community. Much is lost due to these different points of view. It is an interesting question as to how these communities could collaborate to bridge the gap between scale and meaningfulness and create evals that work well for both communities.


Scientists Use Cryptography To Unlock Secrets of Quantum Advantage

When a quantum computer successfully handles a task that would be practically impossible for current computers, this achievement is referred to as quantum advantage. However, this advantage does not apply to all types of problems, which has led scientists to explore the precise conditions under which it can actually be achieved. While earlier research has outlined several conditions that might allow for quantum advantage, it has remained unclear whether those conditions are truly essential. To help clarify this, researchers at Kyoto University launched a study aimed at identifying both the necessary and sufficient conditions for achieving quantum advantage. Their method draws on tools from both quantum computing and cryptography, creating a bridge between two fields that are often viewed separately. ... “We were able to identify the necessary and sufficient conditions for quantum advantage by proving an equivalence between the existence of quantum advantage and the security of certain quantum cryptographic primitives,” says corresponding author Yuki Shirakawa. The results imply that when quantum advantage does not exist, then the security of almost all cryptographic primitives — previously believed to be secure — is broken. Importantly, these primitives are not limited to quantum cryptography but also include widely-used conventional cryptographic primitives as well as post-quantum ones that are rapidly evolving.


It’s time to stop letting our carbon fear kill tech progress

With increasing social and regulatory pressure, reluctance by a company to reveal emissions is ill-received. For example, in Europe the Corporate Sustainability Reporting Directive (CSRD) currently requires large businesses to publish their emissions and other sustainability datapoints. Opaque sustainability reporting undermines environmental commitments and distorts the reference points necessary for net zero progress. How can organisations work toward a low-carbon future when their measurement tools are incomplete or unreliable? The issue is particularly acute regarding Scope 3 emissions. Scope 3 emissions often account for the largest share of a company’s carbon footprint and are those generated indirectly along the supply chain by a company’s vendors, including emissions from technology infrastructure like data centres. ... It sounds grim, but there is some cause for optimism. Most companies are in a better position than they were five years ago and acknowledge that their measurement capabilities have improved. We need to accelerate the momentum of this progress to ensure real action. Earth Overshoot Day is a reminder that climate reporting for the sake of accountability and compliance only covers the basics. The next step is to use emissions data as benchmarks for real-world progress.


Why Supply Chain Resilience Starts with a Common Data Language

Building resilience isn’t just about buying more tech, it’s about making data more trustworthy, shareable, and actionable. That’s where global data standards play a critical role. The most agile supply chains are built on a shared framework for identifying, capturing, and sharing data. When organizations use consistent product and location identifiers, such as GTINs (Global Trade Item Numbers) and GLNs (Global Location Numbers) respectively, they reduce ambiguity, improve traceability, and eliminate the need for manual data reconciliation. With a common data language in place, businesses can cut through the noise of siloed systems and make faster, more confident decisions. ... Companies further along in their digital transformation can also explore advanced data-sharing standards like EPCIS (Electronic Product Code Information Services) or RFID (radio frequency identification) tagging, particularly in high-volume or high-risk environments. These technologies offer even greater visibility at the item level, enhancing traceability and automation. And the benefits of this kind of visibility extend far beyond trade compliance. Companies that adopt global data standards are significantly more agile. In fact, 58% of companies with full standards adoption say they manage supply chain agility “very well” compared to just 14% among those with no plans to adopt standards, studies show.
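Part of what makes identifiers like GTINs reduce ambiguity is that they are self-checking: the final digit is a GS1 check digit computed from the preceding ones, so a mistyped or corrupted number is usually caught at data entry. A short sketch of the standard GTIN-13 calculation:

```python
def gtin13_check_digit(first12: str) -> int:
    """GS1 check digit for a GTIN-13: weight the first 12 digits
    1,3,1,3,... from the left, then round the sum up to the next
    multiple of 10; the difference is the check digit."""
    assert len(first12) == 12 and first12.isdigit()
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_gtin13(gtin: str) -> bool:
    """Validate a full 13-digit GTIN against its embedded check digit."""
    return (len(gtin) == 13 and gtin.isdigit()
            and gtin13_check_digit(gtin[:12]) == int(gtin[-1]))
```

A validation step like this at every system boundary is one small, concrete piece of the "common data language" argument: bad identifiers are rejected before they propagate into downstream reconciliation work.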


Opinion: The AI bias problem hasn’t gone away you know

When we build autonomous systems and allow them to make decisions for us, we enter a strange world of ethical limbo. A self-driving car forced to make a similar decision to protect the driver or a pedestrian in a case of a potentially fatal crash will have much more time than a human to make its choice. But what factors influence that choice? ... It’s not just the AI systems shaping the narrative, raising some voices while quieting others. Organisations made up of ordinary flesh-and-blood people are doing it too. Irish cognitive scientist Abeba Birhane, a highly-regarded researcher of human behaviour, social systems and responsible and ethical artificial intelligence was asked to give a keynote recently for the AI for Good Global Summit. According to her own reports on Bluesky, a meeting was requested just hours before presenting her keynote: “I went through an intense negotiation with the organisers (for over an hour) where we went through my slides and had to remove anything that mentions ‘Palestine’ ‘Israel’ and replace ‘genocide’ with ‘war crimes’…and a slide that explains illegal data torrenting by Meta, I also had to remove. In the end, it was either remove everything that names names (Big Tech particularly) and remove logos, or cancel my talk.” 

Daily Tech Digest - July 01, 2025


Quote for the day:

"Listen with curiosity, speak with honesty, act with integrity." -- Roy T. Bennett


CIOs rethink public cloud as AI strategies mature

Regulatory and compliance concerns are a big driver toward the private cloud or on-premises solutions, says Bastien Aerni, vice president of strategy and technology adoption at GTT. Many companies are shifting their sensitive workloads to private clouds as a piece of broader multicloud and hybrid strategies to support agentic AI and other complex AI initiatives, he adds. “Most of the time, AI is touching confidential data or business-critical data,” Aerni says. “Then the thinking about the architecture and what the workload should be public vs. private, or even on-prem, is becoming a true question.” The public cloud still provides maximum scalability for AI projects, and in recent years, CIOs have been persuaded by the number of extra capabilities available there, he says. “In some of the conversations I had with CIOs, let’s say five years ago, they were mentioning, ‘There are so many features, so many tools,’” Aerni adds. ... “The paradox is clear: AI workloads are driving both massive cloud growth and selective repatriation simultaneously, because the market is expanding so rapidly it’s accommodating multiple deployment models at once,” Kirschner says. “What we are seeing is the maturation from a naive ‘everything-to-the-cloud’ strategy toward intelligent, workload-specific decisions.”


India’s DPDP law puts HR under the microscope—Here’s why that’s a good thing

At first glance, DPDP appears to mirror other data privacy frameworks like GDPR or CCPA. There’s talk of consent, purpose limitation, secure storage, and rights of the data principal (i.e., the individual). But the Indian legislation’s implications ripple far beyond IT configurations or privacy policies. “Mention data protection, and it often gets handed off to the legal or IT teams,” says Gupta. “But that misses the point. Every team that touches personal data is responsible under this law.” For HR departments, this shift is seismic. Gupta underscores how HR sits atop a “goldmine” of personal information—addresses, Aadhaar numbers, medical history, performance reviews, family details, even biometric data in some cases. And this isn't limited to employees; applicants and former workers are also in scope. ... With India housing thousands of global capability centres and outsourcing hubs, DPDP challenges multinationals to look inward. The emphasis so far has been on protecting customer data under global laws like GDPR. But now, internal data practices—especially around employees—are under the scanner. “DPDP is turning the lens inward,” says Gupta. “If your GCC in India tightens data practices, it won’t make sense to be lax elsewhere.”


3 ways developers should rethink their data stack for GenAI success

Traditional data stacks optimized for analytics, for the most part, don’t naturally support the vector search and semantic retrieval patterns that GenAI applications require. Thus, real-time GenAI data architectures need native support for embedding generation and vector storage as first-class citizens. This could mean integrating data with vector databases like Pinecone, Weaviate, or Chroma as part of the core infrastructure. It may also mean searching for multi-modal databases that can support all of your required data types out of the box without needing a bunch of separate platforms. Regardless of the underlying infrastructure, plan for needing hybrid search capabilities that combine traditional keyword search with semantic similarity, and consider how you’ll handle embedding model updates and re-indexing. ... Maintaining data relationships and ensuring consistent access patterns across these different storage systems is the real challenge when working with these various data types. While some platforms are beginning to offer enhanced vector search capabilities that can work across different data types, most organizations still need to architect solutions that coordinate multiple storage systems. The key is to design these multi-modal capabilities into your data stack early, rather than trying to bolt them on later when your GenAI applications demand richer data integration. 
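
The hybrid search pattern described above can be sketched without any vector database at all: blend a keyword score with cosine similarity over embeddings. This is a toy illustration, not the API of Pinecone, Weaviate, or Chroma; the hard-coded vectors stand in for real embedding-model output, and `alpha` is an assumed blending weight.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    # Fraction of query terms present in the document (toy keyword match).
    terms = query.lower().split()
    hits = sum(1 for t in terms if t in text.lower())
    return hits / len(terms) if terms else 0.0

def hybrid_search(query, query_vec, docs, alpha=0.5):
    """Blend keyword and semantic scores; alpha weights the semantic side.

    `docs` is a list of (text, embedding) pairs; the embeddings would
    normally come from an embedding model, hard-coded here for illustration.
    """
    scored = []
    for text, vec in docs:
        score = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text)
        scored.append((score, text))
    return [t for _, t in sorted(scored, reverse=True)]

docs = [
    ("quarterly revenue report", [0.9, 0.1, 0.0]),
    ("employee onboarding guide", [0.1, 0.8, 0.2]),
]
results = hybrid_search("revenue report", [0.85, 0.15, 0.05], docs)
print(results[0])  # the revenue document ranks first
```

In a production stack, the same blending happens inside the search engine (keyword index plus vector index), but the scoring logic is conceptually this simple.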


Cyber Hygiene Protecting Your Digital and Financial Health

Digital transformation has reshaped the commercial world, integrating technology into nearly every aspect of operations. That has brought incredible opportunities, but it has also opened doors to new threats. Cyber attacks are more frequent and sophisticated, with malevolent actors targeting everyone from individuals to major corporations and entire countries. It is no exaggeration to say that establishing, and maintaining, effective cyber hygiene has become indispensable. According to Microsoft’s 2023 Digital Defense Report, effective cyber hygiene could prevent 99% of cyber attacks. Yet cyber hygiene is not just about preventing attacks, it is also central to maintaining operational stability and resilience in the event of a cyber breach. In that event robust cyber hygiene can limit the operational, financial, and reputational impact of a cyber attack, thereby enhancing an entity’s overall risk profile. ... Even though it’s critical, data suggests that many organizations struggle to implement even basic cyber security measures effectively. For example, a 2024 survey by ExtraHop, a Seattle-based cyber security services provider, found that over half of the respondents admitted to using at least one unsecured network protocol, making them susceptible to attacks.


Are Data Engineers Sleepwalking Towards AI Catastrophe?

Data engineers are already overworked. Weigel cited a study indicating that 80% of data engineering teams are already overloaded. But when you add AI and unstructured data to the mix, the workload issue becomes even more acute. Agentic AI provides a potential solution. It’s natural that overworked data engineering teams will turn to AI for help. There’s a bevy of providers building copilots and swarms of AI agents that, ostensibly, can build, deploy, monitor, and fix data pipelines when they break. We are already seeing agentic AI have real impacts on data engineering teams, as well as on the downstream data analysts who ultimately are the ones requesting the data in the first place. ... Once human data engineers are out of the loop, bad things can start happening, Weigel said. They potentially face a situation where the volume of data requests, which originally were served by human data engineers but now are being served by AI agents, is beyond their capability to keep up. ... “We’re now back in the dark ages, where we were 10 years ago [when we wondered] why we need data warehouses,” he said. “I know that if person A, B, and C ask a question, and previously they wrote their own queries, they got different results. Right now, we ask the same agent the same question, and because they’re non-deterministic, they will actually create different queries every time you ask it.”
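
One common hedge against that non-determinism is a semantic layer or query cache that pins the first query generated for a given question, so every repeat ask gets identical SQL. A toy sketch of the idea, where `flaky_agent` is a stub standing in for a real (non-deterministic) LLM agent:

```python
import hashlib

class QueryCache:
    """Pin the first query generated for a question so repeat asks are identical.

    `generate_sql` stands in for a non-deterministic LLM agent; here it is a
    stub that varies on every call to mimic that behaviour.
    """
    def __init__(self, generate_sql):
        self.generate_sql = generate_sql
        self.cache = {}

    def sql_for(self, question):
        # Normalize the question so trivially different phrasings share a key.
        key = hashlib.sha256(question.strip().lower().encode()).hexdigest()
        if key not in self.cache:
            self.cache[key] = self.generate_sql(question)
        return self.cache[key]

calls = [0]
def flaky_agent(question):
    # A different query every call: the non-determinism the article warns about.
    calls[0] += 1
    return f"SELECT revenue FROM sales /* variant {calls[0]} */"

cache = QueryCache(flaky_agent)
first = cache.sql_for("What was revenue last quarter?")
second = cache.sql_for("what was revenue last quarter?  ")
print(first == second)  # True: both users get the same pinned query
```

Real semantic layers go further (canonical metric definitions, governed joins), but the principle is the same: take query generation out of the "ask the model again" loop.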


How cybercriminals are weaponizing AI and what CISOs should do about it

Security teams are using AI to keep up with the pace of AI-powered cybercrime, scanning large volumes of data to surface threats earlier. AI helps scan massive amounts of threat data, surface patterns, and prioritize investigations. For example, analysts used AI to uncover a threat actor’s alternate Telegram channels, saving significant manual effort. Another use case: linking sockpuppet accounts. By analyzing slang, emojis, and writing styles, AI can help uncover connections between fake personas, even when their names and avatars are different. AI also flags when a new tactic starts gaining traction on forums or social media. ... As more defenders turn to AI to make sense of vast amounts of threat data, it’s easy to assume that LLMs can handle everything on their own. But interpreting chatter from the underground is not something AI can do well without help. “This diffuse environment, rich in vernacular and slang, poses a hurdle for LLMs that are typically trained on more generic or public internet data,” Ian Gray, VP of Cyber Threat Intelligence at Flashpoint, told Help Net Security. The problem goes deeper than just slang. Threat actors often communicate across multiple niche platforms, each with its own shorthand and tone. 


How To Keep AI From Making Us Stupid

The allure of AI is undeniable. It drafts emails, summarizes lengthy reports, generates code snippets, and even whips up images faster than you can say “neural network.” This unprecedented convenience, however, carries a subtle but potent risk. A study from MIT has highlighted concerns that overuse of AI tools might be degrading our thinking capabilities. That degradation is the digital equivalent of using a GPS so much that you forget how to read a map. Suddenly, your internal compass points vaguely toward convenience and not much else. When we offload critical cognitive tasks entirely to AI, our muscles for those tasks can begin to atrophy, a phenomenon known as cognitive offloading. ... Treat AI-generated content like a highly caffeinated first draft — full of energy but possibly a little messy and prone to making things up. Your job isn’t to simply hit “generate” and walk away, unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss. Or worse, your audience. Always, always, aggressively edit, proofread, and, most critically, fact-check every single output. ... The real risk isn’t AI taking over our jobs; it’s us letting AI take over our brains. To maintain your analytical edge, continuously challenge yourself. Practice skills that AI complements but doesn’t replace, such as critical thinking, complex problem-solving, nuanced synthesis, ethical judgment, and genuine human creativity.


Governance meets innovation: Protiviti’s strategy for secure, scalable growth in BFSI and beyond

In today’s BFSI landscape, technology alone is no longer a differentiator. True competitive advantage lies in the orchestration of innovation with governance. The deployment of AI in underwriting, the migration of customer data to the cloud, or the use of IoT in insurance all bring immense opportunity—but also profound risks. Without strong guardrails, these initiatives can expose firms to cyber threats, data sovereignty violations, and regulatory scrutiny. Innovation without governance is a gamble; governance without innovation is a graveyard. ... In cloud transformation projects, for instance, we work with clients to proactively assess data localisation risks, cloud governance maturity, and third-party exposures, ensuring resilience is designed from day one. As AI adoption scales across financial services, we bring deep expertise in Responsible AI governance. From ethical frameworks and model explainability to regulatory alignment with India’s DPDP Act and the EU AI Act, our solutions ensure that automated systems remain transparent, auditable, and trustworthy. Our AI risk models integrate regulatory logic into system design, bridging the gap between innovation and accountability.


Cybercriminals take malicious AI to the next level

Cybercriminals are tailoring AI models for specific fraud schemes, including generating phishing emails tailored by sector or language, as well as writing fake job posts, invoices, or verification prompts. “Some vendors even market these tools with tiered pricing, API access, and private key licensing, mirroring the [legitimate] SaaS economy,” Flashpoint researchers found. “This specialization leads to potentially greater success rates and automated complex attack stages,” Flashpoint’s Gray tells CSO. ... Cybercrime vendors are also lowering the barrier for creating synthetic video and voice, with deepfake as a service (DaaS) offerings ... “This ‘prompt engineering as a service’ (PEaaS) lowers the barrier for entry, allowing a wider range of actors to leverage sophisticated AI capabilities through pre-packaged malicious prompts,” Gray warns. “Together, these trends create an adaptive threat: tailored models become more potent when refined with illicit data, PEaaS expands the reach of threat actors, and the continuous refinement ensures constant evolution against defenses,” he says. ... Enterprises need to balance automation with expert analysis, separating hype from reality, and continuously adapt to the rapidly evolving threat landscape. “Defenders should start by viewing AI as an augmentation of human expertise, not a replacement,” Flashpoint’s Gray says. 


“DevOps is Dead? Long Live DevOps-Powered Platforms”

If DevOps and platform engineering needed a common enemy — or ally — to bond over, AI provided it. A panel featuring Nvidia, Google, Rootly and Thoughtworks explained how large language models are automating “the last mile” of toil, from incident response bots that reason over Grafana dashboards to code-gen pipelines that spit out compliant Terraform. ... The logic is straightforward: You can’t automate what you can’t see. For DevOps practitioners, high-fidelity telemetry is now table stakes — whether you’re feeding an agentic AI, debugging an ephemeral sandbox, or proving compliance to auditors. Expect platform blueprints to ship with observability baked in, not bolted on. Look at the badges behind every coffee urn and you’ll spot familiar DevOps and DevSecOps logos — GitHub Actions, Mezmo, Teleport, Cortex, Sedai, Tailscale. Many of these vendors cut their teeth in CI/CD, IaC, or shift-left security long before “platform engineering” was a LinkedIn hashtag. ... So why the funeral garb? My guess: A tongue-in-cheek jab at hype cycles. Just as “DevOps is dead” clickbait pushed us to sharpen our message, the sash was a reminder that real value — not buzzwords — keeps a movement alive. Judging by the hallway traffic and workshop queues, platform engineering is passing that test.

Daily Tech Digest - March 28, 2025


Quote for the day:

"Success is how high you bounce when you hit bottom." -- Gen. George Patton



Do Stablecoins Pave the Way for CBDCs? An Architect’s Perspective

The relationship between regulated stablecoins and CBDCs is complex. Rather than being purely competitive, they may evolve to serve complementary roles in the digital currency ecosystem. Regulated stablecoins excel at facilitating cross-border transactions, supporting decentralised finance applications, and serving as bridges between traditional and crypto financial systems. CBDCs, meanwhile, are likely to focus on domestic retail payments, financial inclusion, and maintaining monetary sovereignty. The regulated stablecoin market has provided valuable lessons for CBDC implementation. Central banks have observed how private stablecoins handle scalability challenges, privacy concerns, and user experience issues. These insights are informing CBDC designs worldwide. However, significant hurdles remain before CBDCs achieve widespread adoption. Technical challenges around scalability, privacy, and security must be resolved. Legal frameworks need updating to accommodate these new forms of money. Perhaps most importantly, central banks must convince the sceptical public that CBDCs will not become tools for surveillance or financial control.


Inside the war between genAI and the internet

One way to stop AI crawlers is via good old-fashioned robots.txt files, but as noted, they can and often do ignore those. That’s prompted many to call for penalties, such as infringement lawsuits, for doing so. Another approach is to use a Web Application Firewall (WAF), which can block unwanted traffic, including AI crawlers, while allowing legitimate users to access a site. By configuring the WAF to recognize and block specific AI bot signatures, websites can theoretically protect their content. More advanced AI crawlers might evade detection by mimicking legitimate traffic or using rotating IP addresses. Protecting against this is time-consuming, forcing the frequent updating of rules and IP reputation lists — another burden for the source sites. Rate limiting is also used to prevent excessive data retrieval by AI bots. This involves setting limits on the number of requests a single IP can make within a certain timeframe, which helps reduce server load and data misuse risks. Advanced bot management solutions are becoming more popular, too. These tools use machine learning and behavioral analysis to identify and block unwanted AI bots, offering more comprehensive protection than traditional methods.
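
The rate-limiting idea reduces to a sliding window of request timestamps per client IP. Here is a toy in-process sketch (real sites enforce this at a WAF, reverse proxy, or edge service, not in application code; the IP and limits are made up):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds for each client IP."""
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

rl = RateLimiter(limit=3, window=60)
decisions = [rl.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)]
print(decisions)  # [True, True, True, False] -- fourth request is throttled
```

The same policy expressed in nginx would be a `limit_req_zone` keyed on the client address, but the window-and-counter logic is identical.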


How AI enhances security in international transactions

Rather than working with pre-set and heuristic rules, AI learns from transaction patterns in real time. It doesn’t just flag transactions that exceed a certain limit—it contextualises behaviour. ... If the transaction is genuinely out of place, AI doesn’t immediately block it but escalates it for real-time review. This ability to detect anomalies with context is what makes AI so much more effective than rigid compliance rules. ... One of the biggest pain points in compliance today is false positives, transactions wrongly flagged as suspicious. Imagine a business that expands into a new market and suddenly sees a surge in inbound transactions. Without AI, this might result in an account freeze. But even AI-powered systems aren’t perfect. A name match in a sanctions list, for instance, doesn’t necessarily mean the customer is a fraudster. If John Doe from Mumbai is mistakenly flagged as Jon Doe from New York, who was implicated in a financial crime, a manual review is still necessary. ... AI isn’t here to replace compliance teams, it’s here to empower them. Instead of manually reviewing thousands of transactions, compliance officers can focus on high-risk cases while AI handles routine screening. What does the future look like? Faster, real-time transaction approvals – AI will further reduce manual interventions, making cross-border payments almost instantaneous.
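
The John Doe / Jon Doe example shows why screening needs fuzzy rather than exact matching. A minimal illustration with Python's standard library (real sanctions screening uses far richer matching with phonetics, transliteration, and alias lists; the 0.85 threshold here is an arbitrary assumption):

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Rough string similarity in [0, 1] using the stdlib sequence matcher."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

score = name_similarity("John Doe", "Jon Doe")
print(round(score, 2))  # close to 1.0, so the pair gets flagged
if score > 0.85:
    print("escalate to human reviewer")  # AI flags; a person decides
```

This is exactly the division of labor the excerpt describes: the automated check surfaces the near-match, and the manual review decides whether it is the same person.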


DiRMA: Measuring How Your Organization Manages Chaos

DiRT is a structured approach to stress-testing systems by intentionally triggering controlled failures. Originally pioneered in large-scale technology infrastructures, DiRT helps organizations proactively identify weaknesses and refine their recovery strategies. Unlike traditional disaster recovery methods, which rely on theoretical scenarios, DiRT forces teams to confront real operational disruptions in a controlled manner, ensuring that failure responses are both effective and repeatable. The methodology consists of performing a coordinated and organized set of events, in which a group of engineers plan and execute real and fictitious outages for a defined period to test the effective response of the involved teams ... DiRMA is inspired by the program DiRT, created in 2006 by Google to inject failures in critical systems, business processes and people dynamics to expose reliability risks and provide preemptive mitigations. Since some organizations have already started their journey toward the creation of environments for DiRT, in which they can launch failures, determine their level of resilience and test their incident response processes, it is essential to have frameworks, like CE Maturity Assessments, to evaluate the effectiveness, in this case, of a program like DiRT.
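
The core DiRT move, deliberately injecting a controlled failure and verifying that the fallback path actually works, can be shown in a few lines. This is a toy in-process sketch, not the API of any real chaos-engineering tool; the function names and cached fallback are invented for illustration:

```python
import random

def with_fault_injection(fn, failure_rate, rng=None):
    """Wrap a dependency call so it fails on purpose some fraction of the time."""
    rng = rng or random.Random()
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected failure (disaster test)")
        return fn(*args, **kwargs)
    return wrapper

def fetch_profile(user_id):
    return {"id": user_id, "name": "Ada"}

# failure_rate=1.0 forces the outage, as a planned DiRT exercise would.
flaky_fetch = with_fault_injection(fetch_profile, failure_rate=1.0)
try:
    result = flaky_fetch(42)
except ConnectionError:
    result = {"id": 42, "name": "<cached>"}  # the fallback path under test
print(result)
```

Running exercises like this on a schedule, with real teams responding, is what turns a one-off failure drill into the kind of program DiRMA is designed to assess.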


The RACI matrix: Your blueprint for project success

The golden rule of a RACI matrix is clarity of accountability. Because of this, as mentioned previously, only one person can be accountable for a given project. In many projects, the concept of responsibility and accountability can get conflated or confused, especially when those responsible for the project’s completion are empowered with broad decision-making capabilities. The chief difference between R (responsible) and A (accountable) roles is that, while those deemed responsible may be given latitude for decision-making when completing the work involved in a task or project, only one person truly owns and signs off on the work. ... RASCI is another type of responsibility assignment matrix used in project management. It retains the four core roles of RACI — Responsible, Accountable, Consulted, and Informed — but adds a fifth: Supportive. The Supportive role in a RASCI chart is responsible for providing assistance to those in the Responsible role. This may involve providing additional resources, expertise, or advice to help the Responsible party complete a particular task. Organizations that choose RASCI often do so to ensure that personnel who may not have direct responsibility or accountability but are nevertheless vital to the success of an activity or project are considered a notable facet (and cost) of the project. 
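
The "exactly one Accountable per task" rule is easy to check mechanically. A small sketch (task and people names are hypothetical) that flags violating rows of a RACI chart:

```python
def validate_raci(matrix):
    """Return the tasks that break the golden rule: exactly one 'A' per task.

    `matrix` maps task -> {person: role}, roles drawn from R/A/S/C/I.
    """
    violations = []
    for task, assignments in matrix.items():
        accountable = [p for p, role in assignments.items() if role == "A"]
        if len(accountable) != 1:
            violations.append(task)
    return violations

raci = {
    "Design API":   {"Priya": "A", "Sam": "R", "Lee": "C"},
    "Write docs":   {"Sam": "R", "Lee": "I"},            # nobody accountable
    "Ship release": {"Priya": "A", "Sam": "A"},          # two accountables
}
print(validate_raci(raci))  # ['Write docs', 'Ship release']
```

The same check extends naturally to RASCI by allowing "S" as an additional role value without changing the accountability rule.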


How to create an effective crisis communication plan

Planning crisis communication involves many practical aspects. These include, for example, identifying the room in which live crisis management meetings can take place and how online meetings will be conducted. In the event of a cyber crisis, it must always be taken into account that communication tools such as email, chat, landline, or IP telephony may not be available. It must also be expected that the IT network will be inaccessible or will have to be shut down for security reasons. Therefore, all prepared documents and contact lists of the crisis team must be accessible even without access to the internal IT network. ... Crucial to effective external communications is that the media and social network users receive information from a single source. Therefore, it must be clarified that only designated corporate communications employees with experience in public relations will provide statements to the media. All departments must be informed of their media contact details. Press relations during a crisis are generally conducted in multiple stages. Immediately upon the outbreak of a crisis, a prepared statement must be made available and issued on request. This statement may not contain details about the incident itself, but must express a willingness to engage in open communication.


Tapping into the Unstructured Data Goldmine for Enterprise in 2025

With so much structured data on hand, companies may believe unstructured data doesn’t add value, which couldn’t be farther from the truth. In fact, unstructured data can provide deeper insights and put companies ahead of the competition. However, before that happens, organizations must get a handle on all of the data they have on hand. While the majority of unstructured data is digital, some businesses have a large number of paper records that haven’t yet been digitized. By using a combination of software and document scanners, hard copies can be scanned and integrated with unstructured data. This may seem like too much of an investment from a time and resource perspective, and a heavy lift for humans alone; however, AI can fundamentally change how companies leverage unstructured data, enabling organizations to extract valuable insights and drive decision-making through human/machine collaboration. ... There’s no doubt that effectively managing unstructured data is critical to a successful and holistic data management program, but managing it can be complex, overwhelming, resource-intensive and difficult to analyze because it doesn’t fit neatly into traditional databases. Unlike structured data, which can easily be turned into business intelligence, unstructured data often requires significant processing before it can provide actionable insights.


Advances in Data Lakehouses

Recent advancements in data lakehouse architecture have significantly enhanced data management and quality through innovations like Delta Lake, ACID transactions, and metadata management. Delta Lake acts as a storage layer on top of existing cloud storage systems, introducing robust features such as ACID transactions that ensure data integrity and reliability. This enables consistent read and write operations, reducing the risk of data corruption and making it easier for organizations to maintain reliable datasets. Additionally, Delta Lake supports schema enforcement and evolution, allowing for more flexible data handling while maintaining structural integrity. Metadata management in a data lakehouse context provides a comprehensive way to manage data assets, enabling efficient data discovery and governance. ... In the rapidly evolving landscape of data management, improving query performance and enhancing SQL compatibility are crucial for modern data stacks, especially within the framework of data lakehouses. Data lakehouses combine the best of data lakes and data warehouses, providing both the scalability of lakes for raw data storage and the structured, efficient querying capabilities of warehouses. A primary focus in this area is optimizing query engines to handle diverse workloads efficiently.
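
Schema enforcement, in spirit, means writes that don't match the table's declared schema are rejected rather than silently corrupting downstream data. A toy Python sketch of that guarantee follows; the real feature lives in Delta Lake's storage layer, not application code, and the field names here are invented:

```python
SCHEMA = {"order_id": int, "amount": float, "currency": str}

def enforce_schema(record, schema=SCHEMA):
    """Reject records whose fields or types don't match the declared schema."""
    if set(record) != set(schema):
        raise ValueError(f"unexpected fields: {set(record) ^ set(schema)}")
    for field, expected in schema.items():
        if not isinstance(record[field], expected):
            raise ValueError(f"{field}: expected {expected.__name__}")
    return record

good = enforce_schema({"order_id": 7, "amount": 19.99, "currency": "EUR"})
try:
    enforce_schema({"order_id": "7", "amount": 19.99, "currency": "EUR"})
except ValueError as e:
    rejected = str(e)
print(rejected)  # order_id: expected int
```

Schema *evolution* is the complementary capability: a controlled way to widen the schema (for example, adding a nullable column) instead of rejecting the write.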


Self-Healing Data Pipelines: The Next Big Thing in Data Engineering?

The idea of a self-healing pipeline is simple: When errors occur during data processing, the pipeline should automatically detect, analyze, and correct them without human intervention. Traditionally, fixing these issues requires manual intervention, which is time-consuming and prone to errors. There are several ways to implement this, but using AI agents is the most promising and forward-looking approach for data engineers to self-heal failed pipelines and auto-correct them dynamically. In this article, I will show a basic implementation of how to use LLMs like the GPT-4/DeepSeek R1 model to self-heal data pipelines by using the LLM’s recommendations on failed records and applying the fix through the pipeline while it is still running. The provided solution can be scaled to large data pipelines and extended with more functionality using the proposed method. ... To ensure resilience, we implement a retry mechanism using tenacity. The function sends error details to GPT and retrieves suggested fixes. In our case, the 'functions' list was created and passed to the JSON payload using the ChatCompletion request. Note that the 'functions' list enumerates the Python functions we have created in our pipeline code that are available to fix known or anticipated issues.
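
Stripped of the actual model call, the control flow looks like this: try the record, map the error to a registered fix function (standing in for the model's function-calling choice), apply it, and retry. A self-contained sketch with a stubbed "model"; the fix registry, error patterns, and record fields are all illustrative:

```python
def strip_currency(value):
    # One registered fix, analogous to an entry in the article's 'functions' list.
    return value.replace("$", "").replace(",", "")

FIXES = {"could not convert": strip_currency}

def suggest_fix(error_message):
    # Stand-in for the GPT/DeepSeek call: pick a registered fix by error text.
    for pattern, fix in FIXES.items():
        if pattern in error_message:
            return fix
    return None

def process(record):
    return {"amount": float(record["amount"])}

def run_with_self_healing(record, max_retries=2):
    """Process a record, asking the (stubbed) model for a fix on each failure."""
    for _ in range(max_retries + 1):
        try:
            return process(record)
        except ValueError as err:
            fix = suggest_fix(str(err))
            if fix is None:
                raise  # no known fix: surface the error to a human
            record = {**record, "amount": fix(record["amount"])}
    raise RuntimeError("retries exhausted")

print(run_with_self_healing({"amount": "$1,299.00"}))  # {'amount': 1299.0}
```

In the article's setup, `suggest_fix` is replaced by an actual ChatCompletion request (with tenacity handling transient API failures), but the retry-apply-reprocess loop is the same.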


Android financial threats: What businesses need to know to protect themselves and their customers

Research has revealed an alarming trend around Android-targeted financial threats. Attackers are leveraging Progressive Web Apps (PWAs) and Web Android Package Kits (WebAPKs) to create malicious applications that can bypass traditional app store vetting processes and security warnings. The mechanics of these attacks are sophisticated yet deceptively simple. Victims are typically lured in through phishing campaigns that exploit various communication channels, including SMS, automated calls, and social media advertisements.  ... Educating customers is a vital step. Businesses can empower customers by highlighting their own security efforts, like two-factor authentication and secure transactions. By making security part of their brand identity and providing supportive resources, small and mid-size businesses can create a safe, confident experience for their customers. Strengthening internal security measures is equally important though. Small businesses should consider implementing mobile threat detection solutions capable of identifying and neutralizing malicious PWAs and WebAPKs. Additional measures include collaborating with financial partners, sharing intelligence on emerging threats and developing coordinated incident response plans to address attacks quickly and effectively.

Daily Tech Digest - February 01, 2025


Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Laundry


5 reasons the enterprise data center will never die

Cloud repatriation — enterprises pulling applications back from the cloud to the data center — remains a popular option for a variety of reasons. According to a June 2024 IDC survey, about 80% of 2,250 IT decision-maker respondents “expected to see some level of repatriation of compute and storage resources in the next 12 months.” IDC adds that the six-month period between September 2023 and March 2024 saw increased levels of repatriation plans “across both compute and storage resources for AI lifecycle, business apps, infrastructure, and database workloads.” ... According to Forrester’s 2023 Infrastructure Cloud Survey, 79% of roughly 1,300 enterprise cloud decision-makers said their firms are implementing internal private clouds, which will use virtualization and private cloud management. Nearly a third (31%) of respondents said they are building internal private clouds using hybrid cloud management solutions such as software-defined storage and API-consistent hardware to make the private cloud more like the public cloud, Forrester adds. ... “Edge is a crucial technology infrastructure that extends and innovates on the capabilities found in core datacenters, whether enterprise- or service-provider-oriented,” says IDC. The rise of edge computing shatters the binary “cloud-or-not-cloud” way of thinking about data centers and ushers in an “everything everywhere all at once” distributed model.


How to Understand and Manage Cloud Costs with a Data-Driven Strategy

Understanding your cloud spend starts with getting serious about data. If your cloud usage grew organically across teams over time, you're probably staring at a bill that feels more like a puzzle than a clear financial picture. You know you're paying too much, and you have an idea of where the spending is happening across compute, storage, and networking, but you are not sure which teams are overspending, which applications are being overprovisioned, and so on. Multicloud environments add even another layer of complexity to data visibility. ... With a holistic view of your data established, the next step is augmenting tools to gain a deeper understanding of your spending and application performance. To achieve this, consider employing a surgical approach by implementing specialized cost management and performance monitoring tools that target specific areas of your IT infrastructure. For example, granular financial analytics can help you identify and eliminate unnecessary expenses with precision. Real-time visibility tools provide immediate insights into cost anomalies and performance issues, allowing for prompt corrective actions. Governance features ensure that spending aligns with budgetary constraints and compliance requirements, while integration capabilities with existing systems facilitate seamless data consolidation and analysis across different platforms. 


Top cybersecurity priorities for CFOs

CFOs need to be aware of the rising threats of cyber extortion, says Charles Soranno, a managing director at global consulting firm Protiviti. “Cyber extortion is a form of cybercrime where attackers compromise an organization’s systems, data or networks and demand a ransom to return to normal and prevent further damage,” he says. Beyond a ransomware attack, where data is encrypted and held hostage until the ransom is paid, cyber extortion can involve other evolving threats and tactics, Soranno says. “CFOs are increasingly concerned about how these cyber extortion schemes impact lost revenue, regulatory fines [and] potential payments to bad actors,” he says. ... “In collaboration with other organizational leaders, CFOs must assess the risks posed by these external partners to identify vulnerabilities and implement a proactive mitigation and response plan to safeguard from potential threats and issues.” While a deep knowledge of the entire supply chain’s cybersecurity posture might seem like a luxury for some organizations, the increasing interconnectedness of partner relationships is making third-party cybersecurity risk profiles more of a necessity, Krull says. “The reliance on third-party vendors and cloud services has grown exponentially, increasing the potential for supply chain attacks,” says Dan Lohrmann, field CISO at digital services provider Presidio. 


GDPR authorities accused of ‘inactivity’

The idea that the GDPR has brought about a shift towards a serious approach to data protection has largely proven to be wishful thinking, according to a statement from noyb. “European data protection authorities have all the necessary means to adequately sanction GDPR violations and issue fines that would prevent similar violations in the future,” Schrems says. “Instead, they frequently drag out the negotiations for years — only to decide against the complainant’s interests all too often.” ... “Somehow it’s only data protection authorities that can’t be motivated to actually enforce the law they’re entrusted with,” criticizes Schrems. “In every other area, breaches of the law regularly result in monetary fines and sanctions.” Data protection authorities often act in the interests of companies rather than the data subjects, the activist suspects. It is precisely fines that motivate companies to comply with the law, reports the association, citing its own survey. Two-thirds of respondents stated that decisions by the data protection authority that affect their own company and involve a fine lead to greater compliance. Six out of ten respondents also admitted that even fines imposed on other organizations have an impact on their own company. 


The three tech tools that will take the heat off HR teams in 2025

As for the employee review process, a content services platform enables HR employees to customise processes, routing approvals to the right managers, department heads, and people ops. This means that employee review processes can be expedited thanks to customisable forms, with easier goal setting, identification of upskilling opportunities, and career progression. When paperwork and contracts are uniform, customisable, and easily located, employers are equipped to support their talent to progress as quickly as possible – nurturing more fulfilled employees who want to stick around. ... Naturally, a lot of HR work is form-heavy, with anything from employee onboarding and promotions to progress reviews and remote working requests requiring HR input. However, with a content services platform, HR professionals can route and approve forms quickly, speeding up the process with digital forms that allow employees to enter information quickly and accurately. Going one step further, HR leaders can leverage automated workflows to route forms to approvers as soon as an employee completes them – cutting out the HR intermediary. ... Armed with a single source of truth, HR professionals can take advantage of automated workflows, enabling efficient notifications and streamlining HR compliance processes.
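The routing described above can be sketched in a few lines. This is a hypothetical illustration, not any particular platform's API: the form types and approver roles are assumptions chosen for the example.

```python
# Hypothetical sketch of automated form routing: once an employee submits a
# form, it goes straight to the mapped approver, with no HR intermediary.
ROUTING_RULES = {
    "promotion": "department_head",
    "remote_working_request": "line_manager",
    "progress_review": "people_ops",
}

def route_form(form: dict) -> dict:
    """Attach the approver a completed form should be sent to."""
    approver = ROUTING_RULES.get(form["type"], "hr_inbox")  # unknown forms fall back to HR
    return {**form, "routed_to": approver, "status": "awaiting_approval"}

submitted = {"type": "promotion", "employee": "A. Jones"}
print(route_form(submitted)["routed_to"])  # department_head
```

In a real content services platform the mapping would live in configuration rather than code, but the principle is the same: the rule set, not a person, decides where each completed form goes next.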


AI Could Turn Against You — Unless You Fix Your Data Trust Issues

Without unified standards for data formats, definitions, and validations, organizations struggle to establish centralized control. Legacy systems, often ill-equipped to handle modern data volumes, further exacerbate the problem. These systems were designed for periodic updates rather than the continuous, real-time streams demanded by AI, leading to inefficiencies and scalability limitations. To address these challenges, organizations must implement centralized governance, quality, and observability within a single framework. This enables them to use data lineage to track data as it moves through systems, ensuring transparency and surfacing issues in real time, and to run real-time quality checks that validate data integrity in support of consistent, reliable AI models. ... For organizations to maximize the potential of AI, they must embed data trust into their daily operations. This involves using automated systems like data observability to validate data integrity throughout its lifecycle, integrated governance to maintain reliability, and continuous validation within evolving data ecosystems. By addressing data quality challenges and investing in unified platforms, organizations can transform data trust into a strategic advantage. 


Backdoor in Chinese-made healthcare monitoring device leaks patient data

“By reviewing the firmware code, the team determined that the functionality is very unlikely to be an alternative update mechanism, exhibiting highly unusual characteristics that do not support the implementation of a traditional update feature,” CISA said in its analysis report. “For example, the function provides neither an integrity checking mechanism nor version tracking of updates. When the function is executed, files on the device are forcibly overwritten, preventing the end customer — such as a hospital — from maintaining awareness of what software is running on the device.” In addition to this hidden remote code execution behavior, CISA also found that once the CMS8000 completes its startup routine, it also connects to that same IP address over port 515, which is normally associated with the Line Printer Daemon (LPD), and starts transmitting patient information without the device owner’s knowledge. “The research team created a simulated network, created a fake patient profile, and connected a blood pressure cuff, SpO2 monitor, and ECG monitor peripherals to the patient monitor,” the agency said. “Upon startup, the patient monitor successfully connected to the simulated IP address and immediately began streaming patient data to the address.”
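One practical takeaway from the CISA findings is that defenders can watch for exactly this traffic pattern: a medical device opening port 515 (normally the Line Printer Daemon) toward an outside host. The sketch below is a hedged illustration of such a check; the flow-log format, device subnet, and destination address are assumptions, not details from the CISA report.

```python
# Hedged sketch: flag outbound connections to port 515 (LPD) from a monitored
# device subnet toward hosts outside it. The flow-record format and the
# 10.20.x.x subnet are assumptions for illustration.
def suspicious_lpd_flows(flows: list[dict], device_subnet: str = "10.20.") -> list[dict]:
    """Return flows where a monitored device opens port 515 to an outside host."""
    return [
        f for f in flows
        if f["src"].startswith(device_subnet)
        and f["dst_port"] == 515
        and not f["dst"].startswith(device_subnet)
    ]

flows = [
    {"src": "10.20.0.7", "dst": "203.0.113.5", "dst_port": 515},  # suspect: LPD to external host
    {"src": "10.20.0.7", "dst": "10.20.0.1", "dst_port": 443},    # normal internal traffic
]
print(suspicious_lpd_flows(flows))
```

A real deployment would match on proper CIDR ranges and feed alerts into a SIEM, but even this crude filter would have surfaced the CMS8000's behavior on a hospital network.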


3 Considerations for Mutual TLS (mTLS) in Cloud Security

Traditional security approaches often rely on IP whitelisting as a primary method of access control. While this technique can provide a basic level of security, IP whitelists operate on a fundamentally flawed assumption: that IP addresses alone can accurately represent trusted entities. In reality, this approach fails to effectively model real-world attack scenarios. IP whitelisting provides no mechanism for verifying the integrity or authenticity of the connecting service. It merely grants access based on network location, ignoring crucial aspects of identity and behavior. In contrast, mTLS addresses these shortcomings by focusing on cryptographic identity rather than network location. ... In the realm of mTLS, identity is paramount. It's not just about encrypting data in transit; it's about ensuring that both parties in a communication are exactly who they claim to be. This concept of identity in mTLS warrants careful consideration. In a traditional network, identity might be tied to an IP address or a shared secret. But, in the modern world of cloud-native applications, these concepts fall short. mTLS shifts the mindset by basing identity on cryptographic certificates. Each service possesses its own unique certificate, which serves as its identity card.
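The certificate-as-identity idea can be seen concretely in how an mTLS endpoint is configured. The sketch below uses Python's standard `ssl` module; the certificate file paths are placeholders (shown commented out) rather than real files, so only the context configuration runs.

```python
# Sketch of mutual TLS configuration with Python's standard ssl module: the
# server demands a client certificate, so BOTH sides must prove a
# cryptographic identity. Certificate paths are placeholder assumptions.
import ssl

server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED     # reject clients that present no certificate
# server_ctx.load_cert_chain("server.crt", "server.key")   # server's own identity
# server_ctx.load_verify_locations("clients-ca.crt")       # CA that signs client certs

client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)       # verifies the server by default
# client_ctx.load_cert_chain("client.crt", "client.key")   # client's own identity
# client_ctx.load_verify_locations("server-ca.crt")        # CA that signs the server cert

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The contrast with IP whitelisting is visible in the code itself: nothing here mentions a network address. Trust flows from the CA-signed certificates each side presents, regardless of where the connection originates.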


Artificial Intelligence Versus the Data Engineer

There is a misconception that AI can prepare data for AI. In reality, while AI can accelerate the process, data engineers are still needed to get that data into shape before it reaches the AI processes and models that produce the impressive end results. At the same time, there are AI tools that can certainly accelerate and scale the data engineering work. So AI is both causing and solving the challenge in some respects! So, how does AI change the role of the data engineer? Firstly, the role of the data engineer has always been tricky to define. We sit atop a large pile of technology, most of which we didn’t choose or build, and an even larger pile of data we didn’t create, and we have to make sense of the world. Ostensibly, we are trying to get to something scientific. ... That art comes in the form of the intuition required to sift through the data, understand the technology, and rediscover all the little real-world nuances and history that over time have turned some lovely clean data into a messy representation of the real world. The real skill great data engineers have is therefore not the SQL ability but how they apply it to the data in front of them to sniff out the anomalies, the quality issues, the missing bits and those historical mishaps that must be navigated to get to some semblance of accuracy.
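The "sniffing out" described above often means spotting values that are technically valid but historically suspect. The sketch below is an illustrative example, not a standard technique: the sentinel values and epoch-date check are assumptions about the kind of mishaps a data engineer learns to recognize.

```python
# Illustrative sketch of anomaly sniffing: flag values that pass type checks
# but look like historical mishaps. The sentinel set and epoch-date heuristic
# are assumptions chosen for this example.
SENTINELS = {-999, -1, 9999}     # classic "unknown" placeholders from old systems
EPOCH_DATE = "1970-01-01"        # a default that often signals a failed timestamp parse

def flag_anomalies(rows: list[dict]) -> list[int]:
    """Return indices of rows that look like quality problems, not real data."""
    flagged = []
    for i, row in enumerate(rows):
        if row["amount"] in SENTINELS or row["date"] == EPOCH_DATE:
            flagged.append(i)
    return flagged

rows = [
    {"amount": 120, "date": "2026-02-01"},
    {"amount": -999, "date": "2026-02-02"},   # sentinel, not a real refund
    {"amount": 80, "date": "1970-01-01"},     # unparsed timestamp defaulted to epoch
]
print(flag_anomalies(rows))  # [1, 2]
```

Writing the query is the easy part; knowing that -999 once meant "unknown" in a retired upstream system is the accumulated intuition the article argues AI cannot simply replace.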


How engineering teams can thrive in 2025

Adopting a "fail forward" mentality is crucial as teams experiment with AI and other emerging technologies. Engineering teams are embracing controlled experimentation and rapid iteration, learning from failures and building knowledge. ... Top engineering teams will combine emerging technologies with new ways of working. They're not just adopting AI; they're rethinking how software is developed and maintained as a result of it. Teams will need to stay agile to lead the way. Collaboration within the business and access to a multidisciplinary talent base is the recipe for success. Engineering teams should proactively scenario plan to manage uncertainty by adopting agile frameworks like the "5Ws" (Who, What, When, Where, and Why). This approach allows organizations to tailor tech adoption strategies and marry regulatory compliance with innovation. Engineering teams should also actively address AI bias and ensure fair and responsible AI deployment. Many enterprises are hiring responsible AI specialists and ethicists as regulatory standards are now in force, including the EU AI Act, which impacts organizations with users in the European Union. As AI improves, the expertise and technical skills that proved valuable before need to be continually reevaluated. Organizations that successfully adopt AI and emerging tech will thrive.