
Daily Tech Digest - January 18, 2026


Quote for the day:

"Surround yourself with great people; delegate authority; get out of the way" -- Ronald Reagan



Data sovereignty: an existential issue for nations and enterprises

Law-making bodies have in recent years sought to regulate data flows to strengthen their citizens’ rights – for example, the EU bolstering individual citizens’ privacy through the General Data Protection Regulation (GDPR). This kind of legislation has redefined companies’ scope for storing and processing personal data. By raising the compliance bar, such measures are already reshaping C-level investment decisions around cloud strategy, AI adoption and third-party access to their corporate data. ... Faced with dynamic data sovereignty risks, enterprises have three main approaches ahead of them: First, they can take an intentional risk assessment approach. They can define a data strategy addressing urgent priorities, determining what data should go where and how it should be managed - based on key metrics such as data sensitivity, the nature of personal data, downstream impacts, and the potential for identification. Such a forward-looking approach will, however, require a clear vision and detailed planning. Alternatively, the enterprise could be more reactive and detach entirely from its non-domestic public cloud service providers. This is riskier, given the likely loss of access to innovation and, worse, the financial fallout that could undermine their pursuit of key business objectives. Lastly, leaders may choose to do nothing and hope that none of these risks directly affects them. This is the highest-risk option, leaving no protection from potentially devastating financial and reputational consequences of an ineffective data sovereignty strategy.


Verification Debt: When Generative AI Speeds Change Faster Than Proof

Software delivery has always lived with an imbalance. It is easier to change a system than to demonstrate that the change is safe under real workloads, real dependencies, and real failure modes. ... The risk is not that teams become careless. The risk is that what looks correct on the surface becomes abundant while evidence remains scarce. ... A useful name for what accumulates in the mismatch is verification debt. It is the gap between what you released and what you have demonstrated, with evidence gathered under conditions that resemble production, to be safe and resilient. Technical debt is a bet about future cost of change. Verification debt is unknown risk you are running right now. Here, verification does not mean theorem proving. It means evidence from tests, staged rollouts, security checks, and live production signals that is strong enough to block a release or trigger a rollback. It is uncertainty about runtime behavior under realistic conditions, not code cleanliness, not maintainability, and not simply missing unit tests. If you want to spot verification debt without inventing new dashboards, look at proxies you may already track. ... AI can help with parts of verification. It can suggest tests, propose edge cases, and summarize logs. It can raise verification capacity. But it cannot conjure missing intent, and it cannot replace the need to exercise the system and treat the resulting evidence as strong enough to change the release decision. Review is helpful. Review is evidence of readability and intent.


Executive-level CISO titles surge amid rising scope strain

Executive-level CISOs were more likely to report outside IT than peers with VP or director titles, according to the findings. The report frames this as part of a broader shift in how organisations place accountability for cyber risk and oversight. The findings arrive as boards and senior executives assess cyber exposure alongside other enterprise risks. The report links these expectations to the need for security leaders to engage across legal, risk, operations and other functions. ... Smaller organisations and industries with leaner security teams showed the highest levels of strain, the report says. It adds that CISOs warn these imbalances can delay strategic initiatives and push teams towards reactive security operations. The report positions this issue as a management challenge as well as a governance question. It links scope creep with wider accountability and higher expectations on security leaders, even where budgets and staffing remain constrained. ... Recruiters and employers have watched turnover trends closely as demand for senior security leadership has remained high across many sectors. The report suggests that title, scope and reporting structure form part of how CISOs evaluate roles. ... "The demand for experienced CISOs remains strong as the role continues to become more complex and more 'executive'," said Martano. "Understanding how organizations define scope, reporting structure, and leadership access and visibility is critical for CISOs planning their next move and for companies looking to hire or retain security leaders."


What’s in, and what’s out: Data management in 2026 has a new attitude

Data governance is no longer a bolt-on exercise. Platforms like Unity Catalog, Snowflake Horizon and AWS Glue Catalog are building governance into the foundation itself. This shift is driven by the realization that external governance layers add friction and rarely deliver reliable end-to-end coverage. The new pattern is native automation. Data quality checks, anomaly alerts and usage monitoring run continuously in the background. ... Companies want pipelines that maintain themselves. They want fewer moving parts and fewer late-night failures caused by an overlooked script. Some organizations are even bypassing pipes altogether. Zero ETL patterns replicate data from operational systems to analytical environments instantly, eliminating the fragility that comes with nightly batch jobs. ... Traditional enterprise warehouses cannot handle unstructured data at scale and cannot deliver the real-time capabilities needed for AI. Yet the opposite extreme has failed too. The highly fragmented Modern Data Stack scattered responsibilities across too many small tools. It created governance chaos and slowed down AI readiness. Even the rigid interpretation of Data Mesh has faded. ... The idea of humans reviewing data manually is no longer realistic. Reactive cleanup costs too much and delivers too little. Passive catalogs that serve as wikis are declining. Active metadata systems that monitor data continuously are now essential.


How Algorithmic Systems Automate Inequality

The deployment of predictive analytics in public administration is usually justified by the twin pillars of austerity and accuracy. Governments and private entities argue that automated decision-making systems reduce administrative bloat while eliminating the subjectivity of human caseworkers. ... This dynamic is clearest in the digitization of the welfare state. When agencies turn to machine learning to detect fraud, they rarely begin with a blank slate, training their models on historical enforcement data. Because low-income and minority populations have historically been subject to higher rates of surveillance and policing, these datasets are saturated with selection bias. The algorithm, lacking sociopolitical context, interprets this over-representation as an objective indicator of risk, identifying correlation and deploying it as causality. ... Algorithmic discrimination, however, is diffuse and difficult to contest. A rejected job applicant or a flagged welfare recipient rarely has access to the proprietary score that disqualified them, let alone the training data or the weighting variable—they face a black box that offers a decision without a rationale. This opacity makes it nearly impossible for an individual to challenge the outcome, effectively insulating the deploying organisation from accountability. ... Algorithmic systems do not observe the world directly; they inherit their view of reality from datasets shaped by prior policy choices and enforcement practices. To assess such systems responsibly requires scrutiny of the provenance of the data on which decisions are built and the assumptions encoded in the variables selected.


DevSecOps for MLOps: Securing the Full Machine Learning Lifecycle

The term "MLSecOps" sounds like consultant-speak. I was skeptical too. But after auditing ML pipelines at eleven companies over the past eighteen months, I've concluded we need the term because we need the concept — extending DevSecOps practices across the full machine learning lifecycle in ways that account for ML-specific threats. The Cloud Security Alliance's framework is useful here. Securing ML systems means protecting "the confidentiality, integrity, availability, and traceability of data, software, and models." That last word — traceability — is where most teams fail catastrophically. In traditional software, you can trace a deployed binary back to source code, commit hash, build pipeline, and even the engineer who approved the merge. ... Securing ML data pipelines requires adopting practices that feel tedious until the day they save you. I'm talking about data validation frameworks, dataset versioning, anomaly detection at ingestion, and schema enforcement like your business depends on it — because it does. Last September, I worked with an e-commerce company deploying a recommendation model. Their data pipeline pulled from fifteen different sources — user behavior logs, inventory databases, third-party demographic data. Zero validation beyond basic type checking. We implemented Great Expectations — an open-source data validation framework — as a mandatory CI check. 


Autonomous Supply Chains: Catalyst for Building Cyber-Resilience

Autonomous supply chains are becoming essential for building resilience amid rising global disruptions. Enabled by a strong digital core, agentic architecture, AI and advanced data-driven intelligence, together with IoT and robotics, they facilitate operations that continuously learn, adapt and optimize across the value chain. ... Conventional thinking suggests that greater autonomy widens the attack surface and diminishes human oversight, turning it into a security liability. However, if designed with cyber resilience at its core, an autonomous supply chain can act like a “digital immune system,” becoming one of the most powerful enablers of security. ... As AI operations and autonomous supply chains scale, the traditional perimeter simply won’t work. Organizations must adopt a Zero Trust security model to eliminate implicit trust at every access point. A Zero Trust model, centered on AI-driven identity and access management, ensures continuous authentication, network micro-segmentation and controlled access across users, devices and partners. By enforcing “never trust, always verify,” organizations can minimize breach impact and prevent attackers from moving freely across systems, maintaining control even in highly automated environments. ... Autonomy in the supply chain thrives on data sharing and connectivity across suppliers, carriers, manufacturers, warehouses and retailers, making end-to-end visibility and governance vital for both efficiency and security.
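
As a rough sketch of the "never trust, always verify" evaluation the excerpt describes, the snippet below checks identity verification, device posture, and a micro-segmentation policy on every request; the attributes, segment names, and policy rules are invented for illustration, not drawn from any particular product.

```python
# Sketch of "never trust, always verify": every request is evaluated against
# identity, device posture, and segment policy - no implicit network trust.
# The attributes and policy rules here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool      # e.g. patched, disk-encrypted, MDM-enrolled
    mfa_verified: bool
    source_segment: str
    target_segment: str

# Micro-segmentation policy: which segment-to-segment flows are allowed at all
ALLOWED_FLOWS = {("warehouse-apps", "inventory-db"), ("partner-portal", "order-api")}

def authorize(req: Request) -> bool:
    """Grant access only if every check passes on this specific request."""
    return (
        req.mfa_verified
        and req.device_compliant
        and (req.source_segment, req.target_segment) in ALLOWED_FLOWS
    )

print(authorize(Request("carrier-42", True, True, "partner-portal", "order-api")))   # True
print(authorize(Request("carrier-42", True, False, "partner-portal", "order-api")))  # False: no MFA
```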


When enterprise edge cases become core architecture

What matters most is not the presence of any single technology, but the requirements that come with it. Data that once lived in separate systems now must be consistent and trusted. Mobile devices are no longer occasional access points but everyday gateways. Hiring workflows introduce identity and access considerations sooner than many teams planned for. As those realities stack up, decisions that once arrived late in projects are moving closer to the start. Architecture and governance stop being cleanup work and start becoming prerequisites. ... AI is no longer layered onto finished systems. Mobile is no longer treated as an edge. Hiring is no longer insulated from broader governance and security models. Each of these shifts forces organizations to think earlier about data, access, ownership and interoperability than they are used to doing. What has changed is not just ambition, but feasibility. AI can now work across dozens of disparate systems in ways that were previously unrealistic. Long-standing integration challenges are no longer theoretical problems. They are increasingly actionable -- and increasingly unavoidable. ... As a result, integration, identity and governance can no longer sit quietly in the background. These decisions shape whether AI initiatives move beyond experimentation, whether access paths remain defensible and whether risk stays contained or spreads. Organizations that already have a clear view of their data, workflows and access models will find it easier to adapt. 


Why New Enterprise Architecture Must Be Built From Steel, Not Straw

Architecture must reflect future ambition. Ideally, architects build systems with a clear view of where the product and business are heading. When a system architecture is built for the present situation, it’s likely lacking in flexibility and scalability. That said, sound strategic decisions should be informed by well-attested or well-reasoned trends, not just present needs and aspirations. ... Tech leaders should avoid overcommitting to unproven ideas—i.e., not get "caught up" in the hype. Safe experimentation frameworks (from hypothesis to conclusion) reduce risk by carefully applying best practices to testing out approaches. In a business context, with something as important as the technology foundation the organization runs on, do not let anyone mischaracterize this as timidity. Critical failure is a career-limiting move, and potentially an organizational catastrophe. ... The art lies in designing systems that can absorb future shifts without constant rework. That comes from aligning technical decisions not only with what the company is today, but also with what it intends to become. Future-ready architecture isn’t the comparatively steady and predictable discipline it was before AI-enabled software features. As a consequence, there’s wisdom in staying directional, rather than architecting for the next five years. Align technical decisions with long-term vision, but build with optionality wherever possible.


Why Engineering Culture Is Everything: Building Teams That Actually Work

The culture is something that is a fact and it's also something intrinsic to human beings. We're people, we have a background. We were raised in one part of the world versus another. We have the way that we talk and things that we care about. All those things influence your team indirectly and directly. It's really important for you as a leader to be aware of that. As an engineer, I use a lot of metaphors from monitoring and observability. We always talk about known knowns, known unknowns, and unknown unknowns. Those are really important to understand on a systems level, period, because your sociotechnical system is also a system. The people that you work with, the way you work, your organization, it's a system. And you have to be aware of the metrics you need to track and the things that are threats to it, the good old strengths, weaknesses, opportunities, and threats. ... What we can learn from other industries is their lessons. Again, we are now in yet another industrial revolution. This time it's more of a knowledge revolution. We can learn from civil engineering like, okay, when the brick was invented, that was a revolution. When the brick was invented, what did people do in order to make sure that bricks matter? That's a fascinating and very curious story about the Freemasons. People forget the Freemasons were a culture about making sure that these construction techniques, even more than the technologies, the techniques, were up to standards.

Daily Tech Digest - April 27, 2025


Quote for the day:

“Most new jobs won’t come from our biggest employers. They will come from our smallest. We’ve got to do everything we can to make entrepreneurial dreams a reality.” -- Ross Perot



7 key strategies for MLops success

Like many things in life, in order to successfully integrate AI and ML into business operations and manage them, organisations first need to have a clear understanding of the foundations. The first fundamental of MLops today is understanding the differences between generative AI models and traditional ML models. Cost is another major differentiator. The calculations of generative AI models are more complex, resulting in higher latency, greater demand for compute power, and higher operational expenses. Traditional models, on the other hand, often utilise pre-trained architectures or lightweight training processes, making them more affordable for many organisations. ... Creating scalable and efficient MLops architectures requires careful attention to components like embeddings, prompts, and vector stores. Fine-tuning models for specific languages, geographies, or use cases ensures tailored performance. An MLops architecture that supports fine-tuning is more complicated, and organisations should prioritise A/B testing across various building blocks to optimise outcomes and refine their solutions. Aligning model outcomes with business objectives is essential. Metrics like customer satisfaction and click-through rates can measure real-world impact, helping organisations understand whether their models are delivering meaningful results.
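
To make the A/B testing point concrete, here is a minimal sketch of comparing two model variants on click-through rate; the variant names, traffic split, outcome counts, and the use of a two-proportion z-test are illustrative assumptions, not part of the original article.

```python
# Illustrative A/B comparison of two model variants on click-through rate.
# Variant names, the 50/50 split, the sample counts, and the two-proportion
# z-test are assumptions for the sketch, not a prescribed MLops setup.
import hashlib
import math

def assign_variant(user_id: str) -> str:
    """Deterministically split traffic 50/50 by hashing the user id."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2
    return "finetuned" if bucket == 0 else "baseline"

def ctr_z_score(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test on the click-through rates of variants A and B."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Hypothetical outcome counts collected from the serving layer
z = ctr_z_score(clicks_a=1_240, views_a=20_000, clicks_b=1_100, views_b=20_000)
print(f"z = {z:.2f}  ->  significant at ~95% if |z| > 1.96")
```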


If we want a passwordless future, let's get our passkey story straight

When passkeys work, which is not always the case, they can offer a nearly automagical experience compared to the typical user ID and password workflow. Some passkey proponents like to say that passkeys will be the death of passwords. More realistically, however, at least for the next decade, they'll mean the death of some passwords – perhaps many passwords. We'll see. Even so, the idea of killing passwords is a very worthy objective. ... With passkeys, the device that the end user is using – for example, their desktop computer or smartphone – is the one that's responsible for generating the public/private key pair as part of an initial passkey registration process. After doing so, it shares the public key – the one that isn't a secret – with the website or app that the user wants to log in to. The private key – the secret – is never shared with that relying party. This is where the tech article above has it backward. It's not "the site" that "spits out two pieces of code," saving one on the server and the other on your device. ... Passkeys have a long way to go before they realize their potential. Some of the current implementations are so alarmingly bad that it could delay their adoption. But adoption of passkeys is exactly what's needed to finally curtail a decades-long crime spree that has plagued the internet.
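
The registration flow the author describes (keypair generated on the user's device, only the public key sent to the relying party) can be sketched with a generic signature scheme. This is a conceptual illustration using an Ed25519 keypair from the `cryptography` library, not the actual WebAuthn/FIDO2 protocol messages.

```python
# Conceptual sketch of the passkey idea: the device creates the keypair and
# only the public key ever leaves it. This is NOT the WebAuthn/FIDO2 wire
# protocol - just a generic Ed25519 signing example to show the split.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives import serialization

# 1. Registration: the device generates the pair; the private key stays on it.
private_key = Ed25519PrivateKey.generate()
public_key_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
# Only public_key_bytes is sent to the relying party (the website or app).

# 2. Login: the relying party sends a random challenge; the device signs it.
challenge = b"relying-party-random-challenge"
signature = private_key.sign(challenge)

# 3. The relying party verifies with the stored public key - no secret shared.
Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(signature, challenge)
print("challenge signature verified")
```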



AI: More Buzzword Than Breakthrough

While Artificial Intelligence focuses on creating systems that simulate human intelligence, Intelligent Automation leverages these AI capabilities to automate end-to-end business processes. In essence, AI is the brain that provides cognitive functions, while Intelligent Automation is the body that executes tasks using AI’s intelligence. This distinction is critical; although Artificial Intelligence is a component of Intelligent Automation, not all AI applications result in automation, and not all automation requires advanced Artificial Intelligence. ... Intelligent Automation automates and optimizes business processes by combining AI with automation tools. This integration results in increased efficiency and reduced operating costs. For instance, Intelligent Automation can streamline supply chain operations by automating inventory management, order fulfillment, and logistics, resulting in faster turnaround times and fewer errors. ... In recent years, the term “AI” has been widely used as a marketing buzzword, often applied to technologies that do not have true AI capabilities. This phenomenon, sometimes referred to as “AI washing,” involves branding traditional automation or data processing systems as AI in order to capitalize on the term’s popularity. Such practices can mislead consumers and businesses, leading to inflated expectations and potential disillusionment with the technology.


Introduction to API Management

API gateways are pivotal in managing both traffic and security for APIs. They act as the frontline interface between APIs and the users, handling incoming requests and directing them to the appropriate services. API gateways enforce policies such as rate limiting and authentication, ensuring secure and controlled access to API functions. Furthermore, they can transform and route requests, collect analytics data and provide caching capabilities. ... With API governance, businesses get the most out of their investment. The purpose of API governance is to make sure that APIs are standardized so that they are complete, compliant and consistent. Effective API governance enables organizations to identify and mitigate API-related risks, including performance concerns, compliance issues and security vulnerabilities. API governance is complex and involves security, technology, compliance, utilization, monitoring, performance and education. Organizations can make their APIs secure, efficient, compliant and valuable to users by following best practices in these areas. ... Security is paramount in API management. Advanced security features include authentication mechanisms like OAuth, API keys and JWT (JSON Web Tokens) to control access. Encryption, both in transit and at rest, ensures data integrity and confidentiality.
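
Two of the gateway duties named above, verifying a bearer token and enforcing a rate limit, can be shown in a compact sketch using PyJWT and an in-memory sliding window; the secret, limits, and claim names are placeholders, and a real gateway would back the counters with a shared store rather than process memory.

```python
# Minimal sketch of two API-gateway duties: JWT verification and per-client
# rate limiting. Secret, limits, and claim names are placeholders; a real
# gateway would use a shared store (e.g. Redis), not process memory.
import time
import jwt  # PyJWT

SECRET = "replace-me"            # placeholder signing secret
RATE_LIMIT = 100                 # requests allowed per window
WINDOW_SECONDS = 60
_request_log: dict[str, list[float]] = {}

def authenticate(token: str) -> dict:
    """Reject the request unless the bearer token is a valid, unexpired JWT."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])  # raises on failure

def allow_request(client_id: str) -> bool:
    """Sliding-window rate limit: at most RATE_LIMIT calls per WINDOW_SECONDS."""
    now = time.time()
    window = [t for t in _request_log.get(client_id, []) if now - t < WINDOW_SECONDS]
    if len(window) >= RATE_LIMIT:
        return False
    window.append(now)
    _request_log[client_id] = window
    return True

# Example: the gateway admits a request only if both checks pass
token = jwt.encode({"sub": "client-42", "exp": time.time() + 300}, SECRET, algorithm="HS256")
claims = authenticate(token)
print("authenticated:", claims["sub"], "allowed:", allow_request(claims["sub"]))
```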


Sustainability starts within: Flipkart & Furlenco on building a climate-conscious culture

Based on the insights from Flipkart and Furlenco, here are six actionable steps for leaders seeking to embed climate goals into their company culture: Lead with intent: Make climate goals a strategic priority, not just a CSR initiative. Signal top-level commitment and allocate leadership roles accordingly. Operationalise sustainability: Move beyond policies into process design — from green supply chains to net-zero buildings and water reuse systems. Make it measurable: Integrate climate-related KPIs into team goals, performance reviews, and business dashboards. Empower employees: Create space for staff to lead climate initiatives, volunteer, learn, and innovate. Build purpose into daily roles. Foster dialogue and storytelling: Share wins, losses, and journeys. Use Earth Day campaigns, internal newsletters, and learning modules to bring sustainability to life. Measure culture, not just carbon: Assess how employees feel about their role in climate action — through surveys, pulse checks, and feedback loops. ... Beyond the company walls, this cultural approach to climate leadership has ripple effects. Customers are increasingly drawn to brands with strong environmental values, investors are rewarding companies with robust ESG cultures, and regulators are moving from voluntary frameworks to mandatory disclosures.


Proof-of-concept bypass shows weakness in Linux security tools

An Israeli vendor was able to evade several leading Linux runtime security tools using a new proof-of-concept (PoC) rootkit that it claims reveals the limitations of many products in this space. The work of cloud and Kubernetes security company Armo, the PoC is called ‘Curing’, a portmanteau word that combines the idea of a ‘cure’ with the io_uring Linux kernel interface that the company used in its bypass PoC. Using Curing, Armo found it was possible to evade three Linux security tools to varying degrees: Falco (created by Sysdig but now a Cloud Native Computing Foundation graduated project), Tetragon from Isovalent (now part of Cisco), and Microsoft Defender. ... Armo said it was motivated to create the rootkit to draw attention to two issues. The first was that, despite the io_uring technique being well documented for at least two years, vendors in the Linux security space had yet to react to the danger. The second purpose was to draw attention to deeper architectural challenges in the design of the Linux security tools that large numbers of customers rely on to protect themselves: “We wanted to highlight the lack of proper attention in designing monitoring solutions that are forward-compatible. Specifically, these solutions should be compatible with new features in the Linux kernel and address new techniques,” said Schendel.


Insider threats could increase amid a chaotic cybersecurity environment

Most organisations have security plans and policies in place to decrease the potential for insider threats. No policy will guarantee immunity to data breaches and IT asset theft but CISOs can make sure their policies are being executed through routine oversight and audits. Best practices include access control and least privilege, which ensures employees, contractors and all internal users only have access to the data and systems necessary for their specific roles. Regular employee training and awareness programmes are also critical. Training sessions are an effective means to educate employees on security best practices such as how to recognise phishing attempts, social engineering attacks and the risks associated with sharing sensitive information. Employees should be trained in how to report suspicious activities – and there should be a defined process for managing these reports. Beyond the security controls noted above, those that govern the IT asset chain of custody are crucial to mitigating the fallout of a breach should assets be stolen by employees, former employees or third parties. The IT asset chain of custody refers to the process that tracks and documents the physical possession, handling and movement of IT assets throughout their lifecycle. A sound programme ensures that there is a clear, auditable trail of who has access to and controls the asset at any given time. 


Distributed Cloud Computing: Enhancing Privacy with AI-Driven Solutions

AI has the potential to play a game-changing role in distributed cloud computing and PETs. By enabling intelligent decision-making and automation, AI algorithms can help us optimize data processing workflows, detect anomalies, and predict potential security threats. AI has been instrumental in helping us identify patterns and trends in complex data sets. We're excited to see how it will continue to evolve in the context of distributed cloud computing. For instance, homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. This means that AI models can process and analyze encrypted data without accessing the underlying sensitive information. Similarly, AI can be used to implement differential privacy, a technique that adds noise to the data to protect individual records while still allowing for aggregate analysis. In anomaly detection, AI can identify unusual patterns or outliers in data without requiring direct access to individual records, ensuring that sensitive information remains protected. While AI offers powerful capabilities within distributed cloud environments, the core value proposition of integrating PETs remains in the direct advantages they provide for data collaboration, security, and compliance. Let's delve deeper into these key benefits, challenges and limitations of PETs in distributed cloud computing.
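
The differential-privacy idea mentioned here, adding calibrated noise so aggregates remain useful while individual records are protected, can be shown in a few lines via the Laplace mechanism; the epsilon value and the toy records are illustrative, not a recommended configuration.

```python
# Toy illustration of differential privacy via the Laplace mechanism: noise
# calibrated to sensitivity/epsilon is added to an aggregate query, so the
# released count stays useful while masking any single record. Epsilon and
# the data below are illustrative, not a recommended configuration.
import numpy as np

def dp_count(values, predicate, epsilon=0.5):
    """Release a count with Laplace noise; the sensitivity of a count query is 1."""
    true_count = sum(predicate(v) for v in values)
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 61, 38, 44]          # toy "sensitive" records
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
print(f"noisy count of records with age >= 40: {noisy:.1f}")
```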


Mobile Applications: A Cesspool of Security Issues

"What people don't realize is you ship your entire mobile app and all your code to this public store where any attacker can download it and reverse it," Hoog says. "That's vastly different than how you develop a Web app or an API, which sit behind a WAF and a firewall and servers." Mobile platforms are difficult for security researchers to analyze, Hoog says. One problem is that developers rely too much on the scanning conducted by Apple and Google on their app stores. When a developer loads an application, either company will conduct specific scans to detect policy violations and to make malicious code more difficult to upload to the repositories. However, developers often believe the scanning is looking for security issues, but it should not be considered a security control, Hoog says. "Everybody thinks Apple and Google have tested the apps — they have not," he says. "They're testing apps for compliance with their rules. They're looking for malicious malware and just egregious things. They are not testing your application or the apps that you use in the way that people think." ... In addition, security issues on mobile devices tend to have a much shorter lifetime, because of the closed ecosystems and the relative rarity of jailbreaking. When NowSecure finds a problem, there is no guarantee that it will last beyond the next iOS or Android update, he says.


The future of testing in compliance-heavy industries

In today’s fast-evolving technology landscape, being an engineering leader in compliance-heavy industries can be a struggle. Managing risks and ensuring data integrity are paramount, but the dangers are constant when working with large data sources and systems. Traditional integration testing within the context of stringent regulatory requirements is more challenging to manage at scale. This leads to gaps, such as insufficient test coverage across interconnected systems, a lack of visibility into data flows, inadequate logging, and missed edge case conditions, particularly in third-party interactions. Due to these weaknesses, security vulnerabilities can pop up and incident response can be delayed, ultimately exposing organizations to violations and operational risk. ... API contract testing is a modern approach used to validate the expectations between different systems, making sure that any changes in APIs don’t break expectations or contracts. Changes might include removing or renaming a field and altering data types or response structures. These seemingly small updates can cause downstream systems to crash or behave incorrectly if they are not properly communicated or validated ahead of time. ... The shifting left practice has a lesser-known cousin: shifting right. Shifting right focuses on post-deployment validation using concepts such as observability and real-time monitoring techniques.
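
The contract-testing idea, pinning down the fields and types a consumer depends on so that a provider change which removes or retypes them fails fast, can be sketched without any particular framework; the endpoint response and schema below are made up for illustration.

```python
# Framework-agnostic sketch of an API contract test: the consumer pins the
# fields and types it depends on, and the test fails if the provider's
# response drops or retypes any of them. Endpoint and schema are hypothetical.
EXPECTED_CONTRACT = {          # field name -> expected Python type
    "order_id": str,
    "amount_cents": int,
    "currency": str,
    "created_at": str,
}

def check_contract(response_json: dict, contract: dict) -> list[str]:
    """Return a list of contract violations (missing or retyped fields)."""
    violations = []
    for field, expected_type in contract.items():
        if field not in response_json:
            violations.append(f"missing field: {field}")
        elif not isinstance(response_json[field], expected_type):
            violations.append(f"{field}: expected {expected_type.__name__}, "
                              f"got {type(response_json[field]).__name__}")
    return violations

def test_order_response_matches_contract():
    # In a real suite this response would come from the provider's test server.
    response = {"order_id": "A-1001", "amount_cents": 4599, "currency": "EUR",
                "created_at": "2025-04-01T12:00:00Z"}
    assert check_contract(response, EXPECTED_CONTRACT) == []
```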

Daily Tech Digest - April 23, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


MLOps vs. DevOps: Key Differences — and Why They Work Better Together

Arguably, the greatest difference between DevOps and MLOps is that DevOps is, by most definitions, an abstract philosophy, whereas MLOps comes closer to prescribing a distinct set of practices. Ultimately, the point of DevOps is to encourage software developers to collaborate more closely with IT operations teams, based on the idea that software delivery processes are smoother when both groups work toward shared goals. In contrast, collaboration is not a major focus for MLOps. You could argue that MLOps implies that some types of collaboration between different stakeholders — such as data scientists, AI model developers, and model testers — need to be part of MLOps workflows. ... Another key difference is that DevOps centers solely on software development. MLOps is also partly about software development to the extent that model development entails writing software. However, MLOps also addresses other processes — like model design and post-deployment management — that don't overlap closely with DevOps as traditionally defined. ... Differing areas of focus lead to different skill requirements for DevOps versus MLOps. To thrive at DevOps, you must master DevOps tools and concepts like CI/CD and infrastructure-as-code (IaC).


Transforming quality engineering with AI

AI-enabled quality engineering promises to be a game changer, driving a level of precision and efficiency that is beyond the reach of traditional testing. AI algorithms can analyse historical data to identify patterns and predict quality issues, enabling organisations to take early action; machine learning tools detect anomalies with great accuracy, ensuring nothing is missed. Self-healing test scripts update automatically, without manual intervention. Machine Learning models automate test selection, picking the most relevant ones, while reducing both manual effort and errors. In addition, AI can prioritise test cases based on criticality, thus optimising resources and improving testing outcomes. Further, it can integrate with CI/CD pipelines, providing real-time feedback on code quality, and distributing updates automatically to ensure software applications are always ready for deployment. ... AI brings immense value to quality engineering, but also presents a few challenges. To function effectively, algorithms require high-quality datasets, which may not always be available. Organisations will likely need to invest significant resources in acquiring AI talent or building skills in-house. There needs to be a clear plan for integrating AI with existing testing tools and processes. Finally, there are concerns such as protecting data privacy and confidentiality, and implementing Responsible AI.
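
The point about prioritising test cases by criticality can be illustrated with a simple risk score, here combining historical failure rate with how recently the covered code changed; the weights and sample data are invented for the example and stand in for what an ML model would learn from richer signals.

```python
# Toy risk-based test prioritisation: rank tests by a score that combines
# historical failure rate with how recently the code they cover changed.
# The weights and sample data are invented for illustration only.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_rate: float            # fraction of recent runs that failed
    days_since_code_change: int

def risk_score(t: TestCase, w_fail=0.7, w_recency=0.3) -> float:
    recency = 1.0 / (1 + t.days_since_code_change)   # newer changes -> higher risk
    return w_fail * t.failure_rate + w_recency * recency

suite = [
    TestCase("checkout_flow", failure_rate=0.12, days_since_code_change=1),
    TestCase("login_flow", failure_rate=0.02, days_since_code_change=30),
    TestCase("search_ranking", failure_rate=0.08, days_since_code_change=3),
]
for t in sorted(suite, key=risk_score, reverse=True):
    print(f"{t.name:15s} score={risk_score(t):.3f}")
```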


The Role of AI in Global Governance

Aurora drew parallels with transformative technologies such as electricity and the internet. "If AI reaches some communities late, it sets them far behind," he said. He pointed to Indian initiatives such as Bhashini for language inclusion, e-Sanjeevani for telehealth, Karya for employment through AI annotation and farmer.ai in Baramati, which boosted farmers' incomes by 30% to 40%. Schnorr offered a European perspective, stressing that AI's transformative impact on economies and societies demands trustworthiness. Reflecting on the EU's AI Act, he said its dual aim is fostering innovation while protecting rights. "We're reviewing the Act to ensure it doesn't hinder innovation," Schnorr said, advocating for global alignment through frameworks such as the G7's Hiroshima Code of Conduct and bilateral dialogues with India. He underscored the need for rules to make AI human-centric and accessible, particularly for small and medium enterprises, which form the backbone of both German and Indian economies. ... Singh elaborated on India's push for indigenous AI models. "Funding compute is critical, as training models is resource-intensive. We have the talent and datasets," he said, citing India's second-place ranking in GitHub AI projects per the Stanford AI Index. "Building a foundation model isn't rocket science - it's about providing the right ingredients."


Cisco ThousandEyes: resilient networks start with global insight

To tackle the challenges that arise from (common or uncommon) misconfigurations and other network problems, we need an end-to-end topology, Vaccaro reiterates. ThousandEyes (and Cisco as a whole) have recently put a lot of extra work into this. We saw a good example of this recently during Mobile World Congress. There, ThousandEyes announced Connected Devices. This is intended for service providers and extends their insight into the performance of their customers’ networks in their home environments. The goal, as Vaccaro describes it, is to help service providers see deeper so that they can catch an outage or other disruption quickly, before it impacts customers who might be streaming their favorite show or getting on a work call. ... The Digital Operational Resilience Act (DORA) will be no news to readers who are active in the financial world. You can see DORA as a kind of advanced NIS2, only directly enforced by the EU. It is a collection of best practices that many financial institutions must adhere to. Most of it is fairly obvious. In fact, we would call it basic hygiene when it comes to resilience. However, one component under DORA will have caused financial institutions some stress and will continue to do so: they must now adhere to new expectations when it comes to the services they provide and the resilience of their third-party ICT dependencies.


A Five-Step Operational Maturity Model for Benchmarking Your Team

An operational maturity model is your blueprint for building digital excellence. It gives you the power to benchmark where you are, spot the gaps holding you back and build a roadmap to where you need to be. ... Achieving operational maturity starts with knowing where you are and defining where you want to go. From there, organizations should focus on four core areas: Stop letting silos slow you down. Unify data across tools and teams to enable faster incident resolution and improve collaboration. Integrated platforms and a shared data view reduce context switching and support informed decision-making. Because in today’s fast-moving landscape, fragmented visibility isn’t just inefficient — it’s dangerous. ... Standardize what matters. Automate what repeats. Give your teams clear operational frameworks so they can focus on innovation instead of navigation. Eliminate alert noise and operational clutter that’s holding your teams back. Less noise, more impact. ... Deploy automation and AI across the incident lifecycle, from diagnostics to communication. Prioritize tools that integrate well and reduce manual tasks, freeing teams for higher-value work. ... Use data and automation to minimize disruptions and deliver seamless experiences. Communicate proactively during incidents and apply learnings to prevent future issues.


The Future is Coded: How AI is Rewriting the Rules of Decision Theaters

At the heart of this shift is the blending of generative AI with strategic foresight practices. In the past, planning for the future involved static models and expert intuition. Now, AI models (including advanced neural networks) can churn through reams of historical data and real-time information to project trends and outcomes with uncanny accuracy. Crucially, these AI-powered projections don’t operate in a vacuum – they’re designed to work with human experts. By integrating AI’s pattern recognition and speed with human intuition and domain expertise, organizations create a powerful feedback loop. ... The fusion of generative AI and foresight isn’t confined to tech companies or futurists’ labs – it’s already reshaping industries. For instance, in finance, banks and investment firms are deploying AI to synthesize market signals and predict economic trends with greater accuracy than traditional econometric models. These AI systems can simulate how different strategies might play out under various future market conditions, allowing policymakers in central banks or finance ministries to test interventions before committing to them. The result is a more data-driven, preemptive strategy – allowing decision-makers to adjust course before a forecasted risk materializes. 


More accurate coding: Researchers adapt Sequential Monte Carlo for AI-generated code

The researchers noted that AI-generated code can be powerful, but it can also often lead to code that disregards the semantic rules of programming languages. Other methods to prevent this can distort models or are too time-consuming. Their method makes the LLM adhere to programming language rules by discarding, early in the process, code outputs that are unlikely to work, and allocating “efforts towards outputs that are most likely to be valid and accurate.” ... The researchers developed an architecture that brings SMC to code generation “under diverse syntactic and semantic constraints.” “Unlike many previous frameworks for constrained decoding, our algorithm can integrate constraints that cannot be incrementally evaluated over the entire token vocabulary, as well as constraints that can only be evaluated at irregular intervals during generation,” the researchers said in the paper. Key features of adapting SMC sampling to model generation include a proposal distribution, where token-by-token sampling is guided by cheap constraints; importance weights, which correct for biases; and resampling, which reallocates compute effort towards partial generations. ... AI models have made engineers and other coders work faster and more efficiently. It’s also given rise to a whole new kind of software engineer: the vibe coder.
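
The three ingredients listed above, a cheap proposal that guides token-by-token sampling, importance weights that correct its bias, and resampling that refocuses compute on promising partial generations, map onto the standard sequential Monte Carlo loop. The sketch below shows that generic loop on a toy scoring function; it is not the researchers' actual system, and the proposal and weight functions are placeholders.

```python
# Generic sequential Monte Carlo loop showing the three ingredients named in
# the summary: a cheap proposal, importance weights, and resampling. The
# propose/weight functions are toy placeholders, not the authors' system.
import random

def smc_generate(propose, weight, steps=10, n_particles=8):
    """propose(prefix) -> next token; weight(prefix) -> non-negative score."""
    particles = [[] for _ in range(n_particles)]
    for _ in range(steps):
        # 1. Extend each partial generation with the cheap proposal
        particles = [p + [propose(p)] for p in particles]
        # 2. Importance weights correct for the proposal's bias
        weights = [weight(p) for p in particles]
        total = sum(weights)
        if total == 0:
            break
        probs = [w / total for w in weights]
        # 3. Resample: reallocate compute toward promising partial outputs
        particles = random.choices(particles, weights=probs, k=n_particles)
    return max(particles, key=weight)

# Toy example: "valid" sequences are those whose running sum stays near zero.
best = smc_generate(
    propose=lambda p: random.choice([-1, 1]),
    weight=lambda p: 1.0 / (1 + abs(sum(p))),
)
print("best particle:", best)
```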


You Can't Be in Recovery Mode All the Time — Superna CEO

The proactive approach, he explains, shifts their position in the security lifecycle: "Now we're not responding with a very tiny blast radius and instantly recovering. We are officially left-of-the-boom; we are now ‘the incident never occurred.’" Next, Hesterberg reveals that the next wave of innovation focuses on leveraging the unique visibility his company has in terms of how critical data is accessed. “We have a keen understanding of where your critical data is and what users, what servers, and what services access that data.” From a scanning, patching, and upgrade standpoint, Hesterberg shares that large organizations often face the daunting task of addressing hundreds or even thousands of systems flagged for vulnerabilities daily. To help streamline this process, he says that his team is working on a new capability that integrates with the tools these enterprises already depend on. This upcoming feature will surface, in a prioritized way, the specific servers or services that interact with an organization's most critical data, highlighting the assets that matter most. By narrowing down the list, Hesterberg notes, teams can focus on the most potentially dangerous exposures first. Instead of trying to patch everything, he says, “If you know the 15, 20, or 50 that are most dangerous, potentially most dangerous, you're going to prioritize them.” 


When confusion becomes a weapon: How cybercriminals exploit economic turmoil

Defending against these threats doesn’t start with buying more tools. It starts with building a resilient mindset. In a crisis, security can’t be an afterthought – it must be a guiding principle. Organizations relying on informal workflows or inconsistent verification processes are unknowingly widening their attack surface. To stay ahead, protocols must be defined before uncertainty takes hold. Employees should be trained not just to spot technical anomalies, but to recognize emotional triggers embedded in legitimate-looking messages. Resilience, at its core, is about readiness. Not just to respond, but also to anticipate. Organizations that view economic disruption as a dual threat, both financial and cyber, will position themselves to lead with control rather than react in chaos. This means establishing behavioral baselines, implementing layered authentication, and adopting systems that validate, not just facilitate. As we navigate continued economic uncertainty, we are reminded once again that cybersecurity is no longer just about technology. It’s about psychology, communication, and foresight. Defending effectively means thinking tactically, staying adaptive, and treating clarity as a strategic asset.


The productivity revolution – enhancing efficiency in the workplace

In difficult economic times, when businesses are tightening the purse strings, productivity improvements may often be overlooked in favour of cost reductions. However, cutting costs is merely a short-term solution. By focusing on sustainable productivity gains, businesses will reap dividends in the long term. To achieve this, organisations must turn their focus to technology. Some technology solutions, such as cloud computing, ERP systems, project management and collaboration tools, produce significant flexibility or performance advantages compared to legacy approaches and processes. Whilst an initial expense, the long-term benefits are often multiples of the investment – cost reductions, time savings, employee motivation, to name just a few. And all of those technology categories are being enhanced with artificial intelligence – for example adding virtual agents to help us do more, quickly. ... At a time when businesses and labour markets are struggling with employee retention and availability, it has become more critical than ever for organisations to focus on effective training and wellbeing initiatives. Minimising staff turnover and building up internal skill sets is vital for businesses looking to improve their key outputs. Getting this right will enable organisations to build smarter and more effective productivity strategies.


Daily Tech Digest - January 06, 2025

Should States Ban Mandatory Human Microchip Implants?

“U.S. states are increasingly enacting legislation to pre-emptively ban employers from forcing workers to be ‘microchipped,’ which entails having a subdermal chip surgically inserted between one’s thumb and index finger," wrote the authors of the report. "Internationally, more than 50,000 people have elected to receive microchip implants to serve as their swipe keys, credit cards, and means to instantaneously share social media information. This technology is especially popular in Sweden, where chip implants are more widely accepted to use for gym access, e-tickets on transit systems, and to store emergency contact information.” ... “California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision," Singularity Hub wrote. "In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.” That same piece quotes Alan Mardinly, who is director of biology at Science Corporation, as saying that the advantage of a biohybrid implant is that it "can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain."


AI revolution drives demand for specialized chips, reshaping global markets

There’s now a shift toward smaller AI models that only use internal corporate data, allowing for more secure and customizable genAI applications and AI agents. At the same time, Edge AI is taking hold, because it allows AI processing to happen on devices (including PCs, smartphones, vehicles and IoT devices), reducing reliance on cloud infrastructure and spurring demand for efficient, low-power chips. “The challenge is if you’re going to bring AI to the masses, you’re going to have to change the way you architect your solution; I think this is where Nvidia will be challenged because you can’t use a big, complex GPU to address endpoints,” said Mario Morales, a group vice president at research firm IDC. “So, there’s going to be an opportunity for new companies to come in — companies like Qualcomm, ST Micro, Renesas, Ambarella and all these companies that have a lot of the technology, but now it’ll be about how to use it. ... Enterprises and other organizations are also shifting their focus from single AI models to multimodal AI, or LLMs capable of processing and integrating multiple types of data or “modalities,” such as text, images, audio, video, and sensory input. The input from diverse resources creates a more comprehensive understanding of that data and enhances performance across tasks.


How to Address an Overlooked Aspect of Identity Security: Non-human Identities

Compromised identities and credentials are the No. 1 tactic for cyber threat actors and ransomware campaigns to break into organizational networks and spread and move laterally. Identity is the most vulnerable element in an organization’s attack surface because there is a significant misperception around what identity infrastructure (IDP, Okta, and other IT solutions) and identity security providers (PAM, MFA, etc.) can protect. Each solution only protects the silo that it is set up to secure, not an organization’s complete identity landscape, including human and non-human identities (NHIs), privileged and non-privileged users, on-prem and cloud environments, IT and OT infrastructure, and many other areas that go unmanaged and unprotected. ... Most organizations use a combination of on-prem management tools, a mix of one or more cloud identity providers (IdPs), and a handful of identity solutions (PAM, IGA) to secure identities. But each tool operates in a silo, leaving gaps and blind spots that open the door to more attacks. Eight out of 10 organizations cannot prevent the misuse of service accounts in real time due to visibility and security being sporadic or missing. NHIs fly under the radar as security and identity teams sometimes don’t even know they exist.


Version Control in Agile: Best Practices for Teams

With multiple developers working on different features, fixes, or updates simultaneously, it’s easy for code to overlap or conflict without clear guidelines. Having a structured branching approach prevents confusion and minimizes the risk of one developer’s work interfering with another’s. ... One of the cornerstones of good version control is making small, frequent commits. In Agile development, progress happens in iterations, and version control should follow that same mindset. Large, infrequent commits can cause headaches when it’s time to merge, increasing the chances of conflicts and making it harder to pinpoint the source of issues. Small, regular commits, on the other hand, make it easier to track changes, test new functionality, and resolve conflicts early before they grow into bigger problems. ... An organized repository is crucial to maintaining productivity. Over time, it’s easy for the repository to become cluttered with outdated branches, unnecessary files, or poorly named commits. This clutter slows down development, making it harder for team members to navigate and find what they need. Teams should regularly review their repositories and remove unused branches or files that are no longer relevant. 


Abusing MLOps platforms to compromise ML models and enterprise data lakes

Machine learning operations (MLOps) is the practice of deploying and maintaining ML models in a secure, efficient and reliable way. The goal of MLOps is to provide a consistent and automated process to be able to rapidly get an ML model into production for use by ML technologies. ... There are several well-known attacks that can be performed against the MLOps lifecycle to affect the confidentiality, integrity and availability of ML models and associated data. However, performing these attacks against an MLOps platform using stolen credentials has not been covered in public security research. ... Data poisoning: This attack involves an attacker having access to the raw data being used in the “Design” phase of the MLOps lifecycle to include attacker-provided data or being able to directly modify a training dataset. The goal of a data poisoning attack is to be able to influence the data that is being trained in an ML model and eventually deployed to production. ... Model extraction attacks involve the ability of an attacker to steal a trained ML model that is deployed in production. An attacker could use a stolen model to extract sensitive training data such as the training weights used, or to use the predictive capabilities used in the model for their own financial gain. 
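
As a toy illustration of the data-poisoning idea described above, the sketch below shows an attacker with write access to the training set flipping a small fraction of labels to bias what the model eventually learns; the dataset, labels, and flip rate are synthetic and purely illustrative.

```python
# Toy illustration of a label-flipping data-poisoning attack: an attacker with
# write access to the training set flips a fraction of labels (e.g. fraud ->
# ok) to bias the trained model. The data and flip rate are synthetic.
import random

def poison_labels(dataset, target_label, new_label, flip_fraction=0.05, seed=0):
    """Return a copy of the dataset with a fraction of target labels flipped."""
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if label == target_label and rng.random() < flip_fraction:
            label = new_label
        poisoned.append((features, label))
    return poisoned

# Synthetic training set: one feature, labelled "fraud" when the value is high
random.seed(1)
clean = []
for _ in range(1000):
    x = random.random()
    clean.append(([x], "fraud" if x > 0.8 else "ok"))

poisoned = poison_labels(clean, target_label="fraud", new_label="ok", flip_fraction=0.25)
flipped = sum(1 for c, p in zip(clean, poisoned) if c[1] != p[1])
total_fraud = sum(1 for _, label in clean if label == "fraud")
print(f"labels flipped: {flipped} of {total_fraud} fraud examples")
```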


Get Going With GitOps

GitOps implementations have a significant impact on infrastructure automation by providing a standardized, repeatable process for managing infrastructure as code, Rose says. The approach allows faster, more reliable deployments and simplifies the maintenance of infrastructure consistency across diverse environments, from development to production. "By treating infrastructure configurations as versioned artifacts in Git, GitOps brings the same level of control and automation to infrastructure that developers have enjoyed with application code." ... GitOps' primary benefit is its ability to enable peer review for configuration changes, Peele says. "It fosters collaboration and improves the quality of application deployment." He adds that it also empowers developers -- even those without prior operations experience -- to control application deployment, making the process more efficient and streamlined. Another benefit is GitOps' ability to allow teams to push minimum viable changes more easily, thanks to faster and more frequent deployments, says Siri Varma Vegiraju, a Microsoft software engineer. "Using this strategy allows teams to deploy multiple times a day and quickly revert changes if issues arise," he explains via email. 


Balancing proprietary and open-source tools in cyber threat research

First, it is important to assess the requirements of an organization by identifying the capabilities needed, such as threat intelligence platforms or malware analysis tools. Next, evaluate open-source tools, which can be cost-effective and customizable but may require community support and frequent updates. In contrast, proprietary tools could offer advanced features, dedicated support, and better integration with other products. Finally, think about scalability and flexibility, as future growth may necessitate scalable solutions. ... The technology is not magic, but it is a powerful tool to speed up processes and bolster security procedures while also reducing the gap between advanced and junior analysts. However, as of today, the technology still requires verification and validation. Globally, security experts with a dual skill set in security and AI will be in high demand. Because the adoption of generative AI systems is increasing, we need people who understand these technologies, because threat actors are also learning. ... If a CISO needs to evaluate the effectiveness of these tools, they first need to understand their needs and pain points and then seek guidance from experts. Adopting generative AI security solutions just because it is the latest trend is not the right approach.


Get your IT infrastructure AI-ready

Artificial intelligence adoption is a challenge many CIOs grapple with as they look to the future. Before jumping in, their teams must possess practical knowledge, skills, and resources to implement AI effectively. ... AI implementation is costly and the training of AI models requires a substantial investment. "To realize the potential, you have to pay attention to what it's going to take to get it done, how much it's going to cost, and make sure you're getting a benefit," Ramaswami said. "And then you have to go get it done." GenAI has rapidly transformed from an experimental technology to an essential business tool, with adoption rates more than doubling in 2024, according to a recent study by AI at Wharton ... According to Donahue, IT teams are exploring three key elements: choosing language models, leveraging AI from cloud services, and building a hybrid multicloud operating model to get the best of on-premise and public cloud services. "We're finding that very, very, very few people will build their own language model," he said. "That's because building a language model in-house is like building a car in the garage out of spare parts." Companies look to cloud-based language models, but must scrutinize security and governance capabilities while controlling cost over time. 


What is an EPMO? Your organization’s strategy navigator

The key is to ensure the entire strategy lifecycle is set up for success rather than endlessly iterating to perfect strategy execution. Without properly defining, governing, and prioritizing initiatives upfront, even the best delivery teams will struggle to achieve business goals in a way that drives the right return for the organization’s investment. For most organizations, there’s more than one gap preventing desired results. ... The EPMO’s job is to strip away unnecessary complexity and create frameworks that empower teams to deliver faster, more effectively, and with greater focus. PMO leaders should ask how this process helps to hit business goals faster. So by eliminating redundant meetings and scaling governance to match project size and risk, delivery timelines can shorten. This kind of targeted adjustment keeps momentum high without sacrificing quality or control. ... For an EPMO to be effective, ideally it needs to report directly to the C-suite. This matters because proximity equals influence. When the EPMO has visibility at the top, it can drive alignment across departments, break down silos, drive accountability, and ensure initiatives stay connected to overall business objectives, serving as the strategy navigator for the C-suite.


Data Center Hardware in 2025: What’s Changing and Why It Matters

DPUs can handle tasks like network traffic management, which would otherwise fall to CPUs. In this way, DPUs reduce the load placed on CPUs, ultimately making greater computing capacity available to applications. DPUs have been around for several years, but they’ve become particularly important as a way of boosting the performance of resource-hungry workloads, like AI training, by complementing AI accelerators. This is why I think DPUs are about to have their moment. ... Recent events have underscored the risk of security threats linked to physical hardware devices. And while I doubt anyone is currently plotting to blow up data centers by placing secret bombs inside servers, I do suspect there are threat actors out there vying to do things like plant malicious firmware on servers as a way of creating backdoors that they can use to hack into data centers. For this reason, I think we’ll see an increased focus in 2025 on validating the origins of data center hardware and ensuring that no unauthorized parties had access to equipment during the manufacturing and shipping processes. Traditional security controls will remain important, too, but I’m betting on hardware security becoming a more intense area of concern in the year ahead.



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - October 23, 2024

What Is Quantum Networking, and What Might It Mean for Data Centers?

Conventional networks break data into packets and move them across wires or radio waves using long-established networking protocols, such as TCP/IP. In contrast, quantum networks move data using photons or electrons. They leverage unique aspects of quantum physics to enable powerful new features like entanglement, which effectively makes it possible to verify the source of data based on the quantum state of the data itself. ... Because quantum networking remains a theoretical and experimental domain, it's challenging to say at present exactly how quantum networks might change data centers. What does seem clear, however, is that data center operators seeking to offer full support for quantum devices will need to implement fundamentally new types of network infrastructure. They'll need to deploy infrastructure resources like quantum repeaters, while also ensuring that they can support whichever networking standards might emerge in the quantum space. The good news for the fledgling quantum data center ecosystem is that true quantum networks aren't a prerequisite for connecting quantum computers. It's possible for quantum machines themselves to send and receive data over classical networks by using traditional computers and networking devices as intermediaries.


Unmasking Big Tech’s AI Policy Playbook: A Warning to Global South Policymakers

Rather than a genuine, inclusive discussion about how governments should approach AI governance, what we are witnessing instead is a clash of seemingly competing narratives swirling together to obfuscate the real aspirations of big tech. The advocates of open-source large language models (LLMs) present themselves as civic-minded, democratic, and responsible, while closed-source proponents position themselves as the responsible stewards of secure, walled-garden AI development. Both sides dress their arguments with warnings about dire consequences if their views aren’t adopted by policymakers. ... For years, tech giants have employed scare tactics to convince policymakers that any regulation will stifle innovation, lead to economic decline, and exclude countries from the prestigious digital vanguard. These dire warnings are frequently targeted, especially in the Global South, where policymakers often lack the resources and expertise to keep pace with rapid technological advancements, including AI. Big tech’s polished lobbyists offer what seems like a reasonable solution, "workable regulation" — which translates to delayed, light-touch, or self-regulation of emerging technologies.


AI Agents: A Comprehensive Introduction for Developers

The best way to think about an AI agent is as a digital twin of an employee with a clear role. When any individual takes up a new job, there is a well-defined contract that establishes the essential elements — such as job definition, success metrics, reporting hierarchy, access to organizational information, and whether the role includes managing other people. These aspects ensure that the employee is most effective in their job and contributes to the overall success of the organization. ... The persona of an AI agent is its most crucial aspect, establishing the agent’s key traits. It is the equivalent of a title or a job function in a traditional environment. For example, a customer support engineer skilled in handling complaints from customers is a job function. It is also the persona of an individual who performs this job. You can easily extend this to an AI agent. ... A task is an extension of the instruction that focuses on a specific, actionable item within the broader scope of the agent’s responsibilities. While the instruction provides a general framework covering multiple potential actions, a task is a direct, concrete action that the agent must take in response to a particular user input.
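
As a rough sketch of how the persona, instruction, and task described above might map onto code, the following fragment assembles them into a single prompt for whatever model backend an agent uses. The class name, fields, and prompt format are illustrative assumptions rather than any specific framework's API.

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    """Illustrative agent definition: persona is the 'job title', instruction is the
    general framework, and a task is the concrete action taken for a given input."""
    persona: str
    instruction: str

    def build_prompt(self, task: str, user_input: str) -> str:
        # Assemble one prompt from the standing persona/instruction plus the current task.
        return (
            f"You are {self.persona}.\n"
            f"General instructions: {self.instruction}\n"
            f"Current task: {task}\n"
            f"User input: {user_input}\n"
        )

support_agent = AgentSpec(
    persona="a customer support engineer skilled in handling complaints",
    instruction="Resolve customer issues politely and escalate refunds above policy limits.",
)

print(support_agent.build_prompt(
    task="Draft a reply to the complaint below",
    user_input="My order arrived damaged.",
))
```

Keeping the persona and instruction fixed while only the task and user input vary per request mirrors the employee analogy: the job stays the same, the work items change.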


AI in compliance: Streamlining HR processes to meet regulatory standards

With the increasing focus on data protection laws like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and India’s Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 under the Information Technology Act, 2000, maintaining the privacy and security of employee data has become paramount. The Indian IT Privacy Law mandates that companies ensure the protection of sensitive personal data, including employee information, and imposes strict guidelines on how data must be collected, processed, and stored. AI can assist HR teams by automating data management processes and ensuring that sensitive information is stored securely and only accessed by authorized personnel. AI-driven tools can also help monitor compliance with data privacy regulations by tracking how employee data is collected, processed, and shared within the organization. ... This proactive monitoring reduces the likelihood of non-compliance and minimizes risks associated with data breaches, helping organizations align with both international and domestic privacy laws like the Indian IT Privacy Law.
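
As a minimal, hypothetical sketch of the monitoring idea described above, the following fragment scans an access log for reads of sensitive employee records by roles that are not authorized to see them. The log schema, role names, and record categories are assumptions for illustration; real tooling would pull these events from an HRIS or SIEM.

```python
# Hypothetical policy: which roles may read which sensitive record types.
AUTHORIZED_ROLES = {"hr_admin", "payroll_officer"}
SENSITIVE_RECORDS = {"salary", "health", "id_documents"}

# Hypothetical audit records exported from an HR system.
access_log = [
    {"user": "asha", "role": "hr_admin", "record": "salary", "employee_id": 101},
    {"user": "marco", "role": "sales_rep", "record": "salary", "employee_id": 102},
]

def flag_unauthorized_access(log):
    """Return audit entries where a sensitive record was read by a non-authorized role."""
    return [
        entry for entry in log
        if entry["record"] in SENSITIVE_RECORDS and entry["role"] not in AUTHORIZED_ROLES
    ]

for violation in flag_unauthorized_access(access_log):
    print(f"Review needed: {violation['user']} ({violation['role']}) "
          f"accessed {violation['record']} for employee {violation['employee_id']}")
```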


Are humans reading your AI conversations?

Tools like OpenAI’s ChatGPT and Google’s Gemini are being used for all sorts of purposes. In the workplace, people use them to analyze data and speed up business tasks. At home, people use them as conversation partners, discussing the details of their lives — at least, that’s what many AI companies hope. After all, that’s what Microsoft’s new Copilot experience is all about — just vibing and having a chat about your day. But people might share data that’d be better kept private. Businesses everywhere are grappling with data security amid the rise of AI chatbots, with many banning their employees from using ChatGPT at work. They might have specific AI tools they require employees to use. Clearly, they realize that any data fed to a chatbot gets sent to that AI company’s servers. Even if it isn’t used to train genAI models in the future, the very act of uploading data could be a violation of privacy laws such as HIPAA in the US. ... Companies that need to safeguard business data and follow the relevant laws should carefully consider the genAI tools and plans they use. It’s not a good idea to have employees using a mishmash of tools with uncertain data protection agreements or to do anything business-related through a personal ChatGPT account.


CIOs recalibrate multicloud strategies as challenges remain

Like many enterprises, Ally Financial has embraced a primary public cloud provider, adding in other public clouds for smaller, more specialized workloads. It also runs private clouds from HPE and Dell for sensitive applications, such as generative AI and data workloads requiring the highest security levels. “The private cloud option provides us with full control over our infrastructure, allowing us to balance risks, costs, and execution flexibility for specific types of workloads,” says Sathish Muthukrishnan, Ally’s chief information, data, and digital officer. “On the other hand, the public cloud offers rapid access to evolving technologies and the ability to scale quickly, while minimizing our support efforts.” Yet, he acknowledges a multicloud strategy comes with challenges and complexities — such as moving gen AI workloads between public clouds or exchanging data from a private cloud to a public cloud — that require considerable investments and planning. “Aiming to make workloads portable between cloud service providers significantly limits the ability to leverage cloud-native features, which are perhaps the greatest advantage of public clouds,” Muthukrishnan says.


DevOps and Cloud Integration: Best Practices

CI/CD practices are crucial for DevOps implementation with cloud services. Continuous integration regularly merges code changes into a shared repository, where automated tests are run to spot issues early. On the other hand, continuous deployment improves this practice by automatically deploying changes (once they pass tests) to production. The CI/CD approach can accelerate the release cycle and enhance the overall quality of the software. ... Infrastructure as Code (IaC) empowers teams to oversee and provision infrastructure via code rather than manual processes. This DevOps methodology guarantees uniformity across environments and facilitates infrastructure scalability in cloud-based settings. It represents a pivotal element in transforming any enterprise's DevOps strategy. ... According to DevOps experts, security needs to be a part of every step in the DevOps process, an approach called DevSecOps. This means adding security checks to the CI/CD pipeline, using security tools for the cloud, and always checking for security issues. DevOps professionals usually stress how important it is to tackle security problems early in the development process, called "shifting left."
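
As a minimal sketch of the "shift left" pipeline described above, the following script runs tests, then a security scan, and only deploys if both pass. The stage commands (pytest, bandit, and a deploy.sh script) are assumptions for illustration; substitute whatever tooling your pipeline actually uses.

```python
import subprocess
import sys

# Ordered pipeline stages: tests first, then a shift-left security scan, then deploy.
# pytest and bandit are common Python tools; the deploy script is a hypothetical placeholder.
STAGES = [
    ("unit tests", ["pytest", "-q"]),
    ("security scan", ["bandit", "-r", "src/"]),
    ("deploy", ["./deploy.sh"]),
]

def run_pipeline() -> int:
    for name, command in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: a failed stage blocks deployment, which is the point of CI/CD gates.
            print(f"Stage '{name}' failed; stopping pipeline.")
            return result.returncode
    print("All stages passed; change deployed.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

In practice the same gate order is usually expressed in a CI system's own pipeline configuration; the key design choice is that the security check sits before, not after, the deploy step.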


Data Resilience & Protection In The Ransomware Age

Backups are considered the primary way to recover from a breach, but are they enough to ensure that the organisation will be up and running with minimal impact? Testing is a critical component of ensuring that a company can recover after a breach, and it provides valuable insight into the steps the company will need to take to recover from a variety of scenarios. Unfortunately, many organisations implement measures to recover but fail on the last step of their resilience approach, namely testing. Without this step, they cannot know if their recovery strategy is effective. Testing reveals what works, which areas need attention in the recovery process, how long it will take to recover files, and more. Without it, companies will not know what processes to follow to restore data following a breach, nor the timelines to recovery. Equally, they will not know if they have backed up their data correctly before an attack if they have not performed adequate testing. Although many IT teams are stretched and struggle to find time for regular testing, it is possible to automate the testing process to ensure that it occurs frequently.
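
As a minimal sketch of what automated restore testing can look like, assuming backups are plain directory copies, the following fragment restores a backup into a scratch location, verifies file checksums against the source, and reports how long the restore took. Real backup tooling would replace the copy step with its own restore procedure.

```python
import hashlib
import shutil
import tempfile
import time
from pathlib import Path

def checksums(root: Path) -> dict:
    """Map each file's relative path under root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def test_restore(backup_dir: Path, source_dir: Path) -> bool:
    """Restore the backup into a temp directory, time it, and compare against the source."""
    started = time.time()
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / "restore"
        shutil.copytree(backup_dir, restored)  # stand-in for your real restore procedure
        ok = checksums(restored) == checksums(source_dir)
    print(f"Restore took {time.time() - started:.1f}s, data intact: {ok}")
    return ok

# Example with hypothetical paths:
# test_restore(Path("/backups/latest"), Path("/data/app"))
```

Scheduling a check like this to run regularly turns recovery from an untested assumption into a measured capability, including an observed restore time.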


Is data gravity no longer centered in the cloud?

The need for data governance and security is escalating as AI becomes more prevalent. Organizations are increasingly aware of the risks associated with cloud environments, especially regarding regulatory compliance. Maintaining sensitive data on premises allows for tighter controls and adherence to industry standards, which are often critical in AI applications dealing with personal or confidential information. The convergence of these factors signals a broader reevaluation of cloud-first strategies, leading to hybrid models that balance the benefits of cloud computing with the reliability of traditional infrastructures. This hybrid approach facilitates a tailored fit for various workloads, optimizing performance while ensuring compliance and security. ... Data can exist on any platform, and accessibility should not be problematic regardless of whether data resides on public clouds or on premises. Indeed, the data location should be transparent. Storing data on-prem or with public cloud providers affects how much an enterprise spends and the data’s accessibility for major strategic applications, including AI. Currently, on-prem is the most cost-effective AI platform—for most data sets and most solutions. 


Choosing Between Cloud and On-Prem MLOps: What's Best for Your Needs?

The big benefit of cloud MLOps is the availability of virtually unlimited quantities of CPU, memory, and storage resources. Unlike on-prem environments, where resource capacity is limited by the number of servers available and the resources each one provides, you can always acquire more infrastructure in the cloud. This makes cloud MLOps especially beneficial for ML use cases where resource needs vary widely or are unpredictable. ... On-prem MLOps may also offer better performance. On-prem environments don't require you to share hardware with other customers (which the cloud usually does), so you don't have to worry about "noisy neighbors" slowing down your MLOps pipeline. The ability to move data across fast local network connections can also boost on-prem MLOps performance, as can running workloads directly on bare metal, without a hypervisor layer reducing the amount of resources available to your workloads. ... Under a hybrid MLOps approach, you could also deploy your model either on-prem or in the cloud, depending on factors such as how many resources inference will require.
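
As a hedged sketch of that hybrid decision, the following fragment routes a model to on-prem or cloud serving based on its resource needs, data sensitivity, and remaining on-prem capacity. The thresholds and capacity figures are invented for illustration, not a prescribed policy.

```python
from dataclasses import dataclass

@dataclass
class ModelRequirements:
    gpus: int
    memory_gb: int
    handles_sensitive_data: bool

# Hypothetical on-prem capacity left after current workloads.
ONPREM_FREE_GPUS = 4
ONPREM_FREE_MEMORY_GB = 256

def choose_deployment_target(req: ModelRequirements) -> str:
    """Prefer on-prem when data is sensitive or capacity allows; burst to cloud otherwise."""
    fits_onprem = req.gpus <= ONPREM_FREE_GPUS and req.memory_gb <= ONPREM_FREE_MEMORY_GB
    if req.handles_sensitive_data and not fits_onprem:
        raise RuntimeError("Sensitive workload exceeds on-prem capacity; needs manual review.")
    return "on-prem" if (req.handles_sensitive_data or fits_onprem) else "cloud"

# A large, non-sensitive model bursts to the cloud under these assumed limits.
print(choose_deployment_target(ModelRequirements(gpus=8, memory_gb=512, handles_sensitive_data=False)))
```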



Quote for the day:

"You'll never get ahead of anyone as long as you try to get even with him." -- Lou Holtz