Daily Tech Digest - April 09, 2025


Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson



How AI and ML Will Change Financial Planning

AI adoption in finance does not come easily. Because finance systems contain vast amounts of sensitive data, they are more susceptible to data breaches. Integrating AI systems with other components, such as cloud services and APIs, can increase the number of entry points that hackers might exploit. Hence, most finance executives cite data security as a top challenge. Limited AI skills are another hurdle: most finance organizations lack the skill set to leverage AI in planning and budgeting activities. In the early stages, high costs, staff resistance, lack of transparency, and uncertain ROI dominate. Other hurdles stay constant, such as data security and finding consistent data. As companies expand their use of AI, the potential for bias and misinformation rises, particularly as finance teams tap GenAI. Integrating AI solutions and tools into existing systems also presents more challenges. As AI and ML continue to evolve, their role in financial planning will only grow. The ability to continuously adapt to new data, automate routine processes, and generate predictive insights positions AI as a critical tool for financial leaders. By embracing these technologies, businesses can transition from reactive financial management to proactive, data-driven decision-making that not only mitigates risks but also identifies new opportunities for growth.


The Augmented Architect: Real-Time Enterprise Architecture In The Age Of AI

No human can know everything about a modern digital enterprise. AI doesn’t pretend to either — but it remembers everything and brings the right detail to the fore at the right time. Think of it as a cognitive prosthetic for the architect: surfacing precedents, warnings, and rationale at the point of decision. ... Visibility isn’t just about having access to data — it’s about trust in its freshness. Real-time integration with operational sources (observability platforms, configuration systems, source control, deployment records) ensures that the architecture graph is never out of date. The haystack becomes a needle-sorter. ... Architecture artifacts multiply: PowerPoints, spreadsheets, PDFs, whiteboards. But in an agentic system, everything is rendered on demand from the same graph (and its associated unstructured content, linked via vector embeddings). Want a heatmap of system risks? A regulatory trace? A roadmap to sunset legacy? One prompt, one view — consistent, explainable, and composable. And those unstructured artifacts? An agent is happy to harvest new insights from them back into the knowledge store. ... Review boards become decision accelerators instead of speed bumps. Agents pre-check submissions. Exceptions, not compliance, become the focus. Draft decisions are generated and validated before the meeting even starts. 
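The "one prompt, one view" idea is easiest to see as projections over a single graph. Below is a minimal, hypothetical sketch: the graph schema, risk scores, and view functions are invented for illustration and stand in for what an agentic system would render on demand.

```python
# Illustrative sketch: one architecture graph, many on-demand views.
# The schema and risk scores are hypothetical, not a real product's model.
architecture_graph = {
    "billing": {"depends_on": ["payments", "ledger"], "risk": 0.8, "lifecycle": "legacy"},
    "payments": {"depends_on": ["ledger"], "risk": 0.3, "lifecycle": "active"},
    "ledger": {"depends_on": [], "risk": 0.5, "lifecycle": "active"},
}

def risk_heatmap(graph):
    """Project the graph into a 'system risk' view, highest risk first."""
    return sorted(graph.items(), key=lambda kv: kv[1]["risk"], reverse=True)

def sunset_roadmap(graph):
    """Project the same graph into a 'legacy sunset' view."""
    return [name for name, node in graph.items() if node["lifecycle"] == "legacy"]

print(risk_heatmap(architecture_graph))    # heatmap view
print(sunset_roadmap(architecture_graph))  # roadmap view
```

Because both views are derived from the same graph, they cannot drift apart the way a PowerPoint and a spreadsheet can.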


Choosing the Most Secure Cloud Service for Your Workloads

Managed cloud servers offer the security benefit of being relatively simple to configure and operate. Simplicity breeds security because the fewer variables you have to work with, the lower the risk of making a mistake that will lead to a breach. On the other hand, managed cloud servers are subject to a relatively large attack surface. Threat actors could target multiple components, including the operating systems installed on server instances, individual applications, and network-facing services. ... If you deploy containers using a managed service like AWS Fargate or GKE, you get many of the same security advantages as you enjoy when using serverless functions: The only vulnerabilities and misconfigurations you have to worry about are ones that impact your containers. The cloud provider bears responsibility for securing the host infrastructure. This isn't true, however, if you deploy containers on infrastructure that you manage yourself — by, for example, creating a Kubernetes cluster using nodes hosted on EC2. In that case, you end up with a broad and complex environment, making it quite challenging to secure. ... Note, too, that containers tend to be complex. A single container image could include code drawn from many sources. 


The Invisible Data Battle: How AI Became a Cybersec Professional’s Biggest Friend and Foe

With all of these booby traps and stonewalling techniques in mind, cybersec professionals have been working on smart scrapers for years, and they’re finally here. A “smart” or “adaptive” scraper uses natural language processing (NLP) and machine learning to handle dynamic content and intricate website architectures (e.g., nested categories and varied page layouts), bypass IP blocking and rate limiting via rotating proxies, deal with CAPTCHAs, login forms and cookies — and even provide real-time data updates. For instance, adaptive scrapers can identify the structure of a web page by analyzing its document object model (DOM) or by following specific patterns, and this allows for dynamic adaptation. AI models like convolutional neural networks (CNNs) can also detect and interact with visual elements on websites, such as buttons. In fact, smart scrapers can even mimic human browsing patterns with random pauses, mouse movements and realistic navigation sequences that bypass behavioral analysis tools. And that’s not all. AI-powered web scrapers can modify browser configurations to mask telltale signs of automation (such as headless browsers that run without a traditional graphical interface) that anti-bot systems look for.
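As a concrete illustration of two of the techniques named above (rotating proxies and human-like pauses), here is a minimal sketch assuming the widely used Python requests library and an invented proxy list; real adaptive scrapers layer DOM analysis, CAPTCHA handling, and full browser automation on top of this.

```python
import random
import time

import requests  # assumes the requests library is installed

# Hypothetical proxy pool; real scrapers rotate through large commercial lists.
PROXIES = ["http://proxy1:8080", "http://proxy2:8080", "http://proxy3:8080"]

def polite_fetch(url: str) -> str:
    """Fetch a page through a randomly chosen proxy with a human-like pause."""
    proxy = random.choice(PROXIES)        # rotate proxies to spread requests
    time.sleep(random.uniform(1.5, 6.0))  # random pause mimics human pacing
    response = requests.get(
        url,
        proxies={"http": proxy, "https": proxy},
        headers={"User-Agent": "Mozilla/5.0"},  # avoid the default library UA
        timeout=10,
    )
    response.raise_for_status()
    return response.text
```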


The Agile Advantage: doubling down on the biggest business challenges

Agile practices have been gaining popularity, with 51% of respondents indicating their organisations actively use Agile to organise and deliver work. However, the data reveals inconsistencies in how the benefits of Agile are perceived across teams and organisations. ... Regardless of whether teams fully embrace Agile practices, there are opportunities for leaders to bring forward Agile principles to address the unique challenges of modern work. While leaders may feel confident in their teams’ direction, the lack of alignment experienced by entry-level employees can have serious repercussions. Feedback from these employees can serve as a valuable indicator of how effectively an organisation integrates Agile practices, and the data clearly shows there is considerable room for improvement. For organisations of any size, addressing these gaps is imperative. Leaders must adopt consistent tools and frameworks that enhance training, improve communication and foster greater alignment across teams. Proactively tackling these issues early can alleviate future problems like misalignment and burnout, while building a more cohesive and resilient organisation.


The Strategic Evolution of IT: From Cost Center to Business Catalyst

The most successful organizations recognize that technology-driven transformation requires more than just implementing new solutions — it demands an organization-wide cultural shift. This means evolving IT teams from traditional "order-takers" to influential decision-makers who help shape and execute business strategy. The key lies in creating an environment where innovation thrives and tech professionals feel empowered to contribute their unique perspectives to business discussions. Organizations must invest in both the technical and business acumen of their IT talent. A dual focus on these areas enables teams to better understand the broader business context of their work and contribute more meaningfully to strategic discussions. When IT professionals can speak the languages of both technology and business, they become invaluable partners in driving broader innovation. Success in this area requires a commitment to continuous learning, mentorship programs and creating opportunities for cross-functional collaboration that expose IT teams to diverse business challenges and perspectives. ... With technology continuing to reshape industries and markets, the question is no longer whether tech professionals should have a seat at the strategic table, but how to maximize their potential and impact on business success.


Is HR running your employee security training? Here’s why that’s not always the best idea

“HR departments may not be fully aware of current cyber threats or the organization’s specific risks,” she says. This can result in overly broad or generic training, which reduces its effectiveness. These programs can also fail to emphasize the practical, real-world application of security practices or offer enough guidance on addressing threats if they lack collaboration with security and IT teams. HR may not effectively tailor the training to the organization’s industry-specific threats, Murphy notes. Without the security department’s involvement, training content often lacks focus and fails to address the company’s unique threats, leaving employees unsure of what to watch for. ... However, while HR shouldn’t run employee security training, Willett does view the HR team as a key partner. He suggests a collaborative approach where HR and security teams work together, leveraging their respective strengths. He explains that HR can help translate complex technical information into understandable language, while the security team provides the core content and technical expertise. ... HR has skin in the game for employee onboarding, compliance, and adherence to company policies and practices, according to Hughes.


Why CISOs are doubling down on cyber crisis simulations

“It was once enough to theorise risk identification through using risk matrixes and lodging them in a spreadsheet describing threats and their likelihood of materialising,” says Aaron Bugal, Field CISO, APJ at Sophos. “However, looking at the impact caused by ransomware and subsequent extortion demands sending executive teams and board members into a spin, highlights the lack of understanding of how pervasive cyber criminals are and the opportunities they take.” To move beyond theoretical planning, Bugal advocates for breach simulations as a practical step forward. “A simulation of a breach will allow you to draw out the concise and well-measured response actions that are demanded by you and your organisation,” he explains. Bringing together a cross-section of executives helps uncover gaps in readiness. “Physically sitting with a cross section of executives, board members, human resources, IT, security, legal and public relations will eke out the procedures, responsibilities and resources needed to respond with efficacy.” By running these exercises in advance, organizations can avoid the chaos of real-time crisis management. “Simulations provide a structured approach to build and refine a breach response while playing it out and discovering where improvements are needed,” Bugal adds, “rather than learning and panicking whilst under the pressure of an active attack.”


Google Cloud Security VP on solving CISO pain points

On the strategic side, Bailey said CISOs are asking for a middle ground between highly integrated platforms and the flexibility of best-of-breed tools. "They want best of breed with the limited toil of what a platform gives," he said. "They're tired of integrations constantly breaking." Bailey also discussed how the role of development-level security – often called DevSecOps – is increasingly being absorbed into security operations. "The CISO is going to have responsibility for all these problems," he said. "Visibility into what's being deployed, compliance reporting, and detection on application code – that's all coming into SecOps." Another emerging front is model protection. Google's Model Armour and AI Protection aim to defend not just infrastructure but also the AI models themselves. "If a bad prompt starts coming through, we can help block that," Bailey said. "We're putting security controls around development environments, models, data and prompts." The Mandiant brand, once synonymous with incident response, has found new life as both a consulting arm and a foundation for content in Google Threat Intelligence. "Mandiant is our consulting practice," Bailey said. "It's also where our elite threat hunters live – a lot of them are ex-Mandiant, and they're integrated with our consulting team to operationalise what they see on the front lines."


Shadow Table Strategy for Seamless Service Extractions and Data Migrations

The shadow table strategy maintains a parallel copy of data in a new location (the "shadow" table or database) that mirrors the original system’s current state. The core idea is to feed data changes to the shadow in real time, so that by the end of the migration, the shadow data store is a complete, up-to-date clone of the original. At that point, you can seamlessly switch to the shadow copy as the primary source. ... Transitioning from a monolithic architecture to a microservices-based system requires more than just rewriting code; you often must carefully migrate data associated with specific services. Extracting a service from a monolith risks inaccuracy if you do not transfer its dependent data accurately and consistently. Here, shadow tables play a crucial role in decoupling and migrating a subset of data without disrupting the existing system. In a typical service extraction, the legacy system continues to handle all live operations while developers build a new microservice to handle a specific functionality. During extraction, engineers mirror the data relevant to the new service into a dedicated shadow database. Whether implemented through triggers or event-based replication, the dual-write mechanism ensures that the system simultaneously records every change made in the legacy system in the shadow database.
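Here is a minimal sketch of the trigger-based dual-write mechanism, using SQLite from Python's standard library; the table names and schema are invented for illustration, and a real migration would add a backfill of historical rows plus verification before cutover.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    -- Shadow copy that the new service will eventually own.
    CREATE TABLE orders_shadow (id INTEGER PRIMARY KEY, status TEXT);

    -- Dual-write: every change to the legacy table is mirrored immediately.
    CREATE TRIGGER mirror_insert AFTER INSERT ON orders BEGIN
        INSERT INTO orders_shadow (id, status) VALUES (NEW.id, NEW.status);
    END;
    CREATE TRIGGER mirror_update AFTER UPDATE ON orders BEGIN
        UPDATE orders_shadow SET status = NEW.status WHERE id = NEW.id;
    END;
""")

# The legacy system keeps handling live operations...
conn.execute("INSERT INTO orders (status) VALUES ('placed')")
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")

# ...while the shadow table stays an up-to-date clone.
print(conn.execute("SELECT * FROM orders_shadow").fetchall())  # [(1, 'shipped')]
```

At cutover, reads and writes switch to the shadow store once a verification pass confirms the two copies match.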

Daily Tech Digest - April 08, 2025


Quote for the day:

"Individual commitment to a group effort - that is what makes a team work, a company work, a society work, a civilization work." -- Vince Lombardi



AI demands more software developers, not less

Entry-level software development will change in the face of AI, but it won’t go away. As LLMs increasingly handle routine coding tasks, the traditional responsibilities of entry-level developers—such as writing boilerplate code—are diminishing. Instead, these developers will evolve into AI supervisors; they’ll test outputs, manage data labeling, and integrate code into broader systems. This necessitates a deeper understanding of software architecture, business logic, and user needs. Doing this effectively requires a certain level of experience and, barring that, mentorship. The dynamic between junior and senior engineers is shifting. Seniors need to mentor junior developers in AI tool usage and code evaluation. Collaborative practices such as AI-assisted pair programming will also offer learning opportunities. Teams are increasingly co-creating with AI; this requires clear communication and shared responsibilities across experience levels. Such mentorship is essential to prevent more junior engineers from depending too heavily on AI, which results in shallow learning and a downward spiral of productivity loss. Across all skill levels, companies are scrambling to upskill developers in AI and machine learning. A late-2023 survey in the United States and United Kingdom showed that 56% of organizations listed prowess in AI/ML as their top hiring priority for the coming year.


Ask a CIO Recruiter: How AI is Shaping the Modern CIO Role

Everything right now revolves around AI, but you still as CIO have to have that grounding in all of the traditional disciplines of IT. Whether that is systems, whether that’s infrastructure, whether that’s cybersecurity, you have to have that well-rounded background. Even as these AI technologies become more prolific, you must consider your past infrastructure spend, your cloud spend, that went into these technologies. How do you manage that? If you don’t have grounding in managing those costs, and being able to balance those costs with the innovation you are trying to create, that’s a recipe for failure on the cyber side. ... When we’re looking for skill sets, we’re looking for people who have actually taken those AI technologies and applied them within their organizations to create real business value -- whether that is cost savings or top-line revenue creation, whatever those are. It’s hard to find those candidates, because there are a lot of those people who can talk the talk around AI, but when you really drill down there is not much in terms of results to show. It’s new, especially in applying the technology to certain settings. Take manufacturing: there’s not that many CIOs out there who have great examples of applying AI to create value within organizations. It’s certainly accelerating, and you’re going to see it accelerating more as we go into the future. It’s just so new that those examples are few and far between.


Architectural Experimentation in Practice: Frequently Asked Questions

When the cost of reversing a decision is low or trivial, experimentation does not reduce cost very much and may actually increase cost. Prior experience with certain kinds of decisions usually guides the choice; if team members have worked on similar systems or technical challenges, they will have an understanding of how easily a decision can be reversed. ... Experiments are more than just playing around with technology. There is a place for playing with new ideas and technologies in an unstructured, exploratory way, and people often say that they are "experimenting" when they are doing this. When we talk about experimentation, we mean a process that involves forming a hypothesis and then building something that tests this hypothesis, either accepting or rejecting it. We prefer to call the other approach "unstructured exploratory learning", a category that includes hackathons, "10% Time", and other professional development opportunities. ... Experiments should have a clear duration and purpose. When you find an experiment that’s not yielding results in the desired timeframe, it’s time to stop it and design something else to test your hypothesis that will yield more conclusive results. The "failed" experiment can still yield useful information, as it may indicate that the hypothesis is difficult to prove or may influence subsequent, more clearly defined experiments.


Optimizing IT with Open Source: A Guide to Asset Management Solutions

Orchestration frameworks are crucial for developing sophisticated AI applications that can perform tasks beyond simply answering a single question. While a single LLM is proficient in understanding and generating text, many real-world AI applications require performing a series of steps involving different components. Orchestration frameworks provide the structure necessary to design and manage these complex workflows, ensuring that all the various components of the AI system work together efficiently. ... One way orchestration frameworks enhance the power of LLMs is through a technique known as “prompt chaining.” Think of it as telling a story one step at a time. Instead of giving the LLM a single, lengthy instruction, you provide it with a series of smaller, interconnected instructions known as prompts. The response from one prompt then becomes the starting point for the following prompt, guiding the LLM through a more complex thought process. Open-source orchestration frameworks make it much simpler to create and manage these chains of prompts. They often provide tools that allow developers to easily link prompts together, sometimes through visual interfaces or programming tools. Prompt chaining can be helpful in many situations.
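A minimal sketch of prompt chaining follows; `call_llm` is a hypothetical stand-in for whatever model client or framework you use, and the two-step chain is invented for illustration.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with your framework's actual call."""
    raise NotImplementedError

def summarize_then_translate(document: str) -> str:
    # Step 1: the first prompt produces an intermediate result...
    summary = call_llm(f"Summarize the following document in three bullets:\n{document}")
    # Step 2: ...which becomes the starting point for the next prompt.
    return call_llm(f"Translate this summary into French:\n{summary}")
```

An orchestration framework's value is in managing many such links: retries, branching, and passing each step's output to the next without hand-written glue.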


Reframing DevSecOps: Software Security to Software Safety

A templatized, repeatable, process-led approach, driven by collaboration between platform and security teams, leads to a fundamental shift in the way teams think about their objectives. They move from the concept of security, which promises a state free from danger or threat, to safety, which focuses on creating systems that are protected from and unlikely to create danger. This shift emphasizes proactive risk mitigation through thoughtful, reusable design patterns and implementation rather than reactive threat mitigation. ... The outcomes between security products and product security are vastly different with the latter producing far greater value. Instead of continuing to shift responsibilities, development teams should embrace the platform security engineering paradigm. By building security directly into shared processes and operations, development teams can scale up to meet their needs today and in the future. Only after these strong foundations have been established should teams layer in routinely run security tools for assurance and problem identification. This approach, combined with aligned incentives and genuine collaboration between teams, creates a more sustainable path to secure software development that works at scale.


10 things you should include in your AI policy

A carefully thought-out AI use policy can help a company set criteria for risk and safety, protect customers, employees, and the general public, and help the company zero in on the most promising AI use cases. “Not embracing AI in a responsible manner is actually reducing your advantage of being competitive in the marketplace,” says Bhrugu Pange, a managing director who leads the technology services group at AArete, a management consulting firm. ... An AI policy needs to start with the organization’s core values around ethics, innovation, and risk. “Don’t just write a policy to write a policy to meet a compliance checkmark,” says Avani Desai, CEO at Schellman, a cybersecurity firm that works with companies on assessing their AI policies and infrastructure. “Build a governance framework that’s resilient, ethical, trustworthy, and safe for everyone — not just so you have something that nobody looks at.” Starting with core values will help with the creation of the rest of the AI policy. “You want to establish clear guidelines,” Desai says. “You want everyone from top down to agree that AI has to be used responsibly and has to align with business ethics.” ... Taking a risk-based approach to AI is a good strategy, says Rohan Sen, data risk and privacy principal at PwC. “You don’t want to overly restrict the low-risk stuff,” he says.


FedRAMP's Automation Goal Brings Major Promises - and Risks

FedRAMP practitioners, federal cloud security specialists and cybersecurity professionals who spoke to Information Security Media Group welcomed the push to automate security assessments and streamline approvals. They warned that without clear details on execution, the changes risk creating new uncertainties and disrupting companies midway through the existing process. Program officials said they will establish a series of community working groups to serve as a platform for industry and the public to engage directly with FedRAMP experts and collaborate on solutions that meet its standards and policies. "This is both exciting and scary," said John Allison, senior director of federal advisory services for federal cybersecurity solutions provider Optiv + ClearShark. "As someone who works with clients on their FedRAMP strategy, this is going to open new options for companies - but I can see a lot of uncertainty weighing heavily on corporate leadership until more details are available." Automation may help reduce costs and timelines, he said, but companies mid-process could face disruption and agencies will shoulder more responsibility until new tools are in place. Allison said GSA could further streamline FedRAMP by allowing cloud providers to submit materials directly and pursue authorization without an agency sponsor.


Is hyperscaler lock-in threatening your future growth?

Infrastructure flexibility has increasingly become a competitive differentiator. Enterprises that maintain the ability to deploy workloads across multiple environments—whether hyperscaler, private cloud, or specialized provider—gain strategic advantages that extend beyond operational efficiency. This cloud portability empowers organizations to select the optimal infrastructure for each application and workload based on their specific requirements rather than provider limitations. When a new service emerges that delivers substantial business value, companies with diversified infrastructure can adopt it without dismantling their existing technology stack. Central to maintaining this flexibility is the strategic adoption of open source technologies. Enterprise-grade open source solutions provide the consistency and portability that proprietary alternatives cannot match. By standardizing on technologies like Kubernetes for container orchestration, PostgreSQL for database services, or Apache Kafka for event streaming, organizations create a foundation that works consistently across any infrastructure environment. The most resilient enterprises approach their technology stack like a portfolio manager approaches investments—diversifying strategically to maximize returns while minimizing exposure to any single point of failure.


7 risk management rules every CIO should follow

The most critical risk management rule for any CIO is maintaining a comprehensive, continuously updated inventory of the organization’s entire application portfolio, proactively identifying and mitigating security risks before they can materialize, advises Howard Grimes, CEO of the Cybersecurity Manufacturing Innovation Institute, a network of US research institutes focusing on developing manufacturing technologies through public-private partnerships. That may sound straightforward, but many CIOs fall short of this fundamental discipline, Grimes observes. ... Cybersecurity is now a multi-front war, Selby says. “We no longer have the luxury of anticipating the attacks coming at us head-on.” Leaders must acknowledge the interdependence of a robust risk management plan: Each tier of the plan plays a vital role. “It’s not merely a cyber liability policy that does the heavy lifting or even top-notch employee training that makes up your armor — it’s everything.” The No. 1 way to minimize risk is to start from the top down, Selby advises. “There’s no need to decrease cyber liability coverage or slack on a response plan,” he says. Cybersecurity must be an all-hands-on-deck endeavor. “Every team member plays a vital role in protecting the company’s digital assets.” 


Shift-Right Testing: Smart Automation Through AI and Observability

Shift-right testing goes beyond the conventional approach of pre-release testing, enabling development teams to observe software under real production conditions. This approach includes canary releases, where new features are released to a subset of users before the full launch. It also involves A/B testing, where two versions of the application are compared in real time. Another important feature is chaos engineering, which implies that failures are deliberately introduced to check the strength of the system. ... Chaos engineering is the practice of injecting controlled failures into the system to assess its robustness with the help of tools like Chaos Monkey and Gremlin. This helps validate the actual behavior of a system in a production-like environment. All the testing feedback loops are also automated to ensure that shift-right testing is applied consistently, using AI-powered test analytics tools like Testim and Applitools to learn from test case selection. This makes it possible to use production data to inform the automatic generation of test suites, thus increasing coverage and precision. Real-time alerting and self-healing mechanisms also enhance shift-right testing. Observability tools can be set up to send out alerts whenever a test fails, and auto-remediation scripts can enable test environments to repair themselves without the need to involve IT staff.
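As a sketch of the chaos-engineering idea at the application level, here is a small decorator that injects controlled failures at a configurable rate. This is a toy stand-in under stated assumptions, not how Chaos Monkey or Gremlin work (those operate at the infrastructure level); the function name and failure mode are invented.

```python
import functools
import random

def chaos(failure_rate: float = 0.05):
    """Decorator that makes a function fail at a controlled rate."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                # Deliberate, controlled failure to exercise resilience paths.
                raise TimeoutError(f"chaos: injected failure in {fn.__name__}")
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.1)  # fail roughly 1 in 10 calls
def fetch_inventory(sku: str) -> int:
    return 42  # stand-in for a real downstream call

# Callers must now handle the failure, proving retry/fallback logic works.
```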

Daily Tech Digest - April 07, 2025


Quote for the day:

"Failure isn't fatal, but failure to change might be" -- John Wooden



How enterprise IT can protect itself from genAI unreliability

The AI-watching-AI approach is scarier, although a lot of enterprises are giving it a go. Some are looking to push any liability down the road by partnering with others to do their genAI calculations for them. Still others are looking to pay third parties to come in and try to improve their genAI accuracy. The phrase “throwing good money after bad” immediately comes to mind. The lack of effective ways to improve genAI reliability internally is a key factor in why so many proof-of-concept trials got approved quickly, but never moved into production. Some version of throwing more humans into the mix to keep an eye on genAI outputs seems to be winning the argument, for now. “You have to have a human babysitter on it. AI watching AI is guaranteed to fail,” said Missy Cummings, a George Mason University professor and director of Mason’s Autonomy and Robotics Center (MARC). “People are going to do it because they want to believe in the (technology’s) promises. People can be taken in by the self-confidence of a genAI system,” she said, comparing it to the experience of driving autonomous vehicles (AVs). When driving an AV, “the AI is pretty good and it can work. But if you quit paying attention for a quick second,” disaster can strike, Cummings said. “The bigger problem is that people develop an unhealthy complacency.”


Why neglecting AI ethics is such risky business - and how to do AI right

The struggle often comes from the lack of a common vocabulary around AI. This is why the first step is to set up a cross-organizational strategy that brings together technical teams as well as legal and HR teams. AI is transformational and requires a corporate approach. Second, organizations need to understand what the key tenets of their AI approach are. This goes beyond the law and encompasses the values they want to uphold. Third, they can develop a risk taxonomy based on the risks they foresee. Risks are based on legal alignment, security, and the impact on the workforce. ... As a starting point, enterprises will need to establish clear policies, principles, and guidelines on the sustainable use of AI. This creates a baseline for decisions around AI innovation and enables teams to make the right choices around the type of AI infrastructure, models, and algorithms they will adopt. Additionally, enterprises need to establish systems to effectively track, measure, and monitor environmental impact from AI usage and demand this from their service providers. We have worked with clients to evaluate current AI policies, engage internal and external stakeholders, and develop new principles around AI and the environment before training and educating employees across several functions to embed thinking in everyday processes.


The risks of entry-level developers over relying on AI

Some CISOs are concerned about the growing reliance on AI code generators — especially among junior developers — while others take a more relaxed, wait-and-see approach, saying that this might be an issue in the future rather than an immediate threat. Karl Mattson, CISO at Endor Labs, argues that the adoption of AI is still in its early stages in most large enterprises and that the benefits of experimentation still outweigh the risks. ... Tuskira’s CISO lists two major issues: first, that AI-generated security code may not be hardened against evolving attack techniques; and second, that it may fail to reflect the specific security landscape and needs of the organization. Additionally, AI-generated code might give a false sense of security, as developers, particularly inexperienced ones, often assume it is secure by default. Furthermore, there are risks associated with compliance and violations of licensing terms or regulatory standards, which can lead to legal issues down the line. “Many AI tools, especially those generating code based on open-source codebases, can inadvertently introduce unvetted, improperly licensed, or even malicious code into your system,” O’Brien says. Open-source licenses, for example, often have specific requirements regarding attribution, redistribution, and modifications, and relying on AI-generated code could mean accidentally violating these licenses.


Language models in generative AI – does size matter?

Firstly, using SLMs rather than full-blown LLMs can bring the cost of that multi-agent system down considerably. Employing smaller and more lightweight language models to fulfill specific requirements will be more cost-effective than using LLMs for every step in an agentic AI system. This approach involves looking at what would be the right component for each element of a multi-agent system, rather than automatically thinking that a “best of breed” approach is the best approach. Secondly, using agentic AI for generative AI use cases should be adopted where multi-agent processes can provide more value per transaction than simpler single-agent models. The choice here affects how you think about pricing your service, what customers expect from AI and how you will deliver your service overall. Alongside looking at the technical and architecture elements for AI, you will also have to consider what your line of business team wants to achieve. While simple AI agents can carry out specific tasks or automate repetitive tasks, they generally require human input to complete those requests. Where agentic AI takes things further is through delivering greater autonomy within business processes through employing that multi-agent approach to constantly adapt to dynamic environments. With agentic AI, companies can use AI to independently create, execute and optimize results around that business process workflow. 
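A minimal sketch of the right-sized-model idea: route each agent step to the cheapest model that can handle it. The `small_model` and `large_model` callables and the routing table are hypothetical; real systems route on task type, cost budgets, and measured quality.

```python
# Hypothetical model clients; swap in your actual SLM/LLM calls.
def small_model(prompt: str) -> str: ...
def large_model(prompt: str) -> str: ...

ROUTES = {
    "classify_intent": small_model,  # cheap, narrow task: an SLM is enough
    "extract_fields": small_model,
    "plan_workflow": large_model,    # open-ended reasoning: pay for the LLM
}

def run_step(step: str, prompt: str) -> str:
    """Dispatch each agent step to the cheapest capable model."""
    return ROUTES[step](prompt)
```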


Lessons from a Decade of Complexity: Microservices to Simplicity

This shift made us stop and think: if fast growth isn’t the priority anymore, are microservices still the right choice? ... After going through years of building and maintaining systems with microservices, we’ve learned a lot, especially about what really matters in choosing an architecture. Here are some key takeaways that guide how we think about system design today:
Be pragmatic, not idealistic: Don’t get caught up in trendy architecture patterns just because they sound impressive. Focus on what makes sense for your team and your situation. Not every new system needs to start with microservices, especially if the problems they solve aren’t even there yet.
Start simple: The simplest solution is often the best one. It’s easier to build, easier to understand, and easier to change. Keeping things simple takes discipline, but it saves time and pain in the long run.
Split only when it really makes sense: Don’t break things apart just because “that’s what we do”. Split services when there’s a clear technical reason, like performance, resource needs, or special hardware.
Microservices are just a tool: They’re not good or bad by themselves. What matters is whether they help your team move faster, stay flexible, and solve real problems.
Every choice comes with tradeoffs: No architecture is perfect. Every decision has upsides and downsides. What’s important is to be aware of those tradeoffs and make the best call for your team.


Massive modernization: Tips for overhauling IT at scale

A core part of digital transformation is decommissioning legacy apps, upgrading aging systems, and modernizing the tech stack. Yet, as appealing as it is for employees to be able to use modern technologies, decommissioning and replacing systems is arduous for IT. ... “You almost do what I call putting lipstick on a pig, which is modernizing your legacy ecosystem with wrappers, whether it be web wrappers, front end and other technologies that allow customers to be able to interact with more modern interfaces,” he says. ... When an organization is truly legacy, most will likely have very little documentation of how those systems can be supported, Mehta says. That was the case for National Life, and it became the first roadblock. “You don’t know what you don’t know until we begin,” he says. This is where the archaeological dig metaphor comes in. “You’re building a new city over the top of the old city, but you’ve got to be able to dig it only enough so you don’t collapse the foundation.” IT has to figure out everything a system touches, “because over time, people have done all kinds of things to it that are not clearly documented,” Mehta says. ... “You have to have a plan to get rid of” legacy systems. He also discovered that “decommissioning is not free. Everybody thinks you just shut a switch off and legacy systems are gone. Legacy decommissioning comes at a cost. You have to be willing to absorb that cost as part of your new system. That was a lesson learned; you cannot ignore that,” he says.


Culture is not static: Prasad Menon on building a thriving workplace at Unplugged 3

To cultivate a thriving workplace, organisations must engage in active listening. Employees should have structured platforms to voice their concerns, aspirations, and feedback without hesitation. At Amagi, this commitment to deep listening is reinforced by technology. The company has implemented an AI-powered chatbot named Samb, which acts as a "listening manager," facilitating real-time employee feedback collection. This tool ensures that concerns and suggestions are acknowledged and addressed within 15 days, allowing for a more responsive and agile work environment. "Culture is not just a feel-good factor—it must be measured and linked to results," Menon emphasised. To track and optimise cultural impact, Amagi has developed a "happiness index" that measures employee well-being across financial, mental, and physical dimensions. By using data to evaluate cultural effectiveness, the organisation ensures that workplace culture is not just an abstract ideal but a tangible force driving business success. ... At the core of Amagi’s culture is a commitment to becoming "the happiest workplace in the world." This vision is driven by a leadership model that prioritises genuine care, consistency, and empowerment. Leaders at Amagi undergo a six-month cultural immersion programme designed to equip them with the skills needed to foster a safe, inclusive, and high-performing work environment.


Speaking the Board’s Language: A CISO’s Guide to Securing Cybersecurity Budget

A major challenge for CISOs in budget discussions is making cybersecurity risk feel tangible. Cyber risks often remain invisible – that is, until a breach happens. Traditional tools like heat maps, which visually represent risk by color-coding potential threats, can be misleading or oversimplified. While they offer a high-level view of risk areas, heat maps fail to provide a concrete understanding of the actual financial impact of those risks. This makes it essential to shift from qualitative risk assessments like heat maps to cyber risk quantification (CRQ), which assigns a measurable dollar value to potential threats and mitigation efforts. ... The biggest challenge CISOs face isn’t just securing budget – it’s making sure decision-makers understand why they need it. Boards and executives don’t think in terms of firewalls and threat detection; they care about business continuity, revenue protection and return on investment (ROI). For cyber investments, though, ROI is not typically the figure security experts rely on to validate these investments, largely because of the difficulties in estimating the value of risk reduction. However, new approaches to cyber risk quantification have made this a reality. With models validated by real-world loss data, it is now possible to produce an ROI figure.
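One common way to put a dollar value on risk is annualized loss expectancy (ALE); the sketch below uses that classic formula with invented figures to show how a CRQ-style ROI number can be produced. The rates and costs are illustrative assumptions, not real loss data.

```python
def ale(annual_rate: float, loss_per_event: float) -> float:
    """Annualized loss expectancy: expected events per year x loss per event."""
    return annual_rate * loss_per_event

# Hypothetical numbers: phishing-led breach, before and after a new control.
ale_before = ale(annual_rate=0.30, loss_per_event=2_000_000)  # $600k/yr
ale_after = ale(annual_rate=0.10, loss_per_event=2_000_000)   # $200k/yr
control_cost = 150_000

risk_reduction = ale_before - ale_after             # $400k avoided per year
roi = (risk_reduction - control_cost) / control_cost
print(f"ROI: {roi:.0%}")  # 167% -- a figure a board can weigh
```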


Can AI predict who will commit crime?

Simulating the conditions for individual offending is not the same as calculating the likelihood of storms or energy outages. Offending is often situational and is heavily influenced by emotional, psychological and environmental elements (a bit like sport – ever wondered why Predictive AI hasn’t put bookmakers out of business yet?). Sociological factors also play a big part in rehabilitation which, in turn, affects future offending. Predictive profiling relies on past behaviour being a good indicator of future conduct. Is this a fair assumption? Occupational psychologists say past behaviour is a reliable predictor of future performance – which is why they design job selection around it. Unlike financial instruments which warn against assuming future returns from past rewards, human behaviour does have a perennial quality. Leopards and spots come to mind. ... Even if the data could reliably tell us who will be charged with, prosecuted for and convicted of which specific offence in the future, what should the police do about it now? Implant a biometric chip and have them under perpetual surveillance to stop them doing what they probably didn’t know they were going to do? Fine or imprison them? (how much, for how long?). What standard of proof will the AI apply to its predictions? Beyond a reasonable doubt? How will we measure the accuracy of the process? 


CISOs battle security platform fatigue

“Adopting more security tools doesn’t guarantee better cybersecurity,” says Jonathan Gill, CEO at Panaseer. “These tools can only report on what they can see – but they don’t know what they’re missing.” This fragmented visibility leaves security leaders making high-stakes decisions based on partial information. Without a verified, comprehensive system of record for all assets and security controls, many organizations are operating under what Gill calls an “illusion of visibility.” “Without a true denominator,” he explains, “CISOs are unable to confidently assess coverage gaps or prove compliance with evolving regulatory demands.” And those blind spots aren’t just theoretical. Every overlooked asset or misconfigured control becomes an open door for attackers — and they’re getting better at finding them. “Each of these coverage gaps represents risk,” Gill warns, “and they are increasingly easy for attackers to find and exploit.” The lack of clear visibility also muddies accountability. “This creates dark corners that go overlooked – servers and applications are left without owners, making it hard to assign responsibility for fixing issues,” Gill says. Even when gaps are known, security teams often find themselves drowning in data from too many tools, struggling to separate signal from noise. 

Daily Tech Digest - April 04, 2025


Quote for the day:

“Going into business for yourself, becoming an entrepreneur, is the modern-day equivalent of pioneering on the old frontier.” -- Paula Nelson



Hyperlight Wasm points to the future of serverless

WebAssembly support significantly expands the range of supported languages for Hyperlight, ensuring that compiled languages as well as interpreted ones like JavaScript can be run on a micro VM. Your image does get more complex here, as you need to bundle an additional runtime in the Hyperlight image, along with writing code that loads both runtime and application as part of the launch process. ... There’s a lot of work going on in the WebAssembly community to define a specification for a component model. This is intended to be a way to share binaries and libraries, allowing code to interoperate easily. The Hyperlight Wasm tool offers the option of compiling a development branch with support for WebAssembly Components, though it’s not quite ready for prime time. In practice, this will likely be the basis for any final build of the platform, as the specification is being driven by the main WebAssembly platforms. One point that Microsoft makes is that Wasm isn’t only language-independent, it’s architecture-independent, working against a minimal virtual machine. So, code written and developed on an x64 architecture system will run on Arm64 and vice versa, ensuring portability and allowing service providers to move applications to any spare capacity, no matter the host virtual machine.


Beyond SIEM: Embracing unified XDR for smarter security

Implementing SIEM solutions can present challenges and has to be managed proactively. Configuring the SIEM system can be very complex, where any error can lead to false positives or missed threats. Integrating SIEM tools with existing security tools and systems is not easy. The implementation and maintenance processes are also resource-intensive and require significant time and manpower. Alert fatigue can set in with traditional SIEM platforms, where the volume of alerts generated makes it difficult to identify the genuine ones. ... For industries with stringent compliance requirements, such as finance and healthcare, SIEM remains a necessity due to its log retention, compliance reporting, and event correlation capabilities. Microsoft Sentinel’s AI-driven analytics help security teams fine-tune alerts, reducing false positives and increasing threat detection accuracy. The Microsoft Defender XDR platform offers unified visibility across attack surfaces; a continuous threat exposure management (CTEM) solution; CIS framework assessment; Zero Trust and external attack surface management (EASM); AI-driven automated response to threats; integrated security across Microsoft 365 and third-party platforms (Office, email, data, CASB, endpoint, and identity); and reduced complexity by eliminating the need for custom configurations.


Compliance Without Chaos: Build Resilient Digital Operations

A unified platform makes service ownership a no-brainer by directly connecting critical services to the right responders so there’s no scrambling when things go sideways. Teams can set up services quickly and at scale, making it easier to get a real-time pulse on system health and see just how far the damage spreads when something breaks. Instead of chasing down data across a dozen monitoring tools, everything is centralized in one place for easy analysis. ... With all data centralized in a unified platform, the classification and reporting of incidents is far easier with accessible and detailed incident logs that provide a clear audit trail. Sophisticated platforms also integrate with IT service management (ITSM) and IT operations (ITOps) tools to simplify the reporting of incidents based on predefined criteria. ... Every incident, both real and simulated, should be viewed as a learning opportunity. Aggregating data from disparate tools into a single location gives teams a full picture of how their organization’s operations have been affected and supplies a narrative for reporting. Teams can then uncover patterns across tools, teams and time to drive continuous learning in post-incident reviews. Coupled with regular, automated testing of disaster recovery runbooks, teams can build greater confidence in their system’s resilience.


How Organizations Can Benefit From Intelligent Data Infra

The first is getting your enterprise data AI-ready. Predictive AI has been around for a long time. But teams still spend a significant amount of time identifying and cleaning data, which involves handling ETL pipelines, transformations and loading data into data lakes. This is the most expensive step. The same process applies to unstructured data in generative AI. But organizations still need to identify the files and object streams that need to be a part of the training datasets. Organizations need to securely bring them together and load them into feature stores. That's our approach to data management. ... There's a lot of intelligence tied to files and objects. Without that, they will continue to be seen as simple storage entities. With embedded intelligence, you get detection capabilities that let you see what's inside a file and when it was last modified. For instance, if you create embeddings from a PDF file and vectorize them, imagine doing the same for millions of files, which is typical in AI training. This consumes significant computing resources. You don't want to spend compute resources while recreating embeddings on a million files every time there is a modification to the files. Metadata allows us to track changes and only reprocess the files that have been modified. This differential approach optimizes compute cycles.
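A minimal sketch of that differential approach: store a content hash as metadata and re-embed only files whose hash changed. The `embed` function is a hypothetical stand-in for a real embedding model call.

```python
import hashlib
from pathlib import Path

# path -> (content hash, embedding vector); a stand-in for a metadata store.
embedding_cache: dict[str, tuple[str, list[float]]] = {}

def embed(text: str) -> list[float]:
    """Hypothetical embedding call; replace with your actual model."""
    raise NotImplementedError

def refresh_embeddings(files: list[Path]) -> int:
    """Re-embed only files whose content actually changed."""
    reembedded = 0
    for path in files:
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        cached = embedding_cache.get(str(path))
        if cached and cached[0] == digest:
            continue  # unchanged: skip the expensive embedding step
        embedding_cache[str(path)] = (digest, embed(path.read_text()))
        reembedded += 1
    return reembedded
```

Across millions of files, skipping the unchanged ones is where the compute savings described above come from.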


Tariff war throws building of data centers into disarray

The potentially biggest variable affecting data center strategy is timing. Depending on the size of an enterprise data center and its purpose, it could take as little as six months to build, or as much as three years. Planning for a location is daunting when ever-changing tariffs and retaliatory tariffs could send costs soaring. Another critical element is knowing when those tariffs will take effect, a data point that has also been changing. Some enterprises are trying to sidestep the tariff issues by purchasing components in bulk, in enough quantities to potentially last a few years. ... “It’s not only space, available energy, cooling, and water resources, but it’s also a question of proximity to where the services are going to be used,” Nguyen said. Finding data center personnel, Nguyen said, is becoming less of an issue, thanks to the efficiencies gained through automation. “The level of automation available means that although personnel costs can be a bit more [in different countries], the efficiencies used means that [hiring people] won’t be the drag that it used to be,” he said. Given the vast amount of uncertainty, enterprise IT leaders wrestling with data center plans have some difficult decisions to make, mostly because they will have to guess where the tariff wars will be many months or years in the future, a virtually impossible task.


The Modern Data Architecture: Unlocking Your Data's Full Potential

If the Data Cloud is your engine, the CDP is your steering wheel—directing that power where it needs to go, precisely when it needs to get there. True real-time CDPs have the ability to transform raw data into immediate action across your entire technology ecosystem, with an event-based architecture that responds to customer signals in milliseconds rather than minutes. This ensures you can dynamically personalize experiences as they unfold—whether during a website visit, mobile app session, or contact center interaction—all while honoring consent. ... As AI capabilities evolve, this Intelligence Layer becomes increasingly autonomous—not just providing recommendations but taking appropriate actions based on pre-defined business rules and learning from outcomes to continuously improve its performance. ... The Modern Data Architecture serves as the foundation for truly intelligent customer experiences by making AI implementations both powerful and practical. By providing clean, unified data at scale, these architectures enable AI systems to generate more accurate predictions, more relevant recommendations, and more natural conversational experiences. Rather than creating isolated AI use cases, forward-thinking organizations are embedding intelligence throughout the customer journey.


Why AI therapists could further isolate vulnerable patients instead of easing suffering

While chatbots can be programmed to provide some personalised advice, they may not be able to adapt as effectively as a human therapist can. Human therapists tailor their approach to the unique needs and experiences of each person. Chatbots rely on algorithms to interpret user input, but miscommunication can happen due to nuances in language or context. For example, chatbots may struggle to recognise or appropriately respond to cultural differences, which are an important aspect of therapy. A lack of cultural competence in a chatbot could alienate and even harm users from different backgrounds. So while chatbot therapists can be a helpful supplement to traditional therapy, they are not a complete replacement, especially when it comes to more serious mental health needs. ... The talking cure in psychotherapy is a process of fostering human potential for greater self-awareness and personal growth. These apps will never be able to replace the therapeutic relationship developed as part of human psychotherapy. Rather, there’s a risk that these apps could limit users’ connections with other humans, potentially exacerbating the suffering of those with mental health issues – the opposite of what psychotherapy intends to achieve.


Breaking Barriers in Conversational BI/AI with a Semantic Layer

The push for conversational BI was met with adoption inertia. Two major challenges have hindered its potential—the accuracy of the data insights and the speed at which the interface could provide the answers that were sought. This can be attributed to the inherent complexity of data architecture, which involves fragmented data in disparate systems with varying definitions, formats, and contexts. Without a unified structure, even the most advanced AI models risk delivering contextually irrelevant, inconsistent, or inaccurate results. Moreover, traditional data pipelines are not designed for instantaneous query resolution and resolving data from multiple tables, which delays responses. ... Large language models (LLMs) like GPT excel at interpreting natural language but lack the domain-specific knowledge of a data set. A semantic layer can resolve this challenge by acting as an intermediary between raw data and the conversational interface. It unifies data into a consistent, context-aware model that is comprehensible to both humans and machines. Retrieval-augmented generation (RAG) techniques are employed to combine the generative power of LLMs with the retrieval capabilities of structured data systems. 
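A minimal sketch of RAG over a semantic layer follows; the metric definitions, retrieval rule, and `call_llm` helper are all invented for illustration. The point is that the model answers from governed, consistent definitions rather than raw tables.

```python
# Hypothetical semantic layer: governed, context-aware metric definitions.
SEMANTIC_LAYER = {
    "net_revenue": "SUM(invoices.amount) - SUM(refunds.amount), in USD",
    "active_customer": "customer with >= 1 order in the trailing 90 days",
}

def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with your actual model call."""
    raise NotImplementedError

def answer(question: str) -> str:
    # Retrieval step: fetch the definitions relevant to the question.
    context = "\n".join(
        f"{name}: {definition}"
        for name, definition in SEMANTIC_LAYER.items()
        if any(tok in question.lower() for tok in name.split("_"))
    )
    # Augmented generation: the model answers using governed definitions.
    return call_llm(f"Using these definitions:\n{context}\n\nAnswer: {question}")
```

A production system would use vector search rather than keyword matching for retrieval, but the flow is the same: unify definitions first, then generate.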


The rise of AI PCs: How businesses are reshaping their tech to keep up

Companies are discovering that if they want to take full advantage of AI and run models locally, they need to upgrade their employees' laptops. This realization has introduced a hardware revolution, with the desire to update tech shifting from an afterthought to a priority and attracting significant investment from companies. ... running models locally gives organizations more control over their information and reduces reliance on third-party services. That setup is crucial for companies in financial services, healthcare, and other industries where privacy is a big concern or a regulatory requirement. "For them, on-device AI computer, it's not a nice to have; it's a need to have for fiduciary and HIPAA reasons, respectively," said Mike Bechtel, managing director and the chief futurist at Deloitte Consulting LLP. Another advantage is that local running reduces lag and creates a smoother user experience, which is especially valuable for optimizing business applications. ... As more companies get in on the action and AI-capable computers become ubiquitous, the premium price of AI PCs will continue to drop. Furthermore, Flower said the potential gains in performance offset any price differences. "In those high-value professions, the productivity gain is so significant that whatever small premium you're paying for that AI-enhanced device, the payback will be nearly immediate," said Flower.


Many CIOs operate within a culture of fear

The culture of fear often stems from a few roots, including a lack of accountability from employees who don’t understand their roles, and mistrust of coworkers and management, says Alex Yarotsky, CTO at Hubstaff, vendor of a time tracking and workforce management tool. In both cases, company leadership is to blame. Good leaders create a positive culture laid out in a set of rules and guidelines for employees to follow, and then model those actions themselves, Yarotsky says. “Any case of misunderstanding or miscommunication is always on the management because the management is the force in the company that sets the rules and drives the culture,” he adds. ... Such a culture often starts at the top, says Jack Allen, CEO and chief Salesforce architect at ITequality, a Salesforce consulting firm. Allen experienced this scenario in the early days of building a career, suggesting the problems may be bigger than the survey respondents indicate. “If the leader is unwilling to admit mistakes or punishes mistakes in an unfair way, then the next layer of leadership will be afraid to admit mistakes as well,” Allen says. ... Cultivating a culture of fear leads to several problems, including an inability to learn from mistakes, Mort says. “Organizations that do the best are those that value learning and highlight incidents as valuable learning events,” he says.

Daily Tech Digest - April 03, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart


Veterans are an obvious fit for cybersecurity, but tailored support ensures they succeed

Both civilian and military leaders have long seen veterans as strong candidates for cybersecurity roles. The National Initiative for Cybersecurity Careers and Studies, part of the US Cybersecurity and Infrastructure Security Agency (CISA), speaks directly to veterans, saying “Your skills and training from the military translate well to a cyber career.” NICCS continues, “Veterans’ backgrounds in managing high-pressure situations, attention to detail, and understanding of secure communications make them particularly well-suited for this career path.” Gretchen Bliss, director of cybersecurity programs at the University of Colorado at Colorado Springs (UCCS), speaks specifically to security execs on the matter: “If I were talking to a CISO, I’d say get your hands on a veteran. They understand the practical application piece, the operational piece, they have hands-on experience. They think things through, they know how to do diagnostics. They already know how to tackle problems.” ... And for veterans who haven’t yet mastered all that, Andrus advises “networking with people who actually do the job you want.” He also advises veterans to learn about the environment at the organization they seek to join, asking themselves whether they’d fit in. And he recommends connecting with others to ease the transition.


The 6 disciplines of strategic thinking

A strategic thinker is not just a good worker who approaches a challenge with the singular aim of resolving the problem in front of them. Rather, a strategic thinker looks at and elevates their entire ecosystem to achieve a robust solution. ... The first discipline is pattern recognition. A foundation of strategic thinking is the ability to evaluate a system, understand how all its pieces move, and derive the patterns they typically form. ... Watkins’s next discipline, and an extension of pattern recognition, is systems analysis. It is easy to get overwhelmed when breaking down the functional elements of a system. A strategic thinker avoids this by creating simplified models of complex patterns and realities. ... Mental agility is Watkins’s third discipline. Because the systems and patterns of any work environment are so dynamic, leaders must be able to change their perspective quickly to match the role they are examining. Systems evolve, people grow, and the larger picture can change suddenly. ... Structured problem-solving is a discipline you and your team can use to address any issue or challenge. The idea of problem-solving is self-explanatory; the essential element is the structure. Developing and defining a structure will ensure that the correct problem is addressed in the most robust way possible.


Why Vendor Relationships Are More Important Than Ever for CIOs

Trust is the necessary foundation, and it is built through open communication, solid performance, relevant experience, and proper security credentials and practices. "People buy from people they trust, no matter how digital everything becomes," says Thompson. "That human connection remains crucial, especially in tech where you're often making huge investments in mission-critical systems." ... An executive-level technology governance framework helps ensure effective vendor oversight. According to Malhotra, it should consist of five key components: business relationship management, enterprise technology investment, transformation governance, value capture, and the right culture and change management. Beneath the technology governance framework sits active vendor governance, which institutionalizes oversight across ten critical areas, including performance management, financial management, relationship management, risk management, and issues and escalations. The remaining areas cover work order management, resource management, contract and compliance management, a balanced scorecard across vendors, and principled spend and innovation.


Shadow Testing Superpowers: Four Ways To Bulletproof APIs

API contract testing is perhaps the most immediately valuable application of shadow testing. Traditional contract testing relies on mock services and schema validation, which can miss subtle compatibility issues. Shadow testing takes contract validation to the next level by comparing actual API responses between versions. ... Performance testing is another area where shadow testing shines. Traditional performance testing usually happens late in the development cycle, in dedicated environments, with synthetic loads that often don't reflect real-world usage patterns. ... Log analysis is often overlooked in traditional testing approaches, yet logs contain rich information about application behavior. Shadow testing enables sophisticated log comparisons that can surface subtle issues before they manifest as user-facing problems. ... Perhaps the most innovative application of shadow testing is in the security domain. Traditional security testing often happens too late in the development process, after code has already been deployed. Shadow testing enables a true shift left for security, allowing dynamic analysis against real traffic patterns. ... What makes these shadow testing approaches particularly valuable is their inherently low-maintenance nature.
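
A minimal sketch of the contract-comparison idea, assuming two placeholder base URLs and the widely used requests library; a production shadow-testing setup would mirror sampled live traffic at the proxy layer rather than issue fresh requests, but the comparison logic is similar in spirit.

    import requests

    BASELINE = "https://api.example.com"      # current production version
    CANDIDATE = "https://shadow.example.com"  # new version under test
    # (both URLs are placeholders for this sketch)

    def shadow_compare(path: str, params: dict | None = None) -> list[str]:
        """Send the same request to both versions and report contract-level
        differences; in real shadow testing, only the baseline response
        would ever reach users."""
        base = requests.get(f"{BASELINE}{path}", params=params, timeout=5)
        cand = requests.get(f"{CANDIDATE}{path}", params=params, timeout=5)

        diffs = []
        if base.status_code != cand.status_code:
            diffs.append(f"status: {base.status_code} != {cand.status_code}")
        try:
            b, c = base.json(), cand.json()
        except ValueError:
            return diffs + ["non-JSON body; skipping field comparison"]

        # Field-level contract check: keys dropped or newly added in the
        # candidate are exactly the subtle breaks mock-based tests miss.
        if isinstance(b, dict) and isinstance(c, dict):
            for key in b.keys() - c.keys():
                diffs.append(f"field removed in candidate: {key}")
            for key in c.keys() - b.keys():
                diffs.append(f"field added in candidate: {key}")
        return diffs

    # Example: replay one sampled request against both versions.
    # print(shadow_compare("/v1/orders", {"limit": 10}))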


Rethinking technology and IT's role in the era of agentic AI and digital labor

Rethinking technology and the role of IT will drive a shift from the traditional model to a business technology-focused model. One example is the shift from one large, dedicated IT team that traditionally handles an organization's technology needs, overseen and directed by the CIO, to more focused IT teams that perform strategic, high-value activities and help drive technology innovation strategy as Gen AI handles many routine IT tasks. Another shift will be in spending and budget allocation. Traditionally, the CIO manages the enterprise IT budget and its allocation. In the new model, spending on enterprise-wide IT investments continues to be assessed and guided by the CIO, while some enterprise technology investments will be governed and funded by the business units. ... Today, agentic AI is not just answering questions; it's creating. Agents take action autonomously. And that is changing everything about how technology-led enterprises must design, deploy, and manage new technologies moving forward. We are building self-driving autonomous businesses using agentic AI, where humans and machines work together to deliver customer success. However, giving software or machines the agency to act will require a new currency. Trust is the new currency of AI.


From Chaos to Control: Reducing Disruption Time During Cyber Incidents and Breaches

Cyber disruptions are no longer isolated incidents; they have ripple effects that extend across industries and geographic regions. In 2024, two high-profile events underscored the vulnerabilities in interconnected systems. The CrowdStrike IT outage resulted in widespread airline cancellations, impacting financial markets and customer trust, while the Change Healthcare ransomware attack disrupted claims processing nationwide, costing billions in financial damages. These cases emphasize why resilience professionals must proactively integrate automation and intelligence into their incident response strategies. ... Organizations need structured governance models that define clear responsibilities before, during, and after an incident. AI-driven automation enables proactive incident detection and streamlined responses. Automated alerts, digital action boards, and predefined workflows allow teams to act swiftly and decisively, reducing downtime and minimizing operational losses. Data is the foundation of effective risk and resilience management. When organizations ensure their data is reliable and comprehensive, they gain an integrated view that enhances visibility across business continuity, IT, and security teams. 


What does an AI consultant actually do?

AI consulting involves advising on, designing, and implementing artificial intelligence solutions. The spectrum is broad, ranging from process automation using machine learning models to setting up chatbots and performing complex analyses using deep learning methods. However, the definition of AI consulting goes beyond the purely technical perspective. It is an interdisciplinary approach that aligns technological innovation with business requirements. AI consultants are able to design technological solutions that are not only efficient but also make strategic sense. ... All in all, both technical and strategic thinking is required: unlike some other technology professions, AI consulting requires not only in-depth knowledge of algorithms and data processing but also strategic and communication skills. AI consultants talk to software development and IT departments as well as to management, product management, and employees from the relevant business area. They have to explain technical interrelationships clearly and comprehensibly so that the company can make decisions based on this knowledge. Since AI technologies are developing rapidly, continuous training is important; options range from online courses, boot camps, and certificate programs to workshops and conferences.


Building a cybersecurity strategy that survives disruption

The best strategies treat resilience as a core part of business operations, not just a security add-on. “The key to managing resilience is to approach it like an onion,” says James Morris, Chief Executive of The CSBR. “The best strategy is to be effective at managing the perimeter. This approach will allow you to get a level of control on internal and external forces which are key to long-term resilience.” That layered thinking should be matched by clearly defined policies and procedures. “Ensure that your ‘resilience’ strategy and policies are documented in detail,” Morris advises. “This is critical for response planning, but also for any legal issues that may arise. If it’s not documented, it doesn’t happen.” ... Move beyond traditional monitoring by implementing advanced, behaviour-based anomaly detection and AI-driven solutions to identify novel threats. Invest in automation to enhance the efficiency of detection, triage, and initial response tasks, while orchestration platforms enable coordinated workflows across security and IT tools, significantly boosting response agility. ... A good strategy starts with the idea that stuff will break. So you need things like segmentation, backups, and backup plans for your backup plans, along with alternate ways to get back up and running. Fast, reliable recovery is key. Just having backups isn’t enough anymore.
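
As a rough illustration of what behaviour-based detection means in practice, the sketch below keeps a rolling baseline of one metric and flags statistical outliers; the window size, threshold, and login-count figures are invented for illustration, and real products are considerably more sophisticated.

    from collections import deque
    from statistics import mean, stdev

    class BehaviourBaseline:
        """Toy behaviour-based detector: learn a rolling baseline of a
        metric (e.g., logins per hour per account) and flag departures.
        Window and threshold values are illustrative, not prescriptive."""

        def __init__(self, window: int = 48, threshold: float = 3.0):
            self.history = deque(maxlen=window)
            self.threshold = threshold  # flag values > N std devs from mean

        def observe(self, value: float) -> bool:
            """Record a new observation; return True if it looks anomalous."""
            anomalous = False
            if len(self.history) >= 10:  # need some history before judging
                mu, sigma = mean(self.history), stdev(self.history)
                if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                    anomalous = True
            self.history.append(value)
            return anomalous

    detector = BehaviourBaseline()
    hourly_logins = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 5, 90]  # sudden spike
    flags = [detector.observe(x) for x in hourly_logins]
    print(flags)  # the final spike is flagged once a baseline exists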


3 key features in Kong AI Gateway 3.10

For teams working with sensitive or regulated data, protecting personally identifiable information (PII) in AI workflows is not optional; it's essential for proper governance. Developers often use regex libraries or handcrafted filters to redact PII, but these DIY solutions are prone to error, inconsistent enforcement, and missed edge cases. Kong AI Gateway 3.10 introduces out-of-the-box PII sanitization, giving platform teams a reliable, enterprise-grade way to scrub sensitive information from prompts before they reach the model and, if needed, to reinsert the sanitized data into the response before it returns to the end user. ... As organizations adopt multiple LLM providers and model types, complexity grows quickly. Different teams may prefer OpenAI, Claude, or open-source models like Llama or Mistral, each with its own SDKs, APIs, and limitations. Kong AI Gateway 3.10 addresses this with universal API support and native SDK integration: developers can continue using the SDKs they already rely on (e.g., AWS, Azure) while Kong translates requests at the gateway level to interoperate across providers. This eliminates the need to rewrite app logic when switching models and simplifies centralized governance. The latest release also includes cost-based load balancing, enabling Kong to route requests based on token usage and pricing.
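
To see why hand-rolled sanitization is fragile, here is a minimal sketch of the DIY redact-and-reinsert pattern the article describes; the regex patterns and helper names are illustrative only, and this is not a representation of how Kong implements its sanitizer.

    import re

    # Sketch of the DIY redact-and-reinsert pattern teams hand-roll today,
    # which a gateway-level feature moves out of application code. The
    # patterns below are illustrative and far from exhaustive -- exactly
    # the weakness of the DIY approach.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def sanitize(prompt: str) -> tuple[str, dict[str, str]]:
        """Replace PII with placeholder tokens before the prompt leaves
        the trust boundary; keep a mapping for later reinsertion."""
        mapping: dict[str, str] = {}
        for label, pattern in PII_PATTERNS.items():
            for i, match in enumerate(pattern.findall(prompt)):
                token = f"<{label}_{i}>"
                mapping[token] = match
                prompt = prompt.replace(match, token)
        return prompt, mapping

    def reinsert(response: str, mapping: dict[str, str]) -> str:
        """Restore original values in the model's response, if needed."""
        for token, value in mapping.items():
            response = response.replace(token, value)
        return response

    clean, mapping = sanitize("Email jane@example.com about SSN 123-45-6789")
    print(clean)  # Email <EMAIL_0> about SSN <SSN_0>
    print(reinsert("Sent to <EMAIL_0>", mapping))

Even in this toy version, the email regex will miss obfuscated addresses and the mapping breaks if the model rewrites a token, which is the kind of edge case a gateway-level solution is meant to absorb.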


The future of IT operations with Dark NOC

From a Managed Service Provider (MSP) perspective, Dark NOC will shift the way IT operates today by making it more efficient, scalable, and cost-effective. It replaces the traditional NOC's manual-intensive work of continuously monitoring, diagnosing, and resolving issues across multiple customer environments. ... Another key capability Dark NOC gives MSPs is scalability: its analytics and automation allow it to manage thousands of endpoints without a proportional increase in engineering headcount. This enables MSPs to extend their service portfolios, onboard new customers, and increase profit margins while retaining a lean operational model. From a competitive standpoint, adopting Dark NOC lets MSPs differentiate themselves by offering proactive, AI-driven IT services that minimise downtime, enhance security, and maximise performance. Dark NOC helps MSPs provide premium service to customers at affordable price points while maintaining a healthy internal margin. ... One building block is cloud infrastructure monitoring and management, which provides real-time cloud resource monitoring and predictive insights; examples include AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite.