
Daily Tech Digest - October 21, 2025


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone


The teacher is the new engineer: Inside the rise of AI enablement and PromptOps

Enterprises should onboard AI agents as deliberately as they onboard people — with job descriptions, training curricula, feedback loops and performance reviews. This is a cross-functional effort across data science, security, compliance, design, HR and the end users who will work with the system daily. ... Don’t let your AI’s first “training” be with real customers. Build high-fidelity sandboxes and stress-test tone, reasoning and edge cases — then evaluate with human graders. ... As onboarding matures, expect to see AI enablement managers and PromptOps specialists in more org charts, curating prompts, managing retrieval sources, running eval suites and coordinating cross-functional updates. Microsoft’s internal Copilot rollout points to this operational discipline: Centers of excellence, governance templates and executive-ready deployment playbooks. These practitioners are the “teachers” who keep AI aligned with fast-moving business goals. ... In a future where every employee has an AI teammate, the organizations that take onboarding seriously will move faster, safer and with greater purpose. Gen AI doesn’t just need data or compute; it needs guidance, goals, and growth plans. Treating AI systems as teachable, improvable and accountable team members turns hype into habitual value.
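To make the eval-suite idea concrete, here is a minimal sketch in Python of a pre-deployment gate: canned scenarios stand in for the high-fidelity sandbox, and a simple rubric check stands in for human graders. All names here (Scenario, run_agent, grade) are hypothetical placeholders for illustration, not any vendor's API.

```python
# Minimal sketch of a pre-deployment eval gate for an AI agent.
# All names (Scenario, run_agent, grade) are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Scenario:
    prompt: str
    must_include: str  # a crude proxy for "correct tone/content"

SCENARIOS = [
    Scenario("A customer asks for a refund past the deadline.", "policy"),
    Scenario("A customer reports a security concern.", "escalate"),
]

def run_agent(prompt: str) -> str:
    # Stand-in for a real model call; replace with your provider's SDK.
    return f"Per our policy, we will escalate this: {prompt}"

def grade(response: str, scenario: Scenario) -> bool:
    # Human graders would replace or augment this rubric check.
    return scenario.must_include in response.lower()

def eval_gate(threshold: float = 0.9) -> bool:
    passed = sum(grade(run_agent(s.prompt), s) for s in SCENARIOS)
    score = passed / len(SCENARIOS)
    print(f"eval score: {score:.0%}")
    return score >= threshold

if __name__ == "__main__":
    ready = eval_gate()
    print("promote to production" if ready else "keep in sandbox")
```

In practice the grading function is the hard part; the value of the gate is that "done" is defined before the agent ever meets a real customer.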


How CIOs Can Unlock Business Agility with Modular Cloud Architectures

A modular cloud architecture is one that makes a variety of discrete cloud services available on demand. The services are hosted across multiple cloud platforms, and different units within the business can pick and choose among specific services to meet their needs. ... At a high level, the main challenge stemming from a modular cloud architecture is that it adds complexity to an organization's cloud strategy. The more cloud services the CIO makes available, the harder it becomes to ensure that everyone is using them in a secure, efficient, cost-effective way. This is why a pivot toward a modular cloud strategy must be accompanied by governance and management practices that keep these challenges in check. ... As they work to ensure that the business can consume a wide selection of cloud services efficiently and securely, IT leaders may take inspiration from a practice known as platform engineering, which has grown in popularity in recent years. Platform engineering is the establishment of approved IT solutions that a business's internal users can access on a self-service basis, usually via a type of portal known as an internal developer platform. Historically, organizations have used platform engineering primarily to provide software developers with access to development tools and environments, not to manage cloud services. But the same sort of approach could help to streamline access to modular, composable cloud solutions.
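As a rough illustration of the self-service pattern described above, the sketch below models an internal catalog of pre-approved cloud services with governance applied at request time. The catalog entries and field names are assumptions made for illustration, not any particular platform's schema.

```python
# Sketch of the self-service idea behind an internal developer platform:
# a catalog of pre-approved cloud services that teams provision on demand,
# with governance (approval status, cost tagging) enforced at request time.
# Catalog entries and fields are hypothetical.

CATALOG = {
    "postgres-small": {"provider": "aws", "approved": True},
    "object-store": {"provider": "gcp", "approved": True},
    "legacy-ftp": {"provider": "on-prem", "approved": False},
}

def provision(service: str, team: str, cost_center: str) -> dict:
    entry = CATALOG.get(service)
    if not entry or not entry["approved"]:
        raise ValueError(f"{service} is not in the approved catalog")
    # A real platform would call the cloud provider's API here.
    return {"service": service, "team": team,
            "cost_center": cost_center, "provider": entry["provider"]}

print(provision("postgres-small", "payments", "cc-1042"))
```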


8 platform engineering anti-patterns

Establishing a product mindset also helps drive improvement of the platform over time. “Start with a minimum viable platform to iterate and adapt based on feedback while also considering the need to measure the platform’s impact,” says Platform Engineering’s Galante. ... Top-down mandates for new technologies can easily turn off developers, especially when they alter existing workflows. Without the ability to contribute and iterate, the platform drifts from developer needs, prompting workarounds. ... “The feeling of being heard and understood is very important,” says Zohar Einy, CEO at Port, provider of a developer portal. “Users are more receptive to the portal once they know it’s been built after someone asked about their problems.” By performing user research and conducting developer surveys up front, platform engineers can discover the needs of all stakeholders and create platforms that mesh better with existing workflows and improve productivity. ... Although platform engineering case studies from large companies, like Spotify, Expedia, or American Airlines, look impressive on paper, it doesn’t mean their strategies will transfer well to other organizations, especially those with mid-size or small-scale environments. ... Platform engineering requires more energy than a simple rebrand. “I’ve seen teams simply being renamed from operations or infrastructure teams to platform engineering teams, with very little change or benefit to the organization,” says Paula Kennedy.


How Ransomware’s Data Theft Evolution is Rewriting Cyber Insurance Risk Models

Traditional cyber insurance risk models assume ransomware means encrypted files and brief business interruptions. The shift toward data theft creates complex claim scenarios that span multiple coverage lines and expose gaps in traditional policy structures. When attackers steal data rather than just encrypting it, the resulting claims can simultaneously trigger business interruption coverage, professional liability protection, regulatory defense coverage and crisis management. Each coverage line may have different limits, deductibles and exclusions, creating complicated interactions that claims adjusters struggle to parse. Modern business relationships are interconnected, which amplifies complications. A data breach at one organization can trigger liability claims from business partners, regulatory investigations across multiple jurisdictions, and contractual disputes with vendors and customers. Dependencies on third-party services create cascading exposures that traditional risk models fail to capture. ... The insurance implications are profound. Manual risk assessment processes cannot keep pace with the volume and sophistication of AI-enhanced attacks. Carriers still relying on traditional underwriting approaches face a fundamental mismatch of human-speed risk evaluation against machine-speed threat deployment.


Network security devices endanger orgs with ’90s era flaws

“Attackers are not trying to do the newest and greatest thing every single day,” watchTowr’s Harris explains. “They will do what works at scale. And we’ve now just seen that phishing has become objectively too expensive or too unsuccessful at scale to justify the time investment in deploying mailing infrastructure, getting domains and sender protocols in place, finding ways to bypass EDR, AV, sandboxes, mail filters, etc. It is now easier to find a 1990s-tier vulnerability in a border device where EDR typically isn’t deployed, exploit that, and then pivot from there.” ... “Identifying a command injection that is looking for a command string being passed to a system in some C or C++ code is not a terribly difficult thing to find,” Gross says. “But I think the trouble is understanding a really complicated appliance like these security network appliances. It’s not just like a single web application and that’s it.” This can also make it difficult for product developers themselves to understand the risks of a feature they add to one component if they don’t have a full understanding of the entire product architecture. ... Another problem? These appliances carry a lot of legacy code, some of it 10 years old or more. Plus, products and code bases inherited through acquisitions often mean the developers who originally wrote the code might be long gone.
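To see why Gross calls the pattern easy to spot in isolation, here is a toy Python sweep that flags C/C++ shell-execution calls whose argument is built at runtime. This is a sketch, not a real SAST engine; the hard part on a complex appliance is the cross-component data-flow analysis this deliberately omits.

```python
# Toy illustration of why "1990s-tier" command injection is easy to flag:
# a regex sweep for C/C++ calls that pass runtime data to a shell.
# Not a real static analyzer; it has no data-flow tracking at all.

import re

RISKY_CALL = re.compile(r'\b(system|popen|execl)\s*\(\s*([^)]*)\)')

def flag_risky_calls(c_source: str):
    findings = []
    for lineno, line in enumerate(c_source.splitlines(), 1):
        for match in RISKY_CALL.finditer(line):
            arg = match.group(2).strip()
            # A lone string literal is lower risk; anything built at
            # runtime (sprintf'd buffers, user input) deserves review.
            if not (arg.startswith('"') and arg.endswith('"')):
                findings.append((lineno, match.group(0)))
    return findings

sample = '''
snprintf(cmd, sizeof(cmd), "ping -c1 %s", user_host);
system(cmd);              /* classic injection sink */
system("ls /var/log");    /* literal: lower risk    */
'''

for lineno, call in flag_risky_calls(sample):
    print(f"line {lineno}: review {call}")
```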


When everything’s connected, everything’s at risk

Treat OT changes as business changes (because they are). Involve plant managers, safety managers, and maintenance leadership in risk decisions. Where possible, test all changes in a development environment that adequately models production. Schedule changes during planned downtime with rollbacks ready. Build visibility passively with read-only collectors and protocol-aware monitoring to create asset and traffic maps without requiring PLC access. ... No one can predict the future. However, if the past is an indicator of the future, adversaries will increasingly bypass devices and hijack cloud consoles, API tokens and remote management platforms to impact businesses on an industrial scale. Another area of risk is the firmware supply chain. Tiny devices often carry third-party code that we can’t easily patch. We’ll face more “patch by replacement” realities, where the only fix is swapping hardware. Additionally, machine identities at the edge, such as certificates and tokens, will outnumber humans by orders of magnitude. The lifecycle and privileges of those identities are the new perimeter. From a threat perspective, we will see an increasing number of ransomware attacks targeting physical disruption to increase leverage for the threat actors, as well as private 5G/smart facilities that, if misconfigured, propagate risk faster than any LAN ever has.
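The passive-visibility advice lends itself to a small example: the Python sketch below derives an asset inventory and traffic map purely from observed flow records, with nothing ever written toward a PLC. The flow tuples are hypothetical stand-ins for what a read-only collector on a SPAN port might emit.

```python
# Minimal sketch of passive OT visibility: build an asset inventory and
# traffic map from observed flows only, with no writes to any PLC.
# The flow tuples are illustrative stand-ins for sensor output.

from collections import defaultdict

# (src, dst, protocol) as a passive, protocol-aware sensor might report
flows = [
    ("10.0.1.5", "10.0.2.10", "modbus"),
    ("10.0.1.5", "10.0.2.11", "modbus"),
    ("10.0.3.7", "10.0.2.10", "https"),
]

assets = set()
talks_to = defaultdict(set)

for src, dst, proto in flows:
    assets.update((src, dst))
    talks_to[src].add((dst, proto))

print(f"{len(assets)} assets discovered passively")
for src, peers in talks_to.items():
    for dst, proto in sorted(peers):
        print(f"{src} -> {dst} [{proto}]")
```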


Software engineering foundations for the AI-native era

As developers begin composing software instead of coding line by line, they will need API-enabled composable components and services to stitch together. Software engineering leaders should begin by defining a goal to achieve a composable architecture that is based on modern multiexperience composable applications, APIs and loosely coupled API-first services. ... Software engineering leaders should support AI-ready data by organizing enterprise data assets for AI use. Generative AI is most useful when the LLM is paired with context-specific data. Platform engineering and internal developer portals provide the vehicles by which this data can be packaged, found and integrated by developers. The urgent demand for AI-ready data requires evolutionary changes to data management and upgrades to architecture, platforms, skills and processes. Critically, Model Context Protocol (MCP) needs to be considered. ... Software engineers can become risk-averse unless they are given the freedom, psychological safety and environment for risk taking and experimentation. Leaders must establish a culture of innovation where their teams are eager to experiment with AI technologies. This also applies in software product ownership, where experiments and innovation lead to greater optimization of the value delivered to customers.
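As a minimal sketch of pairing an LLM with context-specific data, the snippet below packages enterprise records so a developer can compose them into a prompt. The document store and function names are illustrative assumptions; in practice this data would be published through an internal developer portal and, increasingly, exposed via an MCP server.

```python
# Sketch of "AI-ready data": context-specific records packaged so a
# developer can pair them with an LLM prompt. DOCS and build_prompt are
# hypothetical; a real setup would sit behind a portal or MCP server.

DOCS = {
    "refund-policy": "Refunds are honored within 30 days of purchase.",
    "sla": "Priority-1 incidents are acknowledged within 15 minutes.",
}

def build_prompt(question: str, doc_key: str) -> str:
    context = DOCS[doc_key]  # retrieval step, trivialized for illustration
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Can I get a refund after six weeks?", "refund-policy"))
```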


What Does a 'Sovereign Cloud' Really Mean?

First, a sovereign cloud could be approached as a matter of procurement: Canada could shift its contract from US tech companies that currently dominate the approved list to non-American alternatives. At present, eight cloud service providers (CSPs) are approved for use by the Canadian government, seven of which are American. Accordingly, there is a clear opportunity to diversify procurement, particularly towards European CSPs, as suggested by the government’s ongoing discussions with France’s OVH Cloud. ... Second, a sovereign cloud could be defined as cloud infrastructure that is not only located in Canada and insulated from foreign legal access, but also owned by Canadian entities. Practically speaking, this would mean procuring services from domestic companies, a step the government has already taken with ThinkOn, the only non-American CSP on the government’s approved list. ... Third, perhaps true cloud sovereignty might require more direct state intervention and a publicly built and maintained cloud. The Canadian government could develop in-house capacities for cloud computing and exercise the highest possible degree of control over government data. A dedicated Crown corporation could be established to serve the government’s cloud computing needs. ... No matter how we approach it, cloud sovereignty will be costly.


Big Tech’s trust crisis: Why there is now the need for regulatory alignment

When companies deploy AI features primarily to establish market position rather than solve user problems, they create what might be termed ‘trust debt’ – a technical and social liability that compounds over time. This manifests in several ways, including degraded user experience, increased attack surfaces, and regulatory friction that ultimately impacts system performance and scalability. ... The emerging landscape of AI governance frameworks, from the EU AI Act to ISO 42001, shows an attempt to codify engineering best practices for managing algorithmic systems at scale. These standards address several technical realities, including bias in training data, security vulnerabilities in model inference, and intellectual property risks in data processing pipelines. Organisations implementing robust AI governance frameworks achieve regulatory compliance while adopting proven system design patterns that reduce operational risk. ... The technical implementation of trust requires embedding privacy and security considerations throughout the development lifecycle – what security engineers call ‘shifting left’ on governance. This approach treats regulatory compliance as architectural requirements that shape system design from inception. Companies that successfully integrate governance into their technical architecture find that compliance becomes a byproduct of good engineering practices which, over time, creates a series of sustainable competitive advantages.


The most sustainable data center is the one that’s already built: The business case for a ‘retrofit first’ mandate

From a sustainability standpoint, reusing and retrofitting legacy infrastructure is the single most impactful step our industry can take. Every megawatt of IT load that’s migrated into an existing site avoids the manufacturing, transport, and installation of new chillers, pumps, generators, piping, conduit, and switchgear and prevents the waste disposal associated with demolition. Sectors like healthcare, airports, and manufacturing have long proven that, with proper maintenance, mechanical and electrical systems can operate reliably for 30–50 years, and distribution piping can last a century. The data center industry – known for redundancy and resilience – can and should follow suit. The good news is that most data centers were built to last. ... When executed strategically, retrofits can reduce capital costs by 30–50 percent compared to greenfield construction, while accelerating time to market by months or even years. They also strengthen ESG reporting credibility, proving that sustainability and profitability can coexist. ... At the end of the day, I agree with Ms. Kass – the cleanest data center is the one that does not need to be built. For those that are already built, reusing and revitalizing the infrastructure we already have is not just a responsible environmental choice, it’s a sound business strategy that conserves capital, accelerates deployment, and aligns our industry’s growth with society’s expectations.

Daily Tech Digest - September 18, 2025


Quote for the day:

"When your life flashes before your eyes, make sure you’ve got plenty to watch.” -- Anonymous


The new IT operating model: cloud-managed networking as a strategic lever

Enterprises are navigating an environment where the complexity of IT is increasing exponentially. Hybrid work requires consistent connectivity across homes, offices, and campuses. Edge computing and IoT generate massive volumes of data at distributed sites. Security risks escalate as the attack surface grows. Traditional, hardware-centric approaches leave IT teams struggling to keep up. Managing dozens or hundreds of controllers, patching firmware manually, and troubleshooting issues site by site is not sustainable. Cloud-managed networking changes that equation. By centralizing management, applying AI-driven intelligence, and extending visibility across distributed environments, it enables IT to shift from reactive firefighting to proactive strategy. ... Enterprises adopting cloud-managed networking are making a decisive shift from complexity to clarity. Success requires more than technology alone. It demands a partner that understands how to translate advanced capabilities into measurable business outcomes. ... Cloud-managed networking is not just another IT trend. It is the operating model that will define enterprise technology for the next decade. By elevating the network from infrastructure to strategy, it enables organizations to move faster, stay secure, and innovate with confidence.


Why Shadow AI Is the Next Big Governance Challenge for CISOs

In many respects, shadow AI is a subset of a broader shadow IT problem. Shadow IT is an issue that emerged more than a decade ago, largely emanating from employee use of unauthorized cloud apps, including SaaS. Lohrmann noted that cloud access security broker (CASB) solutions were developed to deal with the shadow IT issue. These tools are designed to provide organizations with full visibility of what employees are doing on the network and on protected devices, while only allowing access to authorized instances. However, shadow AI presents distinct challenges that CASB tools are unable to adequately address. “Organizations still need to address other questions related to licensing, application sprawl, security and privacy policies, procedures and more ...,” Lohrmann noted. A key difference between IT and AI is the nature of data, the speed of adoption and the complexity of the underlying technology. In addition, AI is often integrated into existing IT systems, including cloud applications, making these tools more difficult to identify. Chuvakin added, “With shadow IT, unauthorized tools often leave recognizable traces – unapproved applications on devices, unusual network traffic or access attempts to restricted services. Shadow AI interactions, however, often occur entirely within a web browser or personal device, blending seamlessly with regular online activity or not leaving any trace on any corporate system at all.”
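One pragmatic first pass, sketched below under stated assumptions, is sweeping egress proxy logs for known generative-AI domains. The domain list and log format here are illustrative, and, as Chuvakin's point implies, this only catches traffic that actually transits a monitored corporate egress point.

```python
# Hedged sketch: a first-pass shadow-AI sweep over web proxy logs.
# Domain list and log format are illustrative assumptions; browser-based
# AI use on personal devices leaves no trace for this to find.

GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

proxy_log = [
    ("alice", "chat.openai.com", "POST"),
    ("bob", "intranet.corp.example", "GET"),
    ("carol", "claude.ai", "POST"),
]

hits = [(user, host) for user, host, _ in proxy_log if host in GENAI_DOMAINS]
for user, host in hits:
    print(f"possible unsanctioned AI use: {user} -> {host}")
```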


Cisco strengthens integrated IT/OT network and security controls

Melding IT and OT networking and security is not a new idea, but it’s one that has seen growing attention from Cisco. ... Cisco also added a new technology called AI-powered asset clustering to its Cyber Vision OT management suite. Cyber Vision keeps track of devices connected to an industrial network, builds a real-time map of how these devices talk to each other and to IT systems, and can detect abnormal behavior, vulnerabilities, or policy violations that could signal malware, misconfigurations, or insider threats, Cisco says. ... Another significant move that will help IT/OT integration is the planned integration of the management console for Cisco’s Catalyst and Meraki networks. That combination will allow IT and OT teams to see the same dashboard for industrial OT and IT enterprise/campus networks. Cyber Vision will feed into the dashboard along with other Cisco management offerings such as ThousandEyes, which gives customers a shared inventory of assets, traffic flows and security. “What we are focusing on is helping our customers have the secure networking foundation and architecture that lets IT teams and operational teams kind of have one fabric, one architecture, that goes from the carpeted spaces all the way to the far reaches of their OT network,” Butaney said.
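Cisco has not published how its asset clustering works, but a hedged sketch conveys the mechanical idea: group devices whose communication peer sets overlap, so operators see zones rather than thousands of individual endpoints. The greedy Jaccard grouping below is purely illustrative and not Cisco's algorithm.

```python
# Illustrative-only sketch of asset clustering: group devices with
# similar communication peer sets via a greedy Jaccard comparison.
# Not Cisco's algorithm; device names and threshold are assumptions.

peers = {
    "plc-1": {"hmi-1", "historian"},
    "plc-2": {"hmi-1", "historian"},
    "cam-1": {"nvr-1"},
}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

clusters: list[list[str]] = []
for device, peer_set in peers.items():
    for cluster in clusters:
        # Join the first cluster whose representative looks similar enough
        if jaccard(peer_set, peers[cluster[0]]) >= 0.5:
            cluster.append(device)
            break
    else:
        clusters.append([device])

print(clusters)  # [['plc-1', 'plc-2'], ['cam-1']]
```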


Global hiring risks: What you need to know about identity fraud and screening trends

Most organizations globally include criminal record checks in their pre-employment screening. Employment and education verifications are also common, especially in EMEA and APAC. ... “Employers that fail to strengthen their identity verification processes or overlook recurring discrepancy patterns could face costly consequences, from compliance failures to reputational harm,” said Euan Menzies, President and CEO of HireRight. ... More than three-quarters of businesses globally found at least one discrepancy in a candidate’s background over the past year. Thirteen percent reported finding one discrepancy for every five candidates screened. Employment verification remains the area where most inconsistencies are discovered, especially in APAC and EMEA. These discrepancies range from minor errors like incorrect dates to more serious issues such as fabricated job histories. ... Companies are increasingly adopting post-hire screening to address risks that emerge after someone is hired. In North America, only 38 percent of companies now say they do no post-hire screening, a sharp drop from 57 percent last year. Common post-hire checks include driver monitoring and periodic rescreening for regulated roles. These efforts help companies catch new issues such as undisclosed criminal activity, changes in legal eligibility to work, or evolving insider threats.


Doomprompting: Endless tinkering with AI outputs can cripple IT results

Some LLMs appear to be designed to encourage long-lasting conversation loops, with answers often spurring another prompt. ... “When an individual engineer is prompting an AI, they get a pretty good response pretty quick,” he says. “It gets in your head, ‘That’s pretty good; surely, I could get to perfect.’ And you get to the point where it’s the classic sunk-cost fallacy, where the engineer is like, ‘I’ve spent all this time prompting, surely I can prompt myself out of this hole.’” The problem often happens when the project lacks definitions of what a good result looks like, he adds. “Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.” ... Govindarajan has seen some IT teams get stuck in “doom loops” as they add more and more instructions to agents to refine the outputs. As organizations deploy multiple agents, constant tinkering with outputs can slow down deployments and burn through staff time, he says. “The whole idea of doomprompting is basically putting that instruction down and hoping that it works as you set more and more instructions, some of them contradicting with each other,” he adds. “It comes at the sacrifice of system intelligence.”
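The countermeasure Farmer describes, defining "done" before prompting begins, is easy to encode. Below is a sketch with a fixed acceptance test and an iteration budget; acceptance_test and refine_prompt are hypothetical stand-ins for a project-specific rubric and a real model call.

```python
# Sketch of the obvious countermeasure to doomprompting: agree on "done"
# up front, then cap iterations. acceptance_test and refine_prompt are
# hypothetical stand-ins for a team rubric and an LLM call.

MAX_ITERATIONS = 5

def acceptance_test(output: str) -> bool:
    # The definition of "good enough", fixed before any prompting starts.
    return "summary" in output and len(output) < 500

def refine_prompt(prompt: str, attempt: int) -> str:
    # Stand-in for a model call; returns fake output for illustration.
    return f"summary attempt {attempt} for: {prompt}"

def prompt_with_budget(prompt: str) -> str | None:
    for attempt in range(1, MAX_ITERATIONS + 1):
        output = refine_prompt(prompt, attempt)
        if acceptance_test(output):
            return output  # call it done; resist one more tweak
    return None  # budget spent: escalate or rethink, don't keep looping

print(prompt_with_budget("quarterly incident report"))
```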


Vanishing Public Record Makes Enterprise Data a Strategic Asset

“We are rapidly running out of public data that is credible and usable. More and more enterprises will start to assign value to their data and go beyond partnerships to monetize it. For example, wind measurements captured by a wind turbine company could be helpful to many businesses that are not competitors,” said Olga Kupriyanova, principal consultant of AI and data engineering at ISG. ... "We’re entering a defining moment in AI where access to reliable, scalable, and ethical data is quickly becoming the central bottleneck, and also the most valuable asset. As legal and regulatory pressure tightens access to public data, due to copyright lawsuits, privacy concerns, or manipulation of open data repositories, enterprises are being forced to rethink where their AI advantage will come from,” said Farshid Sabet, CEO and co-founder at Corvic AI, developer of a GenAI management platform. ... The economic consequences of such data loss are already visible. Analysts estimate that U.S. public data underpinned nearly $750 billion of business activity as recently as 2022, according to the Department of Commerce. The loss of such data blinds companies that build models for everything from supply chain forecasting to investment strategy and predictions.


The Architecture of Responsible AI: Balancing Innovation and Accountability

The field of AI governance suffers from what Mackenzie et al. reaffirm as the “principal-agent problem,” where one party (the principal) delegates tasks to another party (the agent). But their interests are not perfectly aligned, leading to potential conflicts and inefficiencies. ... Architects occupy a unique position in this landscape. Unlike regulators who may impose constraints post-design, architects work at the intersection of possibility and constraint. They must balance competing requirements, such as performance and privacy, efficiency and equity, speed and safety, within coherent system designs. Every architectural decision must embed values, priorities, and assumptions about how systems should behave. ... Current AI guidance suffers from systematic weaknesses: evidence quality is sacrificed for speed, commercial interests masquerade as objective advice, and some perspectives dominate while broader stakeholder voices remain unheard. ... Architects, being well-placed to bridge the gap between strategy and technology, hold a key role in establishing the principles that govern how systems behave, interact, and evolve. In the context of AI, this principle set extends beyond technical design. It encompasses the ethical, social, and legal aspects as well.


AI will make workers ‘busier in the future’ – so what’s the point exactly?

“I have to admit that I’m afraid to say that we are going to be busier in the future than now,” he told host Liz Claman. “And the reason for that is because a lot of different things that take a long time to do are now faster to do. I’m always waiting for work to get done because I’ve got more ideas.” ... “The more productive we are, the more opportunity we get to pursue new ideas,” Huang continued. Reading between the lines here, it seems the so-called efficiency gains afforded by AI will mean workers have more work dumped in their laps – onto the next task, no rest for the wicked, etc. Huang’s comments run counter to the prevailing sentiment among big tech executives on exactly what AI will deliver for both enterprises and individual workers. ... We’ve all read the marketing copy and heard it regurgitated by tech leaders on podcasts and keynote stages – AI will allow us to focus on the “more rewarding” aspects of our jobs. They’ve never fully explained what this entails, or how it will pan out in the workplace. To be quite honest, I don’t think they know what it means. Marketing probably made it up and they’ve stuck with it. ... Will we be busier spending time on those rewarding aspects of our jobs? I have to say, I’m doubtful. The reality is that workers will be pulled into other tasks and merely end up drowning in the same cumbersome workloads they’ve been dealing with since the pandemic.


Building Safer Digital Experiences Through Robust Testing Practices

Secure software testing forms the bedrock of resilient applications, proactively uncovering flaws before they become critical. Early testing practices can significantly reduce risks, costs, and exposure to threats. According to Global Market Insights, the growing number and size of data breaches have increased the need for security testing services. Organizations that heavily use security AI and automation save an average of USD 1.76 million compared to those that don’t. About 51% plan to increase their security spending. Early integration of techniques like Static Application Security Testing (SAST) can detect vulnerabilities in existing code. It can also help to fix bugs during development. ... Organizations must verify that their systems handle personal data securely and comply with global regulations like GDPR and CCPA. Testing ensures sensitive information is protected from leaks or unauthorized use. Americans are highly concerned about how companies use their private data. ... Stress testing evaluates how applications perform under extreme loads. It helps identify potential failures in scalability, response times, and resource management. Vulnerability assessments concentrate on uncovering security gaps. Verified Market Reports notes that, after recent financial crises, governments are putting stronger emphasis on stress testing.
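To ground the stress-testing point, here is a minimal Python harness that drives a function concurrently and reports latency percentiles. handle_request is a placeholder for the system under test; a real harness would drive HTTP endpoints and vary load over time.

```python
# Minimal stress-test sketch: hammer a placeholder handler concurrently
# and report latency percentiles. handle_request is a stand-in for the
# system under test, not a real service.

import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> float:
    start = time.perf_counter()
    time.sleep(0.01)  # simulated work
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(handle_request, range(500)))

latencies.sort()
print(f"p50: {statistics.median(latencies)*1000:.1f} ms")
print(f"p99: {latencies[int(len(latencies)*0.99)]*1000:.1f} ms")
```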


Prompt Engineering Is Dead – Long Live PromptOps

PromptOps is gaining traction rapidly because it has the potential to address major challenges in the use of LLMs, such as prompt drift and suboptimal output. Yet incorporating PromptOps effectively into an organization is far from simple, requiring a structured and clear process, the right tools, and a mindset that enables collaboration and effective centralization. Digging deeper into what PromptOps is, why it is needed, and how it can be implemented effectively can help companies to find the right approach when incorporating this methodology for improving their LLM applications usage. ... Before PromptOps is implemented, an organization typically has prompts scattered across multiple teams and tools, with no structured management in place. The first stage of implementing PromptOps involves gathering every detail on LLM applications usage within an organization. It is essential to understand precisely which prompts are being used, by which teams, and with which models. The next stage is to build consistency into this practice by incorporating versioning and testing. Adding secure access control at this stage is also important, in order to ensure only those who need it have access to prompts. With these practices in place, organizations will be well-positioned to introduce cross-model design and embed core compliance and security practices into all prompt crafting.