
Daily Tech Digest - July 29, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


AI Skills Are in High Demand, But AI Education Is Not Keeping Up

There’s already a big gap between how many AI workers are needed and how many are available, and it’s only getting worse. The report says the U.S. was short more than 340,000 AI and machine learning workers in 2023. That number could grow to nearly 700,000 by 2027 if nothing changes. Faced with limited options in traditional higher education, most learners are taking matters into their own hands. According to the report, “of these 8.66 million people learning AI, 32.8% are doing so via a structured and supervised learning program, the rest are doing so in an independent manner.” Even within structured programs, very few involve colleges or universities. As the report notes, “only 0.2% are learning AI via a credit-bearing program from a higher education institution,” while “the other 99.8% are learning these skills from alternative education providers.” That includes everything from online platforms to employer-led training — programs built for speed, flexibility, and real-world use, rather than degrees. College programs in AI are growing, but they’re still not reaching enough people. Between 2018 and 2023, enrollment in AI and machine learning programs at U.S. colleges went up nearly 45% each year. Even with that growth, these programs serve only a small slice of learners — most people are still turning to other options.


Why chaos engineering is becoming essential for enterprise resilience

Enterprises should treat chaos engineering as a routine practice, just like sports teams before every game. These groups would never participate in matches without understanding their opponent or ensuring they are in the best possible position to win. They train under pressure, run through potential scenarios, and test their plays to identify the weaknesses of their opponents. This same mindset applies to enterprise engineering teams preparing for potential chaos in their environments. By purposely simulating disruptions like server outages, latency, or dropped connections, or by identifying bugs and poor code, enterprises can position themselves to perform at their best when these scenarios occur in real life. They can adopt proactive approaches to detecting vulnerabilities, instituting recovery strategies, building trust in systems and, in the end, improving their overall resilience. ... Additionally, chaos engineering can help improve scalability within the organisation. Enterprises are constantly seeking ways to grow and enhance their apps or platforms so that more and more end-users can see the benefits. By doing this, they can remain competitive and generate more revenue. Yet, if there are any cracks within the facets or systems that power their apps or platforms, it can be extremely difficult to scale and deliver value to both customers and the organisation.
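
To make the idea concrete, here is a minimal, hypothetical sketch of fault injection in Python: a wrapper that randomly adds latency or drops calls to a downstream service so a team can verify that retries, timeouts, and alerts behave as designed. Names and probabilities are illustrative, not taken from any specific chaos tool.

```python
import random
import time

class ChaosProxy:
    """Wraps a service call and injects faults with configurable probability.
    A toy illustration; real chaos tools inject failures at the network or
    infrastructure layer rather than in application code."""

    def __init__(self, call, latency_prob=0.1, error_prob=0.05, max_delay_s=0.25):
        self.call = call
        self.latency_prob = latency_prob
        self.error_prob = error_prob
        self.max_delay_s = max_delay_s

    def __call__(self, *args, **kwargs):
        if random.random() < self.latency_prob:
            time.sleep(random.uniform(0, self.max_delay_s))  # simulated latency spike
        if random.random() < self.error_prob:
            raise ConnectionError("chaos: simulated dropped connection")
        return self.call(*args, **kwargs)

def fetch_inventory(sku):
    return {"sku": sku, "qty": 42}  # stand-in for a real downstream call

chaotic_fetch = ChaosProxy(fetch_inventory)
for _ in range(100):
    try:
        chaotic_fetch("ABC-123")
    except ConnectionError:
        pass  # the system under test should degrade gracefully here
```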


Fractional CXOs: A New Model for a C-Everything World

Fractional leadership isn’t a new idea—it’s long been part of the advisory board and consulting space. But what’s changed is its mainstream adoption. Companies are now slotting in fractional leaders not just for interim coverage or crisis management, but as a deliberate strategy for agility and cost-efficiency. It’s not just companies benefiting either. Many high-performing professionals are choosing the fractional path because it gives them freedom, variety, and a more fulfilling way to leverage their skills without being tied down to one company or role. For them, it’s not just about fractional time—it’s about full-spectrum opportunity. ... Whether you’re a company executive exploring options or a leader considering a lifestyle pivot, here are the biggest advantages of fractional CxOs:
Strategic Agility: Need someone to lead a transformation for 6–12 months? Need guidance scaling your data team? A fractional CxO lets you dial in the right leadership at the right time.
Cost Containment: You pay for what you need, when you need it. No long-term employment contracts, no full comp packages, no redundancy risk.
Experience Density: Most fractional CxOs have deep domain expertise and have led across multiple industries. That cross-pollination of experience can bring unique insights and fast-track solutions.


Cyberattacks reshape modern conflict & highlight resilience needs

Governments worldwide are responding to the changing threat landscape. The United States, European Union, and NATO have increased spending on cyber defence and digital threat-response measures. The UK's National Cyber Force has broadened its recruitment initiatives, while the European Union has introduced new cyber resilience strategies. Even countries with neutral status, such as Switzerland, have begun investing more heavily in cyber intelligence. ... Critical infrastructure encompasses power grids, water systems, and transport networks. These environments often use operational technology (OT) networks that are separated from the internet but still have vulnerabilities. Attackers typically exploit mechanisms such as phishing, infected external drives, or unsecured remote access points to gain entry. In 2024, a group linked to Iran, called CyberAv3ngers, breached several US water utilities by targeting internet-connected control systems, raising risks of water contamination. ... Organisations are advised against bespoke security models, with tried and tested frameworks such as NIST CSF, OWASP SAMM, and ISO standards cited as effective guides for structuring improvement. The statement continues, "Like any quality control system it is all about analysis of the situation and iterative improvements. Things evolve slowly until they happen all at once."


The trials of HR manufacturing: AI in blue-collar rebellion

The challenge of automation isn't just technological, it’s deeply human. How do you convince someone who has operated a ride in your park for almost two decades, who knows every sound, every turn, every lever by heart, that the new sleek control panel is an upgrade and not a replacement? That the machine learning model isn’t taking their job; it’s opening doors to something better? For many workers, the introduction of automation doesn’t feel like innovation but like erasure. A line shuts down. A machine takes over. A skill that took them years to master becomes irrelevant overnight. In this reality, HR’s role extends far beyond workflow design; it now must navigate fear, build trust, and lead people through change with empathy and clarity. Upskilling entails more than just access to platforms that educate you. It’s about building trust, ensuring relevance, and respecting time. Workers aren’t just asking how to learn, but why. Workers want clarity on their future career paths. They’re asking, “Where is this ride taking me?” As Joseph Fernandes, SVP of HR for South Asia at Mastercard, states, change management should “emphasize how AI can augment employee capabilities rather than replace them.” Additionally, HR must address the why of training, not just the how. Workers don’t want training videos; rather, they want to know what the next five years of their job look like. 


What Do DevOps Engineers Think of the Current State of DevOps?

The toolchain is consolidating. CI/CD, monitoring, compliance, security and cloud provisioning tools are increasingly bundled or bridged in platform layers. DevOps.com’s coverage tracks this trend: It’s no longer about separate pipelines, it’s about unified DevOps platforms. CloudBees Unify is a prime example: Launched in mid‑2025, it unifies governance across toolchains without forcing migration — an AI‑powered operating layer over existing tools. ... DevOps education and certification remain fragmented. Traditional certs — Kubernetes (CKA, CKAD), AWS/Azure/GCP and DevOps Foundation — remain staples. But DevOps engineers express frustration: Formal learning often lags behind real‑world tooling, AI integration, or platform engineering practices. Many engineers now augment certs with hands‑on labs, bootcamps and informal community learning. Organizations are piloting internal platform engineer training programs to bridge skills gaps. Still, a mismatch persists between the modern tech stack and classroom syllabi. ... DevOps engineers today stand at a crossroads: Platform engineering and cloud tooling have matured into the ecosystem, AI is no longer experimentation but embedded flow. Job markets are shifting, but real demand remains strong — for creative, strategic and adaptable engineers who can shepherd tools, teams and AI together into scalable delivery platforms.


7 enterprise cloud strategy trends shaking up IT today

Vertical cloud platforms aren’t just generic cloud services — they’re tailored ecosystems that combine infrastructure, AI models, and data architectures specifically optimized for sectors such as healthcare, manufacturing, finance, and retail, says Chandrakanth Puligundla, a software engineer and data analyst at grocery store chain Albertsons. What makes this trend stand out is how quickly it bridges the gap between technical capabilities and real business outcomes, Puligundla says. ... Organizations must consider what workloads go where and how that distribution will affect enterprise performance, reduce unnecessary costs, and help keep workloads secure, says Tanuj Raja, senior vice president, hyperscaler and marketplace, North America, at IT distributor and solution aggregator TD SYNNEX. In many cases, needs are driving a move toward a hybrid cloud environment for more control, scalability, and flexibility, Raja says. ... We’re seeing enterprises moving past the assumption that everything belongs in the cloud, says Cache Merrill, founder of custom software development firm Zibtek. “Instead, they’re making deliberate decisions about workload placement based on actual business outcomes.” This transition represents maturity in the way enterprises think about making technology decisions, Merrill says. He notes that the initial cloud adoption phase was driven by a fear of being left behind. 


Beyond the Rack: 6 Tips for Reducing Data Center Rental Costs

One of the simplest ways to reduce spending on data center rentals is to choose data centers located in regions where data center space costs the least. Data center rental costs, which are often measured in terms of dollars-per-kilowatt, can vary by a factor of ten or more between different parts of the world. Perhaps surprisingly, regions with the largest concentrations of data centers tend to offer the most cost-effective rates, largely due to economies of scale. ... Another key strategy for cutting data center rental costs is to consolidate servers. Server consolidation reduces the total number of servers you need to deploy, which in turn minimizes the space you need to rent. The challenge, of course, is that consolidating servers can be a complex process, and businesses don’t always have the means to optimize their infrastructure footprint overnight. But if you deploy more servers than necessary, they effectively become a form of technical debt that costs more and more the longer you keep them in service. ... As with many business purchases, the list price for data center rent is often not the lowest price that colocation operators will accept. To save money, consider negotiating. The more IT equipment you have to deploy, the more successful you’ll likely be in locking in a rental discount. 
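
The per-kilowatt framing makes the potential savings easy to estimate. A back-of-the-envelope example with made-up rates (a sketch, not market data):

```python
# Hypothetical rates to show how per-kilowatt pricing compounds over a year.
kw = 20                      # rented footprint
low, high = 120, 400         # $/kW/month in a cheap hub vs. an expensive market
annual_gap = kw * (high - low) * 12
print(f"annual difference: ${annual_gap:,}")  # -> annual difference: $67,200
```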


Ransomware will thrive until we change our strategy

We need to remember that those behind ransomware attacks are part of organized criminal gangs. These are professional criminal enterprises, not lone hackers, with access to global infrastructures, safe havens to operate from, and laundering mechanisms to clean their profits. ... Disrupting ransomware gangs isn’t just about knocking a website or a dark marketplace offline. It requires trained personnel, international legal instruments, strong financial intelligence, and political support. It also takes time, which means political patience. We can’t expect agencies to dismantle global criminal networks with only short-term funding windows and reactive mandates. ... The problem of ransomware, or indeed cybercrime in general, is not just about improving how organizations manage their cybersecurity, we also need to demand better from the technology providers that those organizations rely on. Too many software systems, including ironically cybersecurity solutions, are shipped with outdated libraries, insecure default settings, complex patching workflows, and little transparency around vulnerability disclosure. Customers have been left to carry the burden of addressing flaws they didn’t create and often can’t easily fix. This must change. Secure-by-design and secure-by-default must become reality, and not slogans on a marketing slide or pinkie-promises that vendors “take cybersecurity seriously”.


The challenges for European data sovereignty

The false sense of security created by the physical storage of data in European data centers of US companies deserves critical consideration. Many organizations assume that geographical storage within the EU automatically means that data is protected by European law. In reality, the physical location is of little significance when legal control is in the hands of a foreign entity. After all, the CLOUD Act focuses on the nationality and legal status of the provider, not on the place of storage. This means that data in Frankfurt or Amsterdam may be accessible to US authorities without the customer’s knowledge. Relying on European data centers as being GDPR-compliant and geopolitically neutral by definition is therefore misplaced. ... European procurement rules often do not exclude foreign companies such as Microsoft or Amazon, even if they have a branch in Europe. This means that US providers compete for strategic digital infrastructure, while Europe wants to position itself as autonomous. The Dutch government recently highlighted this challenge and called for an EU-wide policy that combats digital dependency and offers opportunities for European providers without contravening international agreements on open procurement.

Daily Tech Digest - May 23, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you them feel." -- Mary Kay Ash


MCP, ACP, and Agent2Agent set standards for scalable AI results

“Without standardized protocols, companies will not be able to reap the maximum value from digital labor, or will be forced to build interoperability capabilities themselves, increasing technical debt,” he says. Protocols are also essential for AI security and scalability, because they will enable AI agents to validate each other, exchange data, and coordinate complex workflows, Lerhaupt adds. “The industry can build more robust and trustworthy multi-agent systems that integrate with existing infrastructure, encouraging innovation and collaboration instead of isolated, fragmented point solutions,” he says. ... ACP is “a universal protocol that transforms the fragmented landscape of today’s AI agents into inter-connected teammates,” writes Sandi Besen, ecosystem lead and AI research engineer at IBM Research, in Towards Data Science. “This unlocks new levels of interoperability, reuse, and scale.” ACP uses standard HTTP patterns for communication, making it easy to integrate into production, compared to JSON-RPC, which relies on more complex methods, Besen says. ... Agent2Agent, supported by more than 50 Google technology partners, will allow IT leaders to string a series of AI agents together, making it easier to get the specialized functionality their organizations need, Ensono’s Piazza says. Both ACP and Agent2Agent, with their focus on connecting AI agents, are complementary protocols to the model-centric MCP, their creators say.
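
The practical advantage attributed to ACP, plain HTTP rather than JSON-RPC, is easy to picture. Below is a hypothetical sketch of one agent invoking another over HTTP; the endpoint path and payload shape are invented for illustration and are not the actual ACP schema.

```python
import json
import urllib.request

AGENT_URL = "http://localhost:8000/runs"  # hypothetical agent endpoint

def invoke_agent(agent_name: str, user_input: str) -> dict:
    """Call a peer agent using ordinary HTTP POST semantics."""
    payload = json.dumps({"agent": agent_name, "input": user_input}).encode()
    req = urllib.request.Request(
        AGENT_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# result = invoke_agent("research-agent", "Summarize Q2 churn drivers")
```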


It’s Time to Get Comfortable with Uncertainty in AI Model Training

“We noticed that some uncertainty models tend to be overconfident, even when the actual error in prediction is high,” said Bilbrey Pope. “This is common for most deep neural networks. But a model trained with SNAP gives a metric that mitigates this overconfidence. Ideally, you’d want to look at both prediction uncertainty and training data uncertainty to assess your overall model performance.” ... “AI should be able to accurately detect its knowledge boundaries,” said Choudhury. “We want our AI models to come with a confidence guarantee. We want to be able to make statements such as ‘This prediction provides 85% confidence that catalyst A is better than catalyst B, based on your requirements.’” In their published study, the researchers chose to benchmark their uncertainty method with one of the most advanced foundation models for atomistic materials chemistry, called MACE. The researchers calculated how well the model is trained to calculate the energy of specific families of materials. These calculations are important to understanding how well the AI model can approximate the more time- and energy-intensive methods that run on supercomputers. The results show what kinds of simulations can be calculated with confidence that the answers are accurate. 
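
One generic way to obtain this kind of uncertainty signal (a sketch of the common ensemble approach, not the SNAP-trained metric described above) is to train several models independently and treat their disagreement as the uncertainty estimate:

```python
from statistics import mean, stdev

def ensemble_predict(models, x):
    preds = [m(x) for m in models]
    return mean(preds), stdev(preds)  # high spread -> low confidence

# Toy stand-ins for independently trained models of, say, material energy.
models = [lambda x: 0.98 * x, lambda x: 1.03 * x, lambda x: 1.00 * x]
energy, uncertainty = ensemble_predict(models, 10.0)
print(f"prediction {energy:.2f} +/- {uncertainty:.2f}")
```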


Don’t let AI integration become your weakest link

Ironically, integration meant to boost efficiency can stifle innovation. Once a complex web of AI-interconnected systems exists, adding tools or modifying processes becomes a major architectural undertaking, not plug-and-play. It requires understanding interactions with central AI logic, potentially needing complex model re-training, integration point redevelopment, and extensive regression testing to avoid destabilisation. ... When AI integrates and automates decisions and workflows across systems based on learned patterns, it inherently optimises for the existing or dominant processes observed in the training data. While efficiency is the goal, there’s a tangible risk of inadvertently enforcing uniformity and suppressing valuable diversity in approaches. Different teams might have unique, effective methods deviating from the norm. An AI trained on the majority might flag these as errors, subtly discouraging creative problem-solving or context-specific adaptations. ... Feeding data from multiple sensitive systems (CRM, HR, finance, and communications) into central AI dramatically increases the scope and sensitivity of data processed and potentially exposed. Each integration point is another vector for data leakage or unauthorised access. Sensitive customer, employee, and financial data may flow across more boundaries and be aggregated in new ways, increasing the surface area for breaches or misuse.


Beyond API Uptime: Modern Metrics That Matter

A minuscule delay (measurable in API response times) in processing API requests can be as painful to a customer as a major outage. User behavior and expectations have evolved, and performance standards need to keep up. Traditional API monitoring tools are stuck in a binary paradigm of up versus down, despite the fact that modern, cloud native applications live in complex, distributed ecosystems. ... Measuring performance from multiple locations provides a more balanced and realistic view of user experience and can help uncover metrics you need to monitor, like location-specific latency: What’s fast in San Francisco might be slow in New York and terrible in London. ... The real value of IPM comes from how its core strengths, such as proactive synthetic testing, global monitoring agents, rich analytics with percentile-based metrics and experience-level objectives, interact and complement each other, Vasiliou told me. “IPM can proactively monitor single API URIs [uniform resource identifiers] or full API multistep transactions, even when users are not on your site or app. Many other monitors can also do this. It is only when you combine this with measuring performance from multiple locations, granular analytics and experience-level objectives that the value of the whole is greater than the sum of its parts,” Vasiliou said.
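
A quick sketch of why percentile metrics from multiple locations beat a single up/down check (synthetic response times, invented SLO):

```python
from statistics import quantiles

# Synthetic response-time samples (ms) per monitoring location.
samples = {
    "san-francisco": [112, 98, 105, 120, 96, 101, 340, 99],
    "new-york":      [150, 162, 149, 155, 158, 170, 151, 164],
    "london":        [310, 305, 298, 325, 600, 302, 315, 330],
}

SLO_MS = 250  # an example experience-level objective
for location, latencies in samples.items():
    p95 = quantiles(latencies, n=20)[-1]  # 95th percentile, not the average
    status = "OK" if p95 <= SLO_MS else "BREACH"
    print(f"{location:14s} p95={p95:6.1f} ms -> {status}")
```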


Agentic AI shaping strategies and plans across sectors as AI agents swarm

“Without a robust identity model, agents can’t truly act autonomously or securely,” says the post. “The MCP-I (I for Identity) specification addresses this gap – introducing a practical, interoperable approach to agentic identity.” Vouched also offers its turnkey SaaS Vouched MCP Identity Server, which provides easy-to-integrate APIs and SDKs for enterprises and developers to embed strong identity verification into agent systems. While the Agent Reputation Directory and MCP-I specification are open and free to the public, the MCP Identity Server is available as a commercial offering. “Thinking through strong identity in advance is critical to building an agentic future that works,” says Peter Horadan, CEO of Vouched. “In some ways we’ve seen this movie before. For example, when our industry designed email, they never anticipated that there would be bad email senders. As a result, we’re still dealing with spam problems 50 years later.” ... An early slide outlining definitions tells us that AI agents are ushering in a new definition of the word “tools,” which he calls “one of the big changes that’s happening this year around agentic AI, giving the ability to LLMs to actually do and act with permission on behalf of the user, interact with third-party APIs,” and so on. Tools aside, what are the challenges for agentic AI? “The biggest one is security,” he says.


Optimistic Security: A New Era for Cyber Defense

Optimistic cybersecurity involves effective NHI management that reduces risk, improves regulatory compliance, enhances operational efficiency and provides better control over access management. This management strategy goes beyond point solutions such as secret scanners, offering comprehensive protection throughout the entire lifecycle of these identities. ... Furthermore, a proactive attitude towards cybersecurity can lead to potential cost savings by automating processes such as secrets rotation and NHIs decommissioning. By utilizing optimistic cybersecurity strategies, businesses can transform their defensive mechanisms, preparing for a new era in cyber defense. By integrating Non-Human Identities and Secrets Management into their cloud security control strategies, organizations can fortify their digital infrastructure, significantly reducing security breaches and data leaks. ... Implementing an optimistic cybersecurity approach is no less than a transformation in perspective. It involves harnessing the power of technology and human ingenuity to build a resilient future. With optimism at its core, cybersecurity measures can become a beacon of hope rather than a looming threat. By welcoming this new era of cyber defense with open arms, organizations can build a secure digital environment where NHIs and their secrets operate seamlessly, playing a pivotal role in enhancing overall cybersecurity.


Identity Security Has an Automation Problem—And It's Bigger Than You Think

The data reveals a persistent reliance on human action for tasks that should be automated across the identity security lifecycle. 41% of end users still share or update passwords manually, using insecure methods like spreadsheets, emails, or chat tools. These credentials are rarely updated or monitored, increasing the likelihood of credential misuse or compromise. Nearly 89% of organizations rely on users to manually enable MFA in applications, despite MFA being one of the most effective security controls. Without enforcement, protection becomes optional, and attackers know how to exploit that inconsistency. 59% of IT teams handle user provisioning and deprovisioning manually, relying on ticketing systems or informal follow-ups to grant and remove access. These workflows are slow, inconsistent, and easy to overlook—leaving organizations exposed to unauthorized access and compliance failures. ... According to the Ponemon Institute, 52% of enterprises have experienced a security breach caused by manual identity work in disconnected applications. Most of them had four or more. The downstream impact was tangible: 43% reported customer loss, and 36% lost partners. These failures are predictable and preventable, but only if organizations stop relying on humans to carry out what should be automated. Identity is no longer a background system. It's one of the primary control planes in enterprise security.


Critical infrastructure under attack: Flaws becoming weapon of choice

“Attackers have leaned more heavily on vulnerability exploitation to get in quickly and quietly,” said Dray Agha, senior manager of security operations at managed detection and response vendor Huntress. “Phishing and stolen credentials play a huge role, however, and we’re seeing more and more threat actors target identity first before they probe infrastructure.” James Lei, chief operating officer at application security testing firm Sparrow, added: “We’re seeing a shift in how attackers approach critical infrastructure in that they’re not just going after the usual suspects like phishing or credential stuffing, but increasingly targeting vulnerabilities in exposed systems that were never meant to be public-facing.” ... “Traditional methods for defense are not resilient enough for today’s evolving risk landscape,” said Andy Norton, European cyber risk officer at cybersecurity vendor Armis. “Legacy point products and siloed security solutions cannot adequately defend systems against modern threats, which increasingly incorporate AI. And yet, too few organizations are successfully adapting.” Norton added: “It’s vital that organizations stop reacting to cyber incidents once they’ve occurred and instead shift to a proactive cybersecurity posture that allows them to eliminate vulnerabilities before they can be exploited.”


Fundamentals of Data Access Management

An important component of an organization’s data management strategy is controlling access to the data to prevent data corruption, data loss, or unauthorized modification of the information. The fundamentals of data access management are especially important as the first line of defense for a company’s sensitive and proprietary data. Data access management protects the privacy of the individuals to which the data pertains, while also ensuring the organization complies with data protection laws. It does so by preventing unauthorized people from accessing the data, and by ensuring those who need access can reach it securely and in a timely manner. ... Appropriate data access controls improve the efficiency of business processes by limiting the number of actions an employee can take. This helps simplify user interfaces, reduce database errors, and automate validation, accuracy, and integrity checks. By restricting the number of entities that have access to sensitive data, or permission to alter or delete the data, organizations reduce the likelihood of errors being introduced while enhancing the effectiveness of their real-time data processing activities. ... Becoming a data-driven organization requires overcoming several obstacles, such as data silos, fragmented and decentralized data, lack of visibility into security and access-control measures currently in place, and a lack of organizational memory about how existing data systems were designed and implemented.
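
As a minimal illustration of the access-control piece (role and dataset names are hypothetical), a policy table plus a single check function is the core pattern:

```python
# Roles map to the datasets and actions they may use; everything else is denied.
POLICY = {
    "analyst":  {"customer_orders": {"read"}},
    "engineer": {"customer_orders": {"read", "write"}},
    "auditor":  {"customer_orders": {"read"}, "access_logs": {"read"}},
}

def is_allowed(role: str, dataset: str, action: str) -> bool:
    return action in POLICY.get(role, {}).get(dataset, set())

assert is_allowed("engineer", "customer_orders", "write")
assert not is_allowed("analyst", "customer_orders", "write")  # least privilege
```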


Chief Intelligence Officers? How Gen AI is rewiring the CxO’s Brain

Generative AI is making the most impact in areas like Marketing, Software Engineering, Customer Service, and Sales. These functions benefit from AI’s ability to process vast amounts of data quickly. On the other hand, Legal and HR departments see less GenAI adoption, as these areas require high levels of accuracy, predictability, and human judgment. ... Business and tech leaders must prioritize business value when choosing AI use cases, focus on AI literacy and responsible AI, nurture cross-functional collaboration, and stress continuous learning to achieve successful outcomes. ... Leaders need to clearly outline and share a vision for responsible AI, establishing straightforward principles and policies that address fairness, bias reduction, ethics, risk management, privacy, sustainability, and compliance with regulations. They should also pinpoint the risks associated with Generative AI, such as privacy concerns, security issues, hallucinations, explainability, and legal compliance challenges, along with practical ways to mitigate these risks. When choosing and prioritizing use cases, it’s essential to consider responsible AI by filtering out those that carry unacceptable risks. Each Generative AI use case should have a designated champion responsible for ensuring that development and usage align with established policies. 

Daily Tech Digest - May 22, 2025


Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous


Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

AI is a tool that can be used for significant good, but it can and already has been used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report. These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine the trust of the public in their continued development and use. ... Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place. But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause. It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems.


Putting agentic AI to work in Firebase Studio

An AI assistant is like power steering. The programmer, the driver, remains in control, and the tool magnifies that control. The developer types some code, and the assistant completes the function, speeding up the process. The next logical step is to empower the assistant to take action—to run tests, debug code, mock up a UI, or perform some other task on its own. In Firebase Studio, we get a seat in a hosted environment that lets us enter prompts that direct the agent to take meaningful action. ... Obviously, we are a long way off from a non-programmer frolicking around in Firebase Studio, or any similar AI-powered development environment, and building complex applications. Google Cloud Platform, Gemini, and Firebase Studio are best-in-class tools. These kinds of limits apply to all agentic AI development systems. Still, I would in no wise want to give up my Gemini assistant when coding. It takes a huge amount of busy work off my shoulders and brings much more possibility into scope by letting me focus on the larger picture. I wonder how the path will look, how long it will take for Firebase Studio and similar tools to mature. It seems clear that something along these lines, where the AI is framed in a tool that lets it take action, is part of the future. It may take longer than AI enthusiasts predict. It may never really, fully come to fruition in the way we envision.


Edge AI + Intelligence Hub: A Match in the Making

The shop floor looks nothing like a data lake. There is telemetry data from machines, historical data, MES data in SQL, some random CSV files, and most of it lacks context. Companies that realize this—or already have an Industrial DataOps strategy—move quickly beyond these issues. Companies that don’t, meanwhile, end up creating a solution that works only with telemetry data (for example), then find out they need other data. Or worse, when they get something working in the first factory, they find out factories 2, 3, and 4 have different technology stacks. ... In comes DataOps (again). Cloud AI and Edge AI have the same problems with industrial data. They need access to contextualized information across many systems. The only difference is there is no data lake in the factory—but that’s OK. DataOps can leave the data in the source systems and expose it over APIs, allowing edge AI to access the data needed for specific tasks. But just like IT, what happens if OT doesn’t use DataOps? It’s the same set of issues. If you try to integrate AI directly with data from your SCADA, historian, or even UNS/MQTT, you’ll limit the data and context to which the agent has access. SCADA/Historians only have telemetry data. UNS/MQTT is report by exception, and AI is request/response based (i.e., it can’t integrate). But again, I digress. Use DataOps.
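
The report-by-exception versus request/response mismatch is the crux, and a DataOps layer can bridge it by caching contextualized last-known values and answering on demand. A minimal sketch (topic names, units, and payload shape are invented):

```python
from datetime import datetime, timezone

latest = {}  # topic -> contextualized last-known value

def on_update(topic: str, value: float):
    """Invoked whenever a source system reports a change (e.g., via MQTT)."""
    latest[topic] = {
        "value": value,
        "unit": "degC",                  # context attached by the DataOps layer
        "asset": topic.split("/")[-2],
        "updated": datetime.now(timezone.utc).isoformat(),
    }

def query(topic: str) -> dict:
    """Request/response access that an edge AI agent can call on demand."""
    return latest.get(topic, {"error": "no data yet"})

on_update("site1/line3/oven/temperature", 182.4)
print(query("site1/line3/oven/temperature"))
```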


AI-driven threats prompt IT leaders to rethink hybrid cloud security

Public cloud security risks are also undergoing renewed assessment. While the public cloud was widely adopted during the post-pandemic shift to digital operations, it is increasingly seen as a source of risk. According to the survey, 70 percent of Security and IT leaders now see the public cloud as a greater risk than any other environment. As a result, an equivalent proportion are actively considering moving data back from public to private cloud due to security concerns, and 54 percent are reluctant to use AI solutions in the public cloud, citing apprehensions about intellectual property protection. The need for improved visibility is emphasised in the findings. Rising sophistication in cyberattacks has exposed the limitations of existing security tools—more than half (55 percent) of Security and IT leaders reported lacking confidence in their current toolsets' ability to detect breaches, mainly due to insufficient visibility. Accordingly, 64 percent say their primary objective for the next year is to achieve real-time threat monitoring through comprehensive real-time visibility into all data in motion. David Land, Vice President, APAC at Gigamon, commented: "Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity and vulnerability of public cloud environments."


Taming the Hacker Storm: Why Millions in Cybersecurity Spending Isn’t Enough

The key to taming the hacker storm is founded on the core principle of trust: that the individual or company you are dealing with is who or what they claim to be and behaves accordingly. Establishing a high-trust environment can largely hinder hacker success. ... For a pervasive selective trusted ecosystem, an organization requires something beyond trusted user IDs. A hacker can compromise a user’s device and steal the trusted user ID, making identity-based trust inadequate. A trust-verified device assures that the device is secure and can be trusted. But then again, a hacker stealing a user’s identity and password can also fake the user’s device. Confirming the device’s identity—whether it is or it isn’t the same device—hence becomes necessary. The best way to ensure the device is secure and trustworthy is to employ the device identity that is designed by its manufacturer and programmed into its TPM or Secure Enclave chip. ... Trusted actions are critical in ensuring a secure and pervasive trust environment. Different actions require different levels of authentication, generating different levels of trust, which the application vendor or the service provider has already defined. An action considered high risk would require stronger authentication, also known as dynamic authentication.
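
A sketch of how "trusted actions" might map risk tiers to required authentication strength (tiers and factor names are illustrative):

```python
ACTION_RISK = {"view_balance": 1, "change_email": 2, "wire_transfer": 3}
REQUIRED_FACTORS = {
    1: {"password"},
    2: {"password", "totp"},
    3: {"password", "totp", "device_attestation"},  # ties back to the TPM identity
}

def may_proceed(action: str, presented: set) -> bool:
    return REQUIRED_FACTORS[ACTION_RISK[action]] <= presented  # subset check

print(may_proceed("wire_transfer", {"password", "totp"}))  # False: step up auth
print(may_proceed("view_balance", {"password"}))           # True
```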


AWS clamping down on cloud capacity swapping; here’s what IT buyers need to know

For enterprises that sourced discounted cloud resources through a broker or value-added reseller (VAR), the arbitrage window shuts, Brunkard noted. Enterprises should expect a “modest price bump” on steady‑state workloads and a “brief scramble” to unwind pooled commitments. ... On the other hand, companies that buy their own RIs or SPs, or negotiate volume deals through AWS’s Enterprise Discount Program (EDP), shouldn’t be impacted, he said. Nothing changes except that pricing is now baselined. To get ahead of the change, organizations should audit their exposure and ask their managed service providers (MSPs) what commitments are pooled and when they renew, Brunkard advised. ... Ultimately, enterprises that have relied on vendor flexibility to manage overcommitment could face hits to gross margins, budget overruns, and a spike in “finance-engineering misalignment,” Barrow said. Those whose vendor models are based on RI and SP reallocation tactics will see their risk profile “changed overnight,” he said. New commitments will now essentially be non-cancellable financial obligations, and if cloud usage dips or pivots, they will be exposed. Many vendors won’t be able to offer protection as they have in the past.


The new C-Suite ally: Generative AI

While traditional GenAI applications focus on structured datasets, a significant frontier remains largely untapped — the vast swathes of unstructured "dark data" sitting in contracts, credit memos, regulatory reports, and risk assessments. Aashish Mehta, Founder and CEO of nRoad, emphasizes this critical gap. "Most strategic decisions rely on data, but the reality is that a lot of that data sits in unstructured formats," he explained. nRoad’s platform, CONVUS, addresses this by transforming unstructured content into structured, contextual insights. ... Beyond risk management, OpsGPT automates time-intensive compliance tasks, offers multilingual capabilities, and eliminates the need for coding through intuitive design. Importantly, Broadridge has embedded a robust governance framework around all AI initiatives, ensuring security, regulatory compliance, and transparency. Trustworthiness is central to Broadridge’s approach. "We adopt a multi-layered governance framework grounded in data protection, informed consent, model accuracy, and regulatory compliance," Seshagiri explained. ... Despite the enthusiasm, CxOs remain cautious about overreliance on GenAI outputs. Concerns around model bias, data hallucination, and explainability persist. Many leaders are putting guardrails in place: enforcing human-in-the-loop systems, regular model audits, and ethical AI use policies.


Building a Proactive Defence Through Industry Collaboration

Trusted collaboration, whether through Information Sharing and Analysis Centres (ISACs), government agencies, or private-sector partnerships, is a highly effective way to enhance the defensive posture of all participating organisations. For this to work, however, organisations will need to establish operationally secure real-time communication channels that support the rapid sharing of threat and defence intelligence. In parallel, the community will also need to establish processes to enable them to efficiently disseminate indicators of compromise (IoCs) and tactics, techniques and procedures (TTPs), backed up with best practice information and incident reports. These collective defence communities can also leverage the centralised cyber fusion centre model that brings together all relevant security functions – threat intelligence, security automation, threat response, security orchestration and incident response – in a truly cohesive way. Providing an integrated sharing platform for exchanging information among multiple security functions, today’s next-generation cyber fusion centres enable organisations to leverage threat intelligence, identify threats in real-time, and take advantage of automated intelligence sharing within and beyond organisational boundaries. 


3 Powerful Ways AI is Supercharging Cloud Threat Detection

AI’s strength lies in pattern recognition across vast datasets. By analysing historical and real-time data, AI can differentiate between benign anomalies and true threats, improving the signal-to-noise ratio for security teams. This means fewer false positives and more confidence when an alert does sound. ... When a security incident strikes, every second counts. Historically, responding to an incident involves significant human effort – analysts must comb through alerts, correlate logs, identify the root cause, and manually contain the threat. This approach is slow, prone to errors, and doesn’t scale well. It’s not uncommon for incident investigations to stretch hours or days when done manually. Meanwhile, the damage (data theft, service disruption) continues to accrue. Human responders also face cognitive overload during crises, juggling tasks like notifying stakeholders, documenting events, and actually fixing the problem. ... It’s important to note that AI isn’t about eliminating the need for human experts but rather augmenting their capabilities. By taking over initial investigation steps and mundane tasks, AI frees up human analysts to focus on strategic decision-making and complex threats. Security teams can then spend time on thorough analysis of significant incidents, threat hunting, and improving security posture, instead of constant firefighting.
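
The pattern-recognition point reduces, in its simplest form, to baselining behaviour and flagging only statistical outliers. A toy sketch with made-up telemetry:

```python
from statistics import mean, stdev

history = [42, 39, 45, 41, 44, 40, 43, 38, 46, 42]  # e.g., hourly API calls by one user
baseline, spread = mean(history), stdev(history)

def is_anomalous(observed: float, z_threshold: float = 3.0) -> bool:
    return (observed - baseline) / spread > z_threshold

print(is_anomalous(44))   # False: benign fluctuation, no alert raised
print(is_anomalous(130))  # True: a genuine outlier worth an analyst's time
```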


The hidden gaps in your asset inventory, and how to close them

The biggest blind spot isn’t a specific asset. It is trusting that what’s on paper is actually live and in production. Many organizations often solely focus on known assets within their documented environments, but this can create a false sense of security. Blind spots are not always the result of malicious intent, but rather of decentralized decision-making, forgotten infrastructure, or evolving technology that hasn’t been brought under central control. External applications, legacy technologies and abandoned cloud infrastructure, such as temporary test environments, may remain vulnerable long after their intended use. These assets pose a risk, particularly when they are unintentionally exposed due to misconfiguration or overly broad permissions. Third-party and supply chain integrations present another layer of complexity.  ... Traditional discovery often misses anything that doesn’t leave a clear, traceable footprint inside the network perimeter. That includes subdomains spun up during campaigns or product launches; public-facing APIs without formal registration or change control; third-party login portals or assets tied to your brand and code repositories, or misconfigured services exposed via DNS. These assets live on the edge, connected to the organization but not owned in a traditional sense. 

Daily Tech Digest - May 19, 2025


Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree


Adopting agentic AI? Build AI fluency, redesign workflows, don’t neglect supervision

AI upskilling is still majorly under-prioritized across organizations. Did you know that less than one-third of companies have trained even a quarter of their staff to use AI? How do leaders expect employees to feel empowered to use AI if education isn’t presented as the priority? Maintaining a nimble and knowledgeable workforce is critical, fostering a culture that embraces technological change. Team collaboration in this sense could take the form of regular training about agentic AI, highlighting its strengths and weaknesses and focusing on successful human-AI collaborations. For more established companies, role-based training courses could show employees in different capacities and roles how to use generative AI appropriately. ... Although gen AI will not substantially affect organizations’ workforce sizes in the short-term, we should still expect an evolution of role titles and responsibilities. For example, from service operations and product development to AI ethics and AI model validation positions. For this shift to successfully happen, executive-level buy-in is paramount. Senior leaders need a clearly-defined organization-wide strategy, including a dedicated team to drive gen AI adoption. We’ve seen that when senior leaders delegate AI integration solely to IT or digital technology teams, the business context can be neglected.


Half of tech execs are ready to let AI take the wheel

“AI is not just an incremental change from digital business. AI is a step change in how business and society work,” he said. “A significant implication is that, if savviness across the C-suite is not rapidly improved, competitiveness will suffer, and corporate survival will be at stake.” CEOs perceived even the CIO, chief information security officer (CISO), and chief data officer (CDO) as lacking AI savviness. Respondents said the top two factors limiting AI’s deployment and use are the inability to hire adequate numbers of skilled people and an inability to calculate value or outcomes. “CEOs have shifted their view of AI from just a tool to a transformative way of working,” said Jennifer Carter, a principal analyst at Gartner. “This change has highlighted the importance of upskilling. As leaders recognize AI’s potential and its impact on their organizations, they understand that success isn’t just about hiring new talent. Instead, it’s about equipping their current employees with the skills needed to seamlessly incorporate AI into everyday tasks.” This focus on upskilling is a strategic response to AI’s evolving role in business, ensuring that the entire organization can adapt and thrive in this new paradigm. Sixty-six percent of CEOs said their business models are not fit for AI purposes, according to Gartner’s survey. 


What comes after Stack Overflow?

The most obvious option is the one that is already happening whether we like it or not: LLMs are the new Q&A platforms. In the immediate term, ChatGPT and similar tools have become the go-to source for many. They provide the convenience of natural language queries with immediate answers. It’s possible we’ll see official “Stack Overflow GPT” bots or domain-specific LLMs trained on curated programming knowledge. In fact, Stack Overflow’s own team has been experimenting with using AI to draft preliminary answers to questions, while linking back to the original human posts for context. This kind of hybrid approach leverages AI’s speed but still draws on the library of verified solutions the community has built over years. ... Additionally, it’s still possible that the social Q&A sites will save the experience through data partnerships. For example, Stack Overflow, Reddit, and others have moved toward paid licensing agreements for their data. The idea is to both control how AI companies use community content and to funnel some value back to the content creators. We may see new incentives for experienced developers to contribute knowledge. One proposal is that if an AI answer draws from your Stack Overflow post, you could earn reputation points or even a cut of the licensing fee.


8 security risks overlooked in the rush to implement AI

AI models are frequently deployed as part of larger application pipelines, such as through APIs, plugins, or retrieval-augmented generation (RAG) architectures. “Insufficient testing at this level can lead to insecure handling of model inputs and outputs, injection pathways through serialized data formats, and privilege escalation within the hosting environment,” Mindgard’s Garraghan says. “These integration points are frequently overlooked in conventional AppSec [application security] workflows.” ... AI systems may exhibit emergent behaviors only during deployment, especially when operating under dynamic input conditions or interacting with other services. “Vulnerabilities such as logic corruption, context overflow, or output reflection often appear only during runtime and require operational red-teaming or live traffic simulation to detect,” according to Garraghan. ... The rush to implement AI puts CISOs in a stressful bind, but James Lei, chief operating officer at application security testing firm Sparrow, advises CISOs to push back on the unchecked enthusiasm to introduce fundamental security practices into the deployment process. “To reduce these risks, organizations should be testing AI tools in the same way they would any high-risk software, running simulated attacks, checking for misuse scenarios, validating input and output flows, and ensuring that any data processed is appropriately protected,” he says.
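
Lei's advice about validating input and output flows can start as simply as a guard around every model call. A minimal sketch (patterns are illustrative, not a complete defense against injection or leakage):

```python
import re

SECRET_PATTERN = re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+")

def guarded_call(model, user_input: str) -> str:
    if len(user_input) > 4000:
        raise ValueError("input too long")               # bound the context size
    if SECRET_PATTERN.search(user_input):
        raise ValueError("credential-like input refused")
    output = model(user_input)
    if SECRET_PATTERN.search(output):                    # output reflection check
        return "[redacted: output withheld pending review]"
    return output

print(guarded_call(lambda s: "All clear.", "Summarize the release notes"))
```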


A Brief History of Data Stewardship

Today, in leading-edge organizations, data stewardship is at the heart of data-driven transformation initiatives, such as DataOps, AI governance, and improved metadata management, which have evolved data stewardship beyond traditional data quality control. Data stewards can be found in every industry and in organizations of any size. Modern data stewards interact with: automated data quality tools that identify and resolve data issues at scale; data catalogs and data lineage applications that organize business and technical metadata and provide searchable inventories of data assets; and AI/ML models that require extensive monitoring to ensure they are trained on unbiased, accurate datasets. The scope of data stewardship has expanded to include ethical considerations, particularly concerning data privacy, algorithmic bias, and responsible AI. Data stewards are increasingly seen as the conscience of data within organizations, championing not only compliance but also fairness, transparency, and accountability. New organizational models, such as federated data stewardship – in which data stewardship responsibilities are distributed across teams – can promote improved collaboration and enable scaling data stewardship efforts alongside agile and decentralized business units.


Introducing Strands Agents, an Open Source AI Agents SDK

In Strands’ model-driven approach, tools are key to how you customize the behavior of your agents. For example, tools can retrieve relevant documents from a knowledge base, call APIs, run Python logic, or just simply return a static string that contains additional model instructions. Tools also help you achieve complex use cases in a model-driven approach, such as with these Strands Agents example pre-built tools: Retrieve tool: This tool implements semantic search using Amazon Bedrock Knowledge Bases. Beyond retrieving documents, the retrieve tool can also help the model plan and reason by retrieving other tools using semantic search. For example, one internal agent at AWS has over 6,000 tools to select from! Models today aren’t capable of accurately selecting from quite that many tools. Instead of describing all 6,000 tools to the model, the agent uses semantic search to find the most relevant tools for the current task and describes only those tools to the model. ... Thinking tool: This tool prompts the model to do deep analytical thinking through multiple cycles, enabling sophisticated thought processing and self-reflection as part of the agent. In the model-driven approach, modeling thinking as a tool enables the model to reason about if and when a task needs deep analysis.
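
The 6,000-tool anecdote comes down to ranking tool descriptions by similarity to the task and exposing only the best matches to the model. A toy sketch with fake three-dimensional embeddings, not the actual Strands API (a real agent would use an embedding model and a vector index):

```python
import math

TOOLS = {  # tool name -> embedding of its description (fake vectors)
    "create_ticket":   [0.9, 0.1, 0.0],
    "query_knowledge": [0.1, 0.9, 0.1],
    "run_python":      [0.0, 0.2, 0.9],
}

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

def top_tools(task_embedding, k=2):
    ranked = sorted(TOOLS, key=lambda t: cosine(TOOLS[t], task_embedding), reverse=True)
    return ranked[:k]  # only these tool descriptions go into the model's prompt

print(top_tools([0.05, 0.85, 0.2]))  # -> ['query_knowledge', 'run_python']
```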


AI hallucinations and their risk to cybersecurity operations

“AI hallucinations are an expected byproduct of probabilistic models,” explains Chetan Conikee, CTO at Qwiet AI, emphasizing that the focus shouldn’t be on eliminating them entirely but on minimizing operational disruption. “The CISO’s priority should be limiting operational impact through design, monitoring, and policy.” That starts with intentional architecture. Conikee recommends implementing a structured trust framework around AI systems, an approach that includes practical middleware to vet inputs and outputs through deterministic checks and domain-specific filters. This step ensures that models don’t operate in isolation but within clearly defined bounds that reflect enterprise needs and security postures. Traceability is another cornerstone. “All AI-generated responses must carry metadata including source context, model version, prompt structure, and timestamp,” Conikee notes. Such metadata enables faster audits and root cause analysis when inaccuracies occur, a critical safeguard when AI output is integrated into business operations or customer-facing tools. For enterprises deploying LLMs, Conikee advises steering clear of open-ended generation unless necessary. Instead, organizations should lean on RAG grounded in curated, internal knowledge bases. 
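
Conikee's traceability requirement translates naturally into a response envelope around every model call. A sketch with illustrative field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TracedResponse:
    text: str
    model_version: str
    prompt_template: str
    source_context: list  # e.g., IDs of the RAG documents that grounded the answer
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def answer(question: str) -> TracedResponse:
    docs = ["kb-1042", "kb-0317"]                  # stand-in for retrieval results
    text = "Rotate the credential via the vault."  # stand-in for model output
    return TracedResponse(text, "model-2025-05-01", "support-v3", docs)

print(answer("How do I fix a leaked key?"))
```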


Can Data Governance Set Us Free?

Internally, an important lesson has been to view data management as a federated service. This entails a shift from data management being a ‘governance’ activity – something people did because we pushed them to do it – to a service-driven activity – something people do because they want to. We worked with our User-Centred Service Design team to agree an underpinning set of principles to get buy-in across the organisation on the purpose of, and facets to, good data management. The overarching principle is that data are valuable, shared assets. We can maximise value by making data widely available, easy to use and understand, whilst ensuring data are protected and not misused. Bringing the service to life means getting four things right: First, a proportionate vision for service maturity. All data need to have basic information registered. But where data are widely used or feed into critical processes, it becomes instrumental to dedicate resources to supporting ease of access, use and quality for our users. We are increasingly tending toward managing these assets centrally. Second, the assignment of clear responsibilities across the federation. We are working through which datasets will be managed centrally and which will be managed by teams across the Bank that are expert in them. 


To Fix Platform Engineering, Build What Users Actually Want

If it takes developers and engineers months to become productive, your platform isn’t helping — it’s hindering. A great platform should be as frictionless and intuitive as a consumer-grade product. Internal platforms must empower instant productivity. If your platform offers compute, it shouldn’t just be raw power — it should be integrated, easy to adopt, and evolve seamlessly in the background. Let’s not create unnecessary cognitive load. Developers are adapting quickly to generative AI and new tech. The real value lies in solving real, tangible problems — not fictional ones. This brings us to a deeper look at what’s not working — and why so many efforts fail despite the best intentions. ... Most enterprises are hybrid by nature — legacy systems, siloed processes and complex workflows are the norm. The real challenge isn’t just technological; it’s integrating platform engineering into these messy realities without making it worse. Today, no single product solves this end-to-end. We’re still lacking a holistic solution that manages internal workflows, governance and hybrid complexity without adding friction. What’s needed is a shift in mindset — from assembling open source tools to building integrated, adoption-focused, business-aligned platforms. And that shift must be guided by clear trends in tooling and team structure.


Liquid cooling becoming essential as AI servers proliferate

“A lot of the carbon emissions of the data center happen in the build of it, in laying down the slab,” says Josh Claman, CEO at Accelsius, a liquid cooling company. “I hope that companies won’t just throw all that away and start over.” In addition to the environmental benefits, upgrading an air-cooled data center into a hybrid, liquid and air system has other advantages, says Herb Hogue, CTO at Myriad360, a global systems integrator. Liquid cooling is more effective than air alone, he says, and when used in combination with air cooling, the temperature of the air cooling systems can be increased slightly without impacting performance. “This reduces overall energy consumption and lowers utility bills,” he says. Liquid cooling also allows for not just lower but also more consistent operating temperatures, Hogue says. That leads to less wear on IT equipment, and, without fans, fewer moving parts per server. The downsides, however, include the cost of installing the hybrid system and needed specialized operations and maintenance skills. There might also be space constraints and other challenges. Still, it can be a smart approach for handling high-density server setups, he says. And there’s one more potential benefit, says Gerald Kleyn, vice president of customer solutions for HPC and AI at Hewlett Packard Enterprise. 

Daily Tech Digest - May 14, 2025


Quote for the day:

"Success is what happens after you have survived all of your mistakes." -- Anonymous


3 Stages of Building Self-Healing IT Systems With Multiagent AI

Multiagent AI systems can enable significant improvements to existing processes across the operations management lifecycle. From intelligent ticketing and triage to autonomous debugging and proactive infrastructure maintenance, these systems can pave the way for IT environments that are largely self-healing. ... When an incident is detected, AI agents can attempt to debug issues with known fixes using past incident information. When multiple agents are combined within a network, they can work out alternative solutions if the initial remediation effort doesn’t work, while communicating the ongoing process to engineers. Keeping a human in the loop (HITL) is vital to verifying the outputs of an AI model, but agents must be trusted to work autonomously within a system to identify fixes and then report these back to engineers. ... The most important step in creating a self-healing system is training AI agents to learn from each incident, as well as from each other, to become truly autonomous. For this to happen, AI agents cannot be siloed into incident response. Instead, they must be incorporated into an organization’s wider system, communicate with third-party agents and be allowed to draw correlations from each action taken to resolve each incident. In this way, each organization’s incident history becomes the training data for its AI agents, ensuring that the actions they take are organization-specific and relevant.
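To make that escalation pattern concrete, here is a minimal sketch in Python. Every name in it (the classes, the two "known fixes", the print-based reporting) is illustrative rather than taken from any particular product: agents try known fixes in turn, fall back to alternatives, and keep a human in the loop by reporting every action taken.

```python
# Minimal sketch of a self-healing remediation loop with a human in the loop.
# All class and function names are hypothetical, not from any specific product.
from dataclasses import dataclass, field

@dataclass
class Incident:
    service: str
    symptom: str
    resolved: bool = False
    log: list = field(default_factory=list)

class RemediationAgent:
    """One agent per known fix; returns True if its fix resolves the incident."""
    def __init__(self, name, fix):
        self.name = name
        self.fix = fix

    def attempt(self, incident: Incident) -> bool:
        ok = self.fix(incident)
        incident.log.append(f"{self.name}: {'fixed' if ok else 'no effect'}")
        return ok

def restart_service(incident):   # a known fix learned from past incidents
    return incident.symptom == "process_crash"

def clear_cache(incident):       # an alternative fix tried if the first fails
    return incident.symptom == "stale_cache"

def run_playbook(incident: Incident, agents: list) -> None:
    """Try each agent in turn; escalate to an engineer if none succeeds."""
    for agent in agents:
        if agent.attempt(incident):
            incident.resolved = True
            break
    # Human in the loop: report every action taken, resolved or not.
    status = "auto-resolved" if incident.resolved else "ESCALATED to on-call"
    print(f"[{incident.service}] {status}; actions: {incident.log}")

if __name__ == "__main__":
    agents = [RemediationAgent("restarter", restart_service),
              RemediationAgent("cache-cleaner", clear_cache)]
    run_playbook(Incident("checkout-api", "stale_cache"), agents)  # auto-resolved
    run_playbook(Incident("checkout-api", "disk_full"), agents)    # escalated
```

The design point is the final report line: the agents act autonomously, but nothing they do is invisible to the engineers who remain accountable for the system.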


The three refactorings every developer needs most

If I had to rely on only one refactoring, it would be Extract Method, because it is the best weapon against creating a big ball of mud. The single best thing you can do for your code is to never let methods get bigger than 10 or 15 lines. The mess created when you have nested if statements with big chunks of code in between the curly braces is almost always ripe for extracting methods. One could even make the case that an if statement should have only a single method call within it. ... It’s a common motif that naming things is hard. It’s common because it is true. We all know it. We all struggle to name things well, and we all read legacy code with badly named variables, methods, and classes. Often, you name something and you know what the subtleties are, but the next person that comes along does not. Sometimes you name something, and it changes meaning as things develop. But let’s be honest, we are going too fast most of the time and as a result we name things badly. ... In other words, we pass a function result directly into another function as part of a boolean expression. This is… problematic. First, it’s hard to read. You have to stop and think about all the steps. Second, and more importantly, it is hard to debug. If you set a breakpoint on that line, it is hard to know where the code is going to go next.
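A small before-and-after makes the last point concrete. The sketch below is in Python with hypothetical Order and stock names, since the article itself shows no code: the "before" version buries three decisions in one boolean expression, while the "after" version applies Extract Variable so each condition can be read, and stepped through in a debugger, on its own line.

```python
from dataclasses import dataclass

@dataclass
class Order:
    items: list
    total: float
    customer_blocked: bool

# Before: everything inline in one boolean expression. A breakpoint on this
# line cannot tell you which sub-condition failed.
def can_ship_before(order: Order, stock: dict) -> bool:
    return all(stock.get(i, 0) > 0 for i in order.items) and order.total > 0 and not order.customer_blocked

# After Extract Variable: each condition gets a name and its own line, so the
# return statement reads like a sentence and debugs one step at a time.
def can_ship(order: Order, stock: dict) -> bool:
    items_in_stock = all(stock.get(i, 0) > 0 for i in order.items)
    order_is_payable = order.total > 0
    customer_ok = not order.customer_blocked
    return items_in_stock and order_is_payable and customer_ok

if __name__ == "__main__":
    order = Order(items=["widget"], total=19.99, customer_blocked=False)
    print(can_ship_before(order, stock={"widget": 3}))  # True
    print(can_ship(order, stock={}))                    # False
```

The well-named intermediate variables also do the work of the second refactoring: the names document the intent that a raw boolean expression hides.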


ENISA launches EU Vulnerability Database to strengthen cybersecurity under NIS2 Directive, boost cyber resilience

The EU Vulnerability Database is publicly accessible and serves various stakeholders, including the general public seeking information on vulnerabilities affecting IT products and services, suppliers of network and information systems, and organisations that rely on those systems and services. ... To meet the requirements of the NIS2 Directive, ENISA has initiated cooperation with various EU and international organisations, including MITRE’s CVE Programme. ENISA is in contact with MITRE to understand the impact and next steps following the announcement of the funding to the Common Vulnerabilities and Exposures Program. CVE data, data provided by Information and Communication Technology (ICT) vendors disclosing vulnerability information through advisories, and relevant information, such as CISA’s Known Exploited Vulnerabilities Catalogue, are automatically transferred into the EU Vulnerability Database. This is also being achieved with the support of member states, which have established national Coordinated Vulnerability Disclosure (CVD) policies and designated one of their CSIRTs as coordinator, ultimately making the EUVD a trusted source for enhanced situational awareness in the EU.
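As an illustration of how such feeds are consumed programmatically, here is a short Python sketch that pulls CISA's Known Exploited Vulnerabilities catalogue, one of the sources the article says is ingested into the EUVD. The URL and field names (cveID, vendorProject, vulnerabilityName) reflect the feed's published JSON schema at the time of writing; verify them against the live feed before relying on them.

```python
# Sketch: fetching CISA's KEV catalogue, the kind of feed automatically
# ingested into the EUVD. URL and key names are from the published feed;
# check the live schema before relying on them.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def fetch_kev(vendor_filter=None):
    """Return KEV entries, optionally filtered by vendor name."""
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        catalog = json.load(resp)
    vulns = catalog.get("vulnerabilities", [])
    if vendor_filter:
        vulns = [v for v in vulns
                 if v.get("vendorProject", "").lower() == vendor_filter.lower()]
    return vulns

if __name__ == "__main__":
    for v in fetch_kev("microsoft")[:5]:
        print(v.get("cveID"), "-", v.get("vulnerabilityName"))
```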


Welcome to the age of paranoia as deepfakes and scams abound

Welcome to the Age of Paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a time stamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off. ... Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their résumé, such as their favorite coffee shops and places to hang out. If the applicant is actually based in that geographic region, Schumacher says, they should be able to respond quickly with accurate details. Another verification tactic some people use, Schumacher says, is what he calls the “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to show their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings.


CEOs Sound Alarm: C-Suite Behind in AI Savviness

According to the survey, CEOs now see upskilling internal teams as the cornerstone of AI strategy. The top two limiting factors impacting AI's deployment and use, they said, are the inability to hire adequate numbers of skilled people and to calculate value or outcomes. "CEOs have shifted their view of AI from just a tool to a transformative way of working," said Jennifer Carter, senior principal analyst at Gartner. Contrary to the CEOs' assessments in the Gartner survey, most CIOs view themselves as the key drivers and leaders of their organizations' AI strategies. According to a recent report by CIO.com, 80% of CIOs said they are responsible for researching and evaluating AI products, positioning them as "central figures in their organizations' AI strategies." As CEOs increasingly prioritize AI, customer experience and digital transformation, these agenda items are directly shaping the evolving role and responsibilities of the CIO. But 66% of CEOs say their business models are not fit for AI purposes. Billions continue to be spent on enterprisewide AI use cases, but little has come in the way of returns. Gartner forecasts a 76.4% surge in worldwide spending on gen AI in 2025, fueled by better foundational models and a global quest for AI-powered everything. But organizations have yet to see consistent results despite the surge in investment.


Dropping the SBOM, why software supply chains are too flaky

“Mounting software supply chain risk is driving organisations to take action. [There is a] 200% increase in organisations making software supply chain security a top priority and growing use of SBOMs,” said Josh Bressers, vice president of security at Anchore. ... “There’s a clear disconnect between security goals and real-world implementation. Since open source code is the backbone of today’s software supply chains, any weakness in dependencies or artifacts can create widespread risk. To effectively reduce these risks, security measures need to be built into the core of artifact management processes, ensuring constant and proactive protection,” said Douglas. If we take anything from these market analysis pieces, it may be that organisations struggle to balance the demands of delivering software at speed while addressing security vulnerabilities to a level commensurate with the composable interconnectedness of modern cloud-native applications in the Kubernetes universe. ... Alan Carson, Cloudsmith’s CSO and co-founder, remarked, “Without visibility, you can’t control your software supply chain… and without control, there’s no security. When we speak to enterprises, security is high up on their list of most urgent priorities. But security doesn’t have to come at the cost of speed. ...”
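For teams starting down this road, a minimal sketch of what consuming an SBOM looks like: the Python below reads a CycloneDX-format SBOM (the kind of file tools such as Anchore's syft can generate) and inventories its components. The key names follow the CycloneDX JSON spec; validate against the spec version your tooling actually emits, and the file name here is just an example.

```python
# Sketch: inventorying dependencies from a CycloneDX-format SBOM file.
# Top-level keys follow the CycloneDX JSON spec; the file path is illustrative.
import json

def list_components(sbom_path):
    """Return (name, version) pairs for every component in the SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [(c.get("name", "?"), c.get("version", "?"))
            for c in sbom.get("components", [])]

if __name__ == "__main__":
    for name, version in sorted(list_components("sbom.cdx.json")):
        print(f"{name}=={version}")
```

Visibility of exactly this kind, knowing what is in the artifact before it ships, is the control Carson is pointing at.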


Does agentic AI spell doom for SaaS?

The reason agentic AI is perceived as a threat to SaaS and not traditional apps is that traditional apps have all but disappeared, replaced by on-demand versions of what was once client software. But it goes beyond that. AI is considered a potential threat to SaaS for several reasons, mostly because of how it changes who is in control and how software is used. Agentic AI changes how work gets done because agents act on behalf of users, performing tasks across software platforms. If users no longer need to open and use SaaS apps directly because the agents are doing it for them, those apps lose their engagement and perceived usefulness. That ultimately translates into lost revenue, since SaaS apps typically charge either per user or by usage. An advanced AI agent can automate the workflows of an entire department, which may be covered by multiple SaaS products. So instead of all those subscriptions, you just use an agent to do it all. That can lead to significant savings in software costs. On top of the cost savings are time savings. Jeremiah Stone, CTO of enterprise integration platform vendor SnapLogic, said agents have resulted in a 90% reduction in time for data entry and reporting into the company’s Salesforce system.
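A toy sketch of that pattern in Python, with every client and method name hypothetical: one user intent fans out into actions across two SaaS products, so the user interacts with the agent rather than with either app's UI. Real agents would call each vendor's actual API or go through an integration platform.

```python
# Illustrative only: an agent executing a cross-SaaS workflow on a user's
# behalf. Every class and method here is hypothetical, standing in for
# real vendor APIs.

class CRMClient:                  # stands in for a SaaS CRM's API
    def log_call(self, account, notes):
        print(f"CRM: logged call with {account}")

class TicketClient:               # stands in for a SaaS helpdesk's API
    def open_ticket(self, account, summary):
        print(f"Helpdesk: opened ticket for {account}: {summary}")

class WorkflowAgent:
    """Acts on the user's behalf across several SaaS subscriptions at once."""
    def __init__(self, crm, tickets):
        self.crm, self.tickets = crm, tickets

    def handle_support_call(self, account, notes):
        # One user intent fans out into actions in two products;
        # the user never opens either app directly.
        self.crm.log_call(account, notes)
        if "outage" in notes.lower():
            self.tickets.open_ticket(account, notes)

if __name__ == "__main__":
    agent = WorkflowAgent(CRMClient(), TicketClient())
    agent.handle_support_call("Acme Corp", "Reported outage in EU region")
```

The threat to per-seat SaaS pricing is visible even in this toy: the agent, not a human user, is the one "logged in" to both products.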


Ask a CIO Recruiter: Where Is the ‘I’ in the Modern CIO Role?

First, there are obviously huge opportunities AI can provide the business, whether it’s cost optimization or efficiencies, so there is a lot of pressure from boards and sometimes CEOs themselves saying ‘what are we doing in AI?’ The second is that there are significant opportunities for AI to improve business decision-making. The third leg is that AI is not fully leveraged today; it’s not in a very easy-to-use space. That is coming, and CIOs need to be able to prepare the organization for that change. CIOs need to prepare their teams, as well as business users, and say ‘hey, this is coming, we’ve already experimented with a few things. There are a lot of use cases applied in certain industries; how are we prepared for that?’ ... Just having that vision to see where technology is going and trying to stay ahead of it is important. Not necessarily chasing the shiny new toy or the newest technology, but just being ahead of it is the most important skill set. Look around the corner and prepare the organization for the change that will come. Also, if you retrain some of your people, they have to be more analytical, more business minded. Those are good skills. That’s not easy to find. A lot of people [who] move into the CIO role are very technical, whether it is coding or heavily on the infrastructure side. That is a commodity today; you need to be beyond that.


Insider risk management needs a human strategy

A technical-only response to insider risk can miss the mark; we need to understand the human side. That means paying attention to patterns, motivations, and culture. Over-monitoring without context can drive good people away and increase risk instead of reducing it. When it comes to workplace monitoring, clarity and openness matter. “Transparency starts with intentional communication,” said Itai Schwartz, CTO of MIND. That means being upfront with employees: not just that monitoring is happening, but what’s being monitored, why it matters, and how it helps protect both the company and its people. According to Schwartz, organizations often gain employee support when they clearly connect monitoring to security, rather than surveillance. “Employees deserve to know that monitoring is about securing data – not surveilling individuals,” he said. If people can see how it benefits them and the business, they’re more likely to support it. Being specific is key. Schwartz advises clearly outlining what kinds of activities, data, or systems are being watched, and explaining how alerts are triggered. ... Ethical monitoring also means drawing boundaries. Schwartz emphasized the importance of proportionality: collecting only what’s relevant and necessary. “Allow employees to understand how their behavior impacts risk, and use that information to guide, not punish,” he said.


Sharing Intelligence Beyond CTI Teams, Across Wider Functions and Departments

As companies’ digital footprints expand exponentially, so too do their attack surfaces. And since most phishing attacks can be carried out by even the least sophisticated hackers due to the prevalence of phishing kits sold in cybercrime forums, it has never been harder for security teams to plug all the holes, let alone other departments who might be undertaking online initiatives which leave them vulnerable. CTI, digital brand protection and other cyber risk initiatives shouldn’t only be utilized by security and cyber teams. Think about legal teams, looking to protect IP and brand identities, marketing teams looking to drive website traffic or demand generation campaigns. They might need to implement digital brand protection to safeguard their organization’s online presence against threats like phishing websites, spoofed domains, malicious mobile apps, social engineering, and malware. In fact, deepfakes targeting customers and employees now rank as the most frequently observed threat by banks, according to Accenture’s Cyber Threat Intelligence Research. For example, there have even been instances where hackers are tricking large language models into creating malware that can be used to hack customers’ passwords.