
Daily Tech Digest - January 01, 2026


Quote for the day:

"It always seems impossible until it’s done." -- Nelson Mandela



Why data trust is the missing link in digital transformation

Data trust is often framed as a technical issue, delegated to IT or data teams. In reality, it is a business capability with direct implications for growth, risk, and reputation. Trusted data enables organisations to confidently automate customer and operational workflows, personalise experiences without introducing errors, improve forecasting and performance reporting, and reduce operational rework and exception handling. When data cannot be trusted, leaders are forced to rely on manual checks, conservative assumptions, and duplicated processes. This increases cost and slows decision-making - the opposite of what digital transformation aims to achieve. ... Establishing data trust is not a one-time project. It requires a shift in mindset across the organisation. Data quality should be viewed as a shared responsibility, supported by the right processes and tools. Leading organisations embed data validation into their digital workflows, measure data quality as part of system health, and treat trusted data as a strategic asset. Over time, this creates a culture where decisions are made with confidence and transformation initiatives are more likely to succeed. ... Digital transformation is ultimately about enabling better decisions, faster execution, and stronger customer relationships. None of these goals can be achieved without trusted data. As organisations continue to modernise their platforms and processes, data quality should be treated as core infrastructure, not an afterthought.
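The idea of embedding data validation into digital workflows and measuring data quality as part of system health can be made concrete with a small sketch. This is a hypothetical example, assuming records are plain dicts; the field names (`customer_id`, `email`, `amount`) are invented for illustration:

```python
def validate_record(record, required_fields=("customer_id", "email", "amount")):
    """Return a list of data-quality issues; an empty list means the record is trusted."""
    issues = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            issues.append(f"missing:{field}")
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("invalid:amount")
    return issues

def quality_score(records):
    """Share of records passing validation -- a simple 'system health' metric."""
    if not records:
        return 1.0
    passing = sum(1 for r in records if not validate_record(r))
    return passing / len(records)
```

A workflow could refuse to automate any record with a non-empty issue list and publish the aggregate score on the same dashboard as uptime.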


Health Data Privacy, Cyber Regs: What to Watch in 2026

When federal regulators hesitate, states often jump in to fill privacy and security gaps involving health data. That includes mandates in New York to shore up cybersecurity at certain hospitals (see: New York Hospitals Are Facing Tougher Cyber Rules Than HIPAA). Also worth watching is the New York Health Information Privacy Act, Greene said. "It was passed by both New York legislative chambers in January but has not yet been formally submitted to the governor for signature, with lobbying efforts underway to amend it." "In its most recent version, it would be the toughest health privacy law in the country in many respects, including a controversial prohibition on obtaining consents for secondary uses of data until at least 24 hours after an individual creates an account or first uses the requested product or service," Greene said. ... Greene predicted HIPAA resolution agreements and civil monetary penalties will continue much as they have in years past, with one to two dozen such cases next year. HHS has recently indicated that it intends to begin enforcing the Information Blocking Rule. "The primary target will be health IT developers," Greene said. "I expect that there are less information blocking issues with health information networks and believe that the statute and regulation's knowledge standard makes it more challenging to enforce against healthcare providers because the government must prove that a healthcare provider knew its practice to be unreasonable."


From integration pain to partnership gain: How collaboration strengthens cybersecurity

When collaborators leverage data in specific cybersecurity work, they unlock several valuable benefits, especially since no organization has complete insight into every possible threat. A shared, data-driven cybersecurity framework can offer both sides a better understanding of existing and emerging threats that could undermine one or both collaborators. Data-driven collaboration also enables partners to become more proactive in their cybersecurity posture. Coordinated data can give business partners insights into where there’s greater exposure for a cyberattack, allowing partners to work together with data-backed guidance on how to better prepare. ... The Vested model — an innovative approach based on research from the University of Tennessee — focuses on shared goals and outcomes rather than traditional transactional buyer and seller agreements. Both companies agreed on a specific set of KPIs they could use to measure the health of the partnership and keep their security goals on track, allowing them to continue to adapt cybersecurity initiatives as needs and threats evolve. “You have to build, maintain and exercise the right partnerships with business units and shared services across the enterprise so continuity plans identify the issue quickly, deploy appropriate mitigations, and ultimately restore client and business services as quickly as possible,” says Royce Curtin, IBM’s former VP of corporate security.


AI governance: A risk and audit perspective on responsible AI adoption

AI governance refers to the policies, procedures, and oversight mechanisms that guide how AI systems are developed, deployed, and monitored. It ensures that AI aligns with business objectives, complies with applicable laws, and operates in a way that is ethical and transparent. Regulatory scrutiny is increasing. The EU AI Act is setting a precedent for global standards, and U.S. agencies are signaling more aggressive enforcement, particularly in sectors like healthcare, finance, and employment. Organizations are expected to demonstrate accountability in how AI systems make decisions, manage data, and interact with users. Beyond regulation, there is growing pressure from customers, employees, and investors. ... Audit teams also help boards and audit committees understand the risks associated with AI. Their work supports transparency and builds trust with regulators and stakeholders. As AI becomes more embedded in business operations, internal audit must expand its scope to include model governance, data lineage, and ethical risk. ... Organizations that treat AI as a strategic risk are better positioned to scale it responsibly. Risk and internal audit teams have a central role in ensuring that AI systems are secure, compliant, and aligned with business goals. Citrin Cooperman helps organizations navigate AI adoption with confidence by combining deep risk expertise, practical governance frameworks, and advanced technology solutions that support secure, scalable, and compliant growth.


Six data shifts that will shape enterprise AI in 2026

While RAG won't entirely disappear in 2026, one approach that will likely surpass it in terms of usage for agentic AI is contextual memory, also known as agentic or long-context memory. This technology enables LLMs to store and access pertinent information over extended periods. Multiple such systems emerged over the course of 2025 including Hindsight, A-MEM framework, General Agentic Memory (GAM), LangMem, and Memobase. RAG will remain useful for static data, but agentic memory is critical for adaptive assistants and agentic AI workflows that must learn from feedback, maintain state, and adapt over time. In 2026, contextual memory will no longer be a novel technique; it will become table stakes for many operational agentic AI deployments. ... In 2025, we saw numerous innovations, like the notion that an AI is able to parse data from an unstructured data source like a PDF. That's a capability that has existed for several years, but proved harder to operationalize at scale than many assumed. Databricks now has an advanced parser, and other vendors, including Mistral, have emerged with their own improvements. The same is true with natural language to SQL translation. While some might have assumed that was a solved problem, it's one that continued to see innovation in 2025 and will see more in 2026. It's critical for enterprises to stay vigilant in 2026. 
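The contrast with RAG becomes clearer with a toy version of agentic memory: the agent persists facts across interactions and retrieves the most relevant ones later. This is a minimal sketch using keyword overlap as a stand-in for real retrieval; the `AgentMemory` class and its methods are illustrative, not the API of Hindsight, LangMem, or any other system named above:

```python
class AgentMemory:
    """Toy long-context memory: store facts, recall them by keyword overlap."""

    def __init__(self):
        self._entries = []  # list of (text, set-of-words) pairs, oldest first

    def remember(self, text):
        self._entries.append((text, set(text.lower().split())))

    def recall(self, query, k=2):
        """Return up to k stored facts ranked by word overlap with the query."""
        q = set(query.lower().split())
        scored = [(len(q & words), text) for text, words in self._entries]
        scored.sort(key=lambda pair: -pair[0])
        return [text for score, text in scored[:k] if score > 0]
```

A production system would replace the overlap score with embeddings, decay, and write policies, but the shape is the same: state survives between sessions, so the agent can learn from feedback rather than re-retrieving static documents each time.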


Communicating AI Risk to the Board With Confidence

Most board members can comprehend that AI will drive growth. What they fail to grasp concretely is how the technology introduces a massive amount of exposure. This predicament is typically a result of how information is presented. Security and risk managers (SRMs) often describe AI incidents in the vocabulary of adversarial inputs, model drift, and architecture choices, which matter deeply but rarely answer the questions that directors tackle during their meetings. High-level stakeholders, in reality, are concerned with issues such as revenue protection, operational continuity, and competitive differentiation, creating a gap that requires more than translating acronyms. ... Traditional discussions about technology risk revolve around the triad of confidentiality, integrity, and availability. Boards know these categories well, and over the past few decades, they have learned that cybersecurity failures directly affect the business along these lines. GenAI has formidably challenged this familiar structure, with its associated risks not limited to one of these three domains.  ... When the conversation begins with the business consequence, though, the relevance is immediate. The most effective approach involves replacing those mechanics that mean so much to the internal teams with the strategic information boards need to operate. These details open a path for meaningful conversations that encourage directors to think through the implications and make more informed decisions. 


The six biggest security challenges coming in 2026

For many organizations, cybersecurity and resilience is a compliance exercise. But it must evolve into “a core intentional cybersecurity capability”, says Dimitriadis. “In 2026, organizations will need to build the capacity to anticipate regulatory changes, understand their strategic implications, and embed them into long-term planning.” ... Attackers are leveraging AI to create convincing email templates and fake websites “almost indistinguishable” from real ones – and without the common warning signs employees are trained to identify, says Mitchell. AI is also being used in vishing attacks, with deepfakes making it easier to clone the voice of high-ranking company executives to trick victims. In 2026, there will be more attacks utilizing realistic voice cloning and high-quality video deepfakes, says Joshua Walsh ... There is a current shift towards agentic AI that can take real-world actions, such as adjusting configurations, interacting with APIs, booking services and initiating financial tasks. This can increase efficiency, but it can also lead to unsafe decisions made at speed, says rradar’s Walsh. An agent told to "optimize performance" might disable logging or bypass authentication because it views security controls as delays, he suggests. Prompt injection is a hidden issue to look out for, he adds. “If a threat actor slips hidden instructions into data that the agent consumes, they can make it run actions on internal systems without anyone realising.” 


5 Changes That Will Define AI-Native Enterprises in 2026

As enterprises scale to multi-agent systems, the engineering focus will shift from creating prompts to architecting context. Multi-agent workflows rapidly expand requirements with tool definitions, conversation history, and data from multiple sources. This creates two challenges: context windows fill up, and models suffer from “context rot,” forgetting information buried in lengthy prompts. By mid-2026, context engineering will emerge as a distinct discipline with dedicated teams and specialized infrastructure, serving the minimal but complete information agents need. The best context engineers will understand both LLM constraints and their business domain’s semantic structure. ... Enterprises are realizing that AI agents need both data and meaning. Companies that spent years perfecting data lakes are already finding those assets are insufficient. AI can retrieve data, but without semantic context, it can’t interpret action or intent. That’s why teams will move beyond vector search toward building knowledge graphs, ontologies, and metadata-driven maps that teach AI how their business works. The battleground will shift from owning raw data to owning its interpretation. Off-the-shelf agents will struggle in complex domains because semantics are domain-specific. ... The AI-native enterprise looks very different from what came before. It serves machine customers, treats context as critical infrastructure, and has the tools to escape decades of technical debt. 


Microsegmentation: the unsung hero of cybersecurity (and why it should be your top priority)

Think of your network like an apartment building. You’ve got a locked front door — that’s your perimeter. But once someone gets inside, there’s no front desk checking IDs, no elevator security and the same outdated lock on every unit. An intruder can roam freely, entering any apartment they choose. Microsegmentation is the internal security system. It’s the keycard for the elevator, the camera in the hallway, the unique lock on your door. It’s what stops one compromised device from becoming a full-blown breach. ... OT environments are different. They’re often built on legacy systems, lack patching and operate in real-time. You can’t just drop an agent or reroute traffic without risking downtime. That’s why agencies need solutions that are agentless, software-defined and tailored to the unique constraints of OT. Otherwise, you’re only protecting half the house. ... Microsegmentation also plays a critical role in enabling zero trust. It enforces least privilege at the network level. It’s not just about who gets in; it’s about what they can touch once they’re inside. For agencies building toward zero trust, microsegmentation isn’t an afterthought. It’s a foundation. Despite all this, microsegmentation remains underutilized. According to TechTarget’s Enterprise Strategy Group, only 36% of organizations use it today, even though it’s foundational to zero trust. Why? Because 28% believe it’s too complex. But that perception is often rooted in outdated tooling.
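The apartment-building analogy maps naturally onto a default-deny policy: traffic between two segments is blocked unless an explicit rule allows it. A minimal sketch, with invented segment names and ports:

```python
# Hypothetical segment-to-segment allow rules: (source, destination) -> allowed ports.
ALLOW_RULES = {
    ("web", "app"): {443},
    ("app", "db"): {5432},
}

def is_allowed(src_segment, dst_segment, port):
    """Default-deny: permit only traffic matching an explicit rule."""
    return port in ALLOW_RULES.get((src_segment, dst_segment), set())
```

The point of the sketch is the default: a compromised web workload cannot reach the database directly, because no (web, db) rule exists - the "unique lock on every door" from the analogy.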


Beyond Chatbots: What Makes an AI Agent Truly Autonomous

Autonomous agents must retain and use context over time. Memory enables an agent to recall previous interactions, data, and decisions—allowing it to continue a process seamlessly without restarting each time. That persistence turns single exchanges into long-running workflows. In enterprise settings, it means an agent can track a contract review across multiple sessions or follow a complex support case without losing context. ... Traditional automation runs on fixed, rule-based workflows. Autonomous agents build and revise their own plans on the fly, adapting to results and feedback. This ability to plan dynamically—think, act, observe, and adjust—is what differentiates agentic AI from robotic process automation (RPA) or prompt chaining. In practice, an agent might be tasked with analyzing a set of contracts, then automatically decide how to proceed: extract key terms, assess risk, and summarize results. ... Resilient agents are designed to operate across models, retry failed actions, or launch sub-agents to handle specialized work—all within defined guardrails. That adaptability is what separates a proof of concept (POC) from a production-ready system. ... All the reasoning in the world means little if an agent can’t execute. Tools are what translate intelligence into impact. They’re the functions, APIs, and integrations that allow agents to interact with business systems—searching systems, generating documents, updating records, or triggering workflows across CRMs, ERPs, and analytics platforms.
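The think-act-observe loop described above can be sketched in a few lines. This is a deliberately simplified toy: the tools are stubs, and a real agent would revise the plan between steps rather than walk a fixed list:

```python
def run_agent(goal, tools, plan, max_steps=5):
    """Execute a plan step by step, observing each result and stopping
    early once a step reports the goal is met."""
    observations = []
    for step in plan[:max_steps]:
        result = tools[step](goal)            # act
        observations.append((step, result))   # observe
        if result == "done":                  # adjust: stop when finished
            break
    return observations
```

Contrast with RPA: here the loop inspects each result and can cut the plan short, whereas a fixed workflow would run every step regardless.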

Daily Tech Digest - November 18, 2025


Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous



The rise of the chief trust officer: Where does the CISO fit?

Trust is touted as a differentiator for organizations looking to strengthen customer confidence and find a competitive advantage. Trust cuts across security, privacy, compliance, ethics, customer assurance, and internal culture. For the custodians of trust, that’s a wide-ranging remit without the obvious definition of other C-suite roles. Typically, the CISO continues to own controls and protection, while the CTrO broadens the remit to reputation, ethics, and customer confidence. Where cybersecurity reports to the CTrO, it is a way to escape IT and the competing priorities with the CIO. This partnership repositions security from ‘department of no’ to business enabler, Forrester notes. ... Patel says that strong alignment between customer trust and business strategy is critical. “If you don’t have credibility in the marketplace, with your partners and customers, your business strategy is dead on arrival,” he tells CSO. Whereas the CISO’s day-to-day responsibilities include checking on the SOC, reviewing alerts, GRC, managing other security operations and board reporting, the chief trust officer role weaves customer trust throughout, says Patel. “It’s really bringing that trust lens into the decision-making equation and challenging colleagues and partners to think in the same manner.” ... There is also the question of how organizations operationalize trust — and can it be measured? No off-the-shelf platform exists, so CTrOs must build their own dashboards combining customer and employee metrics to track trends and identify early signs of trust erosion.


When Machines Attack Machines: The New Reality of AI Security

Attackers decomposed tasks and distributed them across thousands of instructions fed into multiple Claude instances, masquerading as legitimate security tests and circumventing guardrails. The campaign’s velocity and scale dwarfed what human operators could manage, representing a fundamental leap for automated adversarial capability. Anthropic detected the operation by correlating anomalous session patterns and observing operational persistence achievable only through AI-driven task decomposition at superhuman speeds. Though AI-generated attacks sometimes faltered—hallucinating data, forging credentials, or overstating findings—the impact proved significant enough to trigger immediate global warnings and precipitate major investments in new safeguards. Anthropic concluded that this development brings advanced offensive tradecraft within reach of far less sophisticated actors, marking a turning point in the balance between AI’s promise and peril. ... AI-based offensive operations exploit vulnerabilities across entire ecosystems instantly with the goal of exfiltrating critical intelligence and causing damage to the target. Offensive AI iterates adversarial attacks and novel exploits on a scale human red teams cannot attain. Defenses that work well against traditional techniques often fail outright under continuous, machine-driven attack cycles. 


From chatbots to colleagues: How agentic AI is redefining enterprise automation

According to Flores, agentic AI changes that equation. Each agent has a name, a mission defined by its system prompt, and a connection to company data through retrieval-augmented generation. Many of them also wield tools such as CRMs, databases, or workflow platforms. “An agent is like hiring a new employee who already knows your systems on day one,” Flores said. “It doesn’t just respond — it executes.” This new mode of collaboration also changes how employees interact with technology. Flores noted that his clients often name their agents, treating them as teammates rather than tools. “When marketing needs to check something, they’ll say, ‘Let’s ask Marco,’” he added. “That naming makes adoption easier — it feels human.” ... One of IBM’s first success stories came with password resets — an unglamorous but ubiquitous use case. Two agents now collaborate: one triages the request, while the other verifies credentials and performs the reset, all under the company’s identity-and-access-management system. Each agent has its own digital identity, ensuring audit trails and preventing impersonation. ... Agentic AI isn’t a software upgrade — it’s a redesign of how digital work gets done. Each of the leaders interviewed for this story emphasized that success depends as much on data and governance as on culture and experimentation. Before moving beyond chatbots, IT directors should ask not only “Can we do this?” but “Where should we start — and how do we do it safely?”
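The password-reset example can be sketched as two cooperating functions, each writing to an audit trail under its own identity. All names here are illustrative; this is not IBM's implementation:

```python
AUDIT_LOG = []  # each entry records which agent identity acted, and for whom

def triage_agent(request):
    """First agent: decide whether this request is a password reset at all."""
    AUDIT_LOG.append(("triage-agent", request["user"]))
    return request["type"] == "password_reset"

def reset_agent(request, verified_users):
    """Second agent: verify the requester's identity, then perform the reset."""
    AUDIT_LOG.append(("reset-agent", request["user"]))
    if request["user"] not in verified_users:
        return "rejected: identity not verified"
    return f"password reset for {request['user']}"

def handle(request, verified_users):
    if not triage_agent(request):
        return "routed elsewhere"
    return reset_agent(request, verified_users)
```

Because each agent logs under its own identity, the audit trail shows exactly which agent took which action, the property the article highlights for preventing impersonation.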


What to look for in an AI implementation partner

Good AI implementation partners need not be limited to big professional services firms. Smaller firms such as AI consultancies and startups can provide lots of value. Regardless, many organizations require outside expertise when deploying, monitoring, and maintaining AI tools and services. ... “Many firms understand AI tools at a surface level, but what truly matters is the ability to contextualize AI within the nuances of a specific industry,” says Hrishi Pippadipally, CIO at accounting and business advisory firm Wiss. ... An effective partner must be able to balance innovation with the guardrails of security, privacy, and industry-specific compliance, Agrawal adds. “Otherwise, IT leaders will inherit long-term liabilities,” he says. ... “The mistake many organizations make is focusing only on technical credentials or flashy demos,” Agrawal says. “What’s often overlooked and what I prioritize is whether the partner can embed AI into existing workflows without disrupting business continuity. A good partner knows how to integrate AI so that it doesn’t just work in theory, but delivers impact in the complex reality of enterprise operations.” ... “Most evaluation checklists focus on the technical side — security, compliance, data governance, etc.,” says Sara Gallagher, president of The Persimmon Group, a business management consultancy. “While that matters, too many execs are skipping over the thornier questions.


Magnetic tape is going strong in the age of AI, and it's about to get even better

“Aramid permits the manufacture of significantly thinner and smoother media, enabling longer tape lengths in a standard LTO Ultrium cartridge form factor,” the organization noted in a statement. “This material innovation provides an extra 10 TB of native capacity than the currently available 30 TB LTO-10 cartridge, which is manufactured using different materials.” Stephen Bacon, VP for data protection solutions product management at HPE, said the new cartridges are aimed at enterprises spanning an array of industries dealing with high data volumes, from manufacturing to financial services. “AI has turned archives into strategic assets,” Bacon commented. ... Tape storage has a number of distinct advantages, including low cost, durability, and easy portability. According to previous analysis from the LTO Program, companies using tape recorded an 86% lower total cost of ownership (TCO) compared to disk storage. TCO compared to cloud storage was also 66% lower across a 10 year period, figures showed. Notably, the use of tape for unstructured data storage also adds to the appeal, with this now vital in the training process for large language models (LLMs). ... Long-term, tape storage is only going to improve, at least if the LTO Program’s roadmap is to be believed. Through generations 11 through to 14, enterprises can expect to see significant capacity gains, eventually peaking with a 913 TB cartridge.


The rebellion against robot drivel

LLMs are “lousy writers and (most importantly!) they are not you,” Cantrill argues. That “you” is what persuades. We don’t read Steinbeck’s The Grapes of Wrath to find a robotic approximation of what desperation and hurt seem to be; we read it because we find ourselves in the writing. No one needs to be Steinbeck to draft press releases, but if that press release sounds samesy and dull, does it really matter that you did it in 10 seconds with an LLM versus an hour on your own mental steam? A few years ago, a friend in product marketing told me that an LLM generated better sales collateral than the more junior product marketing professionals he’d hired. His verdict was that he would hire fewer people and rely on LLMs for that collateral, which only got a few dozen downloads anyway, from a sales force that numbered in the thousands. Problem solved, right? Wrong. If few people are reading the collateral, it’s likely the collateral isn’t needed in the first place. Using LLMs to save money on creating worthless content doesn’t seem to be the correct conclusion. Ditto using LLMs to write press releases or other marketing content. I’ve said before that the average press release sounds like it was written by a computer (and not a particularly advanced computer), so it’s fine to say we should use LLMs to write such drivel. But isn’t it better to avoid the drivel in the first place? Good PR people think about content and its place in a wider context rather than just mindlessly putting out press releases.


AI’s Impact on Mental Health

“Talking to a therapist can be intimidating, expensive, or complicated to access, and sometimes you need someone—or something—to listen at that exact moment,” said Stephanie Lewis, a licensed clinical social worker and executive director of Epiphany Wellness addiction and mental health treatment centers. Chatbots allow people to vent, process their feelings, and get advice without worrying about being judged or misunderstood, Lewis said. “I also see that people who struggle with anxiety, social discomfort, or trust issues sometimes find it easier to open up to a chatbot than a real person.” Users are “often looking for a safe space to express emotions, receive reassurance, or find quick stress-management strategies,” added Dr. Bryan Bruno, medical director of Mid City TMS, a New York City-based medical center focused on treating depression. ... “Chatbots created for therapy are often built with input from mental health professionals and integrate evidence-based approaches, like cognitive behavioral therapy techniques,” Tse said. “They can prompt reflection and guide users toward actionable steps.” Lewis agreed that some therapeutic chatbots are designed with real therapy techniques, like Cognitive Behavioral Therapy (CBT), which can help manage stress or anxiety. “They can guide users through breathing exercises, mindfulness techniques, and journaling prompts, all great tools,” she said.


Holistic Engineering: Organic Problem Solving for Complex Evolving Systems

Late projects. Architectures that drift from their original design. Code that mysteriously evolves into something nobody planned. These persistent problems in software development often stem not from technical failures ... Holistic engineering is the practice of deliberately factoring these non-technical forces into our technical decisions, designs, and strategies. ... Holistic engineering means considering, during technical design, not only traditional technical factors but also all the other non-technical forces that will be influencing your system anyhow. By acknowledging these forces, teams can view the problem as an organic system and influence, to some extent, various parts of the system. ... Consider the actual information structure within your organization. Understanding actual workflow patterns and communication channels reveals how work truly gets accomplished. These communication patterns often differ significantly from the formal hierarchy. Next, identify which processes could block your progress. For example, some organizations require approval from twenty people, including the CTO, to decide on a release. ... Organizations that embrace holistic engineering gain predictable control over forces that typically derail technical projects. Instead of reacting to "unforeseen" delays and architectural drift, teams can anticipate and plan for organizational constraints that inevitably influence technical outcomes.


At its heart, industrial AI is about automating and optimising business processes to improve decision-making, enhance efficiency and increase profitability. It requires the collection of vast volumes of data from sources like IoT sensors, cameras, and back-office systems, and the application of machine and deep learning algorithms to surface insights. In some cases, the AI powers robots to supercharge automation, and in others, it utilises edge computing for faster, localised processing. Agentic AI helps firms go even further, by working autonomously, dynamically and intelligently to achieve the goals it is set. ... “You get the data in from IoT and you trigger that as an anomaly,” says Pederson. “You analyse the anomaly against all your historic records – other incidents that have happened with customers and how they have been fixed. You relate it to your knowledge base articles. And then you relate it to your inventory on your service vans, like which service vans and which technicians are equipped to do the job. “So it’s the whole estate of structured, unstructured and processed data. In the past, they would send a technician out, and they could get it right 84% of the time. Now they have improved their first-time fix rate to 97%.” Both this and the aforementioned field service deployment feature an “agentic dispatcher” which autonomously creates and publishes the schedules to the relevant service technicians, updates their calendar and suggests the best route to take. “In the very near future, AI agents will not only be helping to address work for people behind a desk, but guiding robots directly,” says Pederson.
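The anomaly-to-dispatch flow Pederson describes can be illustrated with a toy matcher: look up the likely fix from historic records, then find a van stocked with the needed part. The data structures and names here are invented for illustration:

```python
# Hypothetical historic records: anomaly type -> the fix that resolved it before.
HISTORIC_FIXES = {
    "overheating": "replace_fan",
    "pressure_drop": "reseal_valve",
}

# Hypothetical fleet inventory: which parts each van carries, and who drives it.
VANS = {
    "van-1": {"parts": {"replace_fan"}, "technician": "Ana"},
    "van-2": {"parts": {"reseal_valve"}, "technician": "Ben"},
}

def dispatch(anomaly):
    """Return (technician, fix) for a known anomaly, or None if nothing matches."""
    fix = HISTORIC_FIXES.get(anomaly)
    if fix is None:
        return None
    for van in VANS.values():
        if fix in van["parts"]:
            return (van["technician"], fix)
    return None
```

An agentic dispatcher layers scheduling, routing, and calendar updates on top of this matching step, but the core improvement to first-time fix rates comes from exactly this join of anomaly data, historic fixes, and van inventory.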


What security pros should know about insurance coverage for AI chatbot wiretapping claims

There are subtle differences in the way courts are viewing privacy litigation arising from the use of AI chatbots in comparison to litigation involving analytical tools like session replay or cookies. Both claims involve allegations that a third party is intercepting communications without proper consent, often under state wiretapping laws, but the legal arguments and defenses vary because the data being collected is different. ... Whether or not an exclusion will ultimately impact coverage depends both on the specific language of the exclusion and also the allegations raised in the underlying lawsuit. For example, broadly worded exclusions with “catch-all” phrases precluding coverage for any statutory violation may be more difficult for a policyholder to overcome than an exclusion that identifies by name specific statutes. As these claims are relatively new, we have yet to see significant examples of how this plays out in the context of insurance coverage litigation. However, we saw similar coverage arguments in the context of insurance coverage litigation where the underlying suit alleged violations of the Biometric Information Privacy Act (BIPA). ... To help mitigate risks, organizations should review their user consent mechanisms for AI bot communications. Consent does not always mean signing a form, but could include prominently displaying chatbot privacy notices before any data collection, providing easy access to the business’s privacy policy detailing how chatbot interactions are stored, and using automated disclaimers at the start of each chat session.
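The consent mechanisms suggested above can also be enforced programmatically: refuse to store any chat data until consent has been captured for the session. A minimal sketch with invented names:

```python
DISCLAIMER = "This chat may be recorded; see our privacy policy before continuing."

def new_session():
    """Every session starts with the disclaimer shown and no consent recorded."""
    return {"disclaimer_shown": DISCLAIMER, "consented": False, "log": []}

def record_consent(session):
    session["consented"] = True

def collect_message(session, message):
    """Store a chat message only if the session captured consent first."""
    if not session["consented"]:
        raise PermissionError("no consent: chatbot data collection blocked")
    session["log"].append(message)
    return session["log"]
```

Gating collection in code, rather than relying on a notice alone, gives the organization an auditable record that no chatbot data was stored before consent.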

Daily Tech Digest - November 13, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



Does your chatbot have 'brain rot'? 4 ways to tell

Oxford University Press, publisher of the Oxford English Dictionary, named "brain rot" as its 2024 Word of the Year, defining it as "the supposed deterioration of a person's mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging." ... Trying to draw exact connections between human cognition and AI is always tricky, despite the fact that neural networks -- the digital architecture upon which modern AI chatbots are based -- were modeled upon networks of organic neurons in the brain. ... That said, there are some clear parallels: as the researchers note in the new paper, for example, models are prone to "overfitting" data and getting caught in attentional biases ... If the ideal AI chatbot is designed to be a completely objective and morally upstanding professional assistant, these junk-poisoned models were like hateful teenagers living in a dark basement who had drunk way too much Red Bull and watched way too many conspiracy theory videos on YouTube. Obviously, not the kind of technology we want to proliferate. ... Obviously, most of us don't have a say in what kind of data gets used to train the models that are becoming increasingly unavoidable in our day-to-day lives. AI developers themselves are notoriously tight-lipped about where they source their training data from, which means it's difficult to rank consumer-facing models


7 behaviors of the AI-Savvy CIO

"The single most critical takeaway for CIOs is that a strong data foundation isn't optional -- it's critical for AI success. AI has made it easy to build prototypes, but unless you have your data in a single place, up to date, secured, and well governed, you'll struggle to put those prototypes into production. The team laying the groundwork for that foundation and getting enterprises' data AI-ready is data engineering. CIOs who still see data engineering as a back-office function are already five years behind, and probably training their future competitors. ... "Your data will never be perfect. And it doesn't have to be. It needs to be indicative of your company's reality. But your data will get a lot better if you first use AI to improve the UX. Then people will use your systems more, and in the way intended, creating better data. That better data will enable better AI. And the virtuous cycle will have begun. But it starts with the human side of the equation, not the technological." ... CIOs don't need deep technical mastery such as coding in Python or tuning neural networks -- but they must understand AI fundamentals. This includes grasping core AI principles, machine learning concepts, statistical modeling, and ethical implications. Mastery starts with CIOs understanding AI as an umbrella of technologies that automate different things. With this foundational fluency, they can ask the right questions, interpret insights effectively, and make informed strategic decisions. Let's look at the three AI domains.


The economics of the software development business

Some software companies quietly tolerated piracy, figuring that the more their software spread—even illegally—the more legitimate sales would follow in the long run. The argument was that if students and hobbyists pirated the software, it would lead to business sales when those people entered the workforce. The catchphrase here was “piracy is cheaper than marketing.” This was never an official position, but piracy was often quietly tolerated. ... Over the years, the boxes got thinner and the documentation went onto the Internet. For a time, though, “online help” meant a *.hlp file on your hard drive. People fought hard to keep that type of online help well into the Internet age. “What if I’m on an airplane? What if I get stranded in northern British Columbia?” Eventually, the physical delivery of software went away as Internet bandwidth allowed for bigger and bigger downloads. ... SaaS too has interesting economic implications for software companies. The marketplace generally expects a free tier for a SaaS product, and delivering free services can become costly if not done carefully. The compute costs money, after all. An additional problem is making sure that your free tier is good enough to be useful, but not so good that no one wants to move up to the paid tiers. ... The economics of software have always been a bit peculiar because the product is maddeningly costly to design and produce, yet incredibly easy to replicate and distribute. The years go by, but the problem remains the same: how to turn ideas and code into a profitable business?


Beyond the checklist: Shifting from compliance frameworks to real-time risk assessments

One of the most overlooked aspects of risk assessments is cadence. While gap analyses are sometimes done yearly or to prepare for large-scale audits, risk assessments need to be continuous or performed on a regular schedule. Threats do not respect calendar cycles. Major changes, including new technologies, mergers, regulatory changes or the adoption of AI, need to trigger reassessments. ... Risk assessments should culminate in outputs that business leaders can act on. This includes a concise risk heat map, a prioritized remediation roadmap and clear asks, such as budget, ownership and timelines. These deliverables convert technical findings into strategic decisions. They also help build trust with stakeholders, especially in organizations that may be new to formal risk management. ... Targeted risk assessments can be viewed as a low-cost, fundamental option. They are best suited to companies that have limited budget or are not prepared for a full review of the framework. With reduced scope, shorter turnaround and transparent business value, such assessments enable rapid establishment of trust, delivering prioritized outcomes. ... Risk assessments are not just checkboxes. They are tools for making decisions. The best programs are aligned with the business, focused, consistent and made to change over time.
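A risk heat map, at its simplest, is a likelihood-by-impact grid. As a rough illustration of how one cell is scored and bucketed (the 1-5 scales and the bucket thresholds below are my assumptions, not from the article), a sketch in Python:

```python
def heat_map_cell(likelihood: int, impact: int):
    """Score a risk on 1-5 likelihood/impact scales and bucket it
    for a heat map. Thresholds are illustrative only."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1-5")
    score = likelihood * impact  # 1..25
    if score >= 15:
        bucket = "high"
    elif score >= 8:
        bucket = "medium"
    else:
        bucket = "low"
    return score, bucket
```

In practice the thresholds would be set to match the organization's risk appetite, which is exactly the "cadence and ownership" conversation the article argues for.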


Legitimate Interest as a Lawful Basis: Pros, Cons and the Indian DPDP Act’s Stance

Under the EU’s General Data Protection Regulation (GDPR), for example, a company can process data if it is “necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (Article 6(1)(f) GDPR), so long as those interests are not overridden by the individual’s rights. However, India’s new Digital Personal Data Protection Act, 2023 (DPDP Act) pointedly does not include legitimate interest as a standalone lawful ground for processing. Instead, the Indian law relies primarily on consent and a limited set of “legitimate uses” explicitly enumerated in the statute. This divergence raises important questions about the pros and cons of the legitimate interest basis, its impact on the free flow of data, and whether India might benefit from adopting a similar concept. ... India’s decision to omit a general legitimate interest clause has sparked debate. There are advantages and disadvantages to this approach, and its impact on data flows and innovation is a key consideration. Pros / Rationale for Omission: From a privacy rights perspective, the absence of an open-ended legitimate interest basis means stronger individual control and legal certainty. The law explicitly tells citizens and businesses what the non-consensual exceptions are (mostly common-sense or public-interest scenarios), and everything else by default requires consent.


CIOs: Collect the right data now for future AI-enabled services

In its Technology Trends Outlook for 2025 report, McKinsey suggests the technology landscape continues to undergo significant innovation-sponsored shifts. The consultant says success will depend on executives identifying high-impact areas where their businesses can use AI, while addressing external factors such as regulatory shifts and ecosystem readiness. CIOs, as the guardians of enterprise technology, will be expected to embrace this challenge, but how? For Steve Lucas, CEO at technology specialist Boomi, digital leaders must start with a recognition that the surfeit of data held in modern enterprises is simply a starting point for what comes next. “There’s plenty of data,” he says. “In fact, there’s too much of it. We worry about collecting, storing, and accessing data. I think a successful approach is about determining the data that matters. As a CIO, do you understand what data matters today and what emerging technologies will matter tomorrow?” ... While it can be tough to find the wood for the trees, Corbridge suggests CIOs should search for established data roots within the enterprise. “It’s about going back to the huge volumes of data you’ve got already and working out how you put that information in the right place so it can be used in the right way for your AI projects,” he says. Focusing on the fine details is an approach that chimes with Ian Ruffle, head of data and insight at UK breakdown specialist RAC. 


How TTP-based Defenses Outperform Traditional IoC Hunting

To fight modern ransomware, organizations must shift from chasing IoCs to detecting attacker behaviors — known as Tactics, Techniques, and Procedures (TTPs). The MITRE ATT&CK framework provides a detailed overview of these behaviors throughout the attack lifecycle, from initial access to impact. TTPs are challenging for attackers to modify because they represent core behavioral patterns and strategic approaches, unlike IoCs which are surface-level elements that can be easily altered. This shift is reinforced by the so-called ‘Pyramid of Pain’ – a conceptual model that ranks indicators by how difficult they are for adversaries to alter. At the base are easily changed elements like hash values and IP addresses. At the top are TTPs, which represent the attacker’s core behaviors and strategies. Disrupting TTPs forces adversaries to change their entire strategy, which makes behavior-based detection the most effective method for defenders and the most resource-consuming for adversaries to evade. ... When security and networking are natively integrated, policy enforcement is consistent, micro-segmentation is practical, and containment actions can be executed inline without stitching together multiple consoles. The cloud model also enables continuous, global updates to prevention logic and the ability to apply AI/ML on aggregated, high‑fidelity data feeds to reduce noise and improve detection quality. All this reminds me of the OODA military model that can help speed up incident response.
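The Pyramid of Pain ordering can be made concrete in triage tooling. The sketch below uses the indicator categories from the model itself, but the numeric weights are my own assumption, not canonical; it simply ranks detection rules so that TTP-level coverage sorts above hash- or IP-based rules:

```python
# Pyramid of Pain levels, from easiest for an attacker to change (low)
# to hardest (high). Weights are illustrative, not canonical.
PYRAMID = {
    "hash": 1,
    "ip_address": 2,
    "domain": 3,
    "network_artifact": 4,
    "host_artifact": 4,
    "tool": 5,
    "ttp": 6,
}

def prioritize(detections):
    """Sort detection rules so TTP-based ones come first."""
    return sorted(detections, key=lambda d: PYRAMID[d["type"]], reverse=True)
```

For example, given a hash rule, an IP blocklist entry, and a lateral-movement behavior rule, the behavior rule should surface first, mirroring where the article says defenders get the most durable value.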


Healthcare security is broken because its systems can’t talk to each other

To maintain effectiveness, healthcare organizations should continually evaluate their security toolset for relevance, integration potential, and overall value to the security program. Prioritizing solutions that support open standards and seamless integration helps minimize context switching and alert fatigue, while ensuring that the security team remains engaged and productive. Ultimately, the decision to balance specialized point solutions with broader integrated platforms must be guided by strategic priorities, resource capacity, and the need to support both operational efficiency and clinical excellence. ... A critical consideration is the interoperability of security tools across both cloud and on-premises environments. Healthcare organizations must assess whether their security solutions need to span multiple cloud providers, support on-premises systems, or both, and determine how long on-premises support will be necessary as applications gradually shift to the cloud. Cloud providers are increasingly acquiring and integrating advanced security technologies, offering unified solutions that reduce the need for multiple monitoring platforms. This consolidation not only lessens alert fatigue but also enhances real-time visibility into security threats, an important advantage for healthcare, where timely detection is vital for protecting patient data and ensuring clinical continuity.


The Hard Truth About Trying to AI-ify the Enterprise

Every company has a few proofs of concept running. The problem isn't experimentation, it's scalability. How do you take those POCs and embed them into your business fabric? Many enterprises get trapped in "PowerPoint transformation": ambitious visions that never cross into operational workflows. "I've seen organizations invest millions in analytics platforms and data lakes. But when you ask how that's translating into faster underwriting, better risk models or smarter supply chains, there's often silence," Sen said. "That's because AI doesn't fail at the technology layer - it fails at the architecture of adoption." ... The central challenge, Sen argues, is that most enterprises treat AI as an overlay rather than an integral part of their operational core. "You can't bolt it on top of outdated systems and expect it to transform decision-making. The technology stack, data flow and governance model all need to evolve together," he said. That evolution is what Gartner describes as the pivot from "defending AI pilots to expanding into agentic AI ROI." Organizations that mastered generative AI in 2024 are now moving toward autonomous, interconnected systems that can execute tasks without human micromanagement. Sen points to his own experience at Linde plc as an early example. His team's first gen AI deployment at Linde was built for the audit department. "Our internal audit head wanted a semantic search database and a generative model to cut audit report generation time by half," he said.


Designing Edge AI That Works Where the Mission Happens

The fastest way to make federal AI deployments more reliable is to build on existing systems rather than start from scratch. Every mission already generates valuable data – drone imagery, satellite feeds, radar signals, logistics updates — that tells part of the operational story. AI at the edge helps teams interpret that data faster and more accurately. Instead of rebuilding infrastructure, agencies can embed lightweight, mission-specific AI models directly into their existing systems. ... When AI leaves the data center, its security model must accompany it. Systems operating at the edge face distinct risks, including limited oversight, contested networks, and the potential for physical compromise. Protection has to be built into every layer of the system, from the silicon up, to ensure full-scale security. That starts with end-to-end encryption, protecting data at rest, in transit, and during inference. Hardware-based features, such as secure boot and Trusted Execution Environments, prevent unauthorized code from running, while confidential computing keeps information encrypted even as it’s being processed. ... A decade ago, deploying AI in remote or contested locations required racks of hardware and constant connectivity. Today, a laptop or a single sensor array can deliver that same intelligence locally, securely, and autonomously. The power of AI and edge computing isn’t measured in size or speed; it’s in relevance.
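The "prevent unauthorized code from running" property rests on verifying an artifact before executing it. Real secure boot uses asymmetric signatures anchored in hardware; as a deliberately simplified, stdlib-only sketch of the same idea (an HMAC with a per-device key standing in for a fused signing key, which is my simplification), in Python:

```python
import hmac
import hashlib

DEVICE_KEY = b"per-device-secret-key"  # stands in for a key fused into hardware

def sign_image(image: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Produce a MAC for a firmware/model image at build time."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verified_boot(image: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Only 'boot' (return True) if the image matches its MAC.

    compare_digest avoids timing side channels during verification.
    """
    return hmac.compare_digest(sign_image(image, key), tag)
```

A tampered model or firmware image fails the check and is never loaded, which is the core guarantee the article attributes to secure boot and Trusted Execution Environments.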

Daily Tech Digest - October 31, 2025


Quote for the day:

“The more you lose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale


Breaking the humanoid robot delusion

The robot is called NEO. The company says NEO is the world’s first consumer-ready humanoid robot for the home. It is designed to automate routine chores and offer personal help so you can spend time on other things. ... Full autonomy in perceiving, planning, and manipulating like a human is a massive technology challenge. Robots have to be meticulously and painstakingly trained on every single movement, learn to recognize every object, and “understand” — for lack of a better word — how things move, how easily they break, what goes where, and what constitutes appropriate actions. One major way humanoid robots are trained is with teleoperation. A person wearing special equipment remotely controls prototype robots, training them for many hours on how to, say, fold a shirt. Many hours more are required to train the robot how to fold a smaller child’s shirt. Every variable, from the height of the folding table to the flexibility of the fabrics, has to be trained separately. ... The temptation to use impressive videos of remotely controlled robots (where you can’t see the person controlling them) to raise investment money, inspire stock purchases, and outright sell robot products appears to be too strong to resist. Realistically, the technology for a home robot that operates autonomously the way the NEO appears to do in the videos in arbitrary homes under real-world conditions is many years in the future, possibly decades.


Your vendor’s AI is your risk: 4 clauses that could save you from hidden liability

The frontier of exposure now extends to your partners’ and vendors’ use. The main question being: Are they embedding AI into their operations in ways you don’t see until something goes wrong? ... Require vendors to formally disclose where and how AI is used in their delivery of services. That includes the obvious tools and embedded functions in productivity suites, automated analytics and third-party plug-ins. ... Include explicit language that your data may not be used to train external models, incorporated into vendor offerings or shared with other clients. Require that all data handling comply with the strictest applicable privacy laws and specify that these obligations survive the termination of the contract. ... Human oversight ensures that automated outputs are interpreted in context, reviewed for bias and corrected when the system goes astray. Without it, organizations risk over-relying on AI’s efficiency while overlooking its blind spots. Regulatory frameworks are moving in the same direction: for example, high-risk AI systems must have documented human oversight mechanisms under the EU AI Act. ... Negotiate liability provisions that explicitly cover AI-driven issues, including discriminatory outputs, regulatory violations and errors in financial or operational recommendations. Avoid generic indemnity language. Instead, AI-specific liability should be made its own section in the contract, with remedies that scale to the potential impact.


AI chatbots are sliding toward a privacy crisis

The problem reaches beyond internal company systems. Research shows that some of the most used AI platforms collect sensitive user data and share it with third parties. Users have little visibility into how their information is stored or reused, leaving them with limited control over its life cycle. This leads to an important question about what happens to the information people share with chatbots. ... One of the more worrying trends in business is the growing use of shadow AI, where employees turn to unapproved tools to complete tasks faster. These systems often operate without company supervision, allowing sensitive data to slip into public platforms unnoticed. Most employees admit to sharing information through these tools without approval, even as IT leaders point to data leaks as the biggest risk. While security teams see shadow AI as a serious problem, employees often view it as low risk or a price worth paying for convenience. “We’re seeing an even riskier form of shadow AI,” says Tim Morris, “where departments, unhappy with existing GenAI tools, start building their own solutions using open-source models like DeepSeek.” ... Companies need to do a better job of helping employees understand how to use AI tools safely. This matters most for teams handling sensitive information, whether it’s medical data or intellectual property. Any data leak can cause serious harm, from damaging a company’s reputation to leading to costly fines.


The true cost of a cloud outage

The top 2000 companies in the world pay approximately $400 billion for downtime each year. A simple calculation reveals that these organizations, including the Dutch companies ASML, Nationale Nederlanden, AkzoNobel, Philips, and Randstad, lose around $200 million from their annual accounts due to unplanned downtime. Incidentally, what the Splunk study really revealed were the hidden costs of financial damage caused by problems with security tools, infrastructure, and applications. These can wipe billions off market values. ... A more conservative estimate of downtime costs can be found at Information Technology Intelligence Consulting, which conducted research on behalf of Calyptix Security. The majority of the parties surveyed had more than 200 employees, but the combination was more diverse than the top 2000 companies worldwide. The costs of downtime were substantial: at least $300,000 per hour for 90 percent of the companies in question. Forty-one percent stated that IT outages cost between $1 million and $5 million. ... In theory, the largest companies can rely on a multicloud strategy. In addition, hyperscalers absorb many local outages by routing traffic to other regions. However, multicloud is not something that you can just set up as a start-up SME. In addition, you usually do not build your applications in a fully redundant form in different clouds. Furthermore, it is quite possible that you can continue to work yourself, but that your product is inaccessible.
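The arithmetic behind figures like these is worth making explicit, since it is how the article gets from a $400 billion cohort-wide total to roughly $200 million per company. A sketch (the figures come from the article; the function names are mine):

```python
def loss_per_company(total_annual_loss: float, companies: int) -> float:
    """Average annual downtime loss per company across a cohort."""
    return total_annual_loss / companies

def annual_downtime_cost(cost_per_hour: float, hours_down_per_year: float) -> float:
    """Annual exposure implied by an hourly downtime cost."""
    return cost_per_hour * hours_down_per_year
```

Dividing $400B across the top 2000 companies gives the article's $200M each; and at the ITIC survey's $300,000-per-hour floor, even ten hours of outages a year already costs $3 million.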


5 Reasons Why You’re Not Landing Leadership Roles

Is your posture confident? Do you maintain steady eye contact? Is the cadence, pace and volume of your voice engaging, assertive and compelling? Recruiters assess numerous factors on the executive presence checklist. ... Are you showing a grasp of the prospective employer’s pain points and demonstrating an original point of view for how you will approach these problems? Treat senior level interviews like consulting RFPs – you are an expert on their business, uncovering potential opportunities with insightful questions, and sharing enough of your expertise that you’re perceived as the solution. ... Title bumps are rare, so you need to give the impression that you are already operating at the C-level in order to be hired as such. Your interview examples should include stories about how you initiated new ideas or processes, as well as measurable results that impact the bottom line. Your examples should specify how many people and dollars you have managed. Ideally, you have stories that show you can get results in up and down markets. ... The hiring process extends over multiple rounds, especially for leadership roles. Keep track of everyone that you have met, as well as what you have specifically discussed with each of them. Send personalized follow-up emails that engage each interviewer uniquely based on what you discussed. This differentiates you as someone who listens and cares about them specifically.


Why understanding your cyber exposure is your first line of defence

Thanks to AI, attacks are faster, more targeted and increasingly sophisticated. As the lines between the physical and digital blur, the threat is no longer isolated to governments or critical national infrastructure. Every organisation is now at risk. Understanding your cyber exposure is the key to staying ahead. This isn’t just a buzzword either; it’s about knowing where you stand and what’s at risk. Knowing every asset, every connection, every potential weakness across your digital ecosystem is now the first step in building a defence that can keep pace with modern threats. But before you can manage your exposure, you need to understand what’s driving it – and why the modern attack surface is so difficult to defend. ... By consolidating data from across the environment and layering it with contextual intelligence, cyber exposure management allows security teams to move beyond passive monitoring. It’s not just about seeing more, it’s about knowing what matters and acting on it. That means identifying risks earlier, prioritising them more effectively and taking action before they escalate. ... Effective and modern cybersecurity is shifting to shaping the battlefield before threats even arrive. That’s down to the value of understanding your cyber exposure. After all, it’s not just about knowing what’s in your environment, it’s about knowing how it all fits together – what’s exposed, what’s critical and where the next threat is likely to emerge.


Applications and the afterlife: how businesses can manage software end of life

Both enterprise software and personal applications have a lifecycle, set by the vendor’s support and maintenance. Once an application or operating system goes out of support, it will continue to run. But there will be no further feature updates and, vitally, often no security patches. ... When software end of life is unexpected, it can cause serious disruption to business processes. In the very worst-case scenarios, enterprises will only know there is a problem when a key application no longer functions, or if a malicious actor exploits a vulnerability. The problem for CIOs and CISOs is keeping track of the end-of-life dates for applications across their entire stack, and understanding and mapping dependencies between applications. This applies equally to in-house applications, off-the-shelf software and open source. “End of life software is not necessarily bad,” says Matt Middleton-Leal, general manager for EMEA at Qualys. “It’s just not updated any more, and that can lead to vulnerabilities. According to our research, nearly half of the issues on the CISA Known Exploited Vulnerabilities (KEV) list are found in outdated and unsupported software.” As CISA points out, attackers are most likely to exploit older vulnerabilities, and to target unpatched systems. Risks come from old, and known vulnerabilities, which IT teams should have patched.
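Tracking end-of-life dates across a stack is, at its core, an inventory-and-date problem. A minimal Python sketch (the inventory format below is an assumption for illustration; a real program would pull from a CMDB and vendor lifecycle feeds, and cross-check against sources like the CISA KEV catalog):

```python
from datetime import date

def out_of_support(inventory, as_of: date):
    """Return names of applications whose end-of-life date has passed.

    inventory: list of dicts with 'name' and 'eol' (a datetime.date).
    """
    return [app["name"] for app in inventory if app["eol"] < as_of]
```

Running such a check on a schedule, rather than at audit time, is what turns an unexpected end-of-life event into a planned migration.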


Tips for CISOs switching between industries

Building a transferable skill set is essential for those looking to switch industries. For Dell’s first-ever CISO, Tim Youngblood, adaptability was never a luxury but a requirement. His early years as a consultant at KPMG gave him a front-row seat to the challenges of multiple industries before he ever moved into cybersecurity. Those early years also taught Youngblood that while every industry has its own nuances, the core security principles remain constant. ... Making the jump into a new industry isn’t about matching past job titles but about proving you can create impact in a new context. DiFranco says the key is to demonstrate relevance early. “When I pitch a candidate, I explain what they did, how they did it, and what their impact was to their organization in their specific industry,” he says. “If what they did and how they did it, and what their impact was on the organization resonates where that company wants to go, they’re a lot more likely to say, ‘I don’t really care where this person comes from because they did exactly what I want done in this organization’. ... The biggest career risk for many CISOs isn’t burnout or data breach, it’s being seen as a one-industry operator. Ashworth’s advice is to focus on demonstrating transferable skills. “It’s a matter of getting whatever job you’re applying for, to realise that those principles are the same, no matter what industry you’re in. Whether it’s aerospace, healthcare, or finance, the principles are the same. Show that, and you’ll avoid being pigeonholed.”


Awareness Is the New Armor: Why Humans Matter Most in Cyber Defense

People remain the most unpredictable yet powerful variable in cybersecurity. Lapses like permission misconfiguration, accidental credential exposure, or careless data sharing continue to cause most incidents. Yet when equipped with the right tools and timely information, individuals can become the strongest line of defense. The challenge often stems from behavior rather than intent. Employees frequently bypass security controls or use unapproved tools in pursuit of productivity, unintentionally creating invisible vulnerabilities that go unnoticed within traditional defences. Addressing this requires more than restrictive policies. Security must be built into everyday workflows so that safe practices become second nature. ... Since technology alone cannot secure an organization, a culture of security-first thinking is essential. Leaders must embed security into everyday workflows, promote upskilling, and focus on reinforcement rather than punishment. This creates a workforce that takes ownership of cybersecurity, checking email sources, verifying requests, and maintaining vigilance in every interaction. Stay Safe Online is both a reminder and a rallying cry. India’s digital economy presents immense opportunity, but its threat surface expands just as fast. 


Creepy AI Crawlers Are Turning the Internet into a Haunted House

The degradation of the internet and market displacement caused by commercial AI crawlers directly undermines people’s ability to access information online. This happens in various ways. First, the AI crawlers put significant technical strain on the internet, making it more difficult and expensive to access for human users, as their activity increases the time needed to access websites. Second, the LLMs trained on this scraped content now provide answers directly to user queries, reducing the need to visit the original sources and cutting off the traffic that once sustained content creators, including media outlets. ... AI crawlers represent a fundamentally different economic and technical proposition––a vampiric relationship rather than a symbiotic one. They harvest content, news articles, blog posts, and open-source code without providing the semi-reciprocal benefits that made traditional crawling sustainable. Little traffic flows back to sources, especially when search engines like Google start to provide AI generated summaries rather than sending traffic on to the websites their summaries are based on. ... What makes this worse is that these actors aren’t requesting books to read individual stories or conduct genuine research, they’re extracting the entire collection to feed massive language model systems. The library’s resources are being drained not to serve readers, but to build commercial AI products that will never send anyone back to the library itself.
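Site operators who want to distinguish these crawlers from ordinary traffic usually start from the User-Agent string, since several AI crawlers do announce themselves (GPTBot, CCBot, ClaudeBot, and PerplexityBot are documented examples). A minimal check in Python; note that the token list below is illustrative and necessarily incomplete, and crawlers that spoof or omit their User-Agent will evade it entirely:

```python
# Self-identifying AI crawler tokens; incomplete by nature, and
# trivially evaded by crawlers that spoof their User-Agent header.
AI_CRAWLER_TOKENS = ("GPTBot", "CCBot", "ClaudeBot", "PerplexityBot", "Bytespider")

def is_ai_crawler(user_agent: str) -> bool:
    """Best-effort check for a self-identifying AI crawler."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)
```

This kind of check typically backs a robots.txt policy or a rate limit; it mitigates the load problem the article describes only for crawlers that choose to identify themselves.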

Daily Tech Digest - October 14, 2025


Quote for the day:

"What you get by achieving your goals is not as important as what you become by achieving your goals." -- Zig Ziglar


Know your ops: Why all ops lead back to devops

When you see more terms that include the “ops” suffix, you should understand them as ideas that, as Graham Krizek, CEO of Voltage, puts it, “represent different layers of the same overarching goal. These concepts are not isolated silos but overlapping practices that support automation, collaboration, and scalability.” ... While site reliability engineering (SRE) and infrastructure as code (IaC) don’t have “ops” attached to their names, they can be seen in many ways as offshoots of the devops movement. SRE applies software engineering techniques to operations problems, with an emphasis on service-level objectives and error budgets. IaC shops manage and provision infrastructure using machine-readable definition files and scripts that can be version-controlled, automated, and tested just like application code. IaC underpins devops, gitops, and many specialized ops practices. ... “While it is not necessary for every IT professional to master each one individually, understanding the principles behind them is essential for navigating modern infrastructure,” he says. “The focus should remain on creating reliable systems and delivering value, not simply keeping up with new terminology.” In other words: you don’t need to collect ops like trading cards. You need to understand the fundamentals, specialize where it makes sense, and ignore the rest. Start with devops, add security if your compliance requirements demand it, and adopt cloudops practices if you’re heavily in the cloud. 


Digital Trust as a Strategic Asset: Why CISOs Must Think Like CFOs

CFOs are great at framing problems in terms of money. CISOs must also figure out what risks cost, what not taking action costs, how much revenue loss comes from median dwell time, and how much it will cost to recover. Boards want the truth, not spin. Translate technical metrics into business impact (e.g., how detection/response times and dwell time drive incident scope and recovery costs). Recent threat reports show global median dwell time has fallen to ~10 days, but impact still depends on speed of containment. ... Stop talking about technology. Start describing cybersecurity as keeping your business running, protecting your reputation and building consumer trust – not simply operational disruption, but also how risk scenarios affect P&Ls. ... CISOs need to know how to read trust balance sheets, not simply logs. This entails being able to understand risk economics, insurance models and how to allocate resources strategically. ... We are entering a new era in which CFOs and CISOs are both responsible for keeping the business running: Earnings calls that include integrated trust measures; Cyber insurance coverage that is in line with active threat modeling; Cyber posture reports that meet regulatory standards, like financial audits; and Shared leadership on risk and value initiatives at the board level. CISOs who understand trust economics will impact the futures of businesses by making security a part of strategy as well as operations.
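Translating dwell time into money can start with an expected-cost model simple enough for a board slide. The linear model and all parameters below are my assumptions, meant only to show the framing a CFO would recognize, not an actuarial method:

```python
def incident_cost(dwell_days: float, loss_per_day: float,
                  base_recovery_cost: float) -> float:
    """Naive model: incident impact grows with time-to-containment."""
    return base_recovery_cost + dwell_days * loss_per_day
```

Under this toy model, cutting a 10-day median dwell time in half at an assumed $50k/day of business loss saves $250k per incident, which is the kind of sentence a board can act on, unlike a raw detection-latency metric.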


Five actions for CISOs to manage cloud concentration risks

To effectively mitigate concentration risks, CISOs should start by identifying and documenting both third-party and fourth-party risks, with a focus on the most critical cloud providers. It is important to recognize that some non-cloud products may also have cloud dependencies, such as management consoles or reporting engines. Collaborating closely with strategic procurement and vendor management (SPVM) leaders ensures that each cloud provider has a clearly documented owner who understands their responsibilities. ... CISOs should not rely solely on service level agreements (SLAs) to mitigate financial losses from outages, as SLA payouts are often insufficient. Instead, design applications to gracefully manage limited failures using cloud-native resilience patterns. In IaaS and PaaS, plan first for short-term failure of individual cloud services rather than for catastrophic failure of a large provider. In addition, special attention should be given to cloud identity providers due to their position as a large single point of failure. ... To reduce the risk associated with single-vendor dependency, organizations should intentionally distribute applications and workloads across at least two cloud providers. While single-vendor solutions can simplify integration and sourcing, a multi-cloud approach limits the potential impact of an issue affecting any one provider.
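The resilience patterns the article recommends, such as retry with backoff and failover to a second provider, can be sketched in a few lines. The provider functions below are hypothetical stand-ins, not any real cloud SDK:

```python
import time

def call_with_fallback(providers, request, retries=2, backoff=0.1):
    """Try each provider in order; retry transient failures before failing over."""
    last_error = None
    for call in providers:
        for attempt in range(retries):
            try:
                return call(request)
            except ConnectionError as exc:            # treated as transient here
                last_error = exc
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise RuntimeError("all providers failed") from last_error

# Hypothetical provider clients: the primary is down, the secondary answers.
def primary(req):
    raise ConnectionError("primary region unavailable")

def secondary(req):
    return {"provider": "secondary", "result": req.upper()}

result = call_with_fallback([primary, secondary], "ok")
```

Real implementations add circuit breakers and health checks so a failed primary is skipped quickly instead of being retried on every request.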


Your cyber risk problem isn’t tech — it’s architecture

The development of a risk culture — including appetite, tolerance and profile — within the scope of the management program is essential to provide real visibility into ongoing risks, how they are being perceived and mitigated, and to leverage the organization’s ability to improve its security posture. Consequently, the company begins to deliver reliable products to customers, secure its reputation and build a secure image to achieve a competitive advantage and brand recognition. ... Another important factor to be developed in parallel with raising risk culture is a continuous information security awareness process. This action should include all employees, especially those involved in incident management and cyber resilience. ... From a technical standpoint, it is important to select and implement appropriate controls from the NIST CSF stages: Identify, Protect, Detect, Respond and Recover. However, the selection of each control for building guardrails will depend on the overall cybersecurity big picture and market best practices. For each identified issue, the corresponding control must be determined, each monitored by the three lines of defense ... Finally, the cyber management program must also consider legal, regulatory and regional requirements, including privacy and cybersecurity laws. This covers LGPD, CCPA, GDPR, FFIEC, Central Bank regulations, etc., to understand the consequences of non-compliance, which can pose serious issues for the organization.
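The issue-to-control mapping the article describes can be kept as a simple lookup keyed by identified issue, tagged with the NIST CSF function it addresses. The issues and controls below are illustrative examples, not an official mapping:

```python
# Illustrative mapping: identified issue -> (NIST CSF function, example control).
CSF_CONTROLS = {
    "unknown assets on the network": ("Identify", "maintained asset inventory"),
    "weak remote access":            ("Protect",  "MFA on all remote logins"),
    "undetected intrusions":         ("Detect",   "centralized log monitoring"),
    "slow incident handling":        ("Respond",  "documented incident runbooks"),
    "no tested backups":             ("Recover",  "regular restore exercises"),
}

def control_for(issue: str) -> str:
    """Return the guardrail chosen for an identified issue."""
    function, control = CSF_CONTROLS[issue]
    return f"{function}: {control}"
```

Keeping the mapping explicit makes it auditable by each of the three lines of defense: the owner applies the control, risk management reviews the mapping, and internal audit verifies both.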


Even the best AI agents are thwarted by this protocol - what can be done

An emerging category of artificial intelligence middleware known as Model Context Protocol is meant to make generative AI programs such as chatbots more powerful by letting them connect with various resources, including packaged software such as databases. Multiple studies, however, reveal that even the best AI models struggle to use Model Context Protocol. ... Having a standard does not mean that an AI model, whose functionality includes a heavy dose of chance ("probability" in technical terms), will faithfully implement MCP. An AI model plugged into MCP has to generate output that achieves several things, such as formulating a plan to answer a query by choosing which external resources to access, in what order to contact the MCP servers that lead to those external applications, and then structuring several requests for information to produce a final output to answer the query. ... The immediate takeaway from the various benchmarks is that AI models need to adapt to a new epoch in which using MCP is a challenge. AI models may have to evolve in new directions to fulfill the challenge. All three studies identify a problem: Performance degrades as the AI models have to access more MCP servers. The complexity of multiple resources starts to overwhelm even the models that can best plan what steps to take at the outset. As Wu and team put it in their MCPMark paper, the complexity of all those MCP servers strains any AI model's ability to keep track of it all.
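Concretely, MCP messages are JSON-RPC 2.0, and each of the "structured requests" the model must emit is a `tools/call` with the tool's name and arguments. A minimal sketch of building one such request; the tool name and arguments here are hypothetical, not from any real MCP server:

```python
import json
from itertools import count

_request_id = count(1)  # JSON-RPC requests need unique ids

def mcp_tool_call(tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as MCP defines it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_request_id),
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool on a hypothetical database MCP server
request = mcp_tool_call("query_orders", {"customer_id": "c-42", "limit": 10})
```

The format itself is trivial; what the benchmarks show is hard is everything around it, namely choosing the right tool among many servers, sequencing the calls, and carrying intermediate results through a long chain.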


Chaos engineering on Google Cloud: Principles, practices, and getting started

A common misconception is that cloud environments automatically provide application resiliency, eliminating the need for testing. Although cloud providers do offer various levels of resiliency and SLAs for their cloud products, these alone do not guarantee that your business applications are protected. If applications are not designed to be fault-tolerant or if they assume constant availability of cloud services, they will fail when a particular cloud service they depend on is not available. ... As a proactive discipline, chaos engineering enables organizations to identify weaknesses in their systems before they lead to significant outages or failures, where a system includes not only the technology components but also the people and processes of an organization. By introducing controlled, real-world disruptions, chaos engineering helps test a system's robustness, recoverability, and fault tolerance. This approach allows teams to uncover potential vulnerabilities, so that systems are better equipped to handle unexpected events and continue functioning smoothly under stress. ... Chaos Toolkit is an open-source framework written in Python that provides a modular architecture where you can plug in other libraries (also known as ‘drivers’) to extend your chaos engineering experiments. ... To enable Google Cloud customers and engineers to introduce chaos testing in their applications, we’ve created a series of Google Cloud-specific chaos engineering recipes. Each recipe covers a specific scenario to introduce chaos in a particular Google Cloud service.
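Chaos Toolkit experiments are declarative documents: a steady-state hypothesis with probes, and a method of actions that inject the fault. A minimal sketch of that structure, expressed as the Python dict behind the JSON; the URL, module path, and instance name are placeholders, not a tested recipe:

```python
# Minimal Chaos Toolkit-style experiment, in the shape the tool reads.
# All concrete values (URL, module, instance name) are hypothetical placeholders.
experiment = {
    "title": "App survives loss of one backend instance",
    "description": "Steady state: the health endpoint answers 200.",
    "steady-state-hypothesis": {
        "title": "service is healthy",
        "probes": [{
            "type": "probe",
            "name": "health-check",
            "tolerance": 200,                          # expected HTTP status
            "provider": {
                "type": "http",
                "url": "https://example.com/health",   # placeholder URL
            },
        }],
    },
    "method": [{
        "type": "action",
        "name": "terminate-one-instance",
        "provider": {
            "type": "python",
            "module": "chaosgcp.gce.instance.actions",  # assumed driver module
            "func": "stop_instance",                    # assumed function name
            "arguments": {"instance_name": "backend-1"},
        },
    }],
}
```

The toolkit verifies the hypothesis before and after running the method, so the experiment fails loudly if the fault breaks steady state instead of degrading gracefully.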


The attack surface you can’t see: Securing your autonomous AI and agentic systems

The deep, non-deterministic nature of the underlying Large Language Models (LLMs) and the complex, multi-step reasoning they perform create systems where key decisions are often unexplainable. When an AI agent performs an unauthorized or destructive action, auditing it becomes nearly impossible. ... When you give an AI agent autonomy and tool access, you create a new class of trusted digital insider. If that agent is compromised, the attacker inherits all its permissions. An autonomous agent, which often has persistent access to critical systems, can be compromised and used to move laterally across the network and escalate privileges. The consequences of this over-permissioning are already being felt. ... The sheer speed and scale of agent autonomy demand a shift from traditional perimeter defense to a Zero Trust model specifically engineered for AI. This is no longer an optional security project; it is an organizational mandate for any leader deploying AI agents at scale. ... Securing Agentic AI is not just about extending your traditional security tools. It requires a new governance framework built for autonomy, not just execution. The complexity of these systems demands a new security playbook focused on control and transparency ... The future of enterprise efficiency is agentic, but the future of enterprise security must be built around controlling that agency. 
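The deny-by-default control the Zero Trust model implies can be sketched as an explicit per-agent allowlist gating every tool invocation. This is a hypothetical illustration, with made-up agent ids and tool names, not any product's API:

```python
# Hypothetical deny-by-default tool gating for autonomous agents.
AGENT_PERMISSIONS = {
    "billing-agent": {"read_invoices"},
    "support-agent": {"read_tickets", "reply_ticket"},
}

class PermissionDenied(Exception):
    pass

def invoke_tool(agent_id: str, tool: str, call, *args):
    """Allow a tool call only if it is on the agent's explicit allowlist."""
    allowed = AGENT_PERMISSIONS.get(agent_id, set())  # unknown agent -> nothing
    if tool not in allowed:
        raise PermissionDenied(f"{agent_id} may not call {tool}")
    # A real system would also write an audit record here for traceability.
    return call(*args)

# The billing agent can read invoices but cannot reply to tickets.
invoices = invoke_tool("billing-agent", "read_invoices", lambda: ["inv-1"])
```

Scoping each agent to the minimum tool set bounds the blast radius: a compromised agent can only misuse what it was explicitly granted, which also makes its audit trail tractable.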


Systems that Sustain: Lessons that Nature Never Forgot but We Did

In practice, a major flaw in many technology projects is that existing multi-level approval systems are simply digitalised, leading to only marginal improvements. The process becomes a digital twin of the old: while processing speeds increase, the workflow itself remains long, redundant, and often cumbersome. The introduction of a new digital interface adds to the woes rather than simplifies them. Had processes been genuinely reengineered, digitisation could have saved time by simplifying steps, reducing the training load, improving efficiency, cutting costs, and enabling quicker adaptation in response to change. Another persistent pitfall in public sector digital transformation is misunderstanding the promise of analytics, and more crucially, confusing outputs with outcomes. ... Humans, as players in nature’s game, are unique. Evolution gifted us consciousness, language, memory, and complex social bonds—traits that allowed the creation of technology, law, storytelling, and culture. Yet these very blessings seeded traits antithetical to nature’s raw logic ... Artificial intelligence presents a tantalising prospect. Unlike its human creators, a well-designed AI can, under ideal circumstances, create technologies based on the same bias-free principles that drive nature: redesign for purpose, learn and adapt from data, and commit to real, measurable outcomes. 


California introduces new child safety law aimed at AI chatbots

The law is set to come into effect on Jan. 1, 2026, and requires chatbot operators to implement age verification and warn users of the risks of companion chatbots. The bill implements harsher penalties for anyone profiting from illegal deepfakes, with fines of up to $250,000 per offense. In addition, technology companies must establish protocols that seek to prevent self-harm and suicide. These protocols will have to be shared with the California Department of Health to ensure they’re suitable. Companies will also be required to share statistics on how often their services issue crisis center prevention alerts to their users. Some AI companies have already taken steps to protect children, with OpenAI recently introducing parental controls and content safeguards in ChatGPT, along with a self-harm detection feature. Meanwhile, Character AI has added a disclaimer to its chatbot that reminds users that all chats are generated by AI and fictional. Newsom is no stranger to AI legislation. In September, he signed into law another bill called SB 53, which mandates greater transparency from AI companies. More specifically, it requires AI firms to be fully transparent about the safety protocols they implement, while providing protections for whistleblower employees. The bill means that California is the first U.S. state to require AI chatbots to implement safety protocols, but other states have previously introduced more limited legislation. 


Embedding Security into Enterprise Architecture: A TOGAF-Based Approach to Risk-Aligned Design

Treating security as a separate discipline leads to inefficiencies, redundancies, and vulnerabilities. Bolting on security after systems are designed often results in costly retrofits, fragmented controls, and misaligned priorities. It also creates friction between teams — where security is seen as a blocker rather than a partner. Integrating ESA into EA from the outset changes the dynamic. It ensures that security is considered in every architectural decision — from business processes to data flows, from application design to infrastructure deployment. It aligns security with business goals, reduces risk exposure, and accelerates delivery. ... ISM brings operational rigor to ESA. It defines how security is implemented, monitored, and improved. ISM includes identity and access management, continuity planning, compliance management, and security awareness. When ISM is integrated into EA, security becomes part of the enterprise fabric. It’s not just a set of policies — it’s a way of working. ... This integration is not a technical adjustment — it’s a strategic evolution. It requires collaboration, shared language, and a commitment to embedding security into every architectural decision. When done right, it reduces risk, accelerates delivery, and builds confidence across the enterprise. Security by design is not a luxury — it’s a necessity. And EA Capability is how we make it real.