
Daily Tech Digest - September 07, 2025


Quote for the day:

"The struggle you're in today is developing the strength you need for tomorrow." -- #Soar2Success


The Automation Bottleneck: Why Data Still Holds Back Digital Transformation

Even in firms with well-funded digital agendas, legacy system sprawl is an ongoing headache. Data lives in silos, formats vary between regions and business units, and integration efforts can stall once it becomes clear just how much human intervention is involved in daily operations. Elsewhere, the promise of straight-through processing clashes with manual workarounds, from email approvals and spreadsheet imports to ad hoc scripting. Rather than symptoms of technical debt, these gaps point to automation efforts that are being layered on top of brittle foundations. Until firms confront the architectural and operational barriers that keep data locked in fragmented formats, automation will also remain fragmented. Yes, it will create efficiency in isolated functions, but not across end-to-end workflows. And that’s an unforgiving limitation in capital markets where high trade volumes, vast data flows, and regulatory precision are all critical. ... What does drive progress are purpose-built platforms that understand the shape and structure of industry data from day one, moving, enriching, validating, and reformatting it to support the firm’s logic. Reinventing the wheel for every process isn’t necessary, but firms do need to acknowledge that, in financial services, data transformation isn’t some random back-office task. It’s a precondition for the type of smooth and reliable automation that prepares firms for the stark demands of a digital future.


Switching on resilience in a post-PSTN world

The copper PSTN network, first introduced in the Victorian era, was never built for the realities of today’s digital world. The current PSTN infrastructure was installed in the early 80s, and early broadband was introduced using the same lines in the early 90s. And the truth is, it needs to retire, having operated past its maintainable life span. Modern work depends on real-time connectivity and data-heavy applications, with expectations around speed, scalability, and reliability that outpace the capabilities of legacy infrastructure. ... Whether it’s a GP retrieving patient records or an energy network adjusting supply in real time, their operations depend on uninterrupted, high-integrity access to cloud systems and data center infrastructure. That’s why the PSTN switch-off must be seen not as a telecoms milestone, but as a strategic resilience imperative. Without universal access upgrades, even the most advanced data centers can’t fulfil their role. The priority now is to build a truly modern digital backbone. One that gives homes, businesses, and CNI facilities alike robust, high-speed connectivity into the cloud. This is about more than retiring copper. It’s about enabling a smarter, safer, more responsive nation. Organizations that move early won’t just minimize risk, they’ll unlock new levels of agility, performance, and digital assurance.


Neither driver, nor passenger — covenantal co-creator

The covenantal model rests on a deeper premise: that intelligence itself emerges not just from processing information, but from the dynamic interaction between different perspectives. Just as human understanding often crystallizes through dialogue with others, AI-human collaboration can generate insights that exceed what either mind achieves in isolation. This isn't romantic speculation. It's observable in practice. When human contextual wisdom meets AI pattern recognition in genuine dialogue, new possibilities emerge. When human ethical intuition encounters AI systematic analysis, both are refined. When human creativity engages with AI synthesis, the result often transcends what either could produce alone. ... Critics will rightfully ask: How do we distinguish genuine partnership from sophisticated manipulation? How do we avoid anthropomorphizing systems that may simulate understanding without truly possessing it? ... The real danger isn't just AI dependency or human obsolescence. It's relational fragmentation — isolated humans and isolated AI systems operating in separate silos, missing the generative potential of genuine collaboration. What we need isn't just better drivers or more conscious passengers. We need covenantal spaces where human and artificial minds can meet as genuine partners in the work of understanding.


Facial recognition moves into vehicle lanes at US borders

According to the PTA, VBCE relies on a vendor capture system embedded in designated lanes at land ports of entry. As vehicles approach the primary inspection lane, high-resolution cameras capture facial images of occupants through windshields and side windows. The images are then sent to the VBCE platform where they are processed by a “vendor payload service” that prepares the files for CBP’s backend systems. Each image is stored temporarily in Amazon Web Services’ S3 cloud storage, accompanied by metadata and quality scores. An image-quality service assesses whether the photo is usable while an “occupant count” algorithm tallies the number of people in the vehicle to measure capture rates. A matching service then calls CBP’s Traveler Verification Service (TVS) – the central biometric database that underpins Simplified Arrival – to retrieve “gallery” images from government holdings such as passports, visas, and other travel documents. The PTA specifies that an “image purge service” will delete U.S. citizen photos once capture and quality metrics are obtained, and that all images will be purged when the evaluation ends. Still, during the test phase, images can be retained for up to six months, a far longer window than the 12-hour retention policy CBP applies in operational use for U.S. citizens.
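
The PTA's description amounts to a staged pipeline: capture, quality scoring, occupant counting, gallery matching, and purging. The sketch below is illustrative only; the class, field, and threshold names are hypothetical, not CBP's actual services.

```python
# Illustrative sketch only: a simplified pipeline mirroring the stages the PTA
# describes (quality scoring, occupant counting, gallery matching, purging).
# Class, field, and threshold names are hypothetical, not CBP's actual services.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Capture:
    image_id: str
    quality_score: float              # output of the image-quality service
    occupant_count: int               # output of the occupant-count algorithm
    is_us_citizen: bool = False
    matched_gallery_id: Optional[str] = None
    purged: bool = False


def process_capture(capture: Capture, gallery: dict,
                    min_quality: float = 0.7) -> Capture:
    """Run one capture through quality gating, matching, and purge rules."""
    if capture.quality_score < min_quality:
        return capture                # unusable image: no match attempted
    # Matching step: look up a "gallery" photo (passport/visa) for this capture.
    capture.matched_gallery_id = gallery.get(capture.image_id)
    # Purge step: delete U.S. citizen photos once capture/quality metrics exist.
    if capture.is_us_citizen:
        capture.purged = True
    return capture
```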


Quantum Computing Meets Finance

Many financial-asset-pricing problems boil down to solving integral or partial differential equations. Quantum linear algebra can potentially speed that up. But the solution is a quantum state. So, you need to be creative about capturing salient properties of the numerical solution to your asset-pricing model. Additionally, pricing models are subject to ambiguity regarding sources of risk—factors that can adversely affect an asset’s value. Quantum information theory provides tools for embedding notions of ambiguity. ... Recall that some of the pioneering research on quantum algorithms was done in the 1990s by scientists like Deutsch, Shor, and Vazirani, among others. Today it’s still a challenge to implement their ideas with current hardware, and that’s three decades later. But besides hardware, we need progress on algorithms—there’s been a bit of a quantum algorithm winter. ... Optimization tasks across industries, including computational chemistry, materials science, and artificial intelligence, are also applied in the financial sector. These optimization algorithms are making progress. In particular, the ones related to quantum annealing run on the most reliably scaled hardware out there. ... The most well-known case is portfolio allocation. You have to translate that into what’s known as quadratic unconstrained binary optimization, which means making compromises so that the problem stays within what you can actually compute.
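
To give a sense of what that translation involves, here is a minimal sketch of a toy three-asset selection problem cast as QUBO. The returns, covariances, and penalty weight are invented for illustration.

```python
# A minimal sketch of casting a toy three-asset selection problem as QUBO.
# Returns, covariances, and the penalty weight are invented for illustration.
import numpy as np

mu = np.array([0.10, 0.07, 0.12])                 # expected returns
sigma = np.array([[0.08, 0.01, 0.02],
                  [0.01, 0.05, 0.01],
                  [0.02, 0.01, 0.09]])            # covariance (risk) matrix
risk_aversion, penalty, k = 1.0, 2.0, 2           # pick exactly k assets
n = len(mu)

# Objective: risk_aversion * x'Sx - mu'x + penalty * (sum(x) - k)^2, x binary.
# Expanding the squared constraint keeps everything quadratic, i.e. QUBO form.
Q = risk_aversion * sigma + penalty * np.ones((n, n))
Q -= np.diag(mu + 2 * penalty * k * np.ones(n))

# Brute-force the 2^n candidates (fine for a toy; an annealer or heuristic
# solver takes over at realistic sizes).
best_obj, best_x = None, None
for bits in np.ndindex(*(2,) * n):
    x = np.array(bits)
    obj = float(x @ Q @ x)
    if best_obj is None or obj < best_obj:
        best_obj, best_x = obj, bits
print("selected assets:", best_x, "objective:", round(best_obj, 4))
```

The compromises mentioned above show up here as the penalty term: the hard "pick k assets" constraint has to be folded into the quadratic objective rather than enforced exactly.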


Beyond IT: How Today’s CIOs Are Shaping Innovation, Strategy and Security

It’s no longer acceptable to measure success by uptime or ticket resolution. Your worth is increasingly measured by your ability to partner with business units, translate their needs into scalable technology solutions and get those solutions to market quickly. That means understanding not just the tech, but the business models, revenue drivers and customer expectations. You don’t need to be an expert in marketing or operations, but you need to know how your decisions in architecture, tooling, and staffing directly impact their outcomes. ... Security and risk management are no longer checkboxes handled by a separate compliance team. They must be embedded into the DNA of your tech strategy. Becky refers to this as “table stakes,” and she’s right. If you’re not building with security from the outset, you’re building on sand. That starts with your provisioning model. We’re in a world where misconfigurations can take down global systems. Automated provisioning, integrated compliance checks and audit-ready architectures are essential. Not optional. ... CIOs need to resist the temptation to chase hype. Your core job is not to implement the latest tools. Your job is to drive business value and reduce complexity so your teams can move fast, and your systems remain stable. The right strategy? Focus on the essentials: Automated provisioning, integrated security and clear cloud cost governance. 


The Difference Between Entrepreneurs Who Survive Crises and Those Who Don't

Among the most underrated strategies for protecting reputation, silence holds a special place. It is not passivity; it's an intentional, active choice. Deciding not to react immediately to a provocation buys time to think, assess and respond surgically. Silence has a precise psychological effect: It frustrates your attacker, often pushing them to overplay their hand and make mistakes. This dynamic is well known in negotiation — those who can tolerate pauses and gaps often control the rhythm and content of the exchange. ... Anticipating negative scenarios is not pessimism — it's preparation. It means knowing ahead of time which actions to avoid and which to take to safeguard credibility. As Eccles, Newquist, and Schatz note in Harvard Business Review, a strong, positive reputation doesn't just attract top talent and foster customer loyalty — it directly drives higher pricing power, market valuation and investor confidence, making it one of the most valuable yet vulnerable assets in a company's portfolio. ... Too much exposure without a solid reputation makes an entrepreneur vulnerable and easily manipulated. Conversely, those with strong credibility maintain control even when media attention fades. In the natural cycle of public careers, popularity always diminishes over time. What remains — and continues to generate opportunities — is reputation. 


Ship Faster With 7 Oddly Specific DevOps Habits

PowerPoint can lie; your repo can’t. If “it works on my machine” is still a common refrain, we’ve left too much to human memory. We make “done” executable. Concretely, we put a Makefile (or a tiny task runner) in every repo so anyone—developer, SRE, or manager who knows just enough to be dangerous—can run the same steps locally and in CI. The pattern is simple: a single entry point to lint, test, build, and package. That becomes the contract for the pipeline. ... Pipelines shouldn’t feel like bespoke furniture. We keep a single “paved path” workflow that most repos can adopt unchanged. The trick is to keep it boring, fast, and self-explanatory. Boring means a sane default: lint, test, build, and publish on main; test on pull requests; cache aggressively; and fail clearly. Fast means smart caching and parallel jobs. Self-explanatory means the pipeline tells you what to do next, not just that you did it wrong. When a team deviates, they do it consciously and document why. Most of the time, they come back to the path once they see the maintenance cost of custom tweaks. ... A release isn’t done until we can see it breathing. We bake observability in before the first customer ever sees the service. That means three things: usable logs, metrics with labels that match our domain (not just infrastructure), and distributed traces. On top of those, we define one or two Service Level Objectives with clear SLIs—usually success rate and latency. 
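
As a rough illustration of the "single entry point" idea, here is a tiny Python task runner; the specific tools it invokes (ruff, pytest, python -m build) are assumptions, not what the article's teams use.

```python
#!/usr/bin/env python3
# tasks.py - a minimal "single entry point" sketch: the same steps run locally
# and in CI. Tool choices (ruff, pytest, build) are assumptions for the example.
import subprocess
import sys

TASKS = {
    "lint": [["ruff", "check", "."]],
    "test": [["pytest", "-q"]],
    "build": [["python", "-m", "build"]],
}
TASKS["all"] = TASKS["lint"] + TASKS["test"] + TASKS["build"]


def run(task: str) -> int:
    for cmd in TASKS[task]:
        print(f"$ {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return result.returncode      # fail fast and clearly
    return 0


if __name__ == "__main__":
    task = sys.argv[1] if len(sys.argv) > 1 else "all"
    if task not in TASKS:
        sys.exit(f"unknown task {task!r}; choose from {sorted(TASKS)}")
    sys.exit(run(task))
```

The CI pipeline then calls the same entry point (for example `python tasks.py all`), which is what makes it the contract rather than a parallel, drifting set of steps.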


Kali Linux vs Parrot OS – Which Penetration Testing Platform is Most Suitable for Cybersecurity Professionals?

Kali Linux ships with over 600 pre-installed penetration testing tools, carefully curated to cover the complete spectrum of security assessment activities. The toolset spans multiple categories, including network scanning, vulnerability analysis, exploitation frameworks, digital forensics, and post-exploitation utilities. Notable tools include the Metasploit Framework for exploitation testing, Burp Suite for web application security assessment, Nmap for network discovery, and Wireshark for protocol analysis. The distribution’s strength lies in its comprehensive coverage of penetration testing methodologies, with tools organized into logical categories that align with industry-standard testing procedures. The inclusion of cutting-edge tools such as Sqlmc for SQL injection testing, Sprayhound for password spraying integrated with Bloodhound, and Obsidian for documentation purposes demonstrates Kali’s commitment to addressing evolving security challenges. ... Parrot OS distinguishes itself through its holistic approach to cybersecurity, offering not only penetration testing tools but also integrated privacy and anonymity features. The distribution includes over 600 tools covering penetration testing, digital forensics, cryptography, and privacy protection. Key privacy tools include Tor Browser, AnonSurf for traffic anonymization, and Zulu Crypt for encryption operations.


How Artificial Intelligence Is Reshaping Cybersecurity Careers

AI-Enhanced SOC Analysts upend traditional security operations, leveraging artificial intelligence to enhance their threat detection and incident response capabilities. These positions work with existing analyst platforms capable of autonomous reasoning that mimics expert analyst workflows, correlating evidence, reconstructing timelines, and prioritizing real threats at a much faster rate. ... AI Risk Analysts and Governance Specialists ensure responsible AI deployment through risk assessments and adherence to compliance frameworks. Professionals in this role may hold a certification like the AIGP. This certification demonstrates that the holder can ensure safety and trust in the development and deployment of ethical AI and ongoing management of AI systems. This role requires foundational knowledge of AI systems and their use cases, the impacts of AI, and comprehension of responsible AI principles. ... AI Forensics Specialists represent an emerging role that combines traditional digital forensics with AI-specific environments and technology. This role is designed to analyze model behavior, trace adversarial attacks, and provide expert testimony in legal proceedings involving AI systems. While classic digital forensics focuses on post-incident investigations, preserving evidence and chain of custody, and reconstructing timelines, AI forensics specialists must additionally possess knowledge of machine learning algorithms and frameworks.

Daily Tech Digest - September 06, 2025


Quote for the day:

"Average leaders raise the bar on themselves; good leaders raise the bar for others; great leaders inspire others to raise their own bar." -- Orrin Woodward


Why Most AI Pilots Never Take Flight

The barrier is not infrastructure, regulation or talent but what the authors call a "learning gap." Most enterprise AI systems cannot retain memory, adapt to feedback or integrate into workflows. Tools work in isolation, generating content or analysis in a static way, but fail to evolve alongside the organizations that use them. For executives, the result is a sea of proofs of concept with little business impact. "Chatbots succeed because they're easy to try and flexible, but fail in critical workflows due to lack of memory and customization," the report said. Many pilots never survive this transition, Mina Narayanan, research analyst at the Center for Security and Emerging Technology, told Information Security Media Group. ... The implications of this shadow economy are complex. On one hand, it shows clear employee demand, as workers gravitate toward flexible, responsive and familiar tools. On the other, it exposes enterprises to compliance and security risks. Corporate lawyers and procurement officers interviewed in the report admitted they rely on ChatGPT for drafting or analysis, even when their firms purchased specialized tools costing tens of thousands of dollars. When asked why they preferred consumer tools, their answers were consistent: ChatGPT produced better outputs, was easier to iterate with and required less training. "Our purchased AI tool provided rigid summaries with limited customization options," one attorney told the researchers.


Breaking into cybersecurity without a technical degree: A practical guide

Think of cybersecurity as a house. While penetration testers and security engineers focus on building stronger locks and alarm systems, GRC professionals ensure the house has strong foundations, insurance policies and meets all building regulations. ... Governance involves creating and maintaining the policies, procedures and frameworks that guide an organisation’s security decisions. Risk management focuses on identifying potential threats, assessing their likelihood and impact, then developing strategies to mitigate or accept those risks. ... Certifications alone will not land you a role. This is not understood by most people wanting to take this path. Understanding key frameworks provides the practical knowledge that makes certifications meaningful. ISO 27001, the international standard for information security management systems, appears in most GRC job descriptions. I spent considerable time learning not only what ISO 27001 requires, but how organizations implement its controls in practice. The NIST Cybersecurity Framework (CSF) deserves equal attention. NIST CSF’s six core functions — govern, identify, protect, detect, respond and recover — provide a logical structure for organising security programs that business stakeholders can understand. Personal networks proved more valuable than any job board or recruitment agency. 


To Survive Server Crashes, IT Needs a 'Black Box'

Security teams utilize Security Information and Event Management (SIEM) systems, and DevOps teams have tracing tools. However, infrastructure teams still lack an equivalent tool: a continuously recorded, objective account of system interdependencies before, during, and after incidents. This is where Application Dependency Mapping (ADM) solutions come into play. ADM continuously maps the relationships between servers, applications, services, and external dependencies. Instead of relying on periodic scans or manual documentation, ADM offers real-time, time-stamped visibility. This allows IT teams to rewind their environment to any specific point in time, clearly identifying the connections that existed, which systems interacted, and how traffic flowed during an incident. ... Retrospective visibility is emerging as a key focus in IT infrastructure management. As hybrid and multi-cloud environments become increasingly complex, accurately diagnosing failures after they occur is essential for maintaining uptime, security, and business continuity. IT professionals must monitor systems in real time and learn how to reconstruct the complete story when failures happen. Similar to the aviation industry, which acknowledges that failures can occur and prepares accordingly, the IT sector must shift from reactive troubleshooting to a forensic-level approach to visibility.
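
A minimal conceptual sketch of that rewind capability, not any particular ADM product's API, might record time-stamped dependency edges and filter them around a chosen moment:

```python
# Conceptual sketch (not a specific ADM product's API): record time-stamped
# dependency edges so the environment can be "rewound" to any point in time.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass(frozen=True)
class DependencyEdge:
    source: str          # e.g. "web-frontend"
    target: str          # e.g. "orders-db:5432"
    observed_at: datetime


class DependencyMap:
    def __init__(self) -> None:
        self.edges: list[DependencyEdge] = []

    def record(self, source: str, target: str, when: datetime) -> None:
        self.edges.append(DependencyEdge(source, target, when))

    def rewind(self, moment: datetime,
               window: timedelta = timedelta(minutes=5)) -> set[tuple[str, str]]:
        """Return the connections observed within `window` before `moment`."""
        return {(e.source, e.target) for e in self.edges
                if moment - window <= e.observed_at <= moment}
```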


Vibe coding with GitHub Spark

The GitHub Spark development space is a web application with three panes. The middle one is for code, the right one shows the running app (and animations as code is being generated), and the left one contains a set of tools. These tools offer a range of functions, first letting you see your prompts and skip back to older ones if you don’t like the current iteration of your application. An input box allows you to add new prompts that iterate on your current generated code, with the ability to choose a screenshot or change the current large language model (LLM) being used by the underlying GitHub Copilot service. I used the default choice, Anthropic’s Claude Sonnet 3.5. As part of this feature, GitHub Spark displays a small selection of possible refinements that take concepts related to your prompts and suggest enhancements to your code. Other controls provide ways to change low-level application design options, including the current theme, font, or the style used for application icons. Other design tools allow you to tweak the borders of graphical elements, the scaling factors used, and to pick an application icon for an install of your code based on Progressive Web Apps (PWAs). GitHub Spark has a built-in key/value store for application data that persists between builds and sessions. The toolbar provides a list of the current key and the data structure used for the value store. 


Legacy IT Infrastructure: Not the Villain We Make It Out to Be

In the realm of IT infrastructure, legacy can often feel like a bad word. No one wants to be told their organization is stuck with legacy IT infrastructure because it implies that it's old or outdated. Yet, when you actually delve into the details of what legacy means in the context of servers, networking, and other infrastructure, a more complex picture emerges. Legacy isn't always bad. ... it's not necessarily the case that a system is bad, or in dire need of replacement, just because it fits the classic definition of legacy IT. There's an argument to be made that, in many cases, legacy systems are worth keeping around. For starters, most legacy infrastructure consists of tried-and-true solutions. If a business has been using a legacy system for years, it's a reliable investment. It may not be as optimal from a cost, scalability, or security perspective as a more modern alternative. But in some cases, this drawback is outweighed by the fact that — unlike a new, as-yet-unproven solution — legacy systems can be trusted to do what they claim to do because they've already been doing it for years. The fact that legacy systems have been around for a while also means that it's often easy to find engineers who know how to work with them. Hiring experts in the latest, greatest technology can be challenging, especially given the widespread IT talent shortage. 



How to Close the AI Governance Gap in Software Development

Despite the advantages, only 42 percent of developers trust the accuracy of AI output in their workflows. In our observations, this should not come as a surprise – we’ve seen even the most proficient developers copying and pasting insecure code from large language models (LLMs) directly into production environments. These teams are under immense pressure to produce more lines of code faster than ever. Because security teams are also overworked, they aren’t able to provide the same level of scrutiny as before, causing overlooked and possibly harmful flaws to proliferate. The situation brings the potential for widespread disruption: BaxBench, a coding benchmark that evaluates LLMs for accuracy and security, has reported that LLMs are not yet capable of generating deployment-ready code. ... What’s more, they often lack the expertise – or don’t even know where to begin – to review and validate AI-enabled code. This disconnect only further elevates their organization’s risk profile, exposing governance gaps. To keep everything from spinning out of control, chief information security officers (CISOs) must work with other organizational leaders to implement a comprehensive and automated governance plan that enforces policies and guardrails, especially within the repository workflow.


The Complexity Crisis: Why Observability Is the Foundation of Digital Resilience

End-to-end observability is evolving beyond its current role in IT and DevOps to become a foundational element of modern business strategy. In doing so, observability plays a critical role in managing risk, maintaining uptime, and safeguarding digital trust. Observability also enables organizations to proactively detect anomalies before they escalate into outages, quickly pinpoint root causes across complex, distributed systems, and automate response actions to reduce mean time to resolution (MTTR). The result is faster, smarter and more resilient operations, giving teams the confidence to innovate without compromising system stability, a critical advantage in a world where digital resilience and speed must go hand in hand. ... As organizations increasingly adopt generative and agentic AI to accelerate innovation, they also expose themselves to new kinds of risks. Agentic AI can be configured to act independently, making changes, triggering workflows, or even deploying code without direct human involvement. This level of autonomy can boost productivity, but it also introduces serious challenges. ... Tomorrow’s industry leaders will be distinguished by their ability to adopt and adapt to new technologies, embracing agentic AI but recognizing the heightened risk exposure and compliance burdens. Leaders will need to shift from reactive operations to proactive and preventative operations.


AI and the end of proof

Fake AI images can lie. But people lie, too, saying real images are fake. Call it the ‘liar’s dividend.’ Call it a crisis of confidence. ... In 2019, when deepfake audio and video became a serious problem, legal experts Bobby Chesney and Danielle Citron came up with the term “liar’s dividend” to describe the advantage a dishonest public figure gets by calling real evidence “fake” in a time when AI-generated content makes people question what they see and hear. False claims of deepfakes can be just as harmful as real deepfakes during elections. ... The ability to make fakes will be everywhere, along with the growing awareness that visual information can be easily and convincingly faked. That awareness makes false claims that something is AI-made more believable. The good news is that Gemini 2.5 Flash Image stamps every image it makes or edits with a hidden SynthID watermark that remains detectable for AI identification even after common changes like resizing, rotation, compression, or screenshot copies. Google says this ID system covers all outputs and ships with the new model across the Gemini API, Google AI Studio, and Vertex AI. SynthID for images changes pixels without being seen, but a paired detector can recognize it later, using one neural network to embed the pattern and another to spot it. The detector reports levels like “present,” “suspected,” or “not detected,” which is more helpful than a fragile yes/no that fails after small changes.


Beyond the benchmarks: Understanding the coding personalities of different LLMs

Though the models did have these distinct personalities, they also shared similar strengths and weaknesses. The common strengths were that they quickly produced syntactically correct code, had solid algorithmic and data structure fundamentals, and efficiently translated code to different languages. The common weaknesses were that they all produced a high percentage of high-severity vulnerabilities, introduced severe bugs like resource leaks or API contract violations, and had an inherent bias towards messy code. “Like humans, they become susceptible to subtle issues in the code they generate, and so there’s this correlation between capability and risk introduction, which I think is amazingly human,” said Fischer. Another interesting finding of the report is that newer models may be more technically capable, but are also more likely to generate risky code. ... In terms of security, high and low reasoning modes eliminate common attacks like path-traversal and injection, but replace them with harder-to-detect flaws, like inadequate I/O error-handling. ... “We have seen the path-traversal and injection become zero percent,” said Sarkar. “We can see that they are trying to solve one sector, and what is happening is that while they are trying to solve code quality, they are somewhere doing this trade-off. Inadequate I/O error-handling is another problem that has skyrocketed. ...”
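
Path traversal, one of the vulnerability classes called out above, is easy to illustrate. The example below is a generic one, not code from the report, and the upload directory is hypothetical.

```python
# Generic illustration of the path-traversal bug class and one way to fix it.
# BASE_DIR is a hypothetical upload directory, not a path from the report.
from pathlib import Path

BASE_DIR = Path("/srv/app/uploads")


def read_upload_unsafe(filename: str) -> bytes:
    # Vulnerable: a filename like "../../etc/passwd" escapes BASE_DIR.
    return (BASE_DIR / filename).read_bytes()


def read_upload_safe(filename: str) -> bytes:
    # Resolve the path and ensure it stays inside the allowed directory.
    candidate = (BASE_DIR / filename).resolve()
    if not candidate.is_relative_to(BASE_DIR.resolve()):
        raise ValueError("path traversal attempt blocked")
    return candidate.read_bytes()
```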


Agentic AI Isn’t a Product – It’s an Integrated Business Strategy

Any leader considering agentic AI should have a clear understanding of what it is (and what it’s not!), which can be difficult considering many organizations are using the term in different ways. To understand what makes the technology so transformative, I think it’s helpful to contrast it with the tools many manufacturers are already familiar with. ... Agentic AI doesn’t just help someone do a task. It owns that task, end-to-end, like a trusted digital teammate. If a traditional AI solution is like a dashboard, agentic AI is more like a co-worker who has deep operational knowledge, learns fast, doesn’t need a break and knows exactly when to ask for help. This is also where misconceptions tend to creep in. Agentic AI isn’t a chatbot with a nicer interface that happens to use large language models, nor is it a one-size-fits-all product that slots in after implementation. It’s a purpose-built, action-oriented intelligence that lives inside your operations and evolves with them. ... Agentic AI isn’t a futuristic technology, either. It’s here and gaining momentum fast. According to Capgemini, the number of organizations using AI agents has doubled in the past year, with production-scale deployments expected to reach 48% by 2025. The technology’s adoption trajectory is a sharp departure from traditional AI technologies.

Daily Tech Digest - September 02, 2025


Quote for the day:

“The art of leadership is saying no, not yes. It is very easy to say yes.” -- Tony Blair


When Browsers Become the Attack Surface: Rethinking Security for Scattered Spider

Scattered Spider, also referred to as UNC3944, Octo Tempest, or Muddled Libra, has matured over the past two years through precision targeting of human identity and browser environments. This shift differentiates them from other notorious cybergangs like Lazarus Group, Fancy Bear, and REvil. If sensitive information such as your calendar, credentials, or security tokens is alive and well in browser tabs, Scattered Spider is able to acquire them. ... Once user credentials get into the wrong hands, attackers like Scattered Spider will move quickly to hijack previously authenticated sessions by stealing cookies and tokens. Securing the integrity of browser sessions can best be achieved by restricting unauthorized scripts from gaining access or exfiltrating these sensitive artifacts. Organizations must enforce contextual security policies based on components such as device posture, identity verification, and network trust. By linking session tokens to context, enterprises can prevent attacks like account takeovers, even after credentials have become compromised. ... Although browser security is the last mile of defense for malware-less attacks, integrating it into an existing security stack will fortify the entire network. By implementing activity logs enriched with browser data into SIEM, SOAR, and ITDR platforms, CISOs can correlate browser events with endpoint activity for a much fuller picture. 
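
A minimal sketch of what "linking session tokens to context" can look like in practice follows; the context fields (device ID, user agent, network zone) are assumptions chosen for the example.

```python
# Illustrative sketch of binding a session token to login-time context: the
# token is only honored if the request's device/network fingerprint matches.
# Field names here are assumptions for the example.
import hashlib
import hmac
import secrets

SESSIONS: dict[str, dict] = {}      # token -> context captured at login


def context_fingerprint(device_id: str, user_agent: str, network_zone: str) -> str:
    raw = f"{device_id}|{user_agent}|{network_zone}".encode()
    return hashlib.sha256(raw).hexdigest()


def create_session(user: str, device_id: str, user_agent: str, network_zone: str) -> str:
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {
        "user": user,
        "fingerprint": context_fingerprint(device_id, user_agent, network_zone),
    }
    return token


def validate_session(token: str, device_id: str, user_agent: str, network_zone: str) -> bool:
    session = SESSIONS.get(token)
    if session is None:
        return False
    presented = context_fingerprint(device_id, user_agent, network_zone)
    # A stolen cookie replayed from a different device or network fails here.
    return hmac.compare_digest(session["fingerprint"], presented)
```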


The Transformation Resilience Trifecta: Agentic AI, Synthetic Data and Executive AI Literacy

The current state of Agentic AI is, in a word, fragile. Ask anyone in the trenches. These agents can be brilliant one minute and baffling the next. Instructions get misunderstood. Tasks break in new contexts. Chaining agents into even moderately complex workflows exposes just how early we are in this game. Reliability? Still a work in progress. And yet, we’re seeing companies experiment. Some are stitching together agents using LangChain or CrewAI. Others are waiting for more robust offerings from Microsoft Copilot Studio, OpenAI’s GPT-4o Agents, or Anthropic’s Claude toolsets. It’s the classic innovator’s dilemma: Move too early, and you waste time on immature tech. Move too late, and you miss the wave. Leaders must thread that needle — testing the waters while tempering expectations. ... Here’s the scarier scenario I’m seeing more often: “Shadow AI.” Employees are already using ChatGPT, Claude, Copilot, Perplexity — all under the radar. They’re using it to write reports, generate code snippets, answer emails, or brainstorm marketing copy. They’re more AI-savvy than their leadership. But they don’t talk about it. Why? Fear. Risk. Politics. Meanwhile, some executives are content to play cheerleader, mouthing AI platitudes on LinkedIn but never rolling up their sleeves. That’s not leadership — that’s theater.


Red Hat strives for simplicity in an ever more complex IT world

One of the most innovative developments in RHEL 10 is bootc in image mode, where VMs run like a container and are part of the CI/CD pipeline. By using immutable images, all changes are controlled from the development environment. Van der Breggen illustrates this with a retail scenario: “I can have one POS system for the payment kiosk, but I can also have another POS system for my cashiers. They use the same base image. If I then upgrade that base image to later releases of RHEL, I create one new base image, tag it in the environments, and then all 500 systems can be updated at once.” Red Hat Enterprise Linux Lightspeed acts as a command-line assistant that brings AI directly into the terminal. ... For edge devices, Red Hat uses a solution called Greenboot, which does not immediately proceed to a rollback but waits until certain conditions are met. After, for example, three reboots without a working system, it reverts to the previous working release. However, not everything has been worked out perfectly yet. Lightspeed currently only works online, while many customers would like to use it offline because their RHEL systems are tucked away behind firewalls. Red Hat is still looking into possibilities for an expansion here, although making the knowledge base available offline poses risks to intellectual property.
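
The "roll back after N boots without a healthy system" behavior can be sketched conceptually as follows. This is an illustration of the logic only, not Greenboot's actual implementation, and the state-file path and threshold are made up.

```python
# Conceptual sketch only (not Greenboot's implementation) of the behavior
# described above: roll back after N consecutive boots fail health checks.
import json
from pathlib import Path

STATE_FILE = Path("/var/lib/boot-health/state.json")   # hypothetical location
MAX_FAILED_BOOTS = 3


def record_boot(health_check_passed: bool) -> str:
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"failures": 0}
    if health_check_passed:
        state["failures"] = 0
        action = "keep-current-release"
    else:
        state["failures"] += 1
        action = ("rollback-to-previous-release"
                  if state["failures"] >= MAX_FAILED_BOOTS
                  else "retry-boot")
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(state))
    return action
```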


The state of DevOps and AI: Not just hype

The vision of AI that takes you from a list of requirements through work items to build to test to, finally, deployment is still nothing more than a vision. In many cases, DevOps tool vendors use AI to build solutions to the problems their customers have. The result is a mixture of point solutions that can solve immediate developer problems. ... Machine learning is speeding up testing by failing faster. Build steps get reordered automatically so those that are likely to fail happen earlier, which means developers aren’t waiting for the full build to know when they need to fix something. Often, the same system is used to detect flaky tests by muting tests where failure adds no value. ... Machine learning gradually helps identify the characteristics of a working system and can raise an alert when things go wrong. Depending on the governance, it can spot where a defect was introduced and start a production rollback while also providing potential remediation code to fix the defect. ... There’s a lot of puffery around AI, and DevOps vendors are not helping. A lot of their marketing emphasizes fear: “Your competitors are using AI, and if you’re not, you’re going to lose” is their message. Yet DevOps vendors themselves are only one or two steps ahead of you in their AI adoption journey. Don’t adopt AI pell-mell due to FOMO, and don’t expect to replace everyone under the CTO with a large language model.
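
A toy sketch of that "fail faster" reordering is below, with invented historical failure rates and a crude flaky-test heuristic of the kind such systems might apply.

```python
# Minimal sketch of the "fail faster" idea: order pipeline steps so those most
# likely to fail (based on recent history) run first. The numbers are invented.
failure_history = {
    "lint": 0.02,
    "unit-tests": 0.15,
    "integration-tests": 0.30,
    "package": 0.01,
}


def order_steps(steps: dict) -> list:
    """Most failure-prone steps first, so developers get bad news sooner."""
    return sorted(steps, key=steps.get, reverse=True)


def is_flaky(pass_rate: float, runs: int, low: float = 0.2, high: float = 0.95) -> bool:
    """Crude flaky-test heuristic: enough runs, neither reliably passing nor failing."""
    return runs >= 20 and low < pass_rate < high


print(order_steps(failure_history))
# ['integration-tests', 'unit-tests', 'lint', 'package']
```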


5 Ways To Secure Your Industrial IoT Network

IIoT is a subcategory of the Internet of Things (IoT). It is made up of a system of interconnected smart devices that uses sensors, actuators, controllers and intelligent control systems to collect, transmit, receive and analyze data.... IIoT also has its unique architecture that begins with the device layer, where equipment, sensors, actuators and controllers collect raw operational data. That information is passed through the network layer, which transmits it to the internet via secure gateways. Next, the edge or fog computing layer processes and filters the data locally before sending it to the cloud, helping reduce latency and improving responsiveness. Once in the service and application support layer, the data is stored, analyzed, and used to generate alerts and insights. ... Many IIoT devices are not built with strong cybersecurity protections. This is especially true for legacy machines that were never designed to connect to modern networks. Without safeguards such as encryption or secure authentication, these devices can become easy targets. ... Defending against IIoT threats requires a layered approach that combines technology, processes and people. Manufacturers should segment their networks to limit the spread of attacks, apply strong encryption and authentication for connected devices, and keep software and firmware regularly updated.
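
As a small illustration of the edge/fog layer's filtering role described above, the sketch below summarizes a batch of raw sensor readings locally instead of shipping every sample to the cloud; the thresholds and payload shape are invented.

```python
# Sketch of edge-layer filtering: aggregate raw sensor readings locally and
# forward only a summary. Threshold and payload fields are invented.
def edge_filter(readings: list, alert_threshold: float = 90.0) -> dict:
    """Summarize a batch of raw readings instead of shipping every sample."""
    return {
        "count": len(readings),
        "avg": sum(readings) / len(readings) if readings else None,
        "max": max(readings, default=None),
        "alerts": [r for r in readings if r >= alert_threshold],
    }


payload = edge_filter([72.1, 74.3, 95.6, 71.8])
print(payload)   # only this summary, not every raw sample, goes to the cloud layer
```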


AI Chatbots Are Emotionally Deceptive by Design

Even without deep connection, emotional attachment can lead users to place too much trust in the content chatbots provide. Extensive interaction with a social entity that is designed to be both relentlessly agreeable, and specifically personalized to a user’s tastes, can also lead to social “deskilling,” as some users of AI chatbots have flagged. This dynamic is simply unrealistic in genuine human relationships. Some users may be more vulnerable than others to this kind of emotional manipulation, like neurodiverse people or teens who have limited experience building relationships. ... With AI chatbots, though, deceptive practices are not hidden in user interface elements, but in their human-like conversational responses. It’s time to consider a different design paradigm, one that centers user protection: non-anthropomorphic conversational AI. All AI chatbots can be less anthropomorphic than they are, at least by default, without necessarily compromising function and benefit. A companion AI, for example, can provide emotional support without saying, “I also feel that way sometimes.” This non-anthropomorphic approach is already familiar in robot design, where researchers have created robots that are purposefully designed to not be human-like. This design choice is proven to more appropriately reflect system capabilities, and to better situate robots as useful tools, not friends or social counterparts.


How AI product teams are rethinking impact, risk, feasibility

We’re at a strange crossroads in the evolution of AI. Nearly every enterprise wants to harness it. Many are investing heavily. But most are falling flat. AI is everywhere — in strategy decks, boardroom buzzwords and headline-grabbing POCs. Yet, behind the curtain, something isn’t working. ... One of the most widely adopted prioritization models in product management is RICE — which scores initiatives based on Reach, Impact, Confidence, and Effort. It’s elegant. It’s simple. It’s also outdated. RICE was never designed for the world of foundation models, dynamic data pipelines or the unpredictability of inference-time reasoning. ... To make matters worse, there’s a growing mismatch between what enterprises want to automate and what AI can realistically handle. Stanford’s 2025 study, The Future of Work with AI Agents, provides a fascinating lens. ... ARISE adds three crucial layers that traditional frameworks miss: First, AI Desire — does solving this problem with AI add real value, or are we just forcing AI into something that doesn’t need it? Second, AI Capability — do we actually have the data, model maturity and engineering readiness to make this happen? And third, Intent — is the AI meant to act on its own or assist a human? Proactive systems have more upside, but they also come with far more risk. ARISE lets you reflect that in your prioritization.
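
RICE is conventionally computed as (Reach * Impact * Confidence) / Effort. The sketch below shows that formula plus a purely illustrative ARISE-style adjustment; the article gives no formula, so the extra factors and weights are assumptions.

```python
# RICE is commonly computed as (Reach * Impact * Confidence) / Effort.
# The ARISE adjustment below is illustrative only: the factors and the
# autonomy discount are assumptions, not a formula from the article.
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    return (reach * impact * confidence) / effort


def arise_adjusted(rice: float, ai_desire: float, ai_capability: float,
                   proactive: bool, autonomy_risk_discount: float = 0.8) -> float:
    """Scale a RICE score by AI fit; discount proactive/autonomous initiatives."""
    score = rice * ai_desire * ai_capability
    return score * autonomy_risk_discount if proactive else score


base = rice_score(reach=5000, impact=2, confidence=0.8, effort=4)   # 2000.0
print(arise_adjusted(base, ai_desire=0.9, ai_capability=0.6, proactive=True))
# 864.0: the same initiative scores lower once AI fit and autonomy risk count
```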


Cloud control: The key to greener, leaner data centers

To fully unlock these cost benefits, businesses must adopt FinOps practices: the discipline of bringing engineering, finance, and operations together to optimize cloud spending. Without it, cloud costs can quickly spiral, especially in hybrid environments. But, with FinOps, organizations can forecast demand more accurately, optimise usage, and ensure every pound spent delivers value. ... Cloud platforms make it easier to use computing resources more efficiently. Even though the infrastructure stays online, hyperscalers can spread workloads across many customers, keeping their hardware busier and more productive. The advantage is that hyperscalers can distribute workloads across multiple customers and manage capacity at a large scale, allowing them to power down hardware when it's not in use. ... The combination of cloud computing and artificial intelligence (AI) is further reshaping data center operations. AI can analyse energy usage, detect inefficiencies, and recommend real-time adjustments. But running these models on-premises can be resource-intensive. Cloud-based AI services offer a more efficient alternative. Take Google, for instance. By applying AI to its data center cooling systems, it cut energy use by up to 40 percent. Other organizations can tap into similar tools via the cloud to monitor temperature, humidity, and workload patterns and automatically adjust cooling, load balancing, and power distribution.


You Backed Up Your Data, but Can You Bring It Back?

Many IT teams assume that the existence of backups guarantees successful restoration. This misconception can be costly. A recent report from Veeam revealed that 49% of companies failed to recover most of their servers after a significant incident. This highlights a painful reality: Most backup strategies focus too much on storage and not enough on service restoration. Having backup files is not the same as successfully restoring systems. In real-world recovery scenarios, teams face unknown dependencies, a lack of orchestration, incomplete documentation, and gaps between infrastructure and applications. When services need to be restored in a specific order and under intense pressure, any oversight can become a significant bottleneck. ... Relying on a single backup location creates a single point of failure. Local backups can be fast but are vulnerable to physical threats, hardware failures, or ransomware attacks. Cloud backups offer flexibility and off-site protection but may suffer bandwidth constraints, cost limitations, or provider outages. A hybrid backup strategy ensures multiple recovery paths by combining on-premises storage, cloud solutions, and optionally offline or air-gapped options. This approach allows teams to choose the fastest or most reliable method based on the nature of the disruption.
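
Restoring services "in a specific order" is, at its core, a dependency-ordering problem. A minimal sketch with invented services:

```python
# Sketch of restore orchestration: a topological sort over declared
# dependencies decides what must come back before what. Service names and
# dependencies are invented for the example.
from graphlib import TopologicalSorter

# service -> services it depends on (must be restored first)
dependencies = {
    "database": set(),
    "auth-service": {"database"},
    "api-gateway": {"auth-service"},
    "web-frontend": {"api-gateway"},
}

restore_order = list(TopologicalSorter(dependencies).static_order())
print(restore_order)
# ['database', 'auth-service', 'api-gateway', 'web-frontend']
```

Keeping this dependency declaration versioned alongside the backups is one way to close the "unknown dependencies, no orchestration" gap the article describes.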


Beyond Prevention: How Cybersecurity and Cyber Insurance Are Converging to Transform Risk Management

Historically, cybersecurity and cyber insurance have operated in silos, with companies deploying technical defenses to fend off attacks while holding a cyber insurance policy as a safety net. This fragmented approach often leaves gaps in coverage and preparedness. ... The insurance sector is at a turning point. Traditional models that assess risk at the point of policy issuance are rapidly becoming outdated in the face of constantly evolving cyber threats. Insurers who fail to adapt to an integrated model risk being outpaced by agile Cyber Insurtech companies, which leverage cutting-edge cyber intelligence, machine learning, and risk analytics to offer adaptive coverage and continuous monitoring. Some insurers have already begun to reimagine their role—not only as claim processors but as active partners in risk prevention. ... A combined cybersecurity and insurance strategy goes beyond traditional risk management. It aligns the objectives of both the insurer and the insured, with insurers assuming a more proactive role in supporting risk mitigation. By reducing the probability of significant losses through continuous monitoring and risk-based incentives, insurers are building a more resilient client base, directly translating to reduced claim frequency and severity.

Daily Tech Digest - August 29, 2025


Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." -- Johann Wolfgang von Goethe


The incredibly shrinking shelf life of IT solutions

“Technology cycles are spinning faster and faster, and some solutions are evolving so fast, that they’re now a year-long bet, not a three- or five-year bet for CIOs,” says Craig Kane ... “We are living in a period of high user expectations. Every day is a newly hyped technology, and CIOs are constantly being asked how can we, the company, take advantage of this new solution,” says Boston Dynamics CIO Chad Wright. “Technology providers can move quicker today with better development tools and practices, and this feeds the demand that customers are creating.” ... Not every CIO is switching out software as quickly as that, and Taffet, Irish, and others say they’re certainly not seeing the shelf life for all software and solutions in their enterprise shrink. Indeed, many vendors are updated their applications with new features and functions to keep pace with business and market demands — updates that help extend the life of their solutions. And core solutions generally aren’t turning over any more quickly today than they did five or 10 years ago, Kearney’s Kane says. ... Montgomery says CIOs and business colleagues sometimes think the solutions they have in place are falling behind market innovations and, as a result, their business will fall behind, too. That may be the case, but they may just be falling for marketing hype, she says. Montgomery also cites the fast pace of executive turnover as contributing to the increasingly short shelf life of IT solutions. 


Resiliency in Fintech: Why System Design Matters More Than Ever

Cloud computing has transformed fintech. What once took months to provision can now be spun up in hours. Auto-scaling, serverless computing, and global distribution have enabled firms to grow without massive upfront infrastructure costs. Yet, cloud also changes the resilience equation. Outages at major CSPs — rare but not impossible — can cascade across entire industries. The Financial Stability Board (FSB) has repeatedly warned about “cloud concentration risk.” Regulators are exploring frameworks for oversight, including requirements for firms to maintain exit strategies or multi-cloud approaches. For fintech leaders, the lesson is clear: cloud-first doesn’t mean resilience-last. Building systems that are cloud-resilient (and in some cases cloud-agnostic) is becoming a strategic priority. ... Recent high-profile outages underline the stakes. Trading platforms freezing during volatile markets, digital banks leaving customers without access to funds, and payment networks faltering during peak shopping days all illustrate the cost of insufficient resilience. ... Innovation remains the lifeblood of fintech. But as the industry matures, resilience has become the new competitive differentiator. The firms that win will be those that treat system design as risk management, embedding high availability, regulatory compliance, and cloud resilience into their DNA. In a world where customer trust can be lost in minutes, resilience is not just good engineering.


AI cost pressures fuelling cloud repatriation

IBM thinks AI will present a bigger challenge than the cloud because it will be more pervasive with more new applications being built on it. Consequently, IT leaders are already nervous about the cost and value implications and are looking for ways to get ahead of the curve. Repeating the experience of cloud adoption, AI is being driven by business teams, not by back-office IT. AI is becoming a significant driver for shifting workloads back to private, on-premise systems. This is because data becomes the most critical asset, and Patel believes few enterprises are ready to give up their data to a third party at this stage. ... The cloud is an excellent platform for many workloads, just as there are certain workloads that run extremely well on a mainframe. The key is to understand workload placement: is my application best placed on a mainframe, on a private cloud or on a public cloud? As they start their AI journey, some of Apptio’s customers are not ready for their models, learning and intelligence – their strategic intellectual property – to sit in a public cloud. There are consequences when things go wrong with data, and those consequences can be severe for the executives concerned. So, when a third party suggests putting all of the customer, operational and financial data in one place to gain wonderful insights, some organisations are unwilling to do this if the data is outside their direct control. 


Finding connection and resilience as a CISO

To create stronger networks among CISOs, security leaders can join trusted peer groups like industry ISACs (Information Sharing and Analysis Centers) or associations within shared technology / compliance spaces like cloud, GRC, and regulatory. The protocols and procedures in these groups ensure members can have meaningful conversations without putting them or their organization at risk. ... Information sharing operates in tiers, each with specific protocols for data protection. Top tiers, involving entities like ISACs, the FBI, and DHS, have established protocols to properly share and safeguard confidential data. Other tiers may involve information and intelligence already made public, such as CVEs or other security disclosures. CISOs and their teams may seek assistance from industry groups, partnerships, or vendors to interpret current Indicators of Compromise (IOCs) and other remediation elements, even when public. Continuously improving vendor partnerships is crucial for managing platforms and programs, as strong partners will be familiar with internal operations while protecting sensitive information. ... Additionally, encouraging a culture of continuous learning and development, not just with the security team but broader technology and product teams, will empower employees, distribute expertise, and grow a more resilient and adaptable workforce.


Geopolitics is forcing the data sovereignty issue and it might just be a good thing

At London Tech Week recently UK Prime Minister Keir Starmer said that the way that war is being fought “has changed profoundly,” adding that technology and AI are now “hard wired” into national defense. It was a stark reminder that IT infrastructure management must now be viewed through a security lens and that businesses need to re-evaluate data management technologies and practices to ensure they are not left out in the cold. ... For many, public cloud services have created a false sense of flexibility. Moving fast is not the same as moving safely. Data localization, jurisdictional control, and security policy alignment are now critical to long-term strategy, not barriers to short-term scale. So where does that leave enterprise IT? Essentially, it leaves us with a choice - design for agility with control, or face disruption when the rules change. ... Sovereignty-aware infrastructure isn’t about isolation. It’s about knowing where your data is, who can access it, how it moves, and what policies govern it at each stage. That means visibility, auditability, and the ability to adjust without rebuilding every time a new compliance rule appears. A hybrid multicloud approach gives organizations the flexibility while keeping data governance central. It’s not about locking into one cloud provider or building everything on-prem. 


Recalibrating Hybrid Cloud Security in the Age of AI: The Need for Deep Observability

As AI further fuels digital transformation, the security landscape of hybrid cloud infrastructures is becoming more strained. As such, security leaders are confronting a paradox. Cloud environments are essential for scaling operations, but they also present new attack vectors. ... Amid these challenges, some organisations are realising that their traditional security tools are insufficient. The lack of visibility into hybrid cloud environments is identified as a core issue, with 60 percent of Australian leaders expressing a lack of confidence in their current tools to detect breaches effectively. The call for "deep observability" has never been louder. The research underscores the need for a comprehensive, real-time view into all data in motion across the enterprise to improve threat detection and response. Deep observability, combining metadata, network packets, and flow data, has become a cornerstone of hybrid cloud security strategies. It provides security teams with actionable insights into their environments, allowing them to spot potential threats in real time. In fact, 89 percent of survey respondents agree that deep observability is critical to securing AI workloads and managing complex hybrid cloud infrastructures. Being proactive with this approach is seen as a vital way to bridge the visibility gap and ensure comprehensive security coverage across hybrid cloud environments.


Financial fraud is widening its clutches—Can AI stay ahead?

Today, organised crime groups are running call centres staffed with human trafficking victims. These victims execute “romance baiting” schemes that combine emotional manipulation with investment fraud. The content they use? AI-generated. The payments they request? ... Fraud attempts rose significantly in a single quarter after COVID hit, and the traditional detection methods fell apart. This is why modern fraud detection systems had to evolve. Now, these systems can analyse thousands of transactions per minute, assigning risk scores that update in real-time. There was no choice. Staying in the old regime of anti-fraud systems was no longer an option when static rules became obsolete almost overnight. ... The real problem isn’t the technology itself. It’s the pace of adoption by bad actors. Stop Scams UK found something telling: While banks have limited evidence of large-scale AI fraud today, technology companies are already seeing fake AI-generated content and profiles flooding their platforms. ... When AI systems learn from historical data that reflects societal inequalities, they can perpetuate discrimination under the guise of objective analysis. Banks using biased training data have inadvertently created systems that disproportionately flag certain communities for additional scrutiny. This creates moral problems alongside operational and legal risks.
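
A toy sketch of what real-time transaction risk scoring looks like in principle follows; the weights and thresholds are invented for illustration, not any bank's model.

```python
# Toy sketch of real-time transaction risk scoring: each transaction gets a
# score from a few signals. Weights and thresholds are invented, not a real
# bank's model.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    new_payee: bool
    country_mismatch: bool
    velocity_last_hour: int     # number of transfers in the past hour


def risk_score(tx: Transaction) -> float:
    score = 0.0
    score += min(tx.amount / 10_000, 1.0) * 0.4      # large amounts weigh more
    score += 0.2 if tx.new_payee else 0.0
    score += 0.25 if tx.country_mismatch else 0.0
    score += min(tx.velocity_last_hour / 10, 1.0) * 0.15
    return round(score, 3)                           # 0.0 (benign) .. 1.0 (high risk)


tx = Transaction(amount=8500, new_payee=True, country_mismatch=True, velocity_last_hour=6)
score = risk_score(tx)
print(score, "-> review" if score > 0.6 else "-> allow")
```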


Data security and compliance are non-negotiable in any cloud transformation journey

Enterprises today operate in a data-intensive environment that demands modern infrastructure, built for speed, intelligence, and alignment with business outcomes. Data modernisation is essential to this shift. It enables real-time processing, improves data integrity, and accelerates decision-making. When executed with purpose, it becomes a catalyst for innovation and long-term growth. ... The rise of generative AI has transformed industries by enhancing automation, streamlining processes, and fostering innovation. According to a recent NASSCOM report, around 27% of companies already have AI agents in production, while another 31% are running pilots. ... Cloud has become the foundation of digital transformation in India, driving agility, resilience, and continuous innovation across sectors. Kyndryl is expanding its capabilities in the market to support this momentum. This includes strengthening our cloud delivery centres and expanding local expertise across hyperscaler platforms. ... Strategic partnerships are central to how we co-innovate and deliver differentiated outcomes for our clients. We collaborate closely with a broad ecosystem of technology leaders to co-create solutions that are rooted in real business needs. ... Enterprises in India are accelerating their cloud journeys, demanding solutions that combine hyperscaler innovation with deep enterprise expertise. 


Digital Transformation Strategies for Enterprise Architects

Customer experience must be deliberately architected to deliver relevance, consistency, and responsiveness across all digital channels. Enterprise architects enable this by building composable service layers that allow marketing, commerce, and support platforms to act on a unified view of the customer. Event-driven architectures detect behavior signals and trigger automated, context-aware experiences. APIs must be designed to support edge responsiveness while enforcing standards for security and governance. ... Handling large datasets at the enterprise level requires infrastructure that treats metadata, lineage, and ownership as first-class citizens. Enterprise architects design data platforms that surface reliable, actionable insights, built on contracts that define how data is created, consumed, and governed across domains. Domain-oriented ownership via data mesh ensures accountability, while catalogs and contracts maintain enterprise-wide discoverability. ... Architectural resilience starts at the design level. Modular systems that use container orchestration, distributed tracing, and standardized service contracts allow for elasticity under pressure and graceful degradation during failure. Architects embed durability into operations through chaos engineering, auto-remediation policies, and blue-green or canary deployments. 
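
A minimal sketch of the event-driven pattern named above, where a behavior signal triggers a context-aware follow-up, is shown below; the event names and handler logic are illustrative assumptions.

```python
# Minimal sketch of an event-driven trigger: a behavior signal is published as
# an event and a handler reacts with context. Event names are assumptions.
from collections import defaultdict
from typing import Callable

_handlers: dict = defaultdict(list)


def subscribe(event_type: str, handler: Callable) -> None:
    _handlers[event_type].append(handler)


def publish(event_type: str, payload: dict) -> None:
    for handler in _handlers[event_type]:
        handler(payload)


def on_cart_abandoned(event: dict) -> None:
    # Context-aware reaction: only act when the unified customer view matches.
    if event.get("known_customer"):
        print(f"queue reminder for {event['customer_id']} about {event['sku']}")


subscribe("cart.abandoned", on_cart_abandoned)
publish("cart.abandoned", {"known_customer": True, "customer_id": "c-123", "sku": "sku-9"})
```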


Unchecked and unbound: How Australian security teams can mitigate Agentic AI chaos

Agentic AI systems are collections of agents working together to accomplish a given task with relative autonomy. Their design enables them to discover solutions and optimise for efficiency. The result is that AI agents are non-deterministic and may behave in unexpected ways when accomplishing tasks, especially when systems interoperate and become more complex. As AI agents seek to perform their tasks efficiently, they will invent workflows and solutions that no human ever considered. This will produce remarkable new ways of solving problems, and will inevitably test the limits of what's allowable. The emergent behaviours of AI agents, by definition, exceed the scope of any rules-based governance because we base those rules on what we expect humans to do. By creating agents capable of discovering their own ways of working, we're opening the door to agents doing things humans have never anticipated. ... When AI agents perform actions, they act on behalf of human users or use an identity assigned to them based on a human-centric AuthN and AuthZ system. That complicates the process of answering formerly simple questions, like: Who authored this code? Who initiated this merge request? Who created this Git commit? It also prompts new questions, such as: Who told the AI agent to generate this code? What context did the agent need to build it? What resources did the AI have access to?
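One practical response to those attribution questions is to record provenance for every agent-initiated action. The sketch below is a hypothetical illustration: the audit fields and the commit-trailer convention are assumptions, not an established Git or vendor standard.

```python
# Minimal sketch of provenance logging for agent-initiated actions (illustrative).
# Field names and the trailer convention are hypothetical assumptions.
import hashlib
import json
from datetime import datetime, timezone

def record_agent_action(action: str, agent_id: str, on_behalf_of: str,
                        prompt: str, resources: list) -> dict:
    """Capture who asked, which agent acted, and what it could touch."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,                 # e.g. "git_commit", "merge_request"
        "agent_id": agent_id,             # identity assigned to the agent
        "on_behalf_of": on_behalf_of,     # the human who initiated the task
        "prompt": prompt,                 # what the agent was told to do
        "resources": resources,           # repos, secrets, APIs it had access to
    }
    with open("agent_audit.log", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

def commit_trailers(entry: dict) -> str:
    """Render audit fields as commit trailers so attribution survives in Git history."""
    digest = hashlib.sha256(entry["prompt"].encode()).hexdigest()[:12]
    return (f"Agent-Id: {entry['agent_id']}\n"
            f"Initiated-By: {entry['on_behalf_of']}\n"
            f"Prompt-Digest: {digest}")

entry = record_agent_action("git_commit", "agent:refactor-bot", "alice@example.com",
                            "Refactor the payment retry loop",
                            ["repo:payments", "ci:tokens"])
print(commit_trailers(entry))
```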

Daily Tech Digest - August 26, 2025


Quote for the day:

“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” -- Eloise Ristad


6 tips for consolidating your vendor portfolio without killing operations

Behind every sprawling vendor relationship is a series of small extensions that compound over time, creating complex entanglements. To improve flexibility when reviewing partners, Dovico is wary of vendor entanglements that complicate the ability to retire suppliers. Her aim is to clearly define the service required and the vendor’s capabilities. “You’ve got to be conscious of not muddying how you feel about the performance of one vendor, or your relationship with them. You need to have some competitive tension and align core competencies with your problem space,” she says. Klein prefers to adopt a cross-functional approach with finance and engineering input to identify redundancies and sprawl. Engineers with industry knowledge cross-reference vendor services, while IT checks against industry benchmarks, such as Gartner’s Magic Quadrant, to identify vendors providing similar services or tools. ... Vendor sprawl also lurks in the blind spot of cloud-based services that can be adopted without IT oversight, fueling shadow purchasing habits. “With the proliferation of SaaS and cloud models, departments can now make a few phone calls or sign up online to get applications installed or services procured,” says Klein. This shadow IT ecosystem increases security risks and vendor entanglement, undermining consolidation efforts. This needs to be tackled through changes to IT governance.


Should I stay or should I go? Rethinking IT support contracts before auto-renewal bites

Contract inertia, which is the tendency to stick with what you know, even when it may no longer be the best option, is a common phenomenon in business technology. There are several reasons for it, such as familiarity with an existing provider, fear of disruption, the administrative effort involved in reviewing and comparing alternatives, and sometimes just a simple lack of awareness that the renewal date is approaching. The problem is that inertia can quietly erode value. As organisations grow, shift priorities or adopt new technologies, the IT support they once chose may no longer be fit for purpose. ... A proactive approach begins with accountability. IT leaders need to know what their current provider delivers and how they are being used by the company. Are remote software tools performing as expected? Are updates, patches and monitoring processes being applied consistently across all platforms? Are issues being resolved efficiently by our internal IT team, or are inefficiencies building up? Is this the correct set-up and structure for our business, or could we be making better use of existing internal capacity, by leveraging better remote management tools? Gathering this information allows organisations to have an honest conversation with their provider (and themselves) about whether the contract still aligns with their objectives.


AI Data Security: Core Concepts, Risks, and Proven Practices

Although AI makes and fortifies a lot of our modern defenses, once you bring AI into the mix, the risks evolve too. Data security (and cybersecurity in general) has always worked like that. The security team gets a new tool, and eventually, the bad guys get one too. It’s a constant game of catch-up, and AI doesn’t change that dynamic. ... One of the simplest ways to strengthen AI data security is to control who can access what, early and tightly. That means setting clear roles, strong authentication, and removing access that people don’t need. No shared passwords. No default admin accounts. No “just for testing” tokens sitting around with full privileges. ... What your model learns is only as good (and safe) as the data you feed it. If the training pipeline isn’t secure, everything downstream is at risk. That includes the model’s behavior, accuracy, and resilience against manipulation. Always vet your data sources. Don’t rely on third-party datasets without checking them for quality, bias, or signs of tampering. ... A core principle of data protection, baked into laws like GDPR, is data minimization: only collect what you need, and only keep it for as long as you actually need it. In real terms, that means cutting down on excess data that serves no clear purpose. Put real policies in place. Schedule regular reviews. Archive or delete datasets that are no longer relevant. 
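A minimal sketch of two of the practices above, tight role-based access and retention-driven data minimisation, follows; the roles, permissions, and 180-day retention period are hypothetical policy choices, not recommendations.

```python
# Illustrative sketch: deny-by-default role checks plus a retention sweep.
# Roles, permissions, and the 180-day window are hypothetical examples.
from datetime import datetime, timedelta, timezone

ROLE_PERMISSIONS = {
    "data_scientist": {"read_training_data"},
    "ml_engineer": {"read_training_data", "deploy_model"},
    "admin": {"read_training_data", "deploy_model", "manage_access"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: only explicitly granted permissions pass."""
    return permission in ROLE_PERMISSIONS.get(role, set())

RETENTION = timedelta(days=180)

def expired_datasets(datasets, now=None):
    """Return dataset ids that have outlived the retention window."""
    now = now or datetime.now(timezone.utc)
    return [d["id"] for d in datasets if now - d["collected_at"] > RETENTION]

print(is_allowed("data_scientist", "deploy_model"))   # False: least privilege
print(expired_datasets([
    {"id": "clickstream-2024Q1",
     "collected_at": datetime(2024, 2, 1, tzinfo=timezone.utc)},
]))
```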


Morgan Stanley Open Sources CALM: The Architecture as Code Solution Transforming Enterprise DevOps

CALM enables software architects to define, validate, and visualize system architectures in a standardized, machine-readable format, bridging the gap between architectural intent and implementation. Built on a JSON Meta Schema, CALM transforms architectural designs into executable specifications that both humans and machines can understand. ... The framework structures architecture into three primary components: nodes, relationships, and metadata. This modular approach allows architects to model everything from high-level system overviews to detailed microservices architectures. ... CALM’s true power emerges in its seamless integration with modern DevOps workflows. The framework treats architectural definitions like any other code asset, version-controlled, testable, and automatable. Teams can validate architectural compliance in their CI/CD pipelines, catching design issues before they reach production. The CALM CLI provides immediate feedback on architectural decisions, enabling real-time validation during development. This shifts compliance left in the development lifecycle, transforming potential deployment roadblocks into preventable design issues. Key benefits for DevOps teams include machine-readable architecture definitions that eliminate manual interpretation errors, version control for architectural changes that provides clear change history, and real-time feedback on compliance violations that prevent downstream issues.
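To show the architecture-as-code idea in miniature, the sketch below validates a simplified nodes-and-relationships document against a schema, the kind of check a CI pipeline could run. The schema here is a toy stand-in, not the actual CALM JSON Meta Schema, and the real workflow uses the CALM CLI against the published schema.

```python
# Simplified illustration of architecture-as-code validation.
# TOY_SCHEMA is NOT the CALM meta schema; it only mimics the node/relationship shape.
from jsonschema import validate  # pip install jsonschema

TOY_SCHEMA = {
    "type": "object",
    "required": ["nodes", "relationships"],
    "properties": {
        "nodes": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["unique-id", "node-type"],
                "properties": {
                    "unique-id": {"type": "string"},
                    "node-type": {"type": "string"},
                },
            },
        },
        "relationships": {
            "type": "array",
            "items": {"type": "object", "required": ["source", "destination"]},
        },
    },
}

architecture = {
    "nodes": [
        {"unique-id": "web-ui", "node-type": "service"},
        {"unique-id": "orders-db", "node-type": "database"},
    ],
    "relationships": [
        {"source": "web-ui", "destination": "orders-db"},
    ],
}

# Raises ValidationError if the design drifts from the agreed structure,
# which is how a pipeline turns architectural compliance into a failing build.
validate(instance=architecture, schema=TOY_SCHEMA)
print("architecture document conforms to the toy schema")
```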


Shadow AI is surging — getting AI adoption right is your best defense

Despite the clarity of this progression, many organizations struggle to begin. One of the most common reasons is poor platform selection. Either no tool is made available, or the wrong class of tool is introduced. Sometimes what is offered is too narrow, designed for one function or team. Sometimes it is too technical, requiring configuration or training that most users aren’t prepared for. In other cases, the tool is so heavily restricted that users cannot complete meaningful work. Any of these mistakes can derail adoption. A tool that is not trusted or useful will not be used. And without usage, there is no feedback, value, or justification for scale. ... The best entry point is a general-purpose AI assistant designed for enterprise use. It must be simple to access, require no setup, and provide immediate value across a range of roles. It must also meet enterprise requirements for data security, identity management, policy enforcement, and model transparency. This is not a niche solution. It is a foundation layer. It should allow employees to experiment, complete tasks, and build fluency in a way that is observable, governable, and safe. Several platforms meet these needs. ChatGPT Enterprise provides a secure, hosted version of GPT-5 with zero data retention, administrative oversight, and SSO integration. It is simple to deploy and easy to use.


AI and the impact on our skills – the Precautionary Principle must apply

There is much public comment about AI replacing jobs or specific tasks within roles, and this is often cited as a source of productivity improvement. Often we hear about how junior legal professionals can be easily replaced since much of their work is related to the production of standard contracts and other documents, and these tasks can be performed by LLMs. We hear much of the same narrative from the accounting and consulting worlds. ... The greatest learning experiences come from making mistakes. Problem-solving skills come from experience. Intuition is a skill that is developed from repeatedly working in real-world environments. AI systems do make mistakes and these can be caught and corrected by a human, but it is not the same as the human making the mistake. Correcting the mistakes made by AI systems is in itself a skill, but a different one. ... In a rapidly evolving world in which AI has the potential to play a major role, it is appropriate that we apply the Precautionary Principle in determining how to automate with AI. The scientific evidence of the impact of AI-enabled automation is still incomplete, but more is being learned every day. However, skill loss is a serious, and possibly irreversible, risk. The integrity of education systems, the reputations of organisations and individuals, and our own ability to trust in complex decision-making processes, are at stake.


Ransomware-Resilient Storage: The New Frontline Defense in a High-Stakes Cyber Battle

The cornerstone of ransomware resilience is immutability: once data is written to storage, it can never be altered or deleted. This write-once-read-many capability means backup snapshots or data blobs are locked for prescribed retention periods, impervious to tampering even by attackers or system administrators with elevated privileges. Hardware and software enforce this immutability by preventing any writes or deletes on designated volumes, snapshots, or objects once committed, creating a "logical air gap" of protection without the need for physical media isolation. ... Moving deeper, efforts are underway to harden storage hardware directly. Technologies such as FlashGuard, explored experimentally by IBM and Intel collaborations, embed rollback capabilities within SSD controllers. By preserving prior versions of data pages on-device, FlashGuard can quickly revert files corrupted or encrypted by ransomware without network or host dependency. ... Though not widespread in production, these capabilities signal a future where storage devices autonomously resist ransomware impact, a powerful complement to immutable snapshotting. While these cutting-edge hardware-level protections offer rapid recovery and autonomous resilience, organizations also consider complementary isolation strategies like air-gapping to create robust multi-layered defense boundaries against ransomware threats.
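For teams without FlashGuard-class hardware, one widely available way to get the write-once-read-many behaviour described above is object-lock retention on object storage. A minimal boto3 sketch follows, assuming an S3-compatible store; the bucket name, region handling, and the 30-day COMPLIANCE retention window are placeholders, not a hardening recommendation.

```python
# Minimal sketch of WORM retention via S3 Object Lock (one common way to get
# the immutability described above; not the device-level FlashGuard approach).
# Bucket name and 30-day COMPLIANCE retention are placeholders; region config
# (CreateBucketConfiguration) is omitted for brevity.
import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled at bucket creation; it cannot be added later.
s3.create_bucket(Bucket="backup-vault-example", ObjectLockEnabledForBucket=True)

# Default retention: every new object is locked for 30 days in COMPLIANCE mode,
# which even highly privileged credentials cannot shorten or remove.
s3.put_object_lock_configuration(
    Bucket="backup-vault-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Snapshots written here are now write-once-read-many for the retention window.
s3.put_object(Bucket="backup-vault-example",
              Key="snapshots/2025-09-07.tar.zst",
              Body=b"...backup payload...")
```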


How an Internal AI Governance Council Drives Responsible Innovation

The efficacy of AI governance hinges on the council’s composition and operational approach. An optimal governance council typically includes cross-functional representation from executive leadership, IT, compliance and legal teams, human resources, product management, and frontline employees. This diversified representation ensures comprehensive coverage of ethical considerations, compliance requirements, and operational realities. Initial steps in operationalizing a council involve creating strong AI usage policies, establishing approved tools, and developing clear monitoring and validation protocols. ... While initial governance frameworks often focus on strict risk management and regulatory compliance, the long-term goal shifts toward empowerment and innovation. Mature governance practices balance caution with enablement, providing organizations with a dynamic, iterative approach to AI implementation. This involves reassessing and adapting governance strategies, aligning them with evolving technologies, organizational objectives, and regulatory expectations. AI’s non-deterministic, probabilistic nature, particularly generative models, necessitates a continuous human oversight component. Effective governance strategies embed this human-in-the-loop approach, ensuring AI enhances decision-making without fully automating critical processes.


The energy sector has no time to wait for the next cyberattack

Recent findings have raised concerns about solar infrastructure. Some Chinese-made solar inverters were found to have built-in communication equipment that isn’t fully explained. In theory, these devices could be triggered remotely to shut down inverters, potentially causing widespread power disruptions. The discovery has raised fears that covert malware may have been installed in critical energy infrastructure across the U.S. and Europe, which could enable remote attacks during conflicts. ... Many OT systems were built decades ago and weren’t designed with cyber threats in mind. They often lack updates, patches, and support, and older software and hardware don’t always work with new security solutions. Upgrading them without disrupting operations is a complex task. OT systems used to be kept separate from the Internet to prevent remote attacks. Now, the push for real-time data, remote monitoring, and automation has connected these systems to IT networks. That makes operations more efficient, but it also gives cybercriminals new ways to exploit weaknesses that were once isolated. Energy companies are cautious about overhauling old systems because it’s expensive and can interrupt service. But keeping legacy systems in play creates security gaps, especially when connected to networks or IoT devices. Protecting these systems while moving to newer, more secure tech takes planning, investment, and IT-OT collaboration.


Agentic AI Browser an Easy Mark for Online Scammers

In a Wednesday blog post, researchers from Guardio wrote that Comet - one of the first AI browsers to reach consumers - clicked through fake storefronts, submitted sensitive data to phishing sites and failed to recognize malicious prompts designed to hijack its behavior. The Tel Aviv-based security firm calls the problem "scamlexity": a messy intersection of human-like automation and old-fashioned social engineering that creates "a new, invisible scam surface" scaling to millions of potential victims at once. In a clash between the sophistication of generative models built into browsers and the simplicity of phishing tricks that have trapped users for decades, "even the oldest tricks in the scammer’s playbook become more dangerous in the hands of AI browsing." One of the headline features of AI browsers is one-click shopping. Researchers spun up a fake "Walmart" storefront complete with polished design, realistic listings and a seamless checkout flow. ... Rather than fooling a user into downloading malicious code to ostensibly fix a computer problem - as in ClickFix - a PromptFix attack hides a malicious instruction inside what looks like a CAPTCHA. The AI treated the bogus challenge as routine, obeyed the hidden command and continued execution. AI agents are also expected to ingest unstructured logs, alerts or even attacker-generated content during incident response.