
Daily Tech Digest - February 24, 2026


Quote for the day:

"Transparent reviews create fairness. Subjective reviews create frustration." -- Gordon Tredgold



AI agents and bad productivity metrics

The great promise of generative artificial intelligence was that it would finally clear our backlogs. Coding agents would churn out boilerplate at superhuman speeds, and teams would finally ship exactly what the business wants. The reality, as we settle into 2026, is far more uncomfortable. Artificial intelligence is not going to save developer productivity because writing code was never the bottleneck in software engineering. ... For decades, one of the most common debugging techniques was entirely social. A production alert goes off. You look at the version control history, find the person who wrote the code, ask them what they were trying to accomplish, and reconstruct the architectural intent. But what happens to that workflow when no one actually wrote the code? What happens when a human merely skimmed a 3,000-line agent-generated pull request, hit merge, and moved on to the next ticket? When an incident happens, where is the deep knowledge that used to live inside the author? ... The metrics that matter are still the boring ones because they measure actual business outcomes. The DORA metrics remain the best sanity check we have because they tie delivery speed directly to system stability. They measure deployment frequency, lead time for changes, change failure rate, and time to restore service. None of those metrics cares about the number of commits your agents produced today. They only care about whether your system can absorb change without breaking.
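The DORA metrics named above are straightforward to compute from a deployment log. The sketch below is a minimal illustration, not a standard implementation; the record shape and the 30-day window are assumptions for the example.

```python
from datetime import datetime

# Hypothetical deployment records:
# (deployed_at, lead_time_hours, caused_failure, restore_minutes)
deploys = [
    (datetime(2026, 2, 1), 6.0, False, 0),
    (datetime(2026, 2, 3), 12.0, True, 45),
    (datetime(2026, 2, 5), 4.0, False, 0),
    (datetime(2026, 2, 8), 8.0, False, 0),
]

def dora_summary(deploys, window_days=30):
    """Compute the four DORA metrics over a list of deployment records."""
    n = len(deploys)
    failures = [d for d in deploys if d[2]]
    return {
        "deploy_frequency_per_week": n / (window_days / 7),
        "median_lead_time_hours": sorted(d[1] for d in deploys)[n // 2],
        "change_failure_rate": len(failures) / n,
        "mean_time_to_restore_min": (
            sum(d[3] for d in failures) / len(failures) if failures else 0.0
        ),
    }

print(dora_summary(deploys)["change_failure_rate"])  # 0.25
```

Note that none of these inputs mentions commit counts or lines of code: only deployments, failures, and recovery times enter the calculation.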


How vertical SaaS is redefining enterprise efficiency

For the past decade, horizontal SaaS has been the defining force in enterprise technology. Platforms like CRMs, ERP suites and collaboration tools promised universality, offering a single platform to manage every business function across all industries. The strategy made sense: a large total addressable market, reusable architecture and marketing scale. Vertical SaaS flips that model. It is narrow by design but deep in impact. A report by Strategy& found that B2B vertical software companies are now growing faster than their horizontal peers, thanks to higher retention rates, lower churn rates and better unit economics. When software mirrors how a business already works, people stop treating it like a tool they tolerate and start relying on it like infrastructure. ... In regulated industries, compliance isn’t a feature; it’s the baseline for trust. I learned early that trying to retrofit audit trails or data retention policies after go-live only creates technical debt. Instead, design for compliance as a first-class product layer: immutable logs, permission hierarchies and exportable compliance reports built into the system. ... Vertical products don’t thrive in isolation. Integration with industry hardware, marketplaces and regulatory systems drives adoption. In one case, we partnered with a hardware vendor to automatically sync manifest data from their devices, cutting onboarding time in half and unlocking co-marketing opportunities.
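The "immutable logs" idea above can be made concrete with a hash-chained, append-only audit log: each entry commits to the previous entry's hash, so later tampering is detectable. This is a minimal sketch under assumed field names, not a production design.

```python
import hashlib
import json

class AuditLog:
    """Append-only audit log: each entry chains the hash of the previous
    one, so any later modification breaks verification."""

    def __init__(self):
        self.entries = []

    def append(self, actor, action, detail):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"actor": actor, "action": action, "detail": detail, "prev": prev}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "detail", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("ops@example.com", "export", "manifest batch 42")
log.append("auditor", "read", "retention report")
print(log.verify())  # True
```

Editing any earlier entry after the fact changes its recomputed hash and breaks the chain, which is the property an exportable compliance report would rely on.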


API Security Standards: 10 Essentials to Get You Started

Most API security flaws are created during the design phase. You're too late if you're waiting until deployment to think about threats. Shift-left principles mean integrating security early, especially at the design phase, where flawed assumptions become future exploits. Start by mapping out each endpoint's purpose, what data it touches, and who should access it. Identify where trust is assumed (not earned), where roles blur, and where inputs aren't validated. ... Every API has a breaking point. If you don't define it, attackers will. Rate limiting and throttling prevent denial-of-service (DoS) attacks, and they're also your first defense against scraping, brute-forcing, enumeration, and even accidental misuse by poorly built integrations. APIs, by nature, invite automation. Without guardrails, that openness turns into a floodgate. And in some cases, unchecked abuse opens the door to far worse issues, like remote code execution, where improperly scoped input or a lack of throttling leads directly to exploitation. ... APIs are built to accept input. Attackers find ways to exploit it. The core rule is this: if you didn't expect it, don't process it. If you didn't define it, don't send it. Define request and response schemas explicitly using tools like OpenAPI or JSON Schema, as recommended by leading API security standards. Then enforce them at the gateway, the app layer, or both. Don't just use validation as linting; treat it as a runtime contract. If the payload doesn't match the spec, reject it.
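The "runtime contract" principle can be shown with a deliberately tiny validator: reject unknown fields, missing fields, and wrong types. A real service would use a full OpenAPI or JSON Schema validator; the schema shape here is a simplified assumption for illustration.

```python
# Simplified schema: field name -> expected Python type.
SCHEMA = {
    "user_id": int,
    "email": str,
}

def validate(payload, schema=SCHEMA):
    """Runtime contract: anything not explicitly defined fails.
    Unknown keys, missing keys, and wrong types are all rejected."""
    if set(payload) != set(schema):  # undefined or missing fields
        return False
    return all(isinstance(payload[k], t) for k, t in schema.items())

print(validate({"user_id": 7, "email": "a@b.io"}))                    # True
print(validate({"user_id": 7, "email": "a@b.io", "role": "admin"}))   # False: unexpected field
print(validate({"user_id": "7", "email": "a@b.io"}))                  # False: wrong type
```

The key design choice is rejection of anything outside the spec rather than best-effort acceptance: "if you didn't expect it, don't process it" becomes a hard gate rather than a linting warning.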


Why AI Urgency Is Forcing a Data Governance Reset

The cost of weak governance shows up in familiar ways: teams can’t find data, requirements arrive late in the process, and launches stall when compliance realities collide with product timelines. Without governance, McQuillan argues, organizations “ultimately suffer from higher cost basis,” with downstream consequences that “impact the bottom line.” ... McQuillan sees a clear step-change in executive urgency since generative AI (GenAI) became mainstream. “There’s been a rapid adoption, particularly since the advent of GenAI and the type of generative and agentic technologies that a lot of C-suites are taking on,” he says. But he also describes a common leadership gap: many executives feel pressure to become “AI-enabled” without a clear definition of what that means or how to build it sustainably. “There’s very much a well-understood need across all companies to become AI-enabled in some way,” he says. “But the problem is a lot of folks don’t necessarily know how to define that.” In the absence of clarity, organizations often fall into scattershot experimentation. What concerns McQuillan the most is how the pace of the “race” shapes priorities. ... When asked whether the long-running mantra “data is the new oil” still holds in the era of large language models and agentic workflows, McQuillan is direct. “It holds true now more than ever,” he says. He acknowledges why attention drifts: “It’s natural for people to gravitate toward things that are shiny,” and “AI in and of itself is an absolutely magnificent space.”


Building a Least-Privilege AI Agent Gateway for Infrastructure Automation with MCP, OPA, and Ephemeral Runners

An agent misinterpreting an instruction can initiate destructive infrastructure changes, such as tearing down environments or modifying production resources. A compromised agent identity can be abused to exfiltrate secrets, create unauthorized workloads, or consume resources at scale. In practice, teams often discover these issues late, because traditional logs record what happened, but not why an agent decided to act in the first place. For organizations, this opacity creates operational and governance challenges. Incidents become harder to investigate, change approvals are bypassed unintentionally, and security teams are left with incomplete audit trails. Over time, this erodes trust in automation itself, forcing teams to either roll back agent usage or accept increasing levels of unmanaged risk. ... A more sustainable approach is to introduce an explicit control layer between agents and the systems they operate on. In this article, we focus on an AI Agent Gateway, a dedicated boundary that validates intent, enforces policy as code, and isolates execution before any infrastructure or service API is invoked. Rather than treating agents as privileged actors, this model treats them as untrusted requesters whose actions must be authorized, constrained, observed, and contained. ... In the context of AI-driven automation, defense in depth means that no single component (neither the agent, the gateway, nor the execution environment) has enough authority on its own to cause damage.
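The deny-by-default logic such a gateway enforces can be sketched compactly. In the architecture described, OPA would evaluate Rego policies server-side; the Python below only mimics that decision shape, and the agent names, actions, and policy fields are illustrative assumptions.

```python
# Deny-by-default policy table, mimicking what an OPA sidecar would evaluate.
# Keyed by (agent identity, action); absence of a rule means deny.
POLICY = {
    ("deploy-agent", "scale_service"): {"max_replicas": 10, "envs": {"staging"}},
}

def authorize(agent, action, params):
    """Return True only if an explicit rule allows this exact request."""
    rule = POLICY.get((agent, action))
    if rule is None:                      # no explicit allow -> deny
        return False
    if params.get("env") not in rule["envs"]:
        return False
    return params.get("replicas", 0) <= rule["max_replicas"]

print(authorize("deploy-agent", "scale_service", {"env": "staging", "replicas": 3}))  # True
print(authorize("deploy-agent", "delete_namespace", {"env": "staging"}))              # False
print(authorize("deploy-agent", "scale_service", {"env": "prod", "replicas": 3}))     # False
```

The point of the sketch is the default: an agent request that matches no explicit rule is denied, which is the inverse of treating agents as privileged actors.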


Demystifying CERT‑In’s Elemental Cyber Defense Controls: A Guide for MSMEs

For India’s Micro, Small, and Medium Enterprises (MSMEs), cybersecurity is no longer a “big company problem.” With digital payments, SaaS adoption, cloud-first operations, and supply-chain integrations becoming the norm, MSMEs are now prime targets for cyberattacks. To help these organizations build a strong foundational security posture, the Indian Computer Emergency Response Team (CERT-In) has released CIGU-2025-0003, which prescribes 15 Elemental Cyber Security Controls: a pragmatic baseline of safeguards designed to uplift the nation’s cyber hygiene. ... These controls, mapped to 45 recommendations, enable essential digital hygiene, protect against ransomware, support regulatory compliance, and are required for annual audits. CERT-In’s Elemental Controls are designed as minimum essential practices that every Indian organization, regardless of size, should implement. ... The CERT-In guidelines offer a simplified, actionable starting point for MSMEs to benchmark their security. These controls are intentionally prescriptive, unlike ISO or NIST, which are more framework-oriented. ... Because threats constantly evolve and MSMEs face unique risks depending on their industry and data sensitivity, organizations should view this framework not as an endpoint, but as the first critical step toward building a comprehensive security program akin to ISO 27001 or NIST CSF 2.0.


AI-fuelled cyber attacks hit in minutes, warns CrowdStrike

CrowdStrike reports a sharp acceleration in cyber intrusions, with attackers moving from initial access to lateral movement in less than half an hour on average as widely available artificial intelligence tools become embedded in criminal workflows. Its latest Global Threat Report puts average eCrime "breakout time" at 29 minutes in 2025, a 65% improvement on the prior year. ... Alongside generative AI use in preparation and execution, the report describes attempts to exploit AI systems directly. Adversaries injected malicious prompts into GenAI tools at more than 90 organisations, using them to generate commands associated with credential theft and cryptocurrency theft. ... Incidents linked to North Korea rose more than 130%, while activity by the group CrowdStrike tracks as FAMOUS CHOLLIMA more than doubled. The report says DPRK-nexus actors used AI-generated personas to scale insider operations. It also cites a large cryptocurrency theft attributed to the actor it calls PRESSURE CHOLLIMA, valued at USD $1.46 billion and described as the largest single financial heist ever reported. The report also references AI-linked tooling used by other state and criminal groups. Russia-nexus FANCY BEAR deployed LLM-enabled malware, which it named LAMEHUG, for automated reconnaissance and document collection. The eCrime actor tracked as PUNK SPIDER used AI-generated scripts to speed up credential dumping and erase forensic evidence.


Shadow mode, drift alerts and audit logs: Inside the modern audit loop

When systems moved at the speed of people, it made sense to do compliance checks every so often. But AI doesn't wait for the next review meeting. The change to an inline audit loop means audits will no longer occur just once in a while; they happen all the time. Compliance and risk management should be "baked in" to the AI lifecycle from development to production, rather than just post-deployment. This means establishing live metrics and guardrails that monitor AI behavior as it occurs and raise red flags as soon as something seems off. ... Cultural shift is equally important: Compliance teams must act less like after-the-fact auditors and more like AI co-pilots. In practice, this might mean compliance and AI engineers working together to define policy guardrails and continuously monitor key indicators. With the right tools and mindset, real-time AI governance can “nudge” and intervene early, helping teams course-correct without slowing down innovation. In fact, when done well, continuous governance builds trust rather than friction, providing shared visibility into AI operations for both builders and regulators, instead of unpleasant surprises after deployment. ... Shadow mode is a way to check compliance in real time: It ensures that the model handles inputs correctly and meets policy standards before it is fully released. One AI security framework showed how this method worked: Teams first ran AI in shadow mode, then compared AI and human outputs to determine trust.
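The shadow-mode loop described above reduces to a simple comparison: the candidate model scores live traffic, its output is only logged, and agreement with the trusted decision is tracked against a threshold that triggers a drift alert. A minimal sketch, where the decision labels and the 0.9 threshold are assumptions:

```python
def shadow_eval(pairs, alert_below=0.9):
    """Compare shadow-model decisions against trusted production decisions.

    pairs: list of (production_decision, shadow_decision) tuples.
    Returns the agreement rate and whether it falls below the alert
    threshold (i.e., a drift alert should fire)."""
    agree = sum(1 for prod, shadow in pairs if prod == shadow)
    rate = agree / len(pairs)
    return {"agreement": rate, "drift_alert": rate < alert_below}

# Shadow decisions logged alongside production decisions for a traffic sample:
traffic = [("approve", "approve"), ("deny", "deny"),
           ("approve", "deny"), ("approve", "approve")]
print(shadow_eval(traffic))  # {'agreement': 0.75, 'drift_alert': True}
```

Because the shadow output never reaches users, the model can be evaluated against policy standards on real traffic before it is trusted with live decisions.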


Making AI Compliance Practical: A Guide for Data Teams Navigating Risk, Regulation, and Reality

As AI tools become more embedded in enterprise workflows, data teams are encountering a growing reality: compliance isn’t only a legal concern but also a design constraint, a quality signal, and, often, a competitive differentiator. But navigating compliance can feel complex, especially for teams focused on building and shipping. The good news? It doesn’t have to be. When approached intentionally, compliance becomes a pathway to better decisions, not a barrier. ... Automation can help with regulations, but only if it's used correctly. I once evaluated a tool that used algorithms to detect private information. It worked well with English, but when tested with multilingual material, it missed several personal identifiers. The team thought it was "smart enough." It wasn't. We kept the automation, but added human review for rare cases, confidence thresholds to trigger additional checks, and alerts for uncommon input formats. The automation stayed in place, but with built-in checks and balances. ... The biggest compliance failures don’t come from bad people. They come from good teams moving fast, skipping hard questions, and assuming nothing will go wrong. But compliance isn’t a blocker. It’s a product quality signal. People will trust you more if they know your team has carefully considered the details.


Tata Communications’ Andrew Winney on why SASE is now non-negotiable

Zero Trust is often discussed as a product decision, but in reality it is a journey. Many enterprises start with a few use cases, such as securing internet access or enabling remote access to private applications. But they do not always extend those principles across contractors, third-party users, software-as-a-service applications and hybrid environments. Practical Zero Trust requires enterprises to rethink access fundamentally. Every request must be evaluated based on who the user is, the context from which they are accessing, the device they are using and the resource they are requesting. Access must then be granted only to that specific resource. ... Secure Access Service Edge represents a structural convergence of networking and security rather than a simple technology swap. What are the most critical architectural and change-management considerations enterprises must address during this transition? SASE is not a one-time technology change. It represents the convergence of networking and security under unified orchestration and policy management. That transition takes time and must be managed carefully. We typically work with enterprises through phased transition plans. If an organisation’s immediate priority is securing internet access or private application access for remote users, we begin there and expand to additional use cases over time. Integration is critical. Enterprises have existing investments in cloud platforms, local area networks and security tools. 

Daily Tech Digest - December 05, 2025


Quote for the day:

“Failure defeats losers, failure inspires winners.” -- Robert T. Kiyosaki



The 'truth serum' for AI: OpenAI’s new method for training models to confess their mistakes

A confession is a structured report generated by the model after it provides its main answer. It serves as a self-evaluation of its own compliance with instructions. In this report, the model must list all instructions it was supposed to follow, evaluate how well it satisfied them and report any uncertainties or judgment calls it made along the way. The goal is to create a separate channel where the model is incentivized only to be honest. ... During training, the reward assigned to the confession is based solely on its honesty and is never mixed with the reward for the main task. "Like the Catholic Church’s 'seal of confession', nothing that the model reveals can change the reward it receives for completing its original task," the researchers write. This creates a "safe space" for the model to admit fault without penalty. This approach is powerful because it sidesteps a major challenge in AI training. The researchers’ intuition is that honestly confessing to misbehavior is an easier task than achieving a high reward on the original, often complex, problem. ... For AI applications, mechanisms such as confessions can provide a practical monitoring mechanism. The structured output from a confession can be used at inference time to flag or reject a model’s response before it causes a problem. For example, a system could be designed to automatically escalate any output for human review if its confession indicates a policy violation or high uncertainty.
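The "seal of confession" idea is a separation of reward channels, which can be sketched in a few lines. This is an illustrative toy, not OpenAI's training code; both scoring functions are assumptions standing in for much richer reward models.

```python
def task_reward(answer, reference):
    """Reward for the main task: depends only on answer quality (toy version)."""
    return 1.0 if answer == reference else 0.0

def confession_reward(confession, actually_violated):
    """Reward for the confession: depends only on honesty.
    Admitting a real violation, or truthfully reporting none, both score 1.0;
    the task outcome never enters this function."""
    return 1.0 if confession["violated"] == actually_violated else 0.0

# A model that failed the task but confessed honestly still earns full
# confession reward -- nothing it reveals changes its task reward:
t = task_reward("wrong answer", "right answer")
c = confession_reward({"violated": True}, True)
print(t, c)  # 0.0 1.0
```

Keeping the two functions with disjoint inputs is the whole mechanism: the model cannot improve its task score by lying in the confession, and cannot lose task score by admitting fault.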


Why is enterprise disaster recovery always such a…disaster?

One of the brutal truths about enterprise disaster recovery (DR) strategies is that there is virtually no reliable way to truly test them. ... From a corporate politics perspective, IT managers responsible for disaster recovery have a lot of reasons to avoid an especially meaningful test. Look at it from a risk/reward perspective. They’re going to take a gamble, figuring that any disaster requiring the recovery environment might not happen for a few years. And by then, with any luck, they’ll be long gone. ... “Enterprises place too much trust in DR strategies that look complete on slides but fall apart when chaos hits,” Gogia said. “The misunderstanding starts with how recovery is defined. It’s not enough for infrastructure to come back online. What matters is whether the business continues to function — and most enterprises haven’t closed that gap. ... “Most DR tools, even DRaaS, only protect fragments of the IT estate,” Gogia said. “They’re scoped narrowly to fit budget or ease of implementation, not to guarantee holistic recovery. Cloud-heavy environments make things worse when teams assume resilience is built in, but haven’t configured failover paths, replicated across regions, or validated workloads post-failover. Sovereign cloud initiatives might address geopolitical risk, but they rarely address operational realism.


The first building blocks of an agentic Windows OS

Microsoft is adding an MCP registry to Windows, which adds security wrappers and provides discovery tools for use by local agents. An associated proxy manages connectivity for both local and remote servers, with authentication, audit, and authorization. Enterprises will be able to use these tools to control access to MCP, using group policies and default settings to give connectors their own identities. ... Be careful when giving agents access to the Windows file system; use base prompts that reduce the risks associated with file system access. When building out your first agent, it’s worth limiting the connector to search (taking advantage of the semantic capabilities of Windows’ built-in Phi small language model) and reading text data. This does mean you’ll need to provide your own guardrails for agent code running on PCs, for example, forcing read-only operations and locking down access as much as possible. Microsoft’s planned move to a least-privilege model for Windows users could help here, ensuring that agents have as few rights as possible and no avenue for privilege escalation. ... Building an agentic OS is hard, as the underlying technologies work very differently from standard Windows applications. Microsoft is doing a lot to provide appropriate protections, building on its experience in delivering multitenancy in the cloud. 
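The advice above about limiting a first connector to search and read can be expressed as a wrapper that simply has no write surface. This is a generic sketch, not Windows' MCP API; the class name, methods, and sandbox-root convention are assumptions.

```python
import os

class ReadOnlyFileConnector:
    """Guardrail sketch: the agent-facing connector exposes search and
    read only, and confines both to a sandbox root. Because no write
    method exists, read-only operation is structural, not a flag."""

    def __init__(self, root):
        self.root = os.path.abspath(root)

    def _safe(self, path):
        # Resolve relative to the root and block path traversal.
        full = os.path.abspath(os.path.join(self.root, path))
        if not full.startswith(self.root + os.sep):
            raise PermissionError("outside sandbox root")
        return full

    def search(self, term):
        hits = []
        for dirpath, _, files in os.walk(self.root):
            hits += [os.path.join(dirpath, f) for f in files if term in f]
        return hits

    def read_text(self, path, limit=10_000):
        with open(self._safe(path), encoding="utf-8") as f:
            return f.read(limit)
```

An agent handed only this object can discover and read files inside the sandbox but cannot modify anything or escape it via `..` paths, which is the least-privilege posture the article describes.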


Syntax hacking: Researchers discover sentence structure can bypass AI safety rules

The findings reveal a weakness in how these models process instructions that may shed light on why some prompt injection or jailbreaking approaches work, though the researchers caution their analysis of some production models remains speculative since training data details of prominent commercial AI models are not publicly available. ... This suggests models absorb both meaning and syntactic patterns, but can overrely on structural shortcuts when they strongly correlate with specific domains in training data, which sometimes allows patterns to override semantic understanding in edge cases. ... In layperson terms, the research shows that AI language models can become overly fixated on the style of a question rather than its actual meaning. Imagine if someone learned that questions starting with “Where is…” are always about geography, so when you ask “Where is the best pizza in Chicago?”, they respond with “Illinois” instead of recommending restaurants based on some other criteria. They’re responding to the grammatical pattern (“Where is…”) rather than understanding you’re asking about food. This creates two risks: models giving wrong answers in unfamiliar contexts (a form of confabulation), and bad actors exploiting these patterns to bypass safety conditioning by wrapping harmful requests in “safe” grammatical styles. It’s a form of domain switching that can reframe an input, linking it into a different context to get a different result.


In 2026, Should Banks Aim Beyond AI?

Developing native AI agents and agentic workflows will allow banks to automate complex journeys while fine-tuning systems to their specific data and compliance landscapes. These platforms accelerate innovation and reinforce governance structures around AI deployment. This next generation of AI applications elevates customer service, fostering deeper trust and engagement. ... But any technological advancement must be paired with accountability and prudent risk management, given the sensitive nature of banking. AI can unlock efficiency and innovation, but its impact depends on keeping human decision-making and oversight firmly in place. It should augment rather than replace human authority, maintaining transparency and accountability in all automated processes. ... The banking environment is too risky for fully autonomous agentic AI workflows. Critical financial decisions require human judgment due to the potential for significant consequences. Nonetheless, many opportunities exist to augment decision-making with AI agents, advanced models and enriched datasets. ... As this evolution unfolds, financial institutions must focus on executing AI initiatives responsibly and effectively. By investing in home-grown platforms, emphasizing explainability, balancing human oversight with automation and fostering adaptive leadership, banks, financial services and insurance providers can navigate the complexities of AI adoption.


Building the missing layers for an internet of agents

The proposed Agent Communication Layer sits above HTTP and focuses on message structure and interaction patterns. It brings together what has been emerging across several protocols and organizes them into a common set of building blocks. These include standardized envelopes, a registry of performatives that define intent, and patterns for one to one or one to many communication. The idea is to give agents a dependable way to understand the type of communication taking place before interpreting the content. A request, an update, or a proposal each follows an expected pattern. This helps agents coordinate tasks without guessing the sender’s intention. The layer does not judge meaning. It only ensures that communication follows predictable rules that all agents can interpret. ... The paper outlines several new risks. Attackers might inject harmful content that fits the schema but tricks the agent’s reasoning. They might distribute altered or fake context definitions that mislead a population of agents. They might overwhelm a system with repetitive semantic queries that drain inference resources rather than network resources. To manage these problems, the authors propose security measures that match the new layer. Signed context definitions would prevent tampering. Semantic firewalls would examine content at the concept level and enforce rules about who can use which parts of a context. 
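The envelope-plus-performative idea can be illustrated with a small message type that is validated before its content is ever interpreted. The performative names below echo the paper's "registry of performatives" but are assumptions, not a published registry.

```python
from dataclasses import dataclass, field
import uuid

# Illustrative performative registry: the declared intent of a message.
PERFORMATIVES = {"request", "inform", "propose", "update"}

@dataclass
class Envelope:
    sender: str
    receiver: str
    performative: str   # checked before any content interpretation
    content: dict
    msg_id: str = field(default_factory=lambda: uuid.uuid4().hex)

    def __post_init__(self):
        # The layer does not judge meaning; it only enforces that the
        # interaction type is one every agent can recognize.
        if self.performative not in PERFORMATIVES:
            raise ValueError(f"unknown performative: {self.performative}")

msg = Envelope("agent-a", "agent-b", "request", {"task": "fetch report"})
print(msg.performative)  # request
```

A receiving agent can dispatch on `performative` first, so a request, an update, or a proposal each follows its expected handling pattern before the payload is parsed.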


The Rise of SASE: From Emerging Concept to Enterprise Cornerstone

The case for SASE depends heavily on the business outcomes required, and there can be multiple use cases for SASE deployment. However, not everyone is always aligned around these. Whether you’re looking to modernize systems to boost operational resilience, reduce costs, or improve security to adhere to regulatory compliance, there needs to be alignment around your SASE deployment. Additionally, because of its versatility, SASE demands expertise across networking, cloud security, zero trust, and SD-WAN, but, unfortunately, these skills are in short supply. IT teams must upskill or recruit talent capable of managing (and running) this convergence, while also adapting to new operational models and workflows. ... However, most of the reported benefits don’t focus on tangible or financial outcomes, but rather those that are typically harder to measure, namely boosting operational resilience and enhancing user experience. These are interesting numbers to explore, as SASE investments are often predicated on specific and easily measurable business cases, typically centered around cost savings or mitigation of specific cyber/operational risks. Looking at the benefits from both a networking and security perspective, the data reveals different priorities for SASE adoption: IT Network leaders value operational streamlining and efficiency, while IT Security leaders emphasize secure access and cloud protection. 


Intelligent Banking: A New Standard for Experience and Trust

At its core, Intelligent Banking connects three forces that are redefining what "intelligent" really means: Rising expectations - Customers not only expect their institutions to understand them, but to intuitively put forward recommendations before they realize change is needed, all while acting with empathy and delivering secure, trusted experiences. ... Data abundance - Financial institutions have more data than ever but struggle to turn it into actionable insight that benefits both the customer and the institution. ... AI readiness - For years, AI in banking was at best a buzzword that encapsulated the standard: decision trees, models, rules. ... The next era of AI in banking will be completely different. It will be invisible. Embedded. Contextual. It will be built into the fabric of the experience, not just added on top. And while mobile apps as we know them will likely be around for a while, a fully GenAI-native banking experience is both possible and imminent. ... In the age of AI, it’s tempting to see "intelligence" purely as technology alone. But the future of banking will depend just as much on human intelligence as on artificial intelligence. The expertise, empathy, and judgement of the institutions that understand financial context and complexity, blended with the speed, prediction and pattern recognition that uncover insights humans can’t see, will create a new standard for banking, one where experiences feel both profoundly human and intelligently anticipatory.


Taking Control of Unstructured Data to Optimize Storage

The modern business preoccupation with collecting and retaining data has become something of a double-edged sword. On the plus side, it has fueled a transformational approach to how organizations are run. On the other hand, it’s rapidly becoming an enormous drain on resources and efficiency. The fact that 80-90% of this information is unstructured, i.e. spread across formats such as documents, images, videos, emails, and sensor outputs, only adds to the difficulty of organizing and controlling it. ... To break this down, detailed metadata insight is essential for revealing how storage is actually being used. Information such as creation dates, last accessed timestamps, and ownership highlights which data is active and requires performance storage, and which has aged out of use or no longer relates to current users. ... So, how can this be achieved? At a fundamental level, storage optimization hinges on adopting a technology approach that manages data, not storage devices; simply adding more and more capacity is no longer viable. Instead, organizations must have the ability to work across heterogeneous storage environments, including multiple vendors, locations and clouds. Tools should support vendor-neutral management so data can be monitored and moved regardless of the underlying platform. Clearly, this has to take place at petabyte scale. Optimization also relies on policy-based data mobility that enables data to be moved based on defined rules, such as age or inactivity, with inactive or long-dormant data.
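Policy-based data mobility of the kind described can be reduced to a rule table keyed on last-access age. The tier names and age thresholds below are illustrative assumptions, not a vendor's defaults.

```python
import time

# Policy rules checked in order: (max_age_days, target_tier).
# Data accessed recently stays on performance storage; long-dormant
# data is a candidate to move to cheaper tiers.
POLICY = [
    (30, "performance"),
    (365, "capacity"),
    (float("inf"), "archive"),
]

def tier_for(last_accessed_epoch, now=None):
    """Return the storage tier a policy engine would assign,
    based only on last-access age in days."""
    age_days = ((now or time.time()) - last_accessed_epoch) / 86400
    for max_age, tier in POLICY:
        if age_days <= max_age:
            return tier

now = time.time()
print(tier_for(now - 5 * 86400, now))     # performance
print(tier_for(now - 200 * 86400, now))   # capacity
print(tier_for(now - 900 * 86400, now))   # archive
```

In practice the same rule engine would consume richer metadata (ownership, creation date, format), but last-access age alone already separates active data from the long tail consuming expensive capacity.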


W.A.R & P.E.A.C.E: The Critical Battle for Organizational Harmony

W.A.R & P.E.A.C.E is the pivotal human lens within TRIAL, designed specifically to address this cultural challenge and shepherd the enterprise toward AARAM (Agentic AI Reinforced Architecture Maturities) with what I term “speed 3” transformation of AI. ... The successful, continuous balancing of W.A.R. and P.E.A.C.E. is the biggest battle an Enterprise Architect must win. Just as Tolstoy explored the monumental scope of war against intimate moments of peace in his masterwork, the Enterprise Architect must balance the intense effort to build repositories against the delicate work of fostering organizational harmony. ... The W.A.R. systematically organizes information across the four critical architectural domains defined in our previous article: Business, Information, Technology, and Security (BITS). The true power of W.A.R. lies in its ability to associate technical components with measurable business and financial properties, effectively transforming technical discussions into measurable, strategic imperatives. Each architectural component across BITS is tracked across the Plan, Design & Run lifecycle of change under the guardrails of BYTES. ... Achieving effective P.E.A.C.E. mandates a carefully constructed collaborative environment where diverse organizational roles work together toward a shared objective. This requires alignment across all lifecycle stages using social capital and intelligence.

Daily Tech Digest - July 08, 2025


Quote for the day:

“If you really want the key to success, start by doing the opposite of what everyone else is doing.” -- Brad Szollose


MCP Vulnerability Exposes the AI Untrusted Code Crisis

Most organizations have rigorous approval processes before allowing arbitrary code to run in their environments whether from open source projects or vendor solutions. Yet with this new wave of tools, we’re simultaneously allowing thousands of employees to constantly update codebases with arbitrary, untrusted AI-generated code or wiring said codebases and applications to mechanisms that can alter or modify their behavior. This isn’t about stopping the use of AI coding agents or sacrificing the massive productivity gains they provide. Instead, we should standardize better ways that allow us to run untrusted code across our software development pipelines. ... As AI development tools gain adoption across enterprises, there is a new class of systems to support them that can execute code on behalf of developers. This includes AI code assistants generating and running code snippets, MCP servers providing AI systems access to local tools and data, automated testing tools executing AI-generated test cases and development agents performing complex multistep operations. Each of these represents a potential code execution pathway that often bypasses traditional security controls. The risk isn’t just that AI-generated code can be inadvertently malicious; it’s that these new systems also create pathways for untrusted code execution.
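One standardizable checkpoint for untrusted AI-generated code is a static pre-execution gate: parse the code and reject obviously dangerous constructs before it ever runs. The sketch below is an assumption-laden illustration, not a sandbox; denylists are bypassable, which is exactly why real pipelines layer isolation (containers, ephemeral runners) on top.

```python
import ast

# Names whose appearance should block automatic execution of a generated
# snippet and route it to review instead. Illustrative, not exhaustive.
DENYLIST = {"eval", "exec", "__import__", "system", "popen"}

def looks_safe(source: str) -> bool:
    """Parse AI-generated Python and flag references to denylisted names.
    A False result means 'hold for review', not 'proven malicious'."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # unparseable code is not executed either
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in DENYLIST:
            return False
        if isinstance(node, ast.Attribute) and node.attr in DENYLIST:
            return False
    return True

print(looks_safe("total = sum(range(10))"))            # True
print(looks_safe("import os\nos.system('rm -rf /')"))  # False
```

The value of such a gate is that it runs automatically on every generated snippet, restoring a checkpoint that the per-snippet pace of AI coding agents otherwise bypasses entirely.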


Is English the next programming language? JetBrains’ CEO says no

JetBrains does need to contend with the fact that many of its users feel threatened by AI replacing them, even if its CEO notes that job displacement isn’t happening at anywhere near the rate some have suggested. Products, languages and IT infrastructure can indeed be made redundant too. We may also add that many layoff rounds use AI as an excuse to make cuts that are simply financially motivated. Still, we need to appreciate that AI is indeed changing the overall landscape. Tasks can be automated, and AI is eagerly shoveling up the developer code that’s freely available online. What about Kotlin specifically? ... “Here’s my vision. I think programming languages will evolve a lot. I admit that you may not need high level programming languages in the classical sense anymore, but the solution still wouldn’t be English.” Skrygan envisions a middle ground between Kotlin and natural language. Currently, the closest approximation is Kotlin DSL. It’s a design doc that can be compiled as code. Ultimately, like anything digital, it converts into binary at the lowest level. The JetBrains CEO highlights how this is merely a repeat of what we’ve already seen: “People were writing in bytecode and assembler 40 years ago. Now, nobody cares about it anymore. It’s secondary.”

Privacy is blockchain’s missing link—and America’s opportunity to lead

We are at an inflection point. On one hand, blockchain has evolved from an experimental idea into a foundational layer for decentralized finance (DeFi), gaming, cross-border payments, and digital identity. On the other, the absence of privacy threatens to stall its momentum. Without privacy guarantees, Web3 won’t scale into a secure, inclusive internet economy—it will remain a risky, self-surveilling shadow of its potential. It’s not just user safety at stake. Institutional adoption, long seen as the tipping point for crypto’s maturation, is lagging in part because privacy solutions are underdeveloped. Financial institutions and enterprises cannot embrace systems that force them to reveal business-sensitive transactions to competitors and regulators alike. Privacy is not the enemy of compliance; it’s a prerequisite for serious engagement. ... First, policymakers must move past the false binary of privacy versus compliance. These are not mutually exclusive goals. Clear guidelines that embrace advanced cryptography, establish safe harbors for privacy-preserving innovation, and differentiate between consumer protection and surveillance will enable the next generation of secure digital finance. Second, industry leaders need to elevate privacy to the level of consensus mechanisms, scalability, and user experience. 


How scientists are trying to use AI to unlock the human mind

In one of the studies, researchers transformed a large language model into what they refer to as a “foundation model of human cognition.” Out of the box, large language models aren’t great at mimicking human behavior—they behave logically in settings where humans abandon reason, such as casinos. So the researchers fine-tuned Llama 3.1, one of Meta’s open-source LLMs, on data from a range of 160 psychology experiments, which involved tasks like choosing from a set of “slot machines” to get the maximum payout or remembering sequences of letters. ... Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. ... The second of the two Nature studies focuses on minuscule neural networks—some containing only a single neuron—that nevertheless can predict behavior in mice, rats, monkeys, and even humans. Because the networks are so small, it’s possible to track the activity of each individual neuron and use that data to figure out how the network is producing its behavioral predictions. 


New Study Reveals True AI Capabilities And Job Replacement Risk

For business leaders, this framework offers something really valuable: a reality check that cuts through vendor marketing speak. When a sales representative promises their AI solution will "revolutionize your operations," you can now ask pointed questions about which capability levels their system actually achieves and in which specific domains. The gap analysis between current AI capabilities and the requirements of specific business tasks becomes clearer when standardized benchmarks are in place. Consider customer service, where companies are deploying AI chatbots with the enthusiasm of gold rush prospectors. The OECD framework suggests that while AI can handle structured interactions reasonably well, anything requiring genuine social intelligence, nuanced problem-solving, or creative thinking quickly exposes current limitations. This doesn't mean AI isn't useful in customer service, but it helps set realistic expectations about what human oversight will still be necessary. It's the difference between using AI as a sophisticated tool versus expecting it to be a replacement employee. One approach leads to productivity gains; the other leads to customer complaints and public relations disasters.


Why EU Policy Must Catch Up to the Neurotechnology Boom

After conducting a comprehensive analysis of nearly 300 neurotechnology companies worldwide, the Center for Future Generations discovered a surprising trend: among firms fully dedicated to neurotech, consumer firms now outnumber medical ones, making up 60% of the global neurotechnology landscape. And they're proliferating at an unprecedented rate—more than quadrupling in the past decade compared to the previous 25 years. ... EEG, the technology at the heart of this revolution, has been around since the 1920s. It's crude and can't read individual thoughts, but it can detect patterns of brain activity related to focus, fatigue, and even emotional states. And when coupled with artificial intelligence and other personal data—like location, buying behaviors, and biometrics—these patterns can reveal far more about us than we might imagine. ... As this technology moves into the mainstream, the potential for misuse becomes profound. Imagine pre-election advertising that adapts its messaging based on your emotional reaction. Imagine disinformation campaigns tailored to your subconscious fears, measured directly from your brain. Imagine authoritarian governments monitoring emotional responses to propaganda, searching for dissent in citizens' brainwaves. This marks a critical moment for European policymakers.


Enterprises Are Prioritizing Generative AI Spending in 2025

The report, "Generative AI Adoption Index," highlights how organizations are moving gen AI from experimentation to full-scale implementation and offers practical strategies to create business value. CEOs, CTOs and CIOs currently lead most gen AI innovation, but leadership structures are evolving to include specialized AI roles, such as CAIOs, at the highest levels of organizations. ... Along with CAIOs, a thoughtful change management strategy will be critical. The ideal strategy should address operating model changes, data management practices and talent pipelines. Today, just 14% of organizations have a change management strategy, but this will increase to 76% by end of 2026, highlighting a growing recognition of the need for structured adaptation. But a sizable proportion of organizations may still struggle to keep pace with AI-driven transformation, with one in four organizations still lacking a strategy in 2026. ... Third-party vendors are becoming key enablers of gen AI transformation across organizations globally. From supplying outsourced talent to offering services such as cloud computing and storage, these vendors help bridge critical technology and talent gaps. Effective gen AI deployment will depend on strong collaboration between external experts and internal teams. 


AI’s rise demands more from the UK data center market

The growing demand for digital infrastructure, fueled by the surge in AI, has intensified competition for suitable land to build data centers. This scarcity (particularly in London), coupled with the rise in construction and operational costs, makes it difficult to establish data centers in the most efficient and cost-effective manner. Similarly, an over-reliance on well-established technology clusters (such as West London) can increase resource constraints and vulnerability to power outages and downtime. With UK policy frameworks around data centers still evolving, discussions are ongoing around security, energy consumption, and specific regulatory needs. ... Similarly, traditional methods demand a high level of energy consumption to keep AI chips operating at optimal temperatures. Given the energy-intensive nature of air cooling and it being unlikely to keep up with cooling demands, the data center industry is reaching a critical juncture: stifle the capabilities of AI technologies by not integrating effective thermal management, or invest in a more effective, future-thinking approach to cooling? ... The UK’s data center expansion is not just a scaling project, it is a rethinking of what data centers and associated cooling infrastructures must become.


Why CISOs are making the SASE switch: Fewer vendors, smarter security, better AI guardrails

“SASE is an existential threat to all appliance-based network security companies,” Shlomo Kramer, Cato’s CEO, told VentureBeat. “The vast majority of the market is going to be refactored from appliances to cloud service, which means SASE [is going to be] 80% of the market.” A fundamental architectural transformation is driving that shift. SASE converges traditionally siloed networking and security functions into a single, cloud-native service edge. It combines SD-WAN with critical security capabilities, including secure web gateway (SWG), cloud access security broker (CASB) and ZTNA to enforce policy and protect data regardless of where users or workloads reside. ... The SASE consolidation wave reveals how enterprises are fundamentally rethinking security architecture. With AI attacks exploiting integration gaps instantly, single-vendor SASE has become essential for both protection and operational efficiency. The reasoning is straightforward. Every vendor handoff creates vulnerability. Each integration adds latency. Security leaders know that unified platforms can help eliminate these risks while enabling business velocity. CISOs are increasingly demanding a single console, a single agent and unified policies. 


CISOs urged to fix API risk before regulation forces their hand

The widespread use of APIs to support mobile apps, cloud services, and partner integrations means that the attack surface has changed. But the security practices often haven’t. APIs today handle everything from identity claims and cardholder data to health and account information. Yet in many organizations, they remain outside the scope of standard security programs. ... Oppenheim added that meaningful oversight at the board level doesn’t require technical fluency. “Board-level metrics in such a technically complex space can be difficult to surface meaningfully, but there are still effective ways to guide oversight and investment. Directors should ask which recognised standards (e.g. FAPI) have been adopted or are in the roadmap, and whether the organization has applied a maturity model or framework to benchmark its current posture and track improvements over time.” ... So far, the biggest improvements in API security have come either through direct regulation or industry-led mandates. But pressure is building elsewhere. “Again, organizational size plays a key role,” said Oppenheim. “Larger firms and infrastructure providers are already moving ahead voluntarily – not just in banking, but in payments and identity platforms – because they see strong API security as a necessary foundation for scale and trust.”

Daily Tech Digest - May 20, 2025


Quote for the day:

"Success is liking yourself, liking what you do, and liking how you do it." -- Maya Angelou


Scalability and Flexibility: Every Software Architect's Challenge

Building successful business applications involves addressing practical challenges and strategic trade-offs. Cloud computing offers flexibility, but poor resource management can lead to ballooning costs. Organizations often face dilemmas when weighing feature richness against budget constraints. Engaging stakeholders early in the development process ensures alignment with priorities. ... Right-sizing cloud resources is essential for software architects, who can leverage tools to monitor usage and scale resources automatically based on demand. Serverless computing models, which charge only for execution time, are ideal for unpredictable workloads and seasonal fluctuations, ensuring organizations only use what they need when needed. ... The next decade will usher in unprecedented opportunities for innovation in business applications. Regularly reviewing market trends and user feedback ensures applications remain relevant. Features like voice commands and advanced analytics are becoming standard as users demand more intuitive interfaces, boosting overall performance and creating new avenues for innovation. Software architects can stay alert and flexible by regularly assessing application performance, user feedback, and market trends to guarantee that systems remain relevant.
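The right-sizing logic described above can be sketched as a proportional scaling rule — the same shape as the Kubernetes Horizontal Pod Autoscaler formula, `desired = ceil(current * observed / target)` — though the function and parameter names here are illustrative:

```python
import math

def desired_replicas(current: int, cpu_samples: list[float],
                     target: float = 0.6, min_r: int = 1, max_r: int = 20) -> int:
    """Scale capacity proportionally to observed utilization,
    clamped to configured floor and ceiling."""
    observed = sum(cpu_samples) / len(cpu_samples)   # average recent CPU utilization
    raw = math.ceil(current * observed / target)
    return max(min_r, min(max_r, raw))

print(desired_replicas(4, [0.90, 0.85, 0.95]))  # sustained overload → 6
print(desired_replicas(4, [0.10, 0.15, 0.20]))  # idle → 1
```

The clamp (`min_r`/`max_r`) is where cost control lives: without a ceiling, a traffic spike translates directly into a ballooning bill.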


Navigating the Future of Network Security with Secure Access Service Edge (SASE)

As businesses expand their digital footprint, cyber attackers increasingly target unsecured cloud resources and remote endpoints. Traditional perimeter-based network and security architectures are not capable of protecting distributed environments. Therefore, organizations must adopt a holistic, future-proof network and cybersecurity architecture to succeed in this rapidly changing business landscape. The Challenges: Perimeter-based security revolves around defending the network’s boundary. It assumes that anyone who has gained access to the network is trusted and that everything outside the network is a potential threat. While this model worked well when applications, data, and users were contained within corporate walls, it is not adequate in a world where cloud applications and hybrid work are the norm. ... SASE is an architecture comprising a broad spectrum of technologies, including Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), Firewall as a Service (FWaaS), Cloud Access Security Broker (CASB), Data Loss Prevention (DLP), and Software-Defined Wide Area Networking (SD-WAN). Everything is embodied in a single, cloud-native platform that provides advanced cyber protection and seamless network performance for highly distributed applications and users.
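The default-deny, identity-first posture that distinguishes ZTNA from perimeter security can be sketched in a few lines; the policy table and field names below are hypothetical, not drawn from any vendor's product:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    group: str
    device_compliant: bool
    destination: str

# Hypothetical policy table: access is decided per identity and resource,
# never by network location -- the core ZTNA inversion of perimeter trust.
POLICY = {
    ("finance", "erp.internal"): True,
    ("engineering", "git.internal"): True,
}

def evaluate(req: AccessRequest) -> bool:
    if not req.device_compliant:     # device posture is checked first
        return False
    # Anything not explicitly granted is denied -- even "inside" the network.
    return POLICY.get((req.group, req.destination), False)

print(evaluate(AccessRequest("ana", "finance", True, "erp.internal")))   # True
print(evaluate(AccessRequest("ana", "finance", False, "erp.internal")))  # False
```

Contrast this with the perimeter model, where the first check — "is the caller on the corporate network?" — would have been the only check.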


Whether AI is a bubble or revolution, how does software survive?

Bubble or not, AI has certainly made some waves, and everyone is looking to find the right strategy. It’s already caused a great deal of disruption—good and bad—among software companies large and small. The speed at which the technology has moved from its coming out party has been stunning; costs have dropped, hardware and software have improved, and the mediocre version of many jobs can be replicated in a chat window. It’s only going to continue. “AI is positioned to continuously disrupt itself,” said McConnell. “It's going to be a constant disruption. If that's true, then all of the dollars going to companies today are at risk because those companies may be disrupted by some new technology that's just around the corner.” First up on the list of disruption targets: startups. If you’re looking to get from zero to market fit, you don’t need to build the same kind of team you used to. “Think about the ratios between how many engineers there are to salespeople,” said Tunguz. “We knew what those were for 10 or 15 years, and now none of those ratios actually hold anymore. If we really are in a position that a single person can have the productivity of 25, management teams look very different. Hiring looks extremely different.” That’s not to say there won’t be a need for real human coders. We’ve seen how badly the vibe coding entrepreneurs get dunked on when they put their shoddy apps in front of a merciless internet.


The AI security gap no one sees—until it’s too late

The most serious—and least visible—gaps stem from the “Jenga-style” layering of managed AI services, where cloud providers stack one service on another and ship them with user-friendly but overly permissive defaults. Tenable’s 2025 Cloud AI Risk Report shows that 77 percent of organisations running Google Cloud’s Vertex AI Workbench leave the notebook’s default Compute Engine service account untouched; that account is an all-powerful identity which, if hijacked, lets an attacker reach every other dependent service. ... CIOs should treat every dataset in the AI pipeline as a high-value asset. Begin with automated discovery and classification across all clouds so you know exactly where proprietary corpora or customer PII live, then encrypt them in transit and at rest in private, version-controlled buckets. Enforce least-privilege access through short-lived service-account tokens and just-in-time elevation, and isolate training workloads on segmented networks that cannot reach production stores or the public internet. Feed telemetry from storage, IAM and workload layers into a Cloud-Native Application Protection Platform that includes Data Security Posture Management; this continuously flags exposed buckets, over-privileged identities and vulnerable compute images, and pushes fixes into CI/CD pipelines before data can leak.
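The discovery step can start as simply as sweeping an exported workload inventory for the default identity. A hedged sketch — the inventory schema here is an assumption for illustration, not a real GCP API response, though the `-compute@developer.gserviceaccount.com` suffix is the conventional form of the default Compute Engine service account:

```python
# Flag workloads still bound to the project's default Compute Engine
# service account -- the over-privileged identity the Tenable report
# found untouched in 77% of Vertex AI Workbench deployments.
DEFAULT_SA_SUFFIX = "-compute@developer.gserviceaccount.com"

inventory = [
    {"name": "notebook-a", "service_account": "123456-compute@developer.gserviceaccount.com"},
    {"name": "notebook-b", "service_account": "train-only@my-proj.iam.gserviceaccount.com"},
]

def flag_default_identities(workloads):
    return [w["name"] for w in workloads
            if w["service_account"].endswith(DEFAULT_SA_SUFFIX)]

print(flag_default_identities(inventory))  # → ['notebook-a']
```

In practice this check would run continuously inside the CNAPP/DSPM tooling the passage describes, with the findings pushed into CI/CD rather than into a report no one reads.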


5 questions defining the CIO agenda today

CIOs along with their executive colleagues and board members “realize that hacks and disruptions by bad actors are an inevitability,” SIM’s Taylor says. That realization has shifted security programs from being mostly defensive measures to ones that continuously evolve the organization’s ability to identify breaches quickly, respond rapidly, and return to operations as fast as possible, Taylor says. The goal today is ensuring resiliency — even as the bad actors and their attack strategies evolve. ... Building a tech stack that can grow and retract with business needs, and that can evolve quickly to capitalize on an ever-shifting technology landscape, is no easy feat, Phelps and other IT leaders readily admit. “In modernizing, it’s such a moving target, because once you got it modernized, something new can come out that’s better and more automated. The entire infrastructure is evolving so quickly,” says Diane Gutiw ... “CIOs should be asking, ‘How do I change or adapt what I do now to be able to manage a hybrid workforce? What does the future of work look like? How do I manage that in a secure, responsible way and still take advantage of the efficiencies? And how do I let my staff be innovative without violating regulation?’” Gutiw says, noting that today’s managers “are the last generation of people who will only manage people.”


Microsoft just taught its AI agents to talk to each other—and it could transform how we work

Microsoft is giving organizations more flexibility with their AI models by enabling them to bring custom models from Azure AI Foundry into Copilot Studio. This includes access to over 1,900 models, including the latest from OpenAI (GPT-4.1), Llama, and DeepSeek. “Start with off-the-shelf models because they’re already fantastic and continuously improving,” Smith said. “Companies typically choose to fine-tune these models when they need to incorporate specific domain language, unique use cases, historical data, or customer requirements. This customization ultimately drives either greater efficiency or improved accuracy.” The company is also adding a code interpreter feature that brings Python capabilities to Copilot Studio agents, enabling data analysis, visualization, and complex calculations without leaving the Copilot Studio environment. Smith highlighted financial applications as a particular strength: “In financial analysis and services, we’ve seen a remarkable breakthrough over the past six months,” Smith said. “Deep reasoning models, powered by reinforcement learning, can effectively self-verify any process that produces quantifiable outputs.” He added that these capabilities excel at “complex financial analysis where users need to generate code for creating graphs, producing specific outputs, or conducting detailed financial assessments.”


Culture fit is a lie: It’s time we prioritised culture add

The idea of culture fit originated with the noble intent of fostering team cohesion. But over time, it has become an excuse to hire people who are familiar, comfortable and easy to manage. In doing so, companies inadvertently create echo chambers—workforces that lack diverse perspectives, struggle to challenge the status quo and fail to innovate. Ankur Sharma, Co-Founder & Head of People at Rebel Foods, understands this well. Speaking at the TechHR Pulse Mumbai 2025 conference, Sharma explained how Rebel Foods moved beyond hiring for cultural likeness. “We are not building a family; we are building a winning team,” he said, emphasising that what truly matters is competency, accountability and adaptability. The problem with culture fit is not just about homogeneity—it’s about stagnation. When teams are made up of individuals who think alike, they lose the ability to see challenges from multiple angles. Companies that prioritise cultural uniformity often struggle to pivot in response to industry shifts. ... Leading organisations are abandoning the notion of culture fit and shifting towards ‘culture add’—hiring employees who bring fresh ideas, challenge existing norms, and contribute new perspectives. Instead of asking, ‘Will this person fit in?’ Hiring managers are asking, ‘What unique value does this person bring?’


Closing security gaps in multi-cloud and SaaS environments

Many organizations are underestimating the risk — especially as the nature of attacks evolves. Traditional behavioral detection methods often fall short in spotting modern threats such as account hijacking, phishing, ransomware, data exfiltration, and denial of service attacks. Detecting these types of attacks requires correlation and traceability across different sources, including runtime events with eBPF, cloud audit logs, and APIs across both cloud infrastructure and SaaS. ... As attackers adopt stealthier tactics — from GenAI-generated malware to supply chain compromises — traditional signature- and rule-based methods fall short. ... A unified cloud and SaaS security strategy means moving away from treating infrastructure, applications, and SaaS as isolated security domains. Instead, it focuses on delivering seamless visibility, risk prioritization, and automated response across the full spectrum of enterprise environments — from legacy on-premises to dynamic cloud workloads to business-critical SaaS platforms and applications. ... Native CSP and SaaS telemetry is essential, but it’s not enough on its own. Continuous inventory and monitoring across identity, network, compute, and AI is critical — especially to detect misconfigurations and drift.
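The cross-source correlation the passage calls for can be sketched as a join on principal and time window — the same identity acting suspiciously at the runtime layer and in the cloud audit log within minutes of each other. The event schemas below are illustrative:

```python
from datetime import datetime, timedelta

# Toy event streams (schemas are illustrative, not a real eBPF or
# CloudTrail format). The signal is the same principal acting across
# layers within a short window -- what single-source detection misses.
runtime_events = [
    {"principal": "svc-etl", "action": "spawn_shell",
     "ts": datetime(2025, 5, 20, 10, 0, 5)},
]
audit_events = [
    {"principal": "svc-etl", "action": "storage.objects.list",
     "ts": datetime(2025, 5, 20, 10, 0, 30)},
    {"principal": "alice", "action": "login",
     "ts": datetime(2025, 5, 20, 9, 0, 0)},
]

def correlate(runtime, audit, window=timedelta(minutes=5)):
    hits = []
    for r in runtime:
        for a in audit:
            if (r["principal"] == a["principal"]
                    and abs(a["ts"] - r["ts"]) <= window):
                hits.append((r["principal"], r["action"], a["action"]))
    return hits

print(correlate(runtime_events, audit_events))
```

Either event alone is ambiguous; the correlated pair (a service identity spawning a shell, then enumerating storage) is the exfiltration pattern worth alerting on.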


AI-Driven Test Automation Techniques for Multimodal Systems

Traditional testing frameworks struggle to meet these demands, particularly as multimodal systems continuously evolve through real-time updates and training. Consequently, AI-powered test automation has emerged as a promising paradigm to ensure scalable and reliable testing processes for multimodal systems. ... Natural Language Processing (NLP)-powered AI tools will understand and define the requirements in a more elaborate and defined structure. This will detect any ambiguity and gaps in requirements. For example, given the requirement “System should display message quickly,” an AI tool will identify the need for a precise definition of the word “quickly.” It looks simple, but if missed, it could lead to serious performance issues in production. ... Based on AI-generated requirements and business scenarios, AI-based tools can generate test strategy documents by identifying resources, constraints, and dependencies between systems. All this can be achieved with NLP AI tools ... AI-driven test automation solutions can improve shift-left testing even more by generating automated test scripts faster. Testers can run automation at an early stage when the code is ready to test. AI tools like ChatGPT-4 provide script code in any language, like Java or Python, based on simple text input, using NLP to generate code for automation scripts.
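The “quickly” example can be approximated with even a trivial keyword pass, long before a full NLP model is involved; the vague-term list here is illustrative:

```python
import re

# Vague qualifiers an NLP-assisted requirements review would flag for
# quantification (word list is an illustrative sample, not exhaustive).
AMBIGUOUS = {"quickly", "fast", "soon", "user-friendly", "efficient", "robust"}

def flag_ambiguities(requirement: str) -> list[str]:
    """Return the unquantified qualifiers found in a requirement sentence."""
    words = re.findall(r"[a-z\-]+", requirement.lower())
    return [w for w in words if w in AMBIGUOUS]

print(flag_ambiguities("System should display message quickly"))  # → ['quickly']
```

A real NLP tool would go further — resolving context, suggesting measurable replacements ("within 200 ms at the 95th percentile") — but the pipeline shape is the same: flag the term, force a quantified rewrite before test design begins.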


IGA: What Is It, and How Can SMBs Use It?

The first step in a total IGA strategy has nothing to do with software. It actually starts with IT and business leaders determining what the rules of identity governance and behavior should be. The benefit of having a smaller organization is that there are not quite as many stakeholders as in an enterprise. The challenge, of course, is that people, time and resources are limited. IT may have to assume the role of facilitator and earn buy-in. Nevertheless, this is a worthwhile exercise, as it can help establish a platform for secure growth in the future. And again, for SMBs in regulatory-heavy industries — especially finance, healthcare and government contractors — IGA should be a top priority. ... To do this, CIOs should first secure support from key stakeholders by meeting with them individually to explain the need for IGA as an overarching security technology and policy platform for digital security. In these discussions, CIOs can present the long-term benefits of an IGA program that can streamline user identity verification across services while easing audits and automating compliance. ... A strategic roadmap for IGA should involve minimally disruptive business and user adoption and quick technology implementation. One way to do this is to create a phased implementation approach that tackles the most mission-critical and sensitive systems first before extending to other areas of IT.