Showing posts with label firewall. Show all posts
Showing posts with label firewall. Show all posts

Daily Tech Digest - May 08, 2026


Quote for the day:

“Everything you’ve ever wanted is on the other side of fear.” -- George Addair

🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 22 mins • Perfect for listening on the go.


How enterprises can manage LLM costs: A practical guide

Managing large language model (LLM) costs has become a critical priority for enterprises as generative and agentic AI deployments scale. According to the InformationWeek guide, LLM expenses are primarily driven by token pricing and consumption, factors that remain notoriously difficult to forecast due to the iterative nature of AI workflows. This unpredictability is exacerbated by dynamic vendor pricing, a lack of specialized FinOps tools, and limited user awareness regarding how complex queries impact the bottom line. To mitigate these financial risks, the article recommends a multi-pronged approach: matching task complexity to model capability by using lower-cost LLMs for routine work, and implementing technical optimizations like response caching and prompt compression to reduce token usage. Furthermore, enterprises should utilize prompt libraries of validated, efficient inputs and leverage query batching for non-urgent tasks to access vendor discounts. While self-hosting models eliminates third-party token fees, the guide warns of significant underlying costs in infrastructure and energy. Ultimately, successful cost management requires a strategic balance where the productivity gains of AI clearly outweigh the operational expenditures. By proactively setting token allowances and comparing vendor rates, CIOs can prevent AI budgets from spiraling while still fostering innovation across the organization.


The Death of the Firewall

The article "The Death of the Firewall" by Chandrodaya Prasad explores why the firewall has survived decades of premature obituaries to remain a cornerstone of modern cybersecurity. Rather than becoming obsolete, the technology has successfully transitioned from a standalone perimeter appliance into a versatile, integrated architecture. The global firewall market continues to expand, currently valued at approximately $6 billion, as organizations face complex security challenges that identity-centric models alone cannot solve. The firewall has evolved through critical phases, including convergence with SD-WAN for simplified networking and integration with cloud-based Security Service Edge (SSE) frameworks. Crucially, it serves as a necessary enforcement point for inspecting encrypted traffic and implementing post-quantum cryptography. It remains indispensable in Operational Technology (OT) sectors, such as manufacturing and healthcare, where legacy systems and IoT devices cannot support endpoint agents or tolerate cloud-based latency. For these heavily regulated industries, the firewall is not merely an architectural choice but a fundamental requirement for regulatory compliance. Ultimately, the firewall’s endurance is attributed to its ongoing adaptation, offloading intelligence to the cloud while maintaining essential local execution. As cyber threats grow more sophisticated due to AI, the firewall is evolving into a vital, persistent component of a unified security fabric.


AI clones: the good, the bad, and the ugly

The Computerworld article "AI clones: The good, the bad, and the ugly" examines the dual-edged nature of digital personas, categorizing their applications into three distinct ethical spheres. Under "the good," the author highlights authorized use cases where public figures like Imran Khan and Eric Adams employ AI voice clones to transcend physical or linguistic barriers, amplifying their reach and accessibility. However, "the bad" introduces the problematic rise of nonconsensual professional cloning. Tools like "Colleague Skill" enable individuals to replicate the expertise and communication styles of coworkers or supervisors, often to retain institutional knowledge or manipulate workplace dynamics. This section also underscores the threat of sophisticated financial fraud perpetrated through voice impersonation. Finally, "the ugly" explores the deeply controversial territory of "Ex-Partner Skill" and "digital resurrection." These tools allow users to simulate interactions with former or deceased loved ones by mimicking subtle nuances and shared memories, raising profound ethical concerns regarding consent and emotional health. Ultimately, the piece argues that as AI cloning technology becomes more accessible, society must navigate the erosion of reality and establish clear boundaries to protect individual identity and privacy in an increasingly synthetic world.


Fire at Dutch data center has many unintended consequences

On May 7, 2026, a significant fire erupted at the NorthC data center in Almere, Netherlands, triggering a regional emergency response and demonstrating the fragility of modern digital infrastructure. The blaze, which originated in the technical compartment housing critical power systems, forced emergency services to order a total power shutdown. Although the server rooms remained largely protected by fire-resistant separations, the resulting outage caused widespread, often bizarre, secondary consequences. Beyond standard digital disruptions, the failure crippled physical security at Utrecht University, where students and staff were locked out of buildings and even restrooms because electronic access card systems failed completely. Public transit in Utrecht faced communication breakdowns, while healthcare billing services and numerous pharmacies across the country saw their operations grind to a halt. This incident serves as a stark wake-up call, proving that even ISO-certified facilities with redundant backups are susceptible to catastrophic failure when authorities prioritize safety over continuity. It underscores a critical lesson for organizations: business continuity plans must account for the unpredictable ripple effects of physical infrastructure loss. The event highlights the inherent risks of centralized digital dependencies, revealing that a localized technical fire can effectively paralyze diverse sectors of society far beyond the immediate flames.


The hidden cost of front-end complexity

The article "The Hidden Cost of Front-End Complexity" explores how modern web development has transitioned from solving rendering challenges to facing profound system design issues. While current frameworks have optimized UI performance and component modularity, complexity has not disappeared; instead, it has shifted "up the stack" into application logic and state coordination. Modern front-end engineers now shoulder responsibilities once reserved for multiple infrastructure layers, managing distributed APIs, CI/CD pipelines, and intricate data flows that reside within the browser. The author argues that the true "hidden cost" of this evolution is the significantly increased cognitive load required for developers to navigate a dense web of invisible dependencies and reactive chains. Consequently, development cycles slow down and maintainability suffers when state relationships remain opaque or poorly defined. To address these architectural failures, the industry must pivot from debating framework syntax or rendering speed to prioritizing a "state-first" architecture. In this paradigm, the UI is treated as a simple projection of a clearly modeled state. By shifting the focus toward explicit state representation and observable system design, engineering teams can manage the inherent complexity of large-scale applications more effectively. Ultimately, the future of the front-end lies in building systems that are fundamentally easier to reason about.


How Federated Identity and Cross-Cloud Authentication Actually Work at Scale

This article discusses the critical shift from traditional, secrets-based authentication to Federated Identity and Workload Identity Federation (WIF) within modern DevOps and multi-cloud environments. Historically, integrating services across clouds (such as Azure, AWS, or GCP) required storing long-lived service principal keys or static credentials, which posed significant security risks including credential leakage and management overhead. To solve this, Federated Identity utilizes OpenID Connect (OIDC) to establish a trust relationship between an external identity provider and a cloud resource. Instead of using persistent secrets, a workload—such as a GitHub Action or an Azure DevOps pipeline—requests a short-lived, ephemeral token from its identity provider. This token is then exchanged for a temporary access token from the target cloud service, which automatically expires after the task is completed. This approach eliminates the need for manual secret rotation and significantly reduces the attack surface by ensuring no permanent credentials exist to be stolen. By leveraging Managed Identities and structured OIDC exchanges, organizations can achieve a "zero-trust" authentication model that scales across diverse cloud providers, providing a more secure, automated, and maintainable framework for cross-cloud resource management and CI/CD workflows.


Ten years later, has the GDPR fulfilled its purpose?

A decade after its adoption, the General Data Protection Regulation (GDPR) presents a bittersweet legacy, having fundamentally reshaped global corporate culture while facing significant modern hurdles. The regulation successfully elevated privacy from a legal footnote to a core management priority, institutionalizing principles like "privacy by design" and establishing a gold standard for international digital governance. However, experts highlight a growing disconnect between regulatory intent and practical application. While the GDPR empowered citizens with theoretical rights, the reality often manifests as "consent fatigue" through ubiquitous cookie pop-ups rather than providing meaningful control. Furthermore, the enforcement landscape reveals a stark gap; despite billions in issued fines, the actual collection rate remains remarkably low due to protracted legal appeals and the complexity of the "one-stop-shop" mechanism. International data transfers also remain a legal Achilles' heel, plagued by ongoing uncertainty across borders. The emergence of generative AI further complicates this framework, as massive training datasets and opaque algorithms challenge core tenets like data minimization and transparency. Additionally, the proliferation of overlapping EU regulations has created a "regulatory avalanche," making compliance increasingly difficult for smaller organizations. Ultimately, the article suggests that while the GDPR fulfilled its primary purpose, it now requires urgent refinement to remain relevant in a complex, AI-driven digital economy.


Bunkers, Mines, and Caverns: The World of Underground Data Centers

The article "Bunkers, Mines, and Caverns: The World of Underground Data Centers" by Nathan Eddy explores the growing strategic niche of subterranean infrastructure through the adaptive reuse of retired mines and Cold War-era bunkers. Predominantly found in North America and Northern Europe, these facilities offer a unique "underground advantage" centered on unparalleled physical security, environmental resilience, and inherent cooling efficiency. By repurposing sites like Iron Mountain’s Pennsylvania campus or Norway’s Lefdal Mine, operators benefit from a natural, impenetrable shield against extreme weather and external threats, making them ideal for high-security or mission-critical workloads. Furthermore, underground locations often bypass local "NIMBY" resistance because they are invisible to surrounding communities. However, the article notes that subterranean deployments present significant engineering and logistical hurdles. Managing humidity, ventilation, and heat dissipation requires complex systems, and retrofitting older structures can be costly. Site selection is also intricate, requiring rigorous assessments of structural stability and risks like water ingress or geological faults. Despite these challenges, underground data centers are no longer a novelty but a proven, permanent fixture in the industry. They are increasingly attractive in land-constrained hubs like Singapore and for highly regulated sectors, providing a sustainable and secure alternative to traditional above-ground facilities.


Why the future of software is no longer written — it is architected, governed and continuously learned

The article argues that software development is undergoing a fundamental structural shift, moving from manual coding to a paradigm defined by architecture, governance, and continuous learning. As generative AI and agentic systems take over the heavy lifting of building code, the role of the developer is evolving into that of an "intelligence orchestrator" who curates intent rather than writing lines of syntax. For CIOs, this transition represents a critical leadership inflection point where software is no longer just a business enabler but the primary engine for scaling enterprise intelligence. The focus is shifting from development speed to the strategic design of decision systems. This new era necessitates the rise of roles like the Chief AI Officer (CAIO) to govern AI as a strategic asset, ensuring security through zero-trust principles and navigating complex regulatory landscapes like the EU AI Act. While productivity gains are significant, organizations must proactively manage risks such as code hallucinations, model bias, and intellectual property concerns. Ultimately, the future of digital economies will be shaped by leaders who prioritize "intelligence orchestration" over traditional application building, fostering adaptive systems that learn and evolve. Success in 2026 requires a focus on three core mandates: architecting intelligence, governing AI assets, and aligning technology ecosystems with overarching corporate strategy.


Maximizing Impact Amid Constraints: The Role of Automation and Orchestration in Federal IT Modernization

Federal IT leaders currently face a challenging landscape where they must fortify complex digital environments against persistent threats while navigating significant fiscal uncertainty and budget constraints. According to a recent report, over sixty percent of these leaders struggle with monitoring tools across diverse hybrid environments, largely due to the persistence of legacy, multi-vendor systems that create integration gaps and increase operational costs. To overcome these hurdles, federal agencies must strategically embrace automation and orchestration as foundational components of a modern zero-trust architecture. By integrating AI-driven technologies for routine tasks like alert analysis and anomaly detection, IT teams can transition from a reactive posture to a proactive defense, effectively reducing monitoring complexity through single-pane-of-glass solutions. This methodical approach allows organizations to maximize the value of their existing investments while freeing up personnel for mission-critical initiatives. The success of such incremental improvements can be clearly measured through enhanced metrics like mean time to detection (MTTD) and mean time to resolution (MTTR). Ultimately, a disciplined, phased implementation of these technologies ensures that federal agencies maintain operational resilience and mission readiness. By focusing on strategic automation, IT leaders can deliver maximum impact for every budget dollar, ensuring that modernization efforts continue to advance despite the ongoing challenges of a resource-constrained environment.

Daily Tech Digest - January 06, 2026


Quote for the day:

"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera



Data 2026 outlook: The rise of semantic spheres of influence

While data started to garnering attention last year, AI and agents continued to suck up the oxygen. Why the urgency of agents? Maybe it’s “fear of missing out.” Or maybe there’s a more rational explanation. According to Amazon Web Services Inc. CEO Matt Garman, agents are the technology that will finally make AI investments pay off. Go to the 12-minute mark in his recent AWS re:Invent conference keynote, and you’ll hear him say just that. But are agents yet ready for prime time? ... And of course, no discussion of agentic interaction with databases is complete without mention of Model Context Protocol. The open-source MCP framework, which Anthropic PBC recently donated to the Linux Foundation, came out of nowhere over the past year to become the de facto standard for how AI models connect with data. ... There were early advances for extending governance to unstructured data, primarily documents. IBM watsonx.governance introduced a capability for curating unstructured data that transforms documents and enriches them by assigning classifications, data classes and business terms to prepare them for retrieval-augmented generation, or RAG. ... But for most organizations lacking deep skills or rigorous enterprise architecture practices, the starting points for defining semantics is going straight to the sources: enterprise applications and/or, alternatively, the newer breed of data catalogs that are branching out from their original missions of locating and/or providing the points of enforcement for data governance. In most organizations, the solution is not going to be either-or.


Engineering Speed at Scale — Architectural Lessons from Sub-100-ms APIs

Speed shapes perception long before it shapes metrics. Users don’t measure latency with stopwatches - they feel it. The difference between a 120 ms checkout step and an 80 ms one is invisible to the naked eye, yet emotionally it becomes the difference between "smooth" and "slightly annoying". ... In high-throughput platforms, latency amplifies. If a service adds 30 ms in normal conditions, it might add 60 ms during peak load, then 120 ms when a downstream dependency wobbles. Latency doesn’t degrade gracefully; it compounds. ... A helpful way to see this is through a "latency budget". Instead of thinking about performance as a single number - say, "API must respond in under 100 ms" - modern teams break it down across the entire request path: 10 ms at the edge; 5 ms for routing; 30 ms for application logic; 40 ms for data access; and 10–15 ms for network hops and jitter. Each layer is allocated a slice of the total budget. This transforms latency from an abstract target into a concrete architectural constraint. Suddenly, trade-offs become clearer: "If we add feature X in the service layer, what do we remove or optimize so we don’t blow the budget?" These conversations - technical, cultural, and organizational - are where fast systems are born. ... Engineering for low latency is really engineering for predictability. Fast systems aren’t built through micro-optimizations - they’re built through a series of deliberate, layered decisions that minimize uncertainty and keep tail latency under control.


Everything you need to know about FLOPs

A FLOP is a single floating‑point operation, meaning one arithmetic calculation (add, subtract, multiply, or divide) on numbers that have decimals. Compute benchmarking is done in floating point/fractional rather than integer/whole numbers because floating point is far more accurate of a measure than integers. A prefix is added to FLOPs to measure how many are performed in a second, starting with mega- (millions) the giga- (billions), tera- (trillions), peta- (quadrillions), and now exaFLOPs (quintillions). ... Floating point in computing starts at FP4, or 4 bits of floating point, and doubles all the way to FP64. There is a theoretical FP128, but it is never used as a measure. FP64 is also referred to as double-precision floating-point format, a 64-bit standard under IEEE 754 for representing real numbers with high accuracy. ... With petaFLOPS and exaFLOPs becoming a marketing term, some hardware vendors have been less than scrupulous in disclosing what level of floating-point operation their benchmarks use. It’s not it’s not uncommon for a company to promote exascale performance and then saying the little fine print that they’re talking about FP8, according to Snell. “It used to be if someone said exaFLOP, you could be pretty confident that they meant exaFLOP according to 64-bit scientific computing, but not anymore, especially in the field of AI, you need to look at what’s going behind that FLOP,” said Snell.


From SBOM to AI BOM: Rethinking supply chain security for AI native software

An effective AI BOM is not a static document generated at release time. It is a lifecycle artifact that evolves alongside the system. At ingestion, it records dataset sources, classifications, licensing constraints, and approval status. During training or fine-tuning, it captures model lineage, parameter changes, evaluation results, and known limitations. At deployment, it documents inference endpoints, identity and access controls, monitoring hooks, and downstream integrations. Over time, it reflects retraining events, drift signals, and retirement decisions. Crucially, each element is tied to ownership. Someone approved the data. Someone selected the base model. Someone accepted the residual risk. This mirrors how mature organizations already think about code and infrastructure, but extends that discipline to AI components that have historically been treated as experimental or opaque. To move from theory to practice, I encourage teams to treat the AI BOM as a “Digital Bill of Lading, a chain-of-custody record that travels with the artifact and proves what it is, where it came from, and who approved it. The most resilient operations cryptographically sign every model checkpoint and the hash of every dataset. By enforcing this chain of custody, they’ve transitioned from forensic guessing to surgical precision. When a researcher identifies a bias or security flaw in a specific open-source dataset, an organization with a mature AI BOM can instantly identify every downstream product affected by that “raw material” and act within hours, not weeks.


Beyond the Firehose: Operationalizing Threat Intelligence for Effective SecOps

Effective operationalization doesn't happen by accident. It requires a structured approach that aligns intelligence gathering with business risks. A framework for operationalizing threat intelligence structures the process from raw data to actionable defence, involving key stages like collection, processing, analysis, and dissemination, often using models like MITRE ATT&CK and Cyber Kill Chain. It transforms generic threat info into relevant insights for your organization by enriching alerts, automating workflows (via SOAR), enabling proactive threat hunting, and integrating intelligence into tools like SIEM/EDR to improve incident response and build a more proactive security posture. ... As intel maturity develops, the framework continuously incorporates feedback mechanisms to refine and adapt to the evolving threat environment. Cross-departmental collaboration is vital, enabling effective information sharing and coordinated response capabilities. The framework also emphasizes contextual integration, allowing organizations to prioritize threats based on their specific impact potential and relevance to critical assets. This ultimately drives more informed security decisions. ... Operationalization should be regarded as an ongoing process rather than a linear progression. If intelligence feeds result in an excessive number of false positives that overwhelm Tier 1 analysts, this indicates a failure in operationalization. It is imperative to institute a formal feedback mechanism from the Security Operations Center to the Intelligence team.


Compliance vs. Creativity: Why Security Needs Both Rule Books and Rebels

One of the most common tensions in the SOC arises from mismatched expectations. Compliance officers focus on control documentation when security teams are focusing on operational signals. For example, a policy may require multi-factor authentication (MFA), but if the system doesn’t generate alerts on MFA fatigue or unusual login patterns, attackers can slip past controls without detection. It’s important to also remember that just because something’s written in a policy doesn’t mean it’s being protected. A control isn’t a detection. It only matters if it shows up in the data. Security teams need to make sure that every big control, like MFA, logging, or encryption, has a signal that tells them when it’s being misused, misconfigured, or ignored. ... In a modern SOC, competing priorities are expected. Analysts want manageable alert volumes, red teams want room to experiment, and managers need to show compliance is covered. And at the top, CISOs need metrics that make sense to the board. However, high-performing teams aren’t the ones that ignore these differences. They, again, focus on alignment. ... The most effective security programs don’t rely solely on rigid policy or unrestricted innovation. They recognize that compliance offers the framework for repeatable success, while creativity uncovers gaps and adapts to evolving threats. When organizations enable both, they move beyond checklist security. 


AI governance through controlled autonomy and guarded freedom

Controlled autonomy in AI governance refers to granting AI systems and their development teams a defined level of independence within clear, pre-established boundaries. The organization sets specific guidelines, standards and checkpoints, allowing AI initiatives to progress without micromanagement but still within a tightly regulated framework. The autonomy is “controlled” in the sense that all activities are subject to oversight, periodic review and strict adherence to organizational policies. ... In practice, controlled autonomy might involve delegated decision-making authority to AI project teams, but with mandatory compliance to risk assessment protocols, ethical guidelines and regulatory requirements. For example, an organization may allow its AI team to choose algorithms and data sources, but require regular reports and audits to ensure transparency and accountability. Automated systems may operate independently, yet their outputs are monitored for biases, errors or security vulnerabilities. ... Deciding between controlled autonomy and guarded freedom in AI governance largely depends on the nature of the enterprise, its industry and the specific risks involved. Controlled autonomy is best suited for sectors where regulatory compliance and risk mitigation are paramount, such as banking, healthcare or government services. ... Both controlled autonomy and guarded freedom offer valuable frameworks for AI governance, each with distinct strengths and potential drawbacks. 


The 20% that drives 80%: Uncovering the secrets of organisational excellence

There are striking universalities in what truly drives impact. The first, which all three prioritise, is the belief that employee experience is inseparable from customer experience. Whether it is called EX = CX or framed differently, the sharp focus on making the workplace purposeful and engaging is foundational. Each business does this in a unique way, but the intent is the same: great employee experience leads to great customer experience. ... The second constant is an unwavering drive for business excellence. This is a nuanced but powerful 20% that shapes 80% of outcomes. Take McDonald’s, for instance: the consistency of quality and service, whether you are in Singapore, India, Japan or the US, is remarkable. Even as we localise, the core excellence remains unchanged. The same is true for Google, where the reliability of Search and breakthroughs in AI define the brand, and for PepsiCo, where high standards across foods and beverages define the brand.  ... The third—and perhaps most challenging—is connectedness. For giants of this scale, fostering deep connections across global, regional and country boundaries, and within and across teams, is crucial. It is about psychological safety, collaboration, and creating space for people to connect and recognise each other. This focus on connectedness enables the other two priorities to flourish. If organisations keep these three at the heart of their practice, they remain agile, resilient, and, as I like to put it, the giants keep dancing.


Turning plain language into firewall rules

A central feature of the design is an intermediate representation that captures firewall policy intent in a vendor agnostic format. This representation resembles a normalized rule record that includes the five tuple plus additional metadata such as direction, logging, and scheduling. This layer separates intent from device syntax. Security teams can review the intermediate representation directly, since it reflects the policy request in structured form. Each field remains explicit and machine checkable. After the intermediate representation is built, the rest of the pipeline operates through deterministic logic. The current prototype includes a compiler that translates the representation into Palo Alto PAN OS command line configuration. The design supports additional firewall platforms through separate back end modules. ... A vendor specific linter applies rules tied to the target firewall platform. In the prototype, this includes checks related to PAN OS constraints, zone usage, and service definitions. These checks surface warnings that operators can review. A separate safety gate enforces high level security constraints. This component evaluates whether a policy meets baseline expectations such as defined sources, destinations, zones, and protocols. Policies that fail these checks stop at this stage. After compilation, the system runs the generated configuration through a Batfish based simulator. The simulator validates syntax and object references against a synthetic device model. Results appear as warnings and errors for inspection.


Why cybersecurity needs to focus more on investigation and less on just detection and response

The real issue? Many of today’s most dangerous threats are the ones that don’t show up easily on detection radars. Think about the advanced persistent threats (APTs) that remain hidden for months or the zero-day attacks that exploit vulnerabilities no one even knew existed. These threats may slip right past the detection systems because they don’t act in obvious ways. That’s why, in these cases, detection alone isn’t enough. It’s just the first step. ... Think of investigation as the part where you understand the full story. It’s like detective work: not just looking at the footprints, but figuring out where they came from, who’s leaving them, and why they’re trying to break in in the first place. You can’t stop a cyberattack with detection alone if you don’t understand what caused it or how it worked. And if you don’t know the cause, you can’t appropriately respond to the detected threat. ... The cost of neglecting investigation goes beyond just missing a threat. It’s about missed opportunities for learning and growth. Every attack offers a lesson. By investigating the full scope of a breach, you gain insights that not only help in responding to that incident but also prepare you to defend against future ones. It’s about building resilience, not just reaction. Think about it: If you never investigate an incident thoroughly, you’re essentially ignoring the underlying risk that allowed the threat to flourish. You might fix the hole that was exploited, but you won’t have a clear understanding of why it was there in the first place. 

Daily Tech Digest - November 30, 2025


Quote for the day:

"The real leader has no need to lead - he is content to point the way." -- Henry Miller



Four important lessons about context engineering

Modern LLMs operate with context windows ranging from 8K to 200K+ tokens, with some models claiming even larger windows. However, several technical realities shape how we should think about context. ... Research has consistently shown that LLMs experience attention degradation in the middle portions of long contexts. Models perform best with information placed at the beginning or end of the context window. This isn’t a bug. It’s an artifact of how transformer architectures process sequences. ... Context length impacts latency and cost quadratically in many architectures. A 100K token context doesn’t cost 10x a 10K context, it can cost 100x in compute terms, even if providers don’t pass all costs to users. ... The most important insight: more context isn’t better context. In production systems, we’ve seen dramatic improvements by reducing context size and increasing relevance. ... LLMs respond better to structured context than unstructured dumps. XML tags, markdown headers, and clear delimiters help models parse and attend to the right information. ... Organize context by importance and relevance, not chronologically or alphabetically. Place critical information early and late in the context window. ... Each LLM call is stateless. This isn’t a limitation to overcome, but an architectural choice to embrace. Rather than trying to maintain massive conversation histories, implement smart context management


What Fuels AI Code Risks and How DevSecOps Can Secure Pipelines

AI-generated code refers to code snippets or entire functions produced by Machine Learning models trained on vast datasets. While these models can enhance developer productivity by providing quick solutions, they often lack the nuanced understanding of security implications inherent in manual coding practices. ... Establishing secure pipelines is the backbone of any resilient development strategy. When code flows rapidly from development to production, every step becomes a potential entry point for vulnerabilities. Without careful controls, even well-intentioned automation can allow flawed or insecure code to slip through, creating risks that may only surface once the application is live. A secure pipeline ensures that every commit, every integration, and every deployment undergo consistent security scrutiny, reducing the likelihood of breaches and protecting both organizational assets and user trust. Security in the pipeline begins at the earliest stages of development. By embedding continuous testing, teams can identify vulnerabilities before they propagate, identifying issues that traditional post-development checks often miss. This proactive approach allows security to move in tandem with development rather than trailing behind it, ensuring that speed does not come at the expense of safety. 


The New Role of Enterprise Architecture in the AI Era

Traditional architecture assumes predictability in which once the code has shipped, systems behave in a standard way. On the contrary, AI breaks that assumption completely, given that the machine learning models continuously change as data evolves and model performance keeps fluctuating as every new dataset gets added. ... Architecture isn’t just a phase in the AI era; rather it’s a continuous cycle that must operate across various interconnected stages that follow well-defined phases. This process starts with discovery, where the teams assess and identify AI opportunities that are directly linked to the business objectives. Engage early with business leadership to define clear outcomes. Next comes design, where architects create modular blueprints for data pipelines and model deployment by reusing the proven patterns. In the delivery phase, teams execute iteratively with governance built in from the onset. Ethics, compliance and observability should be baked into the workflows, not added later as afterthoughts. Finally, adaptation keeps the system learning. Models are monitored, retrained and optimized continuously, with feedback loops connecting system behavior back to business metrics and KPIs (key performance indicators). When architecture operates this way, it becomes a living ecosystem that learns, adapts and improves with every iteration.


Quenching Data Center Thirst for Power Now Is Solvable Problem

“Slowing data center growth or prohibiting grid connection is a short-sighted approach that embraces a scarcity mentality,” argued Wannie Park, CEO and founder of Pado AI, an energy management and AI orchestration company, in Malibu, Calif. “The explosive growth of AI and digital infrastructure is a massive engine for economic, scientific, and industrial progress,” he told TechNewsWorld. “The focus should not be on stifling this essential innovation, but on making data centers active, supportive participants in the energy ecosystem.” ... Planning for the full lifecycle of a data center’s power needs — from construction through long-term operations — is essential, he continued. This approach includes having solutions in place that can keep facilities operational during periods of limited grid availability, major weather events, or unexpected demand pressures, he said. ... The ITIF report also called for the United States to squeeze more power from the existing grid without negatively impacting customers, while also building new capacity. New technology can increase supply from existing transmission lines and generators, the report explained, which can bridge the transition to an expanded physical grid. On the demand side, it added, there is spare capacity, but not at peak times. It suggested that large users, such as data centers, be encouraged to shift their demand to off-peak periods, without damaging their customers. Grids do some of that already, it noted, but much more is needed.


A Waste(d) Opportunity: How can the UK utilize data center waste heat?

Walking into the data hall, you are struck by the heat resonating from the numerous server racks, each capable of handling up to 20kW of compute. However, rather than allowing this heat to dissipate into the atmosphere, the team at QMUL had another plan. Instead, in partnership with Schneider Electric, the university deployed a novel heat reuse system. ... Large water cylinders across campus act like thermal batteries, storing hot water overnight when compute needs are constant, but demand is low, then releasing it in the morning rush. As one project lead put it, there is “no mechanical rejection. All the heat we generate here is used. The gas boilers are off or dialed down - the computing heat takes over completely.” At full capacity, the data center could supply the equivalent of nearly 4 million ten-minute showers per year. ... Walking out, it’s easy to see why Queen Mary’s project is being held up as a model for others. In the UK, however, the project is somewhat of an oddity, but through the lens of QMUL you can see a glimpse of the future, where compute is not only solving the mysteries of our universe but heating our morning showers. The question remains, though, why data center waste heat utilization projects in the UK are few and far between, and how the country can catch up to regions such as the Nordics, which has embedded waste heat utilization into the planning and construction of its data center sector.


Redefining cyber-resilience for a new era

The biggest vulnerability is still the human factor, not the technology. Many companies invest in expensive tools but overlook the behaviour and mindset of their teams. In regions experiencing rapid digital growth, that gap becomes even more visible. Phishing, credential theft and shadow IT remain common ways attackers gain access. What’s needed is a shift in culture. Cybersecurity should be seen as a shared responsibility, embedded in daily routines, not as a one-time technical solution. True resilience begins with awareness, leadership and clarity at all levels of the organisation. ... Leaders play a crucial role in shaping that future. They need to understand that cybersecurity is not about fear, but about clarity and long-term thinking. It is part of strategic leadership. The leaders who make the biggest impact will be the ones who see cybersecurity as cultural, not just technical. They will prioritise transparency, invest in ethical and explainable technology, and build teams that carry these values forward. ... Artificial Intelligence is already transforming how we detect and respond to threats, but the more important shift is about ownership. Who controls the infrastructure, the models and the data? Centralised AI, controlled by a few major companies, creates dependence and limits transparency. It becomes harder to know what drives decisions, how data is used and where vulnerabilities might exist.


Building Your Geopolitical Firewall Before You Need One

In today’s world, where regulators are rolling out data sovereignty and localization initiatives that turn every cross-border workflow into a compliance nightmare, this is no theoretical exercise. Service disruption has shifted from possibility to inevitability, and geopolitical moves can shut down operations overnight. For storage engineers and data infrastructure leaders, the challenge goes beyond mere compliance – it’s about building genuine operational independence before circumstances force your hand. ... The reality is messier than any compliance framework suggests. Data sprawls everywhere, from edge, cloud and core to laptops and mobile devices. Building walls around everything does not offer true operational independence. Instead, it’s really about having the data infrastructure flexibility to move workloads when regulations shift, when geopolitical tensions escalate, or when a foreign government’s legislative reach suddenly extends into your data center. ... When evaluating sovereign solutions, storage engineers typically focus on SLAs and certifications. However, Oostveen argues that the critical question is simpler and more fundamental: who actually owns the solution or the service provider? “If you’re truly sovereign, my view is that you (the solution provider) are a company that is owned and operated exclusively within the borders of that particular jurisdiction,” he explains.


The 5 elements of a good cybersecurity risk assessment

Companies can use a cybersecurity risk assessment to evaluate how effective their security measures are. This provides a foundation for deciding which security measures are important — and which are not. But also for deciding when a product or system is secure enough and additional measures would be excessive. When they’ve done enough cybersecurity. However, not every risk assessment fulfills this promise. ... Too often, cybersecurity risk assessments take place solely in cyberspace — but this doesn’t allow meaningful prioritizing of requirements. “Server down” is annoying, but cyber systems never exist for their own sake. That’s why risk assessments need a connection to real processes that are mission critical for the organization — or perhaps not. ... Without system understanding, there is no basis for attack modeling. Without attack modeling, there is no basis for identifying the most important requirements. It shouldn’t really be cybersecurity’s job to create system understanding. But since there is often a lack of documentation in IT, OT, or for cyber systems in general, cybersecurity is often left to provide it. And if cybersecurity is the first team to finally create an overview of all cyber systems, then it’s a result that is useful far beyond security risk assessment. ... Attack scenarios are a necessary stepping stone to move your thinking from systems and real-world impacts to meaningful security requirements — no more and no less. 


Finding Strength in Code, Part 2: Lessons from Loss and the Power of Reflection

Every problem usually has more than one solution. The engineers who grow the fastest are the ones who can look at their own mistakes without ego, list what they’re good at and what they're not, and then actually see multiple ways forward. Same with life. A loss (a pet, a breakup, whatever) is a bug that breaks your personal system. ... Solo debugging has limits. On sprawling systems, we rally the squad—frontend, backend, QA—to converge faster. Similarly, grief isn't meant for isolation. I've leaned on my network: a quick Slack thread with empathetic colleagues or a vulnerability share in my dev community. It distributes the load and uncovers blind spots you might miss on your own. ... Once a problem is solved, it is essential to communicate the solution. The list of lessons from that solution: some companies solve problems, but never put the effort into documenting the process in a way that prevents them from happening again. I know it is impossible to avoid problems, as it is impossible not to make mistakes in our lives. The true inefficiency? Skipping the "why" and "how next time." ... Borrowed from incident response, it's a structured debrief that prevents recurrence without finger-pointing. In engineering, it ensures resilience; in life, it builds emotional antifragility. There are endless flavours of postmortems—simple Markdown outlines to full-blown docs—but the gold standard is "blameless," focusing on systems over scapegoats.


Cyber resilience is a business imperative: skills and strategy must evolve

Cyber upskilling must be built into daily work for both technical and non-technical employees. It’s not a one-off training exercise; it’s part of how people perform their roles confidently and securely. For technical teams, staying current on certifications and practicing hands-on defense is essential. Labs and sandboxes that simulate real-world attacks give them the experience needed to respond effectively when incidents happen. For everyone else, the focus should be on clarity and relevance. Employees need to understand exactly what’s expected of them; how their individual decisions contribute to the organization's resilience. Role-specific training makes this real: finance teams need to recognize invoice fraud attempts; HR should know how to handle sensitive data securely; customer service needs to spot social engineering in live interactions. ... Resilience should now sit alongside financial performance and sustainability as a core board KPI. That means directors receiving regular updates not only on threat trends and audit findings, but also on recovery readiness, incident transparency, and the cultural maturity of the organization's response. Re-engaging boards on this agenda isn’t about assigning blame—it’s about enabling smarter oversight. When leaders understand how resilience protects trust, continuity, and brand, cybersecurity stops being a technical issue and becomes what it truly is: a measure of business strength.

Daily Tech Digest - August 08, 2025


Quote for the day:

“Every adversity, every failure, every heartache carries with it the seed of an equal or greater benefit.” -- Napoleon Hill


Major Enterprise AI Assistants Can Be Abused for Data Theft, Manipulation

In the case of Copilot Studio agents that engage with the internet — over 3,000 instances have been found — the researchers showed how an agent could be hijacked to exfiltrate information that is available to it. Copilot Studio is used by some organizations for customer service, and Zenity showed how it can be abused to obtain a company’s entire CRM. When Cursor is integrated with Jira MCP, an attacker can create malicious Jira tickets that instruct the AI agent to harvest credentials and send them to the attacker. This is dangerous in the case of email systems that automatically open Jira tickets — hundreds of such instances have been found by Zenity. In a demonstration targeting Salesforce’s Einstein, the attacker can target instances with case-to-case automations — again hundreds of instances have been found. The threat actor can create malicious cases on the targeted Salesforce instance that hijack Einstein when they are processed by it. The researchers showed how an attacker could update the email addresses for all cases, effectively rerouting customer communication through a server they control. In a Gemini attack demo, the experts showed how prompt injection can be leveraged to get the gen-AI tool to display incorrect information. 


Who’s Leading Whom? The Evolving Relationship Between Business and Data Teams

As the data boom matured, organizations realized that clear business questions weren’t enough. If we wanted analytics to drive value, we had to build stronger technical teams, including data scientists and machine learning engineers. And we realized something else: we had spent years telling business leaders they needed a working knowledge of data science. Now we had to tell data scientists they needed a working knowledge of the business. This shift in emphasis was necessary, but it didn’t go perfectly. We had told the data teams to make their work useful, usable, and used, and they took that mandate seriously. But in the absence of clear guidance and shared norms, they filled in the gap in ways that didn’t always move the business forward. ... The foundation of any effective business-data partnership is a shared understanding of what actually counts as evidence. Without it, teams risk offering solutions that don’t stand up to scrutiny, don’t translate into action, or don’t move the business forward. A shared burden of proof makes sure that everyone is working from the same assumptions about what’s convincing and credible. This shared commitment is the foundation that allows the organization to decide with clarity and confidence. 


A new worst coder has entered the chat: vibe coding without code knowledge

A clear disconnect then stood out to me between the vibe coding of this app and the actual practiced work of coding. Because this app existed solely as an experiment for myself, the fact that it didn’t work so well and the code wasn’t great didn’t really matter. But vibe coding isn’t being touted as “a great use of AI if you’re just mucking about and don’t really care.” It’s supposed to be a tool for developer productivity, a bridge for nontechnical people into development, and someday a replacement for junior developers. That was the promise. And, sure, if I wanted to, I could probably take the feedback from my software engineer pals and plug it into Bolt. One of my friends recommended adding “descriptive class names” to help with the readability, and it took almost no time for Bolt to update the code.  ... The mess of my code would be a problem in any of those situations. Even though I made something that worked, did it really? Had this been a real work project, a developer would have had to come in after the fact to clean up everything I had made, lest future developers be lost in the mayhem of my creation. This is called the “productivity tax,” the biggest frustration that developers have with AI tools, because they spit out code that is almost—but not quite—right.


From WAF to WAAP: The Evolution of Application Protection in the API Era

The most dangerous attacks often use perfectly valid API calls arranged in unexpected sequences or volumes. API attacks don't break the rules. Instead, they abuse legitimate functionality by understanding the business logic better than the developers who built it. Advanced attacks differ from traditional web threats. For example, an SQL injection attempt looks syntactically different from legitimate input, making it detectable through pattern matching. However, an API attack might consist of perfectly valid requests that individually pass all schema validation tests, with the malicious intent emerging only from their sequence, timing, or cross-endpoint correlation patterns. ... The strategic value of WAAP goes well beyond just keeping attackers out. It's becoming a key enabler for faster, more confident API development cycles. Think about how your API security works today — you build an endpoint, then security teams manually review it, continuous penetration testing (link is external) breaks it, you fix it, and around and around you go. This approach inevitably creates friction between velocity and security. Through continuous visibility and protection, WAAP allows development teams to focus on building features rather than manually hardening each API endpoint. Hence, you can shift the traditional security bottleneck into a security enablement model. 


Scrutinizing LLM Reasoning Models

Assessing CoT quality is an important step towards improving reasoning model outcomes. Other efforts attempt to grasp the core cause of reasoning hallucination. One theory suggests the problem starts with how reasoning models are trained. Among other training techniques, LLMs go through multiple rounds of reinforcement learning (RL), a form of machine learning that teaches the difference between desirable and undesirable behavior through a point-based reward system. During the RL process, LLMs learn to accumulate as many positive points as possible, with “good” behavior yielding positive points and “bad” behavior yielding negative points. While RL is used on non-reasoning LLMs, a large amount of it seems to be necessary to incentivize LLMs to produce CoT, which means that reasoning models generally receive more of it. ... If optimizing for CoT length leads to confused reasoning or inaccurate answers, it might be better to incentivize models to produce shorter CoT. This is the intuition that inspired researchers at Wand AI to see what would happen if they used RL to encourage conciseness and directness rather than verbosity. Across multiple experiments conducted in early 2025, Wand AI’s team discovered a “natural correlation” between CoT brevity and answer accuracy, challenging the widely held notion that the additional time and compute required to create long CoT leads to better reasoning outcomes.


4 regions you didn't know already had age verification laws – and how they're enforced

Australia’s 2021 Online Safety Act was less focused on restricting access to adult content than it was on tackling issues of cyberbullying and online abuse of children, especially on social media platforms. The act introduced a legal framework to allow people to request the removal of hateful and abusive content,  ... Chinese law has required online service providers to implement a real-name registration system for over a decade. In 2012, the Decision on Strengthening Network Information Protection was passed, before being codified into law in 2016 as the Cybersecurity Law. The legislation requires online service providers to collect users’ real names, ID numbers, and other personal information. ... As with the other laws we’ve looked at, COPPA has its fair share of critics and opponents, and has been criticized as being both ineffective and unconstitutional by experts. Critics claim that it encourages users to lie about their age to access content, and allows websites to sidestep the need for parental consent. ... In 2025, the European Commission took the first steps towards creating an EU-wide strategy for age verification on websites when it released a prototype app for a potential age verification solution called a mini wallet, which is designed to be interoperable with the EU Digital Identity Wallet scheme.


The AI-enabled company of the future will need a whole new org chart

Let’s say you’ve designed a multi-agent team of AI products. Now you need to integrate them into your company by aligning them with your processes, values and policies. Of course, businesses onboard people all the time – but not usually 50 different roles at once. Clearly, the sheer scale of agentic AI presents its own challenges. Businesses will need to rely on a really tight onboarding process. The role of the agent onboarding lead creates the AI equivalent of an employee handbook: spelling out what agents are responsible for, how they escalate decisions, and where they must defer to humans. They’ll define trust thresholds, safe deployment criteria, and sandbox environments for gradual rollout. ... Organisational change rarely fails on capability – it fails on culture. The AI Culture & Collaboration Officer protects the human heartbeat of the company through a time of radical transition. As agents take on more responsibilities, human employees risk losing a sense of purpose, visibility, or control. The culture officer will continually check how everyone feels about the transition. This role ensures collaboration rituals evolve, morale stays intact, and trust is continually monitored — not just in the agents, but in the organisation’s direction of travel. It’s a future-facing HR function with teeth.


The Myth of Legacy Programming Languages: Age Doesn't Define Value

Instead of trying to define legacy languages based on one or two subjective criteria, a better approach is to consider the wide range of factors that may make a language count as legacy or not. ... Languages may be considered legacy when no one is still actively developing them — meaning the language standards cease receiving updates, often along with complementary resources like libraries and compilers. This seems reasonable because when a language ceases to be actively maintained, it may stop working with modern hardware platforms. ... Distinguishing between legacy and modern languages based on their popularity may also seem reasonable. After all, if few coders are still using a language, doesn't that make it legacy? Maybe, but there are a couple of complications to consider. One is that measuring the popularity of programming languages in a highly accurate way is impossible — so just because one authority deems a language to be unpopular doesn't necessarily mean developers hate it. The other challenge is that when a language becomes unpopular, it tends to mean that developers no longer prefer it for writing new applications. ... Programming languages sometimes end up in the "legacy" bin when they are associated with other forms of legacy technology — or when they lack associations with more "modern" technologies.


From Data Overload to Actionable Insights: Scaling Viewership Analytics with Semantic Intelligence

Semantic intelligence allows users to find reliable and accurate answers, irrespective of the terminology used in a query. They can interact freely with data and discover new insights by navigating massive databases, which previously required specialized IT involvement, in turn, reducing the workload of already overburdened IT teams. At its core, semantic intelligence lays the foundation for true self-serve analytics, allowing departments across an organization to confidently access information from a single source of truth. ... A semantic layer in this architecture lets you query data in a way that feels natural and enables you to get relevant and precise results. It bridges the gap between complex data structures and user-friendly access. This allows users to ask questions without any need to understand the underlying data intricacies. Standardized definitions and context across the sources streamlines analytics and accelerates insights using any BI tool of choice. ... One of the core functions of semantic intelligence is to standardize definitions and provide a single source of truth. This improves overall data governance with role-based access controls and robust security at all levels. In addition, row- and column-level security at both user and group levels can ensure that access to specific rows is restricted for specific users. 


Why VAPT is now essential for small & medium business security

One misconception, often held by smaller companies, is that they are less likely to be targeted. Industry experts disagree. "You might think, 'Well, we're a small company. Who'd want to hack us?' But here's the hard truth: Cybercriminals love easy targets, and small to medium businesses often have the weakest defences," states a representative from Borderless CS. VAPT combines two different strategies to identify vulnerabilities and potential entry points before malicious actors do. A Vulnerability Assessment scans servers, software, and applications for known problems in a manner similar to a security walkthrough of a physical building. Penetration Testing (often shortened to pen testing) simulates real attacks, enabling businesses to understand how a determined attacker might breach their systems. ... Borderless CS maintains that VAPT is applicable across sectors. "Retail businesses store customer data and payment info. Healthcare providers hold sensitive patient information. Service companies often rely on cloud tools and email systems that are vulnerable. Even a small eCommerce store can be a jackpot for the wrong person. Cyber attackers don't discriminate. In fact, they often prefer smaller businesses because they assume you haven't taken strong security measures. Let's not give them that satisfaction."

Daily Tech Digest - July 21, 2025


Quote for the day:

"Absolute identity with one's cause is the first and great condition of successful leadership." -- Woodrow Wilson


Is AI here to take or redefine your cybersecurity role?

Unlike Thibodeaux, Watson believes the level-one SOC analyst role “is going to be eradicated” by AI eventually. But he agrees with Thibodeaux that AI will move the table stakes forward on the skills needed to land a starter job in cyber. “The thing that will be cannibalized first is the sort of entry-level basic repeatable tasks, the things that people traditionally might have cut their teeth on in order to sort of progress to the next level. Therefore, the skill requirement to get a role in cybersecurity will be higher than what it has been traditionally,” says Watson. To help cyber professionals attain AI skills, CompTIA is developing a new certification program called SecAI. The course will target cyber people who already have three to four years of experience in a core cybersecurity job. The curriculum will include practical AI skills to proactively combat emerging cyber threats, integrating AI into security operations, defending against AI-driven attacks, and compliance for AI ethics and governance standards. ... As artificial intelligence takes over a rising number of technical cybersecurity tasks, Watson says one of the best ways security workers can boost their employment value is by sharpening their human skills like business literacy and communication: “The role is shifting to be one of partnering and advising because a lot of the technology is doing the monitoring, triaging, quarantining and so on.”


5 tips for building foundation models for AI

"We have to be mindful that, when it comes to training these models, we're doing it purposefully, because you can waste a lot of cycles on the exercise of learning," he said. "The execution of these models takes far less energy and resources than the actual training." OS usually feeds training data to its models in chunks. "Building up the label data takes quite a lot of time," he said. "You have to curate data across the country with a wide variety of classes that you're trying to learn from, so a different mix between urban and rural, and more." The organisation first builds a small model that uses several hundred examples. This approach helps to constrain costs and ensures OS is headed in the right direction. "Then we slowly build up that labelled set," Jethwa said. "I think we're now into the hundreds of thousands of labelled examples. Typically, these models are trained with millions of labelled datasets." While the organization's models are smaller, the results are impressive. "We're already outperforming the existing models that are out there from the large providers because those models are trained on a wider variety of images," he said. "The models might solve a wider variety of problems, but, for our specific domain, we outperform those models, even at a smaller scale."


Reduce, re-use, be frugal with AI and data

By being more selective with the data included in language models, businesses can better control their carbon emissions, limiting energy to be spent on the most important resources. In healthcare, for example, separating the most up-to-date medical information and guidance from the rest of the information on that topic will mean safer, more reliable and faster responses to patient treatment. ... Frugal AI means adopting an intelligent approach to data that focuses on using the most valuable information only. When businesses have a greater understanding of their data, how to label it, identify it and which teams are responsible for its deletion, then the storage of single use data can be significantly reduced. Only then can frugal AI systems be put in place, allowing businesses to adopt a resource aware and efficient approach to both their data consumption and AI usage. It’s important to stress here though that frugal AI doesn’t mean that the end results are lesser or of a reduced impact of technology, it means that the data that goes into AI is concentrated, smaller but just as impactful. Think of it like making a drink with extra concentrated squash. Frugal AI is that extra concentrate squash that puts data efficiency, consideration and strategy at the centre of an organisation’s AI ambitions.


Cyber turbulence ahead as airlines strap in for a security crisis

Although organizations have acknowledged the need to boost spending, progress remains to be made and new measures adopted. Legacy OT systems, which often lack security features such as automated patching and built-in encryption, should be addressed as a top priority. Although upgrading these systems can be costly, it is essential to prevent further disruptions and vulnerabilities. Mapping the aviation supply chain helps identify all key partners, which is important for conducting security audits and enforcing contractual cybersecurity requirements. This should be reinforced with multi-layered perimeter defenses, including encryption, firewalls, and intrusion detection systems, alongside zero-trust network segmentation to minimize the risk of attackers moving laterally within networks. Companies should implement real-time threat monitoring and response by deploying intrusion detection systems, centralizing analysis with SIEM, and maintaining a regularly tested incident response plan to identify, contain, and mitigate cyberattacks. ... One of the most important steps is to train all staff, including pilots and ground crews, to recognize scams. Since recent security breaches have mostly relied on social engineering tactics, this type of training is essential. A single phone call or a convincing email can be enough to trigger a data breach. 


What Does It Mean to Be Data-Driven?

A data-driven organization understands the value of its data and the best ways to capitalize on that value. Its data assets are aligned with its goals and the processes in place to achieve those goals. Protecting the company’s data assets requires incorporating governance practices to ensure managers and employees abide by privacy, security, and integrity guidelines. In addition to proper data governance, the challenges to implementing a data-driven infrastructure for business processes are data quality and integrity, data integration, talent acquisition, and change management. ... To ensure the success of their increasingly critical data initiatives, organizations look to the characteristics that led to effective adoption of data-driven programs at other companies. Management services firm KPMG identifies four key characteristics of successful data-driven initiatives: leadership involvement, investments in digital literacy, seamless access to data assets, and promotion and monitoring. ... While data-as-a-service (DaaS) emphasizes the sale of external data, data as a product (DaaP) considers all of a company’s data and the mechanisms in place for moving and storing the data as a product that internal operations rely on. The data team becomes a “vendor” serving “customers” throughout the organization.


AI Needs a Firewall and Cloud Needs a Rethink

Hyperscalers dominate most of enterprise IT today, and few are willing to challenge the status quo of cloud economics, artificial intelligence infrastructure and cybersecurity architectures. But Tom Leighton, co-founder and CEO of Akamai, does just that. He argues that the cloud has become bloated, expensive and overly centralized. The internet needs a new kind of infrastructure that is distributed, secure by design and optimized for performance at the edge, Leighton told Information Security Media Group. From edge-native AI inference and API security to the world's first firewall for artificial intelligence, Akamai is no longer just delivering content - it's redesigning the future. ... Among the most notable developments Leighton discussed was a new product category: an AI firewall. "People are training models on sensitive data and then exposing them to the public. That creates a new attack surface," Leighton said. "AI hallucinates. You never know what it's going to do. And the bad guys have figured out how to trick models into leaking data or doing bad things." Akamai's AI firewall monitors prompts and responses to prevent malicious prompts from manipulating the model and to avoid leaking sensitive data. "It can be implemented on-premises, in the cloud or within Akamai's platform, providing flexibility based on customer preference. 


Human and machine: Rediscovering our humanity in the age of AI

In an era defined by the rapid advancement of AI, machines are increasingly capable of tasks once considered uniquely human. ... Ethical decision-making, relationship building and empathy have been identified as the most valuable, both in our present reality and in the AI-driven future. ... As we navigate this era of AI, we must remember that technology is a tool, not a replacement for humanity. By embracing our capacity for creativity, connection and empathy, we can ensure that AI serves to enhance our humanity, not diminish it. This means accepting that preserving our humanness sometimes requires assistance. It means investing in education and training that fosters critical thinking, problem-solving and emotional intelligence. It means creating workplaces that value human connection and collaboration, where employees feel supported and empowered to bring their whole selves to work. And it means fostering a culture that celebrates creativity, innovation and the pursuit of knowledge. At a time when seven out of every ten companies are already using AI in at least one business function, let us embrace the challenge of this new era with both optimism and intentionality. Let us use AI to build a better future for ourselves and for generations to come – a future where technology serves humanity, and where every individual has the opportunity to thrive.


‘Interoperable but not identical’: applying ID standards across diverse communities

Exchanging knowledge and experiences with identity systems to improve future ID projects is central to the concept of ID4Africa’s mission. At this year’s ID4Africa AGM in Addis Ababa, Ethiopia, a tension was more evident than ever before between the quest for transferable insights and replicable successes and the uniqueness of each African nation. Thales Cybersecurity and Digital Identity Field Marketing Director for the Middle East and Africa Jean Lindner wrote in an emailed response to questions from Biometric Update following the event that the mix of attendees reflected that “every African country has its own diverse history or development maturity and therefore unique legacy identity systems, with different constraints. Let us recognize here there is no unique quick-fix to country-specific hurdles,” he says. The lessons of one country can only benefit another to the extent that common ground is identified. The development of the concept of digital public infrastructure has mapped out some common ground, but standards and collaborative organizations have a major role to play. Unfortunately, Stéphanie de Labriolle, executive director services at the Secure Identity Alliance says “the widespread lack of clarity around standards and what compliance truly entails” was striking at this year’s ID4Africa AGM.


The Race to Shut Hackers out of IoT Networks

Considered among the weakest links in enterprise networks, IoT devices are used across industries to perform critical tasks at a rapid rate. An estimated 57% of deployed units "are susceptible to medium- or high-severity attacks," according to research from security vendor Palo Alto Networks. IoT units are inherently vulnerable to security attacks, and enterprises are typically responsible for protecting against threats. Additionally, the IoT industry hasn't settled on standardized security, as time to market is sometimes a priority over standards. ... 3GPP developed RedCap to provide a viable option for enterprises seeking a higher-performance, feature-rich 5G alternative to traditional IoT connectivity options such as low-power WANs (LPWANs). LPWANs are traditionally used to transmit limited data over low-speed cellular links at a low cost. In contrast, RedCap offers moderate bandwidth and enhanced features for more demanding use cases, such as video surveillance cameras, industrial control systems in manufacturing and smart building infrastructure. ... From a security standpoint, RedCap inherits strong capabilities in 5G, such as authentication, encryption and integrity protection. It can also be supplemented at application and device levels for a multilayered security approach.


Architecting the MVP in the Age of AI

A key aspect of architecting an MVP is forming and testing hypotheses about how the system will meet its QARs. Understanding and prioritizing these QARs is not an easy task, especially for teams without a lot of architecture experience. AI can help when teams provide context by describing the QARs that the system must satisfy in a prompt and asking the LLM to suggest related requirements. The LLM may suggest additional QARs that the team may have overlooked. For example, if performance, security, and usability are the top 3 QARs that a team is considering, an LLM may suggest looking at scalability and resilience as well. This can be especially helpful for people who are new to software architecture. ... Sometimes validating the AI’s results may require more skills than would be required to create the solution from scratch, just as is sometimes the case when seeing someone else’s code and realizing that it’s better than what you would have developed on your own. This can be an effective way to improve developers’ skills, provided that the code is good. AI can also help you find and fix bugs in your code that you may miss. Beyond simple code inspection, experimentation provides a means of validating the results produced by AI. In fact, experimentation is the only real way to validate it, as some researchers have discovered.