
Daily Tech Digest - December 10, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie



Design in the age of AI: How small businesses are building big brands faster

Instead of hiring separate agencies for naming, logo design, and web development, small businesses are turning to unified AI platforms that handle the full early-stage design stack. Tools like Design.com merge naming, logo creation, and website generation into a single workflow — turning an entrepreneur’s first sketch into a polished brand system within minutes. ... Behind the surge in AI design tools lies a broader ecosystem shift. Companies like Canva and Wix made design accessible; the current wave — led by AI-native platforms like Design.com — is more personal and adaptive. Unlike templated platforms, these tools understand context. A restaurant founder and a SaaS startup will get not just different visuals, but different copy tones, typography systems, and user flows — automatically. “What we’re seeing,” Lynch explains, “isn’t just growth in one product category. It’s a movement toward connected creativity — where every part of the brand experience learns from every other.” ... Imagine naming a company and watching an AI instantly generate a logo, color palette, and homepage layout that all reflect the same personality. As your audience grows, the same system helps you update your visual identity or tone to match new goals — while preserving your original DNA.


Henkel CISO on the messy truth of monitoring factories built across decades

On the factory floor, it is common to find a solitary engineering workstation that holds the only up-to-date copies of critical logic files, proprietary configuration tools, and project backups. If that specific computer suffers a hardware failure or is compromised by ransomware, the maintenance team loses the ability to diagnose errors or recover the production line. ... If the internet connection is severed, or if the third-party cloud provider suffers an outage, the equipment on the floor stops working. This architecture fails because it prioritizes connectivity over local autonomy, creating a fragile ecosystem where a disruption in a remote cloud environment creates a “digital brick” out of physical machinery. ... An attacker does not need sophisticated “zero-day” exploits to compromise a fifteen-year-old human-machine interface; they often just need publicly known vulnerabilities that will never be fixed by the vendor. By compromising a peripheral camera or an outdated visualization node, they gain a persistence mechanism that security teams rarely monitor, allowing them to map the operational technology network and prepare for a disruptive attack on the critical control systems at their leisure. ... A critical question for CISOs to ask is: “Can you provide a continuously updated Software Bill of Materials for your firmware, and what is your specific process for mitigating vulnerabilities in embedded third-party libraries?”


AI churn has IT rebuilding tech stacks every 90 days

Even without full production status, the fact that so many organizations are rebuilding components of their agent tech stacks every few months demonstrates not only the speed of change in the AI landscape but also a lack of faith in agentic results, Northcutt claims. Changes in the agent tech stack range from something as simple as updating the underlying AI model’s version, to moving from a closed-source to an open-source model or changing the database where agent data is stored, he notes. In many cases, replacing one component in the stack sets off a cascade of changes downstream, he adds. ... While the speed of AI evolution can drive frequent rebuilds, part of the problem lies in the way AI models are tweaked, she says. “The deeper issue is that many agent systems rely on behaviors that sit inside the model rather than on clear rules,” Hashem explains. “When the model updates, the behavior drifts. When teams set clear steps and checks for the agent, the stack can evolve without constant breakage.” ... “What works now may become suboptimal later on,” he says. “If organizations don’t actively keep up to date and refresh their stack, they risk falling behind in performance, security, and reliability.” Constant rebuilds don’t have to create chaos, however, Balabanskyy adds. CIOs should take a layered approach to their agent stacks, he recommends, with robust version control, continuous monitoring, and a modular deployment approach.
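To make Hashem's point about "clear steps and checks" concrete, here is a minimal, hypothetical sketch (not any particular framework's API; all names are illustrative) in which the agent's workflow and output checks live in versioned code outside the model, so swapping the underlying model, one of the frequent stack changes described above, does not silently change behavior:

```python
from typing import Protocol

class Model(Protocol):
    """Any closed- or open-source model can sit behind this interface."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in model used only to make this sketch runnable."""
    def complete(self, prompt: str) -> str:
        return f"SUMMARY: {prompt[:40]}"

def summarize_ticket(model: Model, ticket_text: str) -> str:
    # Step 1: explicit, versioned prompt construction (a "clear step").
    prompt = f"Summarize the following support ticket:\n{ticket_text}"
    draft = model.complete(prompt)
    # Step 2: explicit checks the output must pass, independent of the model.
    if not draft.startswith("SUMMARY:"):
        raise ValueError("agent output failed format check; flag for review")
    if len(draft) > 500:
        raise ValueError("agent output failed length check; flag for review")
    return draft

print(summarize_ticket(EchoModel(), "Customer cannot log in after password reset."))
```

Because the checks sit outside the model, drift introduced by a model upgrade is caught by the same tests rather than discovered in production, which is what lets the stack evolve "without constant breakage."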


Why Losing One Security Engineer Can Break Your Defences

When tools are hard to manage – or if you need to bundle numerous tools from different vendors together – tribal knowledge builds up in one engineer’s head. It’s unrealistic to expect them to document it. Gartner recently said that organizations use an average of 45 cybersecurity tools and called for security leaders to optimize their toolsets. And in that context, losing the one person who understands how these systems actually work is not just inconvenient: it's a structural risk. And the impact this has is seen in the data from the State of AI in Security & Development report; using numerous vendors for security tools correlates with more incidents, more time spent prioritising alerts and slower remediation. In short, a security engineer has too much on their plate, and most security tools aren’t making their job any easier. ... “Organisations tend to be all looking for the same blend of technical cloud, integration, SecOps, IAM experience but with extensive knowledge in each pillar,” says James Walsh, National Lead for Cyber, Data & Cloud UK&I at Hays. “Everyone wants the unicorn security engineer whose experience spans all of this, but it comes at too high a price for lots of organisations,” he adds. Walsh notes that hiring is often driven by teams below the CISO — such as Heads of SecOps — which can create inconsistent expectations of what a ‘fully competent’ engineer should look like.


Overload Protection: The Missing Pillar of Platform Engineering

Some limits exist to protect systems. Others enforce fairness between customers or align with contractual tiers. Regardless of the reason, these limits must be enforced predictably and transparently. ... In data-intensive environments, bottlenecks often appear in storage, compute, or queueing layers. One unbounded query or runaway job can starve others, impacting entire regions or tenants. Without a unified overload protection layer, every team becomes a potential failure domain. ... Enterprise customers often face challenges when quota systems evolve organically. Quotas are published inconsistently, counted incorrectly, or are not visible to the right teams. Both external customers and internal services need predictable limits. A centralized Quota Service solves this. It defines clear APIs for tracking and enforcing usage across tenants, resources, and time intervals.  ... When overload protection is not owned by the platform, teams reinvent it repeatedly. Each implementation behaves differently, often under pressure. The result is a fragile ecosystem where: Limits are enforced inconsistently, for example, some endpoints apply resource limits, while others run requests without enforcing any limits, leading to unpredictable behavior and downstream problems; Failures cascade unpredictably, for example, a runaway data pipeline job can saturate a shared database, delaying or failing unrelated jobs and triggering retries and alerts across teams
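As an illustration of the centralized Quota Service idea, here is a minimal in-memory sketch, with hypothetical names, of tracking and enforcing per-tenant, per-resource usage over a time window; a production service would back this with a shared store and expose it as an API rather than a local class:

```python
import time
from collections import defaultdict

class QuotaService:
    """Minimal sketch of a centralized quota check: usage is tracked per
    (tenant, resource) within a fixed time window, and requests that would
    exceed the configured limit are rejected."""

    def __init__(self, window_seconds: int = 60):
        self.window = window_seconds
        self.limits: dict[tuple[str, str], int] = {}            # (tenant, resource) -> max per window
        self.usage: dict[tuple[str, str], list[float]] = defaultdict(list)  # timestamps of granted requests

    def set_limit(self, tenant: str, resource: str, max_per_window: int) -> None:
        self.limits[(tenant, resource)] = max_per_window

    def try_consume(self, tenant: str, resource: str, amount: int = 1) -> bool:
        """Return True if the request fits within the quota, else False."""
        key = (tenant, resource)
        limit = self.limits.get(key)
        if limit is None:
            return True  # no limit configured for this tenant/resource
        now = time.monotonic()
        # Drop usage records that fall outside the current window.
        self.usage[key] = [t for t in self.usage[key] if now - t < self.window]
        if len(self.usage[key]) + amount > limit:
            return False  # over quota: caller should throttle or reject
        self.usage[key].extend([now] * amount)
        return True

# Example: tenant "acme" may run at most 100 queries per minute.
quota = QuotaService(window_seconds=60)
quota.set_limit("acme", "query", 100)
if not quota.try_consume("acme", "query"):
    raise RuntimeError("429: quota exceeded for tenant 'acme'")
```

The point of centralizing this logic is exactly the one the article makes: every endpoint and pipeline consults the same limits, so enforcement is consistent instead of being reinvented per team.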


Is your DR plan just wishful thinking? Prove your resilience with chaos engineering

At its core, it’s about building confidence in your system’s resilience. The process starts with understanding your system's steady state, which is its normal, measurable, and healthy output. You can't know the true impact of a failure without first defining what "good" looks like. This understanding allows you to form a clear, testable hypothesis: a statement of belief that your system's steady state will persist even when a specific, turbulent condition is introduced. To test this hypothesis, you then execute a controlled action, which is a precise and targeted failure injected into the system. This isn't random mischief; it's a specific simulation of real-world failures, such as consuming all CPU on a host (resource exhaustion), adding network latency (network failure), or terminating a virtual machine (state failure). While this action is running, automated probes act as your scientific instruments, continuously monitoring the system's state to measure the effect. ... Beyond simply proving system availability, chaos engineering builds trust in your reliability metrics, ensuring that you meet your SLOs even when services become unavailable. An SLO is a specific, acceptable target level of your service's performance measured over a specified period that reflects the user's experience. SLOs aren't just internal goals; they are the bedrock of customer trust and the foundation of your contractual service level agreements (SLAs).
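A minimal sketch of that loop, using a simulated latency probe in place of a real monitoring endpoint (all names, numbers, and thresholds here are illustrative), might look like this:

```python
import random
import statistics

# Hypothetical steady-state probe: a real experiment would query the
# service's latency or health endpoint; here we simulate response times.
def probe_latency_ms(injected_delay_ms: float = 0.0) -> float:
    return random.gauss(40.0, 5.0) + injected_delay_ms

def run_experiment(samples: int = 50, injected_delay_ms: float = 100.0,
                   slo_p95_ms: float = 200.0) -> bool:
    """Hypothesis: p95 latency stays within the SLO while the turbulent
    condition (added network latency) is active."""
    # 1. Measure the steady state before injecting the fault.
    baseline = sorted(probe_latency_ms() for _ in range(samples))
    # 2. Inject the controlled failure and keep probing.
    turbulent = sorted(probe_latency_ms(injected_delay_ms) for _ in range(samples))
    p95 = turbulent[int(0.95 * (samples - 1))]
    print(f"baseline median: {statistics.median(baseline):.1f} ms, "
          f"p95 under fault: {p95:.1f} ms (SLO: {slo_p95_ms} ms)")
    # 3. The hypothesis holds only if the SLO survives the fault.
    return p95 <= slo_p95_ms

if __name__ == "__main__":
    held = run_experiment()
    print("hypothesis held" if held else "hypothesis falsified: fix this before a real outage finds it")
```

The structure mirrors the article's process: define the steady state, state a testable hypothesis against the SLO, inject a precise fault, and let probes, not intuition, decide whether resilience actually held.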


The data center of the future: high voltage, liquid cooled, up to 4 MW per rack

Developments such as microfluidic cooling could have a profound impact on how racks and the accompanying infrastructure are built in the future. It is also not only about the type of cooling, but about the way chips communicate with each other and internally. What will the impact of an all-photonics network be on cooling, for example? The first couple of stages of building that type of end-to-end connection have been completed. The interesting parts for this discussion come next on the roadmap for all-photonics networks: using photonic connections between and inside silicon on boards. ... However, there are many moving parts to take into account. It will require a more dynamic approach to selling space in data centers, which is usually based on the number of watts a customer wants. Irrespective of the actual load, the data center reserves that capacity for the customer. If data centers need to be more dynamic, so do the contracts. ... The data center of the future will be characterized by high-density computing, liquid cooling, sustainable power sources, and a more integrated role in the grid ecosystem. As technology continues to advance, data centers will become more efficient, flexible, and environmentally responsible. That may sound like an oxymoron to many people nowadays, but it’s the only way to get to the densities we need moving forward.


Vietnam integrating biometrics into daily life in digital transformation drive

Vietnam is rapidly integrating biometrics and digital identity into everyday life, rolling out identity‑based systems across public transport, air travel and banking as part of an ambitious national digital transformation drive. New deployments in Hanoi’s metro, airports nationwide and the financial sector show how VNeID and biometric verification increasingly constitute Vietnam’s infrastructure. ... Officials argue the initiative strengthens Hanoi’s ambitions as a smart city and improves interoperability across transport modes. It also introduces a unified digital identity layer for public transit, which no other Vietnamese city can yet boast. Passenger data, operations and transactions are now centralized on a single platform, enabling targeted subsidies based on usage patterns rather than flat‑rate models. The Hanoi Metro app, available on major app stores, supports tap‑and‑go access and discounted fares for verified digital identities. ... The new rules require banks to conduct face‑to‑face identity checks and verify biometric data, such as facial information, before issuing cards to individual customers. The same requirement applies to the legal representatives of corporate clients, with limited exceptions, reports Vietnam Plus. ... Foreigners without electronic identity credentials, as well as Vietnamese nationals with undetermined citizenship status, will undergo in‑person biometric collection using data from the National Population Database. 


Why 2025 broke the manager role — and what it means for leadership ahead

Managers did far more than supervise. “They became mentors, skill-builders, culture carriers and the first line of emotional support,” Tyagi said. They coached diverse teams, supported women and marginalised groups entering new roles, and navigated talent crunches by building internal pipelines. They adopted learning apps, facilitated experience-sharing sessions and absorbed the emotional load of stretched teams. ... Sustaining morale amid continual uncertainty was the most difficult task, Tyagi said. Workloads were redistributed constantly. Managers had to reassure employees while balancing performance expectations with wellbeing. Chopra saw the same tensions. Recognition and feedback remained inconsistent. Gallup research showed a gap between managers’ belief that they offered regular feedback and employees’ experience that they rarely received it. Remote work deepened disconnection. “Creating team cohesion, trust and belonging when people are dispersed remains difficult,” she said. ... Empathy dominated the management skill-set in 2025. Transparency, communication and emotional intelligence were indispensable as uncertainty persisted. Coaching and talent development grew central, especially in organisations investing in women, new hires and marginalised communities. Chopra pointed to several non-negotiables: emotional intelligence, tech literacy, outcome-focused leadership, psychological safety, coaching and ethical awareness in technology use. 


The Missing Link in AI Scaling: Knowledge-First, Not Data-First

Organizations today need to ensure data readiness to avoid failures in model performance, system trust, and strategic alignment. To succeed, CIOs must shift from a “data-first” to a “knowledge-first” approach in order to capitalize on the true benefits of AI. ... Domain-specific reasoning capabilities provide context and meaning to data, which is crucial for professional and reliable advice. A semantic layer across silos creates unified views of all data, enabling comprehensive insights that are otherwise impossible to achieve. Another benefit is its ability to support AI governance and explainability by ensuring that AI systems are not “black boxes,” but are transparent and trustworthy. Lastly, it acts as an agentic AI backbone by orchestrating a workforce of AI agents that can execute complex tasks with reliability and context. ... Shifting to a knowledge-first architecture is not just an option, but a necessity, and is a direct challenge to the conventional data-first mindset. For decades, enterprises have focused on accumulating vast lakes of data, believing that more data inherently leads to better insights. However, this approach created fragmented, context-poor data silos. This “digital quicksand” is the root of the “Semantic Challenge” because data is siloed and heterogeneous. ... A knowledge-first approach fundamentally changes the goal from simply storing data to building an interconnected, enterprise-wide knowledge graph. This architecture is built on the principle of “things, not strings”. 
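As a toy illustration of “things, not strings,” the sketch below (generic entity names, not any vendor's schema) links records from separate silos through shared, typed entities so that questions are answered by traversing relationships rather than by matching strings across context-poor tables:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal triple store: entities are nodes, relationships are typed edges."""

    def __init__(self):
        self.triples: set[tuple[str, str, str]] = set()
        self.by_subject: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def add(self, subject: str, predicate: str, obj: str) -> None:
        self.triples.add((subject, predicate, obj))
        self.by_subject[subject].append((predicate, obj))

    def neighbors(self, subject: str) -> list[tuple[str, str]]:
        return self.by_subject[subject]

kg = KnowledgeGraph()
# Facts that would normally live in separate silos, linked via shared entities.
kg.add("customer:42", "purchased", "product:valve-x")
kg.add("product:valve-x", "manufactured_at", "plant:alpha")
kg.add("plant:alpha", "supplied_by", "vendor:acme")

# An AI agent can traverse context ("which plant and vendor sit behind this
# customer's order?") instead of joining disconnected tables by string match.
print(kg.neighbors("product:valve-x"))
```

This is obviously a simplification, but it captures why a semantic layer helps both insight and explainability: every answer can be traced back along explicit, named relationships.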

Daily Tech Digest - July 11, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you them feel." -- Mary Kay Ash


Throwing AI at Developers Won’t Fix Their Problems

Organizations are spending too much time, money and energy focusing on the tools themselves. “Should we use OpenAI or Anthropic? Copilot or Cursor?” We see two broad patterns for how organizations approach AI tool adoption. The first is that leadership has a relationship with a certain vendor or just a personal preference, so they pick a tool and mandate it. This can work, but you’ll often get poor results — not because the tool is bad, but because the market is moving too fast for centralized teams to keep up. ... The second model, which generally works much better, is to allow early adopters to try new tools and find what works. This gives developers autonomy to improve their own workflows and reduces the need for a central team to test every new tool exhaustively. Comparing the tools by features or technology is less important every day. You’ll waste a lot of energy debating minor differences that won’t matter next year. Instead, focus on what problem you want to solve. Are you trying to improve testing? Code review? Documentation? Incident response? Figure out the goal first. Then see if an AI tool (or any tool) actually helps. If you don’t, you’ll just make DevEx worse: You’ll have a landscape of 100 tools nobody knows how to use, and you’ll deliver no real value.


Anatomy of a Scattered Spider attack: A growing ransomware threat evolves

Scattered Spider began its attack against the unnamed organization’s public-facing Oracle Cloud authentication portal, targeting its chief financial officer. Using personal details, such as the CFO’s date of birth and the last four digits of their Social Security number obtained from public sources and previous breaches, Scattered Spider impersonated the CFO in a call to the company’s help desk, tricking help desk staff into resetting the CFO’s registered device and credentials. ... The cybercriminals extracted more than 1,400 secrets by taking advantage of compromised admin accounts tied to the target’s CyberArk password vault and likely an automated script. Scattered Spider granted administrator roles to compromised user accounts before using tools, including ngrok, to maintain access on compromised virtual machines. ... Scattered Spider’s operations have become more aggressive and compressed. “Within hours of initial compromise — often via social engineering — they escalate privileges, move laterally, establish persistence, and begin reconnaissance across both cloud and on-prem environments,” Beek explained. “This speed and fluidity represent a significant escalation in operational maturity.” ... Defending effectively against Scattered Spider involves tackling both human and technical vulnerabilities, ReliaQuest researchers noted.


Data governance: The contract layer that makes agentic systems possible

Today, AI has changed everything. Lineage, access enforcement and cataloging must operate in real time and cover vastly more data types and sources. Models consume data continuously and make decisions instantly, raising the stakes for mistakes or gaps in oversight. What used to be a once-a-week check is now an always-on discipline. This transformation has turned data governance from a checklist into a living system that protects quality and trust at scale. ... One of the biggest misconceptions is that governance slows down innovation. In reality, good governance speeds it up. By clarifying ownership, policies and data quality from the start, teams avoid spending precious time reconciling mismatches and can focus on delivering AI that works as intended. A clear governance framework reduces unnecessary data copies, lowers regulatory risk and prevents AI from producing unpredictable results. Getting this right also requires a culture shift. Producers and consumers alike need to see themselves as co-stewards of shared data products. ... Enterprises deploying agentic AI cannot leave governance behind. These systems run continuously, make autonomous decisions and rely on accurate context to stay relevant. Governance must move from passive checks to an active, embedded foundation within both architecture and culture.


How CIOs Are Navigating Today’s Hyper Volatility

“When it comes to changing dynamics, [such as] AI and driving innovation, there are several things that people like me are dealing with right now. There is an impact on how you hire people, staffing, how to structure your organization,” says Johar. “There is an impact on risk. I’m also responsible within my organization for managing the risk of data, privacy and security, and AI is bringing a new dimension to that risk. It’s an opportunity, but it's also a risk. How you structure your organization, how you manage risk, how you drive transformation -- these things are all connected.” ... “[CIOs] are emerging as transformation leaders, so they need to understand how to navigate the culture change of an organization, the change in people in an organization. They must know how to tell stories so they can get the organization on board,” says Danielle Phaneuf, a partner, PwC cloud and digital strategy operating model leader. “Their mindset is different, so they're embracing the transformation with a product model that allows them to move faster [and] allows them to think long term. They’re building these new muscles around change leadership and engaging the business early, co-creating solutions, not thinking they must solve everything on their own, and doing that in an agile way.”


What Is AI Agent Washing And Why Is It A Risk To Businesses?

You’ve heard of greenwashing and AI-washing? Well, now it seems that the hype-merchants and bandwagon-jumpers with technology to sell have come up with a new (and perhaps predictably inevitable) scam. Analysts at Gartner say unscrupulous vendors are increasingly engaging in "agent washing," and that out of “thousands” of supposedly agentic AI products tested, only 130 truly lived up to the claim. ... So, what’s the scam? Well, according to the report, agent washing involves passing off existing automation technology, including LLM-powered chatbots and robotic process automation, as agentic, when in reality it lacks those capabilities. ... Tools that claim to be agentic because they orchestrate and pull together multiple AI systems, such as marketing automation platforms and workflow automation tools, are stretching the term, too, unless they are also capable of autonomously coordinating the usage of those tools for long-term planning and decision-making. A few more hypothetical examples: While an AI chatbot-based system can write emails on command, an agentic system might write emails, identify the best recipients for marketing purposes, send the emails out, monitor responses, and then generate follow-up emails, tailored to individual responders.


Agentic AI Architecture Framework for Enterprises

The critical decision point lies in understanding when predictability and control take precedence versus when flexibility and autonomous decision-making deliver greater value. This understanding leads to a fundamental principle: start with the simplest effective solution, adding complexity only when clear business value justifies the additional operational overhead and risk. ... Enterprise deployment of agentic AI creates an inherent tension between AI autonomy and organizational governance requirements. Our analysis of successful MVPs and ongoing production implementations across multiple industries reveals three distinct architectural tiers, each representing different trade-offs between capability and control while anticipating emerging regulatory frameworks such as the EU AI Act and others on the way. These tiers form a systematic maturity progression, so organizations can build competency and stakeholder trust incrementally before advancing to more sophisticated implementations. ... Our three-tier progression manifests differently across industries, reflecting unique regulatory environments, risk tolerances, customer expectations and operational requirements. Understanding these industry-specific approaches enables organizations to tailor their implementation strategies while maintaining systematic capability development.


Rewriting the rules of enterprise architecture with AI agents

In enterprise architecture, agentic AI systems can be deployed as digital “co-architects”, process optimizers, compliance monitors and scenario planners — each acting with a degree of independence and intelligence previously impossible. So why agentic AI and simulations for governance…and why now? Governance in enterprise architecture is about ensuring that IT systems, processes and data align with business goals, comply with regulations and adapt to change. ... These methods are increasingly inadequate in the face of real-time business dynamics. Agentic AI introduces a new and achievable composability model: governance that is continuous, adaptive and proactive. Agentic systems can monitor the enterprise landscape, simulate the impact of changes, enforce policies autonomously and even resolve conflicts or escalate issues when necessary. This results in governance that is both more robust and more responsive to business needs. Gartner’s research reinforces the impact of agency and simulations on enterprise architecture’s future. According to its Enterprise Architecture Services Predictions for 2025, 55% of EA teams will act as coordinators of autonomous governance automation by 2028 and shift from a direct oversight role to that of model curation and certification, agent simulations and oversight, and business outcome alignment with machine-led governance.


With tools like Alpha and Coherence, we’re turning risk management from reactive to real-time

Those days when it was more of a very reactive and process-heavy system, where you had to follow a set of dilutive processes all the time and react to risks being observed in the system, and then you had a standard operating procedure to deal with it step by step. Those days are behind us. That scenario was there for a number of decades. But with AI and intelligent-led solution capabilities transforming the landscape, it has become proactive and extremely real-time. So what we propose, we always have lived by our Digital Knowledge Operations framework. The three words in it: digital, knowledge, and operations. Digital makes you proactive because you’re building solutions not for today but for the future. You rely on knowledge, and you transform your operations. That’s our philosophy that unlocks this proactive ability of capturing the possibilities of risk in real time. That drove us to build something like Alpha. It’s essentially a very strong and effective transaction monitoring framework and tool that can detect a whole lot of false alerts with over 75% to 80% accuracy. Now, in risk management, what happens is that a lot of operational bandwidth, effort, and talent capability is lost in assessing all of these false positives that are generated because of risk management procedures. Most of them can be taken care of by a combination of machine learning, artificial intelligence, and some sort of robotics.


Banking on Better Data: Why Financial Institutions Need an Agile Cloud Strategy

The urgency to migrate to the cloud is particularly pronounced in the banking sector, where legacy institutions are under mounting pressure to keep pace with digital-native competitors. These agile challengers can roll out new features in a matter of weeks, while traditional banks remain constrained by older mainframes. It is clear that the risk of standing still is no longer theoretical. Earlier this year, over 1.2 million UK customers experienced banking outages on payday, a critical moment for both individuals and businesses. Several major retail banks reported widespread issues, including login failures and prolonged delays in customer service. Far from being one-off glitches, these disruptions point to a broader pattern of structural fragility rooted in outdated technology. Unlike legacy systems, cloud-native platforms are engineered for adaptability, resilience, and real-time performance, which are traits that traditional banking environments have been struggling to deliver. These failures weren’t just accidents; they were foreseeable outcomes of prolonged underinvestment in modernization. This reinforced a critical truth for traditional banks, which is that cloud transformation is no longer a future aspiration, but an immediate requirement to safeguard customer trust and remain viable in a rapidly evolving market.


Why knowledge is the ultimate weapon in the Information Age

To turn AI into an asset rather than a liability, organisations must rethink their approach to knowledge management. At its core, knowledge management is a learning cycle centred on people, with technology acting as a force multiplier, not a substitute for judgment. The objective is to establish a virtuous loop in which data is collected, validated, and transformed into actionable insight. The tighter and more disciplined this cycle, the higher the quality of the resulting knowledge. In practice, this means treating AI as just another tool in the toolkit. ... In an age of information warfare, perception is the battleground. To stay ahead, decision-makers must be trained not just in AI tools but in understanding their strengths, limitations, and potential biases, including their own. The ability to critically assess AI-generated content is essential, not optional. More than static planning, modern organisations need situational awareness and strategic agility, embedding AI within a human-centric knowledge strategy. We can shift the balance in the information war by curating trusted sources, rigorously verifying content, and sustaining a culture of learning. This new knowledge ecosystem embraces uncertainty, leverages AI wisely, and keeps cognitive bias in control, wielding knowledge as a disciplined and secure strategic asset.