
Daily Tech Digest - August 29, 2025


Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." -- Johann Wolfgang von Goethe


The incredibly shrinking shelf life of IT solutions

“Technology cycles are spinning faster and faster, and some solutions are evolving so fast that they’re now a year-long bet, not a three- or five-year bet for CIOs,” says Craig Kane ... “We are living in a period of high user expectations. Every day is a newly hyped technology, and CIOs are constantly being asked how can we, the company, take advantage of this new solution,” says Boston Dynamics CIO Chad Wright. “Technology providers can move quicker today with better development tools and practices, and this feeds the demand that customers are creating.” ... Not every CIO is switching out software as quickly as that, and Taffet, Irish, and others say they’re certainly not seeing the shelf life for all software and solutions in their enterprise shrink. Indeed, many vendors are updating their applications with new features and functions to keep pace with business and market demands — updates that help extend the life of their solutions. And core solutions generally aren’t turning over any more quickly today than they did five or 10 years ago, Kearney’s Kane says. ... Montgomery says CIOs and business colleagues sometimes think the solutions they have in place are falling behind market innovations and, as a result, their business will fall behind, too. That may be the case, but they may just be falling for marketing hype, she says. Montgomery also cites the fast pace of executive turnover as contributing to the increasingly short shelf life of IT solutions.


Resiliency in Fintech: Why System Design Matters More Than Ever

Cloud computing has transformed fintech. What once took months to provision can now be spun up in hours. Auto-scaling, serverless computing, and global distribution have enabled firms to grow without massive upfront infrastructure costs. Yet, cloud also changes the resilience equation. Outages at major CSPs — rare but not impossible — can cascade across entire industries. The Financial Stability Board (FSB) has repeatedly warned about “cloud concentration risk.” Regulators are exploring frameworks for oversight, including requirements for firms to maintain exit strategies or multi-cloud approaches. For fintech leaders, the lesson is clear: cloud-first doesn’t mean resilience-last. Building systems that are cloud-resilient (and in some cases cloud-agnostic) is becoming a strategic priority. ... Recent high-profile outages underline the stakes. Trading platforms freezing during volatile markets, digital banks leaving customers without access to funds, and payment networks faltering during peak shopping days all illustrate the cost of insufficient resilience. ... Innovation remains the lifeblood of fintech. But as the industry matures, resilience has become the new competitive differentiator. The firms that win will be those that treat system design as risk management, embedding high availability, regulatory compliance, and cloud resilience into their DNA. In a world where customer trust can be lost in minutes, resilience is not just good engineering.


AI cost pressures fuelling cloud repatriation

IBM thinks AI will present a bigger challenge than the cloud because it will be more pervasive with more new applications being built on it. Consequently, IT leaders are already nervous about the cost and value implications and are looking for ways to get ahead of the curve. Repeating the experience of cloud adoption, AI is being driven by business teams, not by back-office IT. AI is becoming a significant driver for shifting workloads back to private, on-premise systems. This is because data becomes the most critical asset, and Patel believes few enterprises are ready to give up their data to a third party at this stage. ... The cloud is an excellent platform for many workloads, just as there are certain workloads that run extremely well on a mainframe. The key is to understand workload placement: is my application best placed on a mainframe, on a private cloud or on a public cloud? As they start their AI journey, some of Apptio’s customers are not ready for their models, learning and intelligence – their strategic intellectual property – to sit in a public cloud. There are consequences when things go wrong with data, and those consequences can be severe for the executives concerned. So, when a third party suggests putting all of the customer, operational and financial data in one place to gain wonderful insights, some organisations are unwilling to do this if the data is outside their direct control. 


Finding connection and resilience as a CISO

To create stronger networks among CISOs, security leaders can join trusted peer groups like industry ISACs (Information Sharing and Analysis Centers) or associations within shared technology / compliance spaces like cloud, GRC, and regulatory. The protocols and procedures in these groups ensure members can have meaningful conversations without putting them or their organization at risk. ... Information sharing operates in tiers, each with specific protocols for data protection. Top tiers, involving entities like ISACs, the FBI, and DHS, have established protocols to properly share and safeguard confidential data. Other tiers may involve information and intelligence already made public, such as CVEs or other security disclosures. CISOs and their teams may seek assistance from industry groups, partnerships, or vendors to interpret current Indicators of Compromise (IOCs) and other remediation elements, even when public. Continuously improving vendor partnerships is crucial for managing platforms and programs, as strong partners will be familiar with internal operations while protecting sensitive information. ... Additionally, encouraging a culture of continuous learning and development, not just with the security team but broader technology and product teams, will empower employees, distribute expertise, and grow a more resilient and adaptable workforce.


Geopolitics is forcing the data sovereignty issue and it might just be a good thing

At London Tech Week recently, UK Prime Minister Keir Starmer said that the way that war is being fought “has changed profoundly,” adding that technology and AI are now “hard wired” into national defense. It was a stark reminder that IT infrastructure management must now be viewed through a security lens and that businesses need to re-evaluate data management technologies and practices to ensure they are not left out in the cold. ... For many, public cloud services have created a false sense of flexibility. Moving fast is not the same as moving safely. Data localization, jurisdictional control, and security policy alignment are now critical to long-term strategy, not barriers to short-term scale. So where does that leave enterprise IT? Essentially, it leaves us with a choice: design for agility with control, or face disruption when the rules change. ... Sovereignty-aware infrastructure isn’t about isolation. It’s about knowing where your data is, who can access it, how it moves, and what policies govern it at each stage. That means visibility, auditability, and the ability to adjust without rebuilding every time a new compliance rule appears. A hybrid multicloud approach gives organizations that flexibility while keeping data governance central. It’s not about locking into one cloud provider or building everything on-prem.


Recalibrating Hybrid Cloud Security in the Age of AI: The Need for Deep Observability

As AI further fuels digital transformation, the security landscape of hybrid cloud infrastructures is becoming more strained. As such, security leaders are confronting a paradox. Cloud environments are essential for scaling operations, but they also present new attack vectors. ... Amid these challenges, some organisations are realising that their traditional security tools are insufficient. The lack of visibility into hybrid cloud environments is identified as a core issue, with 60 percent of Australian leaders expressing a lack of confidence in their current tools to detect breaches effectively. The call for "deep observability" has never been louder. The research underscores the need for a comprehensive, real-time view into all data in motion across the enterprise to improve threat detection and response. Deep observability, combining metadata, network packets, and flow data, has become a cornerstone of hybrid cloud security strategies. It provides security teams with actionable insights into their environments, allowing them to spot potential threats in real time. In fact, 89 percent of survey respondents agree that deep observability is critical to securing AI workloads and managing complex hybrid cloud infrastructures. Being proactive with this approach is seen as a vital way to bridge the visibility gap and ensure comprehensive security coverage across hybrid cloud environments.


Financial fraud is widening its clutches—Can AI stay ahead?

Today, organised crime groups are running call centres staffed with human trafficking victims. These victims execute “romance baiting” schemes that combine emotional manipulation with investment fraud. The content they use? AI-generated. The payments they request? ... Fraud attempts rose significantly in a single quarter after COVID hit, and the traditional detection methods fell apart. This is why modern fraud detection systems had to evolve. Now, these systems can analyse thousands of transactions per minute, assigning risk scores that update in real-time. There was no choice. Staying in the old regime of anti-fraud systems was no longer an option when static rules became obsolete almost overnight. ... The real problem isn’t the technology itself. It’s the pace of adoption by bad actors. Stop Scams UK found something telling: While banks have limited evidence of large-scale AI fraud today, technology companies are already seeing fake AI-generated content and profiles flooding their platforms. ... When AI systems learn from historical data that reflects societal inequalities, they can perpetuate discrimination under the guise of objective analysis. Banks using biased training data have inadvertently created systems that disproportionately flag certain communities for additional scrutiny. This creates moral problems alongside operational and legal risks.


Data security and compliance are non-negotiable in any cloud transformation journey

Enterprises today operate in a data-intensive environment that demands modern infrastructure, built for speed, intelligence, and alignment with business outcomes. Data modernisation is essential to this shift. It enables real-time processing, improves data integrity, and accelerates decision-making. When executed with purpose, it becomes a catalyst for innovation and long-term growth. ... The rise of generative AI has transformed industries by enhancing automation, streamlining processes, and fostering innovation. According to a recent NASSCOM report, around 27% of companies already have AI agents in production, while another 31% are running pilots. ... Cloud has become the foundation of digital transformation in India, driving agility, resilience, and continuous innovation across sectors. Kyndryl is expanding its capabilities in the market to support this momentum. This includes strengthening our cloud delivery centres and expanding local expertise across hyperscaler platforms. ... Strategic partnerships are central to how we co-innovate and deliver differentiated outcomes for our clients. We collaborate closely with a broad ecosystem of technology leaders to co-create solutions that are rooted in real business needs. ... Enterprises in India are accelerating their cloud journeys, demanding solutions that combine hyperscaler innovation with deep enterprise expertise. 


Digital Transformation Strategies for Enterprise Architects

Customer experience must be deliberately architected to deliver relevance, consistency, and responsiveness across all digital channels. Enterprise architects enable this by building composable service layers that allow marketing, commerce, and support platforms to act on a unified view of the customer. Event-driven architectures detect behavior signals and trigger automated, context-aware experiences. APIs must be designed to support edge responsiveness while enforcing standards for security and governance. ... Handling large datasets at the enterprise level requires infrastructure that treats metadata, lineage, and ownership as first-class citizens. Enterprise architects design data platforms that surface reliable, actionable insights, built on contracts that define how data is created, consumed, and governed across domains. Domain-oriented ownership via data mesh ensures accountability, while catalogs and contracts maintain enterprise-wide discoverability. ... Architectural resilience starts at the design level. Modular systems that use container orchestration, distributed tracing, and standardized service contracts allow for elasticity under pressure and graceful degradation during failure. Architects embed durability into operations through chaos engineering, auto-remediation policies, and blue-green or canary deployments. 
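To make the event-driven idea above concrete, here is a minimal sketch of a behavior signal triggering a context-aware follow-up. The in-memory bus, event names, and handler are illustrative assumptions for explanation, not any specific product's API.

```python
# Minimal event-driven sketch: a behavior signal is published as an event and a
# subscribed handler triggers a context-aware action. Illustrative only.
from collections import defaultdict
from typing import Callable

_handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    _handlers[event_type].append(handler)

def publish(event_type: str, payload: dict) -> None:
    for handler in _handlers[event_type]:
        handler(payload)

def send_context_aware_offer(event: dict) -> None:
    # In a real stack this would call marketing/commerce APIs with a unified customer view.
    print(f"Nudge customer {event['customer_id']} about {event['sku']}")

subscribe("cart.abandoned", send_context_aware_offer)
publish("cart.abandoned", {"customer_id": "c-123", "sku": "sku-9"})
```

In a production architecture the bus would be a managed event stream and the handler a composable service, but the shape of the interaction is the same.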


Unchecked and unbound: How Australian security teams can mitigate Agentic AI chaos

Agentic AI systems are collections of agents working together to accomplish a given task with relative autonomy. Their design enables them to discover solutions and optimise for efficiency. The result is that AI agents are non-deterministic and may behave in unexpected ways when accomplishing tasks, especially when systems interoperate and become more complex. As AI agents seek to perform their tasks efficiently, they will invent workflows and solutions that no human ever considered. This will produce remarkable new ways of solving problems, and will inevitably test the limits of what's allowable. The emergent behaviours of AI agents, by definition, exceed the scope of any rules-based governance because we base those rules on what we expect humans to do. By creating agents capable of discovering their own ways of working, we're opening the door to agents doing things humans have never anticipated. ... When AI agents perform actions, they act on behalf of human users or use an identity assigned to them based on a human-centric AuthN and AuthZ system. That complicates the process of answering formerly simple questions, like: Who authored this code? Who initiated this merge request? Who created this Git commit? It also prompts new questions, such as: Who told the AI agent to generate this code? What context did the agent need to build it? What resources did the AI have access to?
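One practical response to those attribution questions is to record agent provenance alongside the commit itself. A minimal sketch follows, assuming a trailer convention of our own invention; the `Agent`, `Requested-by`, and `Task-Context` keys are illustrative, not a Git or vendor standard.

```python
# Sketch: attach agent provenance to a Git commit via message trailers, so "who told
# the AI agent to generate this code?" is answerable later. Trailer keys are assumed.
import subprocess

def commit_with_agent_provenance(message: str, human: str, agent: str, task_ref: str) -> None:
    trailers = (
        f"\n\nAgent: {agent}"
        f"\nRequested-by: {human}"
        f"\nTask-Context: {task_ref}"
    )
    # Commits whatever is currently staged in the repository.
    subprocess.run(["git", "commit", "-m", message + trailers], check=True)

# Example (hypothetical values):
# commit_with_agent_provenance(
#     "Add retry logic to payment webhook",
#     human="jane.doe@example.com",
#     agent="code-assistant-v2",
#     task_ref="TICKET-1234",
# )
```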

Daily Tech Digest - July 02, 2025


Quote for the day:

"Success is not the absence of failure; it's the persistence through failure." -- Aisha Tyle


How cybersecurity leaders can defend against the spur of AI-driven NHI

Many companies don’t have lifecycle management for all their machine identities, and security teams may be reluctant to shut down old accounts because doing so might break critical business processes. ... Access-management systems that provide one-time-use credentials to be used exactly when they are needed are cumbersome to set up. And some systems come with default logins like “admin” that are never changed. ... AI agents are the next step in the evolution of generative AI. Unlike chatbots, which only work with company data when provided by a user or an augmented prompt, agents are typically more autonomous, and can go out and find needed information on their own. This means that they need access to enterprise systems, at a level that would allow them to carry out all their assigned tasks. “The thing I’m worried about first is misconfiguration,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly “it opens up the door to a lot of bad things to happen.” Because of their ability to plan, reason, act, and learn, AI agents can exhibit unpredictable and emergent behaviors. An AI agent that’s been instructed to accomplish a particular goal might find a way to do it in an unanticipated way, and with unanticipated consequences. This risk is magnified even further with agentic AI systems that use multiple AI agents working together to complete bigger tasks, or even automate entire business processes.


The silent backbone of 5G & beyond: How network APIs are powering the future of connectivity

Network APIs are fueling a transformation by making telecom networks programmable and monetisable platforms that accelerate innovation, improve customer experiences, and open new revenue streams. ... Contextual intelligence is what makes these new-generation APIs so attractive. Your needs change significantly depending on whether you’re playing a cloud game, streaming a match, or participating in a remote meeting. Programmable networks can now detect these needs and adjust dynamically. Take the example of a user streaming a football match. With network APIs, a telecom operator can offer temporary bandwidth boosts just for the game’s duration. Once it ends, the network automatically reverts to the user’s standard plan—no friction, no intervention. ... Programmable networks are expected to have the greatest impact in Industry 4.0, which goes beyond consumer applications. ... 5G, combined with IoT and network APIs, enables industrial systems to become truly connected and intelligent. Remote monitoring of manufacturing equipment allows for real-time maintenance schedule adjustments based on machine behavior. Over a programmable, secure network, an API-triggered alert can coordinate a remote diagnostic session and even start remedial actions if a fault is found.
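As a rough illustration of how such a temporary bandwidth boost might be requested programmatically, here is a sketch against a hypothetical Quality-on-Demand style endpoint. The base URL, token, QoS profile name, and response fields are assumptions, not a real operator's API.

```python
# Illustrative sketch only: requesting a time-limited QoS session for a streaming user.
# Endpoint, field names, and token below are hypothetical.
import requests

API_BASE = "https://api.example-operator.com/qod/v1"  # hypothetical operator endpoint
TOKEN = "replace-with-oauth-token"                    # hypothetical credential

def boost_for_match(device_ip: str, duration_s: int = 2 * 60 * 60) -> str:
    """Request a temporary high-throughput session for the duration of a match."""
    payload = {
        "device": {"ipv4Address": device_ip},
        "qosProfile": "HIGH_THROUGHPUT_VIDEO",  # assumed profile name
        "duration": duration_s,                 # session expires automatically
    }
    resp = requests.post(
        f"{API_BASE}/sessions",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["sessionId"]  # assumed response field

# When the session expires, the network reverts to the user's standard plan with no
# further action from the application -- the "no friction, no intervention" point above.
```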


Quantum Computers Just Reached the Holy Grail – No Assumptions, No Limits

A breakthrough led by Daniel Lidar, a professor of engineering at USC and an expert in quantum error correction, has pushed quantum computing past a key milestone. Working with researchers from USC and Johns Hopkins, Lidar’s team demonstrated a powerful exponential speedup using two of IBM’s 127-qubit Eagle quantum processors — all operated remotely through the cloud. Their results were published in the prestigious journal Physical Review X. “There have previously been demonstrations of more modest types of speedups, like a polynomial speedup,” says Lidar, who is also the cofounder of Quantum Elements, Inc. “But an exponential speedup is the most dramatic type of speedup that we expect to see from quantum computers.” ... What makes a speedup “unconditional,” Lidar explains, is that it doesn’t rely on any unproven assumptions. Prior speedup claims required the assumption that there is no better classical algorithm against which to benchmark the quantum algorithm. Here, the team led by Lidar used an algorithm they modified for the quantum computer to solve a variation of “Simon’s problem,” an early example of quantum algorithms that can, in theory, solve a task exponentially faster than any classical counterpart, unconditionally.
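For context on why Simon's problem is a natural vehicle for such a claim, the textbook version already exhibits an exponential gap in query complexity. This is the standard statement of the problem, not the team's specific variation:

```latex
% Simon's problem: given $f : \{0,1\}^n \to \{0,1\}^n$ promised to satisfy
%   $f(x) = f(y) \iff y = x \oplus s$,
% find the hidden string $s$.
\[
\underbrace{\Theta\!\left(2^{n/2}\right)}_{\text{classical queries (bounded error)}}
\qquad \text{vs.} \qquad
\underbrace{O(n)}_{\text{quantum queries (Simon's algorithm)}}
\]
```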


4 things that make an AI strategy work in the short and long term

Most AI gains came from embedding tools like Microsoft Copilot, GitHub Copilot, and OpenAI APIs into existing workflows. Aviad Almagor, VP of technology innovation at tech company Trimble, also notes that more than 90% of Trimble engineers use GitHub Copilot. The ROI, he says, is evident in shorter development cycles and reduced friction in HR and customer service. Moreover, Trimble has introduced AI into its transportation management system, where AI agents optimize freight procurement by dynamically matching shippers and carriers. ... While analysts often lament the difficulty of showing short-term ROI for AI projects, these four organizations disagree — at least in part. Their secret: flexible thinking and diverse metrics. They view ROI not only as dollars saved or earned, but also as time saved, satisfaction increased, and strategic flexibility gained. London says that Upwave listens for customer signals like positive feedback, contract renewals, and increased engagement with AI-generated content. Given the low cost of implementing prebuilt AI models, even modest wins yield high returns. For example, if a customer cites an AI-generated feature as a reason to renew or expand their contract, that’s taken as a strong ROI indicator. Trimble uses lifecycle metrics in engineering and operations. For instance, one customer used Trimble AI tools to reduce the time it took to perform a tunnel safety analysis from 30 minutes to just three.


How IT Leaders Can Rise to a CIO or Other C-level Position

For any IT professional who aspires to become a CIO, the key is to start thinking like a business leader, not just a technologist, says Antony Marceles, a technology consultant and founder of software staffing firm Pumex. "This means taking every opportunity to understand the why behind the technology, how it impacts revenue, operations, and customer experience," he explained in an email. The most successful tech leaders aren't necessarily great technical experts, but they possess the ability to translate tech speak into business strategy, Marceles says, adding that "Volunteering for cross-functional projects and asking to sit in on executive discussions can give you that perspective." ... CIOs rarely have solo success stories; they're built up by the teams around them, Marceles says. "Colleagues can support a future CIO by giving honest feedback, nominating them for opportunities, and looping them into strategic conversations." Networking also plays a pivotal role in career advancement, not just for exposure, but for learning how other organizations approach IT leadership, he adds. Don't underestimate the power of having an executive sponsor, someone who can speak to your capabilities when you’re not there to speak for yourself, Eidem says. "The combination of delivering value and having someone champion that value -- that's what creates real upward momentum."


SLMs vs. LLMs: Efficiency and adaptability take centre stage

SLMs are becoming central to Agentic AI systems due to their inherent efficiency and adaptability. Agentic AI systems typically involve multiple autonomous agents that collaborate on complex, multi-step tasks and interact with environments. Fine-tuning methods like Reinforcement Learning (RL) effectively imbue SLMs with task-specific knowledge and external tool-use capabilities, which are crucial for agentic operations. This enables SLMs to be efficiently deployed for real-time interactions and adaptive workflow automation, overcoming the prohibitive costs and latency often associated with larger models in agentic contexts. ... Operating entirely on-premises ensures that decisions are made instantly at the data source, eliminating network delays and safeguarding sensitive information. This enables timely interpretation of equipment alerts, detection of inventory issues, and real-time workflow adjustments, supporting faster and more secure enterprise operations. SLMs also enable real-time reasoning and decision-making through advanced fine-tuning, especially Reinforcement Learning. RL allows SLMs to learn from verifiable rewards, teaching them to reason through complex problems, choose optimal paths, and effectively use external tools. 
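To make "learning from verifiable rewards" concrete, here is a tiny sketch of such a reward signal: the score comes from checking the model's output against ground truth (or a test), not from a learned preference model. The exact-match scoring and the policy-update note are illustrative assumptions about how RL fine-tuning for SLMs is commonly wired up, not a description of any particular vendor's pipeline.

```python
# A verifiable reward: deterministic, checkable scoring of a model rollout.
def verifiable_reward(model_answer: str, expected: str) -> float:
    """Return 1.0 for an exactly correct final answer, 0.0 otherwise (assumed scheme)."""
    return 1.0 if model_answer.strip() == expected.strip() else 0.0

# During RL fine-tuning, rollouts from the SLM are scored with a function like this and
# the policy is updated to favor high-reward outputs (e.g., with PPO/GRPO-style methods).
print(verifiable_reward("42", "42"))  # 1.0
```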


Quantum’s quandary: racing toward reality or stuck in hyperbole?

One important reason is for researchers to demonstrate their advances and show that they are adding value. Quantum computing research requires significant expenditure, and the return on investment will be substantial if a quantum computer can solve problems previously deemed unsolvable. However, this return is not assured, nor is the timeframe for when a useful quantum computer might be achievable. To continue to receive funding and backing for what ultimately is a gamble, researchers need to show progress — to their bosses, investors, and stakeholders. ... As soon as such announcements are made, scientists and researchers scrutinize them for weaknesses and hyperbole. The benchmarks used for these tests are subject to immense debate, with many critics arguing that the computations are not practical problems or that success in one problem does not imply broader applicability. In Microsoft’s case, a lack of peer-reviewed data means there is uncertainty about whether the Majorana particle even exists beyond theory. The scientific method encourages debate and repetition, with the aim of reaching a consensus on what is true. However, in quantum computing, marketing hype and the need to demonstrate advancement take priority over the verification of claims, making it difficult to place these announcements in the context of the bigger picture.


Ethical AI for Product Owners and Product Managers

As the product and customer information steward, the PO/PM must lead the process of protecting sensitive data. The Product Backlog often contains confidential customer feedback, competitive analysis, and strategic plans that cannot be exposed. This guardrail requires establishing clear protocols for what data can be shared with AI tools. A practical first step is to lead the team in a data classification exercise, categorizing information as Public, Internal, or Restricted. Any data classified for internal use, such as direct customer quotes, must be anonymized before being used in an AI prompt. ... AI is proficient at generating text but possesses no real-world experience, empathy, or strategic insight. This guardrail involves proactively defining the unique, high-value work that AI can assist but never replace. Product leaders should clearly delineate between AI-optimal tasks (creating first drafts of technical user stories, summarizing feedback themes, or checking for consistency across Product Backlog items) and PO/PM-essential areas. These human-centric responsibilities include building genuine empathy through stakeholder interviews, making difficult strategic prioritization trade-offs, negotiating scope, resolving conflicting stakeholder needs, and communicating the product vision. By modeling this partnership and using AI as an assistant to prepare for strategic work, the PO/PM reinforces that their core value lies in strategy, relationships, and empathy.
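A minimal sketch of the "anonymize before prompting" guardrail follows, assuming simple regex redaction. Production teams would use a proper PII-detection service, and anything classified as Restricted (names, account numbers) needs stronger handling than this.

```python
# Sketch: strip obvious identifiers from a customer quote before pasting it into an AI
# tool. The regexes are illustrative only and will miss many real-world PII formats.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymize(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label} redacted>", text)
    return text

quote = "Call me at +1 (555) 010-2244 or jane@example.com -- the checkout keeps failing."
print(anonymize(quote))
# -> "Call me at <phone redacted> or <email redacted> -- the checkout keeps failing."
```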


Sharded vs. Distributed: The Math Behind Resilience and High Availability

In probability theory, independent events are events whose outcomes do not affect each other. For example, when throwing four dice, the number displayed on each die is independent of the other three. Similarly, the availability of each server in a six-node application-sharded cluster is independent of the others. This means that each server has an individual probability of being available or unavailable, and the failure of one server is not affected by the failure or otherwise of other servers in the cluster. In reality, there may be shared resources or shared infrastructure that links the availability of one server to another. In mathematical terms, this means that the events are dependent. However, we consider the probability of these types of failures to be low, and therefore, we do not take them into account in this analysis. ... Traditional architectures are limited by single-node failure risk. Application-level sharding compounds this problem because if any node goes down, its shard, and therefore the total system, becomes unavailable. In contrast, distributed databases with quorum-based consensus (like YugabyteDB) provide fault tolerance and scalability, enabling higher resilience and improved availability.
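A short worked example of the math described above, assuming independent failures and a per-node availability of 99.9%. The replication factor of 3 with a 2-of-3 quorum is an illustrative configuration, not a claim about any specific database.

```python
# Availability under independence: all-or-nothing sharding vs. a 2-of-3 quorum group.
from math import comb

p = 0.999  # assumed per-node availability (three nines)

# Application-sharded, six nodes: losing any node loses its shard, so the whole
# system is available only if ALL six nodes are up.
sharded = p ** 6

# One replica group of 3 with quorum 2: available if 3 of 3 or exactly 2 of 3 are up.
replica_group = p**3 + comb(3, 2) * p**2 * (1 - p)

print(f"sharded (all 6 nodes up):   {sharded:.6f}")        # ~0.994, worse than one node
print(f"one 3-replica quorum group: {replica_group:.6f}")  # ~0.999997, far better
```

With these numbers the all-or-nothing sharded design is less available than a single node, while a quorum group gains roughly two extra nines; a whole cluster contains several such groups and placement details matter, but the direction of the comparison is the point.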


How FinTechs are turning GRC into a strategic enabler

The misconception that risk management and innovation exist in tension is one that modern FinTechs must move beyond. At its core, cybersecurity – when thoughtfully integrated – serves not as a brake but as an enabler of innovation. The key is to design governance structures that are both intelligent and adaptive (and resilient in themselves). The foundation lies in aligning cybersecurity risk management with the broader business objective: enablement. This means integrating security thinking early in the innovation cycle, using standardized interfaces, expectations, and frameworks that don’t obstruct, but rather channel innovation safely. For instance, when risk statements are defined consistently across teams, decisions can be made faster and with greater confidence. Critically, it starts with the threat model. A well-defined, enterprise-level threat model is the compass that guides risk assessments and controls where they matter most. Yet many companies still operate without a clear articulation of their own threat landscape, leaving their enterprise risk strategies untethered from reality. Without this grounding, risk management becomes either overly cautious or blindly permissive, or a bit of both. We place a strong emphasis on bridging the traditional silos between GRC, IT Security, Red Teaming, and Operational teams.

Daily Tech Digest - June 29, 2025


Quote for the day:

“Great minds discuss ideas; average minds discuss events; small minds discuss people.” -- Eleanor Roosevelt


Who Owns End-of-Life Data?

Enterprises have never been more focused on data. What happens at the end of that data's life? Who is responsible when it's no longer needed? Environmental concerns are mounting as well. A Nature study warns that AI alone could generate up to 5 million metric tons of e-waste by 2030. A study from researchers at Cambridge University and the Chinese Academy of Sciences said the top reason enterprises dispose of e-waste rather than recycling computers is the cost. E-waste can contain metals, including copper, gold, silver, aluminum, and rare earth elements, but proper handling is expensive. Data security is a concern as well, since breach proofing doesn't get better than destroying equipment. ... End-of-life data management may sit squarely in the realm of IT, but it increasingly pulls in compliance, risk and ESG teams, the report said. Driven by rising global regulations and escalating concerns over data leaks and breaches, C-level involvement at every stage signals that end-of-life data decisions are being treated as strategically vital - not simply handed off. Consistent IT participation also suggests organizations are well-positioned to select and deploy solutions that work with their existing tech stack. That said, shared responsibility doesn't guarantee seamless execution. Multiple stakeholders can lead to gaps unless underpinned by strong, well-communicated policies, the report said.


How AI is Disrupting the Data Center Software Stack

Over the years, there have been many major shifts in IT infrastructure – from the mainframe to the minicomputer to distributed Windows boxes to virtualization, the cloud, containers, and now AI and GenAI workloads. Each time, the software stack seems to get torn apart. What can we expect with GenAI? ... Galabov expects severe disruption in the years ahead on a couple of fronts. Take coding, for example. In the past, anyone wanting a new industry-specific application for their business might pay five figures for development, even if they went to a low-cost region like Turkey. For homegrown software development, the price tag would be much higher. Now, an LLM can be used to develop such an application for you. GenAI tools have been designed explicitly to enhance and automate several elements of the software development process. ... Many enterprises will be forced to face the reality that their systems are fundamentally legacy platforms that are unable to keep pace with modern AI demands. Their only course is to commit to modernization efforts. Their speed and degree of investment are likely to determine their relevance and competitive positioning in a rapidly evolving market. Kleyman believes that the most immediate pressure will fall on data-intensive, analytics-driven platforms such as CRM and business intelligence (BI). 


AI Improves at Improving Itself Using an Evolutionary Trick

The best SWE-bench agent was not as good as the best agent designed by expert humans, which currently scores about 70 percent, but it was generated automatically, and maybe with enough time and computation an agent could evolve beyond human expertise. The study is a “big step forward” as a proof of concept for recursive self-improvement, said Zhengyao Jiang, a cofounder of Weco AI, a platform that automates code improvement. Jiang, who was not involved in the study, said the approach could make further progress if it modified the underlying LLM, or even the chip architecture. DGMs can theoretically score agents simultaneously on coding benchmarks and also specific applications, such as drug design, so they’d get better at getting better at designing drugs. Zhang said she’d like to combine a DGM with AlphaEvolve. ... One concern with both evolutionary search and self-improving systems—and especially their combination, as in DGM—is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned.


Data center costs surge up to 18% as enterprises face two-year capacity drought

Smart enterprises are adapting with creative strategies. CBRE’s Magazine emphasizes “aggressive and long-term planning,” suggesting enterprises extend capacity forecasts to five or 10 years, and initiate discussions with providers much earlier than before. Geographic diversification has become essential. While major hubs price out enterprises, smaller markets such as São Paulo saw pricing drops of as much as 20.8%, while prices in Santiago fell 13.7% due to shifting supply dynamics. Magazine recommended “flexibility in location as key, exploring less-constrained Tier 2 or Tier 3 markets or diversifying workloads across multiple regions.” For Gogia, “Tier-2 markets like Des Moines, Columbus, and Richmond are now more than overflow zones, they’re strategic growth anchors.” Three shifts have elevated these markets: maturing fiber grids, direct renewable power access, and hyperscaler-led cluster formation. “AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance.


Fintech’s AI Obsession Is Useless Without Culture, Clarity and Control

What does responsible AI actually mean in a fintech context? According to PwC’s 2024 Responsible AI Survey, it encompasses practices that ensure fairness, transparency, accountability and governance throughout the AI lifecycle. It’s not just about reducing model bias — it’s about embedding human oversight, securing data, ensuring explainability and aligning outputs with brand and compliance standards. In financial services, these aren’t "nice-to-haves" — they’re essential for scaling AI safely and effectively. Financial marketing is governed by strict regulations, and AI-generated content can create brand and legal risks. ... To move AI adoption forward responsibly, start small. Low-risk, high-reward use cases let teams build confidence and earn trust from compliance and legal stakeholders. Deloitte’s 2024 AI outlook recommends beginning with internal applications that use non-critical data — avoiding sensitive inputs like PII — and maintaining human oversight throughout. ... As BCG highlights, AI leaders devote 70% of their effort to people and process — not just technology. Create a cross-functional AI working group with stakeholders from compliance, legal, IT and data science. This group should define what data AI tools can access, how outputs are reviewed and how risks are assessed.


Is Microsoft’s new Mu for you?

Mu uses a transformer encoder-decoder design, which means it splits the work into two parts. The encoder takes your words and turns them into a compressed form. The decoder takes that form and produces the correct command or answer. This design is more efficient than older models, especially for tasks such as changing settings. Mu has 32 encoder layers and 12 decoder layers, a setup chosen to fit the NPU’s memory and speed limits. The model utilizes rotary positional embeddings to maintain word order, dual-layer normalization to maintain stability, and grouped-query attention to use memory more efficiently. ... Mu is truly groundbreaking because it is the first SLM built to let users control system settings using natural language, running entirely on a mainstream shipping device. Apple’s iPhones, iPads, and Macs all have a Neural Engine NPU and run on-device AI for features like Siri and Apple Intelligence. But Apple does not have a small language model as deeply integrated with system settings as Mu. Siri and Apple Intelligence can change some settings, but not with the same range or flexibility. ... By processing data directly on the device, Mu keeps personal information private and responds instantly. This shift also makes it easier to comply with privacy laws in places like Europe and the US since no data leaves your computer.
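To make the described layout easier to picture, here is an illustrative configuration object mirroring the article's numbers. The class, field names, and defaults are assumptions for explanation only; Microsoft has not published Mu as a library.

```python
# Hypothetical config capturing the Mu design points described above.
from dataclasses import dataclass

@dataclass
class EncoderDecoderConfig:
    encoder_layers: int = 32              # encoder compresses the user's words
    decoder_layers: int = 12              # decoder emits the settings command/answer
    positional_encoding: str = "rotary"   # keeps track of word order
    attention: str = "grouped_query"      # shares key/value heads to save memory
    dual_layer_norm: bool = True          # stabilizes training in a small model
    target_hardware: str = "NPU"          # sized to on-device memory and speed limits

mu_like = EncoderDecoderConfig()
print(mu_like)
```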


Is It a Good Time to Be a Software Engineer?

AI may be rewriting the rules of software development, but it hasn’t erased the thrill of being a programmer. If anything, the machines have revitalised the joy of coding. New tools make it possible to code in natural language, ship prototypes in hours, and bypass tedious setup work. From solo developers to students, the process may feel more immediate or rewarding. Yet, this sense of optimism exists alongside an undercurrent of anxiety. As large language models (LLMs) begin to automate vast swathes of development, some have begun to wonder if software engineering is still a career worth betting on. ... Meanwhile, Logan Thorneloe, a software engineer at Google, sees this as a golden era for developers. “Right now is the absolute best time to be a software engineer,” he wrote on LinkedIn. He points to “development velocity” as the reason. Thorneloe believes AI is accelerating workflows, shrinking prototype cycles from months to days, and giving developers unprecedented speed. Companies that adapt to this shift will win, not by eliminating engineers, but by empowering them. More than speed, there’s also a rediscovered sense of fun. Programmers who once wrestled with broken documentation and endless boilerplate are rediscovering the creative satisfaction that first drew them to the field.


Dumping mainframes for cloud can be a costly mistake

Despite industry hype, mainframes are not going anywhere. They quietly support the backbone of our largest banks, governments, and insurance companies. Their reliability, security, and capacity for massive transactions give mainframes an advantage that most public cloud platforms simply can’t match for certain workloads. ... At the core of this conversation is culture. An innovative IT organization doesn’t pursue technology for its own sake. Instead, it encourages teams to be open-minded, pragmatic, and collaborative. Mainframe engineers have a seat at the architecture table alongside cloud architects, data scientists, and developers. When there’s mutual respect, great ideas flourish. When legacy teams are sidelined, valuable institutional knowledge and operational stability are jeopardized. A cloud-first mantra must be replaced by a philosophy of “we choose the right tool for the job.” The financial institution in our opening story learned this the hard way. They had to overcome their bias and reconnect with their mainframe experts to avoid further costly missteps. It’s time to retire the “legacy versus modern” conflict and recognize that any technology’s true value lies in how effectively it serves business goals. Mainframes are part of a hybrid future, evolving alongside the cloud rather than being replaced by it. 


Why Modern Data Archiving Is Key to a Scalable Data Strategy

Organizations are quickly learning they can’t simply throw all data, new and old, at an AI strategy; instead, it needs to be accurate, accessible, and, of course, cost-effective. Without these requirements in place, it’s far from certain AI-powered tools can deliver the kind of insight and reliability businesses need. As part of the various data management processes involved, archiving has taken on a new level of importance. ... For organizations that need to migrate data, for example, archiving is used to identify which datasets are essential, while enabling users to offload inactive data in the most cost-effective way. This kind of win-win can also be applied to cloud resources, where moving data to the most appropriate service can potentially deliver significant savings. Again, this contrasts with tiering systems and NAS gateways, which rely on global file systems to provide cloud-based access to local files. The challenge here is that access is dependent on the gateway remaining available throughout the data lifecycle because, without it, data recall can be interrupted or cease entirely. ... It then becomes practical to strike a much better balance across the typical enterprise storage technology stack, including long-term data preservation and compliance, where data doesn’t need to be accessed so often, but where reliability and security are crucial.


The Impact of Regular Training and Timely Security Policy Changes on Dev Teams

Constructive refresher training drives continuous improvement by reinforcing existing knowledge while introducing new concepts like AI-powered code generation, automated debugging and cross-browser testing in manageable increments. Teams that implement consistent training programs see significant productivity benefits as developers spend less time struggling with unfamiliar tools and more time automating tasks to focus on delivering higher value. ... Security policies that remain static as teams grow create dangerous blind spots, compromising both the team’s performance and the organization’s security posture. Outdated policies fail to address emerging threats like malware infections and often become irrelevant to the team’s current workflow, leading to workarounds and system vulnerabilities. ... Proactive security integration into development workflows represents a fundamental shift from reactive security measures to preventative strategies. This approach enables growing teams to identify and address security concerns early in the development process, reducing the cost and complexity of remediation. Cultivating a security-first culture becomes increasingly important as teams grow. This involves embedding security considerations into various stages of the development life cycle. Early risk identification in cloud infrastructure reduces costly breaches and improves overall team productivity.

Daily Tech Digest - May 29, 2025


Quote for the day:

"All progress takes place outside the comfort zone." -- Michael John Bobak


What Are Deepfakes? Everything to Know About These AI Image and Video Forgeries

Deepfakes rely on deep learning, a branch of AI that mimics how humans recognize patterns. These AI models analyze thousands of images and videos of a person, learning their facial expressions, movements and voice patterns. Then, using generative adversarial networks, AI creates a realistic simulation of that person in new content. GANs are made up of two neural networks where one creates content (the generator), and the other tries to spot if it's fake (the discriminator). The number of images or frames needed to create a convincing deepfake depends on the quality and length of the final output. For a single deepfake image, as few as five to 10 clear photos of the person's face may be enough. ... While tech-savvy people might be more vigilant about spotting deepfakes, regular folks need to be more cautious. I asked John Sohrawardi, a computing and information sciences Ph.D. student leading the DeFake Project, about common ways to recognize a deepfake. He advised people to look at the mouth to see if the teeth are garbled. "Is the video more blurry around the mouth? Does it feel like they're talking about something very exciting but act monotonous? That's one of the giveaways of more lazy deepfakes." ... "Too often, the focus is on how to protect yourself, but we need to shift the conversation to the responsibility of those who create and distribute harmful content," Dorota Mani tells CNET
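To ground the generator/discriminator description, here is a minimal, illustrative GAN training loop on toy 2-D data. It is not a deepfake pipeline (real systems train convolutional networks on thousands of face images); it only shows the adversarial dynamic in miniature.

```python
# Minimal GAN sketch: a generator learns to produce samples resembling "real" data
# while a discriminator learns to tell real from generated. Toy data, PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim = 8, 2

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for "thousands of images of a person": a shifted Gaussian blob.
    return torch.randn(n, data_dim) + torch.tensor([2.0, -1.0])

for step in range(2000):
    # Train the discriminator: real samples -> 1, generated samples -> 0.
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: produce samples the discriminator scores as real.
    fake = G(torch.randn(64, latent_dim))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```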


Beyond GenAI: Why Agentic AI Was the Real Conversation at RSA 2025

Contrary to GenAI, which primarily focuses on the divergence of information, generating new content based on specific instructions, SynthAI developments emphasize the convergence of information, presenting less but more pertinent content by synthesizing available data. SynthAI will enhance the quality and speed of decision-making, potentially making decisions autonomously. The most evident application lies in summarizing large volumes of information that humans would be unable to thoroughly examine and comprehend independently. SynthAI’s true value will be in aiding humans to make more informed decisions efficiently. ... Trust in AI also needs to evolve. This isn’t a surprise as AI, like all technologies, is going through the hype cycle, and in the same way that cloud and automation suffered with issues around trust in the early stages of maturity, so AI is following a very similar pattern. It will be some time before trust and confidence are in balance with AI. ... Agentic AI encompasses tools that can understand objectives, make decisions, and act. These tools streamline processes, automate tasks, and provide intelligent insights to aid in quick decision making. In a use case involving repetitive processes (take a call center as an example), agentic AI can have significant value.


The Privacy Challenges of Emerging Personalized AI Services

The nature of the search business will change substantially in this world of personalized AI services. It will evolve from a service for end users to an input into an AI service for end users. In particular, search will become a component of chatbots and AI agents, rather than the stand-alone service it is today. This merger has already happened to some degree. OpenAI has offered a search service as part of its ChatGPT deployment since last October. Google launched AI Overview in May of last year. AI Overview returns a summary of its search results generated by Google’s Gemini AI model at the top of its search results. When a user asks a question to ChatGPT, the chatbot will sometimes search the internet and provide a summary of its search results in its answer. ... The best way forward would not be to invent a sector-specific privacy regime for AI services, although this could be made to work in the same way that the US has chosen to put financial, educational, and health information under the control of dedicated industry privacy regulators. It might be a good approach if policymakers were also willing to establish a digital regulator for advanced AI chatbots and AI agents, which will be at the heart of an emerging AI services industry. But that prospect seems remote in today’s political climate, which seems to prioritize untrammeled innovation over protective regulation.


What CISOs can learn from the frontlines of fintech cybersecurity

For Shetty, the idea that innovation competes with security is a false choice. “They go hand in hand,” she says. User trust is central to her approach. “That’s the most valuable currency,” she explains. Lose it, and it’s hard to get back. That’s why transparency, privacy, and security are built into every step of her team’s work, not added at the end. ... Supply chain attacks remain one of her biggest concerns. Many organizations still assume they’re too small to be a target. That’s a dangerous mindset. Shetty points to many recent examples where attackers reached big companies by going through smaller suppliers. “It’s not enough to monitor your vendors. You also have to hold them accountable,” she says. Her team helps clients assess vendor cyber hygiene and risk scores, and encourages them to consider that when choosing suppliers. “It’s about making smart choices early, not reacting after the fact.” Vendor security needs to be an active process. Static questionnaires and one-off audits are not enough. “You need continuous monitoring. Your supply chain isn’t standing still, and neither are attackers.” ... The speed of change is what worries her most. Threats evolve quickly. The amount of data to protect grows every day. At the same time, regulators and customers expect high standards, and they should.


Tech optimism collides with public skepticism over FRT, AI in policing

Despite the growing alarm, some tech executives like OpenAI’s Sam Altman have recently reversed course, downplaying the need for regulation after previously warning of AI’s risks. This inconsistency, coupled with massive federal contracts and opaque deployment practices, erodes public trust in both corporate actors and government regulators. What’s striking is how bipartisan the concern has become. According to the Pew survey, only 17 percent of Americans believe AI will have a positive impact on the U.S. over the next two decades, while 51 percent express more concern than excitement about its expanding role. These numbers represent a significant shift from earlier years and a rare area of consensus between liberal and conservative constituencies. ... Bias in law enforcement AI systems is not simply a product of technical error; it reflects systemic underrepresentation and skewed priorities in AI design. According to the Pew survey, only 44 percent of AI experts believe women’s perspectives are adequately accounted for in AI development. The numbers drop even further for racial and ethnic minorities. Just 27 percent and 25 percent say the perspectives of Black and Hispanic communities, respectively, are well represented in AI systems.


6 rising malware trends every security pro should know

Infostealers steal browser cookies, VPN credentials, MFA (multi-factor authentication) tokens, crypto wallet data, and more. Cybercriminals sell the data that infostealers grab through dark web markets, giving attackers easy access to corporate systems. “This shift commoditizes initial access, enabling nation-state goals through simple transactions rather than complex attacks,” says Ben McCarthy, lead cyber security engineer at Immersive. ... Threat actors are systematically compromising the software supply chain by embedding malicious code within legitimate development tools, libraries, and frameworks that organizations use to build applications. “These supply chain attacks exploit the trust between developers and package repositories,” Immersive’s McCarthy tells CSO. “Malicious packages often mimic legitimate ones while running harmful code, evading standard code reviews.” ... “There’s been a notable uptick in the use of cloud-based services and remote management platforms as part of ransomware toolchains,” says Jamie Moles, senior technical marketing manager at network detection and response provider ExtraHop. “This aligns with a broader trend: Rather than relying solely on traditional malware payloads, adversaries are increasingly shifting toward abusing trusted platforms and ‘living-off-the-land’ techniques.”


How Constructive Criticism Can Improve IT Team Performance

Constructive criticism can be an excellent instrument for growth, both individually and on the team level, says Edward Tian, CEO of AI detection service provider GPTZero. "Many times, and with IT teams in particular, work is very independent," he observes in an email interview. "IT workers may not frequently collaborate with one another or get input on what they're doing," Tian states. ... When using constructive criticism, take an approach that focuses on seeking improvement with the poor result, Chowning advises. Meanwhile, use empathy to solicit ideas on how to improve on a poor result. She adds that it's important to ask questions, listen, seek to understand, acknowledge any difficulties or constraints, and solicit improvement ideas. ... With any IT team there are two key aspects of constructive criticism: creating the expectation and opportunity for performance improvement, and -- often overlooked -- instilling recognition in the team that performance is monitored and has implications, Chowning says. ... The biggest mistake IT leaders make is treating feedback as a one-way directive rather than a dynamic conversation, Avelange observes. "Too many IT leaders still operate in a command-and-control mindset, dictating what needs to change rather than co-creating solutions with their teams."


How AI will transform your Windows web browser

Google isn’t the only one sticking AI everywhere imaginable, of course. Microsoft Edge already has plenty of AI integration — including a Copilot icon on the toolbar. Click that, and you’ll get a Copilot sidebar where you can talk about the current web page. But the integration runs deeper than most people think, with more coming yet: Copilot in Edge now has access to Copilot Vision, which means you can share your current web view with the AI model and chat about what you see with your voice. This is already here — today. Following Microsoft’s Build 2025 developers’ conference, the company is starting to test a Copilot box right on Edge’s New Tab page. Rather than a traditional Bing search box in that area, you’ll soon see a Copilot prompt box so you can ask a question or perform a search with Copilot — not Bing. It looks like Microsoft is calling this “Copilot Mode” for Edge. And it’s not just a transformed New Tab page complete with suggested prompts and a Copilot box, either: Microsoft is also experimenting with “Context Clues,” which will let Copilot take into account your browser history and preferences when answering questions. It’s worth noting that Copilot Mode is an optional and experimental feature. ... Even the less AI-obsessed browsers of Mozilla Firefox and Brave are now quietly embracing AI in an interesting way.


No, MCP Hasn’t Killed RAG — in Fact, They’re Complementary

Just as agentic systems are all the rage this year, so is MCP. But MCP is sometimes talked about as if it’s a replacement for RAG. So let’s review the definitions. In his “Is RAG dead yet?” post, Kiela defined RAG as follows: “In simple terms, RAG extends a language model’s knowledge base by retrieving relevant information from data sources that a language model was not trained on and injecting it into the model’s context.” As for MCP (and the middle letter stands for “context”), according to Anthropic’s documentation, it “provides a standardized way to connect AI models to different data sources and tools.” That’s the same definition, isn’t it? Not according to Kiela. In his post, he argued that MCP complements RAG and other AI tools: “MCP simplifies agent integrations with RAG systems (and other tools).” In our conversation, Kiela added further (ahem) context. He explained that MCP is a communication protocol — akin to REST or SOAP for APIs — based on JSON-RPC. It enables different components, like a retriever and a generator, to speak the same language. MCP doesn’t perform retrieval itself, he noted, it’s just the channel through which components interact. “So I would say that if you have a vector database and then you make that available through MCP, and then you let the language model use it through MCP, that is RAG,” he continued.
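A rough sketch of the "MCP is just the channel" point: a JSON-RPC 2.0 message asking an MCP server to run a retriever tool. The method and field names follow the published MCP conventions as generally described ("tools/call" with a tool name and arguments), but treat the exact shape, and the `vector_search` tool itself, as assumptions to check against the spec.

```python
# Sketch of an MCP-style JSON-RPC request that would invoke a retriever tool.
import json

call_retriever = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "vector_search",  # hypothetical retriever tool exposed by the server
        "arguments": {"query": "Is RAG dead yet?", "top_k": 5},
    },
}

print(json.dumps(call_retriever, indent=2))
# The MCP server performs the retrieval and returns matching passages; the language
# model then injects them into its context and generates an answer -- which is RAG,
# with MCP acting only as the transport between the components.
```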


AI didn’t kill Stack Overflow

Stack Overflow’s most revolutionary aspect was its reputation system. That is what elevated it above the crowd. The brilliance of the rep game allowed Stack Overflow to absorb all the other user-driven sites for developers and more or less kill them off. On Stack Overflow, users earned reputation points and badges for asking good questions and providing helpful answers. In the beginning, what was considered a good question or answer was not predetermined; it was a natural byproduct of actual programmers upvoting some exchanges and not others. ... For Stack Overflow, the new model, along with highly subjective ideas of “quality” opened the gates to a kind of Stanford Prison Experiment. Rather than encouraging a wide range of interactions and behaviors, moderators earned reputation by culling interactions they deemed irrelevant. Suddenly, Stack Overflow wasn’t a place to go and feel like you were part of a long-lived developer culture. Instead, it became an arena where you had to prove yourself over and over again. ... Whether the culture of helping each other will survive in this new age of LLMs is a real question. Is human helping still necessary? Or can it all be reduced to inputs and outputs? Maybe there’s a new role for humans in generating accurate data that feeds the LLMs. Maybe we’ll evolve into gardeners of these vast new tracts of synthetic data.

Daily Tech Digest - February 27, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



Breach Notification Service Tackles Infostealing Malware

Infostealers can amass massive quantities of credentials. To handle this glut, many cybercriminals create parsers to quickly ingest usernames and passwords for analysis, said Milivoj Rajić, head of threat intelligence at cybersecurity firm DynaRisk. The leaked internal communications of ransomware group Black Basta demonstrated this tactic, he said. Using a shared spreadsheet, the group identified organizations with emails present in infostealer logs, tested which access credentials worked, and checked each organization's annual revenue and whether its networks were protected by MFA. Using this information helped the ransomware group prioritize its targeting. Another measure of just how much data gets collected by infostealers: the Alien Txtbase records include 244 million passwords not already recorded as breached by Pwned Passwords. Hunt launched that free service, which anyone can query anonymously to help ensure they never pick a password that's appeared in a known data breach, in 2017, shortly after the U.S. National Institute of Standards and Technology began recommending that practice. Not all of the information contained in stealer logs being sold by criminals is necessarily legitimate. Some of it might be recycled from previous leaks or data dumps. Even so, Hunt said he was able to verify a random sample of the Alien Txtbase corpus with a "handful" of HIBP users he approached.
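For readers who want to build the same check into a signup or password-change flow, here is a minimal sketch of querying the Pwned Passwords range API using its k-anonymity model, so only a five-character hash prefix ever leaves the client. The endpoint is the one HIBP documents publicly; treat the details as something to verify against the current API documentation rather than as a definitive client.

```python
# Minimal sketch: check a password against Have I Been Pwned's Pwned Passwords
# range API using k-anonymity -- only the first 5 characters of the SHA-1 hash
# are sent, never the password itself. Verify current API details at
# https://haveibeenpwned.com/API/v3 before relying on this.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora (0 if none)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-password-check-sketch"},  # identify the client
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Response is one "SUFFIX:COUNT" pair per line for all hashes sharing the prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a long-breached example; expect a large count
```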


The critical role of strategic workforce planning in the age of AI

While some companies have successfully deployed strategic workforce planning (SWP) in the past to reshape their workforces to meet future market requirements, there are also cautionary tales of organizations that have struggled with the transition to new technologies. For instance, the rapid innovation of smartphones left leading players such as Nokia behind. Periods of rapid technological change highlight the importance of predicting and responding to challenges with a dynamic talent planning model. Gen AI is not just another technological advancement affecting specific tasks; it represents a rewiring of how organizations operate and generate value. This transformation goes beyond automation, innovation, and productivity improvements to fundamentally alter the ratio of humans to technology in organizations. By having SWP in place, organizations can react more quickly and intentionally to these changes, monitoring leading and lagging indicators to stay ahead of the curve. This approach allows for identifying and developing new capabilities, ensuring that the workforce is prepared for the evolving demands these changes will bring. SWP gives a fact base to all talent decisions so that trade-offs can be explicitly discussed and strategic decisions can be made holistically—and with enterprise value top of mind.


Cybersecurity in fintech: Protecting user data and preventing fraud

Fintech companies operate at the intersection of finance and technology, making them particularly vulnerable to cyber threats. These platforms process vast amounts of personal and financial data—from bank account details and credit card numbers to loan records and transaction histories. A single security breach can have devastating consequences, leading to financial losses, regulatory penalties, and reputational damage. Beyond individual risks, fintech platforms are interconnected within a larger financial ecosystem. A vulnerability in one system can cascade across multiple institutions, disrupting transactions, exposing sensitive data, and eroding trust. Given this landscape, cybersecurity in fintech is not just about preventing attacks—it’s about ensuring the integrity of the entire digital financial infrastructure. ... Governments and regulatory bodies worldwide recognise the critical role of cybersecurity in fintech. Frameworks like the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the U.S. set stringent standards for data privacy and security. Compliance is not just a legal necessity—it’s an opportunity for fintech companies to build trust with users. By adhering to global security best practices, fintech firms can differentiate themselves in an increasingly competitive market while ensuring customer data remains protected.


The Smart Entrepreneur's Guide to Thriving in Uncertain Times

If there's one certainty in business, it's change. The most successful entrepreneurs aren't just those who have great ideas — they are the ones who know how to adapt. Whether it's economic downturns, shifts in consumer behavior or emerging competition, the ability to navigate uncertainty is what separates sustainable businesses from those that struggle to survive. ... Instead of long-term strategies that assume stability, use quick experiments to validate new ideas and adjust quickly. When we launched new membership models at our office, we tested different pricing structures and adjusted based on user feedback within weeks rather than months. ... Digital engagement is changing. Entrepreneurs who optimize their messaging based on social media trends and consumer preferences gain a competitive edge. For example, when we noticed an increase in demand for remote work solutions, we adjusted our marketing efforts to highlight our virtual office plans. ... A strong company culture that embraces change enables faster adaptation during challenging times. Jim Collins, in Good to Great, emphasizes that having the right people in the right seats is fundamental for long-term success. At Coworking Smart, we focused on hiring individuals who thrived in dynamic environments rather than just filling positions based on traditional job descriptions.


Risk Management for the IT Supply Chain

Who are your mission-critical vendors? Do they present significant risks (for example, risk of a merger, or going out of business)? Where are your IT supply chain “weak links” (such as vendors whose products and services repeatedly fail)? Are they impairing your ability to provide top-grade IT to the business? What countries do you operate in? Are there technology and support issues that could emerge in those locations? Do you send vendors annual questionnaires to ascertain that they remain strong, reliable and trustworthy suppliers? Do you request your auditors periodically review IT supply chain vendors for resiliency, compliance and security? ... Most enterprises include security and compliance checkpoints on their initial dealings with vendors, but few check back with the vendors on a regular basis after the contracts are signed. Security and governance guidelines change from year to year. Have your IT vendors kept up? When was the last time you requested their latest security and governance audit reports? Verifying that vendors stay in step with your company’s security and governance requirements should be done annually. ... Although companies include their production supply chains in their corporate risk management plans, they don’t consistently consider the IT supply chain and its risks.


IT infrastructure: Inventory before AIOps

Even if the advantages are clear, the right internal story is also needed to get an AIOps rollout off the ground. Benedikt Ernst from the IBM spin-off Kyndryl sees a certain “shock potential,” especially in the financial dimension, which is ideally anticipated in advance: “The argumentation of costs is crucial because the introduction of AIOps is, of course, an investment in the first instance. Organizations need to ask themselves: How quickly is a problem detected and resolved today? And how does an accelerated resolution affect operating costs and downtime?” In addition, there is another aspect that he believes is too often overlooked: “Ultimately, the introduction of AIOps also reveals potential on the employee side. The fewer manual interventions in the infrastructure are necessary, the more employees can focus on things that really require their attention. For this reason, I see the use of open integration platforms as helpful in making automation and AIOps usable across different platforms.” Storm Reply’s Henckel even sees AIOps as a tool for greater harmony: “The introduction of AIOps also means an end to finger-pointing between departments. With all the different sources of error — database, server, operating system — it used to be difficult to pinpoint the cause of the error. AIOps provides detailed analysis across all areas and brings more harmony to infrastructure evaluation.”


Navigating Supply Chain Risk in AI Chips

The fragmented nature of semiconductor production poses significant challenges for supplier risk management. Beyond the risk posed by delays in delivery or production, which can disrupt operations, such a globalized and complex supply chain poses challenges from a regulatory angle. Chipmakers must take full responsibility for ensuring compliance at every level by thoroughly monitoring and vetting every entity in the supply chain for risks such as forced labor, sanctions violations, bribery, and corruption. ... Many companies are diversifying their supplier base, increasing local procurement efforts, and using predictive modeling to better anticipate demand, addressing the risk of disruption triggered by delays in delivery or operations. By leveraging advanced data analytics and securing multiple supply routes, businesses can increase resilience to external shocks and mitigate the risk of supply chain delays. Additionally, firms can incorporate a “value at risk” model into supply chain and operational risk management frameworks. This approach quantifies the financial impact of potential supply chain disruptions, helping chipmakers prioritize the most critical risk areas. ... The AI chip supply chain is a cornerstone of modern innovation, but due to its global and interdependent nature, it is inherently complex.
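As a rough illustration of the “value at risk” idea, the sketch below runs a Monte Carlo simulation over a handful of hypothetical suppliers, each with an assumed disruption probability and loss range, and reports the 95th-percentile annual loss. The suppliers, probabilities, and dollar figures are invented for illustration; a real model would be calibrated to actual exposure data.

```python
# Illustrative sketch (hypothetical numbers): a Monte Carlo "value at risk" style
# estimate of annual supply chain disruption losses, so the riskiest dependencies
# can be prioritized. Not a production risk model.
import random

# Hypothetical suppliers: (name, annual disruption probability, loss range in $M)
SUPPLIERS = [
    ("wafer_fab",  0.05, (50, 400)),
    ("substrates", 0.15, (10, 80)),
    ("packaging",  0.10, (5, 60)),
]

def simulate_annual_loss(rng: random.Random) -> float:
    """One simulated year: each supplier may or may not suffer a disruption."""
    loss = 0.0
    for _, prob, (lo, hi) in SUPPLIERS:
        if rng.random() < prob:
            loss += rng.uniform(lo, hi)
    return loss

def value_at_risk(confidence: float = 0.95, trials: int = 100_000, seed: int = 42) -> float:
    """Loss threshold exceeded in only (1 - confidence) of simulated years."""
    rng = random.Random(seed)
    losses = sorted(simulate_annual_loss(rng) for _ in range(trials))
    return losses[int(confidence * trials)]

if __name__ == "__main__":
    print(f"95% annual VaR: ${value_at_risk():.1f}M")
```

Re-running the simulation with a single supplier's probability reduced (or the supplier removed) shows how much of the tail risk that one dependency contributes, which is the prioritization signal the article describes.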


Charting the AI-fuelled evolution of embedded analytics

The idea behind embedded analytics is to remove a great deal of the friction around data insights. In theory, line-of-business users have long been able to view relevant insights by importing data into the self-service business intelligence (SSBI) tool of their choice. In practice, this disrupts their workflow and interrupts their chain of thought, so a lot of people choose not to make that switch. They’re even less likely to do so if they have to manually export and migrate the data to a different tool. That means they’re missing out on data insights, just when they could be the most valuable for their decisions. Embedded analytics delivers all the charts and insights alongside whatever the user is working on at the time – be it an accounting app, a CRM, a social media management platform or whatever else – which is far more useful. “It’s a lot more intuitive, a lot more functional if it’s in the same place,” says Perez. “Also, generally speaking, the people who use these types of business apps are non-technical, and so the more complicated you make it for them to get to the analysis, the less of it they’ll do.” ... So far, so impressive. But Perez emphasises that there are a number of barriers to embedded analytics utopia. Businesses need to bear these in mind as they seek to develop their own solutions or find providers who can deliver them.


Open source software vulnerabilities found in 86% of codebases

jQuery, a JavaScript library, was the most frequent source of vulnerabilities, as eight of the top 10 high-risk vulnerabilities were found there. Among scanned applications, 43% contained some version of jQuery — oftentimes, an outdated version. An XSS vulnerability affecting outdated versions of jQuery, called CVE-2020-11023, was the most frequently found high-risk vulnerability. McGuire remarks, “There’s also an interesting shift towards web-based and multi-tenant (SaaS) applications, meaning more high-severity vulnerabilities (81% of audited codebases). We also observed an overwhelming majority of high severity vulnerabilities belonging to jQuery.” ... McGuire explains, “Embedded software providers are going to be increasingly focused on the quality, safety and reliability of the software they build. Looking at this year’s data, 79% of the codebases were using components whose latest versions had no development activity in the last two years. This means that these dependencies could become less reliable, so industries, like aerospace and medical devices should look to identify these in their own codebases and start moving on from them.” ... “Enterprise regulated organizations are being forced to align with numerous requirements, including providing SBOMs with their applications. If an SBOM isn’t accurate, it’s useless,” McGuire states.
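The SBOM point lends itself to a quick example. The sketch below scans a simplified, CycloneDX-style component list for jQuery versions older than 3.5.0, the release that addressed CVE-2020-11023. The SBOM structure is pared down and the sample components are hypothetical, so treat it as an illustration of the check rather than a full SBOM parser.

```python
# Minimal sketch: flag jQuery entries in a simplified CycloneDX-style SBOM that
# predate the 3.5.0 release addressing CVE-2020-11023. Sample data is hypothetical.
import json

FIXED_VERSION = (3, 5, 0)

def parse_version(version: str) -> tuple:
    """Turn a plain 'x.y.z' version string into a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:3])

def flag_vulnerable_jquery(sbom_json: str) -> list:
    """Return the versions of any jQuery components older than the fixed release."""
    sbom = json.loads(sbom_json)
    return [
        comp["version"]
        for comp in sbom.get("components", [])
        if comp.get("name", "").lower() == "jquery"
        and parse_version(comp["version"]) < FIXED_VERSION
    ]

if __name__ == "__main__":
    sample_sbom = json.dumps({"components": [
        {"name": "jquery", "version": "3.4.1"},   # affected
        {"name": "jquery", "version": "3.6.0"},   # patched
        {"name": "lodash", "version": "4.17.21"},
    ]})
    print(flag_vulnerable_jquery(sample_sbom))    # -> ['3.4.1']
```

As McGuire's point about accuracy implies, a check like this is only as good as the SBOM feeding it: if the component list is stale or incomplete, the scan will be too.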


A 5-step blueprint for cyber resilience

Many claim to practice developer security operations, or DevSecOps, by testing software for security flaws at every stage. At least that's the theory. In reality, developers are under constant pressure to get software into production, and DevSecOps can be an impediment to meeting deadlines. "You hear all these people saying, 'Yes, we're doing DevSecOps,' but the reality is, a lot of people aren't," says Lanowitz. "If you're really focused on being secure by design, you're going to want to do things right from the beginning, meaning you're going to want to have your network architecture correct, your software architecture correct." ... "We have to be able to speak the language of the business," says Lanowitz. "Break down the silos that exist in the organization, get the cyber team and the business team talking, [and] align cybersecurity initiatives with overarching business initiatives." Again, executive leadership needs to point the way, but it often needs convincing. Compliance is a great place to start, because most industries have rules, laws, or insurance providers that mandate a basic level of cybersecurity. ... The more eyes you have on a cybersecurity problem, the more quickly a solution can be found. Because of this, even large companies rely on external managed service providers (MSPs), managed security service providers (MSSPs), managed detection and response (MDR) providers, consultants and advisors.