Daily Tech Digest - March 08, 2025


Quote for the day:

“In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it.” -- Jane Smiley


Synthetic identity blends real and fake data to enable fraud, demanding new protections

Manufactured synthetic identities blend real identity details from different stolen identities. A real ID number might be paired with a fake name or address and linked to a deepfaked image that lines up with the hacked identity data. Manipulated synthetic identities, by contrast, are built by altering an existing identity document belonging to a real person. The widespread shift toward digital identity verification and authentication processes, as illustrated by the EUDI Wallet scheme, brings new risks: “the transition to digital identity opens up new areas of attack – precisely because AI-supported fraud scams are likely to become increasingly sophisticated in the future.” ... “The rate of development of generative AI presents a problem to not just ensuring a person is who they say they are, but also to content platforms who need to be sure that the content added by a user is genuine,” says the paper. “Given the potential risks and challenges in detecting generative AI, Yoti’s strategy emphasises early detection at the source, addressing both direct and indirect attack vectors.” While presentation attacks are a “relatively mature and well understood issue across the verification space,” well defended by presentation attack detection (PAD) and effective liveness checks, the more recently popularized injection attacks attempt to bypass liveness detection by hacking directly into a hardware device or virtual camera.


When to choose a bare-metal cloud

Bare-metal cloud services, by contrast, provide users with exclusive access to the underlying physical server hardware: no hypervisor, no virtual machines, no additional abstraction. This purity means full access to raw compute power, such as CPU, GPU, and memory resources, without virtualization’s added latency or restrictions. In essence, bare-metal clouds bridge the gap between the flexibility of cloud computing and the robust performance of dedicated on-premises servers. ... Certain applications can benefit from hardware architectures beyond the standard x86 processors, such as Arm’s or IBM’s Z mainframe architecture. Bare-metal clouds allow users to access these nonstandard architectures for testing or running workloads designed explicitly for them—another area where traditional virtual environments fall short. ... Government, finance, healthcare, and other regulated industries may need dedicated servers to meet regulatory or compliance mandates. Bare-metal clouds provide the necessary isolation while maintaining the flexibility of cloud deployment. ... Using bare-metal hardware often offers little room for provisioning beyond what’s physically available; no additional memory or hardware expansions can be made dynamically. 


Is Gen Z to Blame? Why Cybersecurity Feels Harder for IT Pros

Gen Z’s trust in social media is another cultural difference to be aware of. They’re not only listening to and watching a cohort of self-made influencers, but they’re also following their advice — some of which isn’t sound. Young adults glean a lot of information from social media sites, and this raises a few concerns for employers. Young workers have a propensity to believe what they learn from social media, making them susceptible to scams such as online fraud and get-rich-quick schemes. ... A younger workforce brings fresh pairs of eyes and new ideas to the table. They’re also looking for employers who reflect their preferences, including ones with familiar technologies. Chief information security officers (CISOs) are often dealing with legacy infrastructure and outdated solutions as a primary barrier preventing them from addressing cybersecurity obstacles — and hindering them from meeting Gen Z’s needs. Another challenge is that Gen Z newcomers have shorter work histories and may lack the critical in-office and work-from-home experience needed to recognize phishing, job recruitment, social engineering and deepfake scams. Gen Z is reporting higher rates of phishing victimization than any other generation, according to the National Cyber Security Alliance.


APM Tools and High-Availability Clusters: A Powerful Combination for Network Resiliency

APM tools are well-positioned to feed better data into the platforms enterprises use to monitor and manage IT infrastructure. APM data provides a more precise understanding of system health, enabling IT management to establish more precise parameters for making decisions with the confidence of good, timely data. High-availability clusters, implemented either in hardware (SAN-based clusters) or software (SANless clusters), support seamless failover of services to backup resources in the event of an incident. ... The combination of APM and HA makes it easier for enterprises to improve network resiliency by injecting better decision making and using automation to enable seamless failover, predictive analytics, self-healing, and other capabilities consistent with maximizing network performance, uptime, and operational resilience. When used in a multi-cloud environment, services can fail over to the organization's secondary cloud provider, a major advantage when an outage affects a cloud services provider. ... As some enterprises evolve toward autonomous IT, that same precise, APM-fed view of system health lets IT management set decision parameters with confidence.
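The decision logic described here — act on good, timely health data rather than a single failed ping — can be made concrete. Below is a minimal sketch in Python of an APM-informed failover check; the endpoint, thresholds, and promotion step are hypothetical placeholders, not any particular vendor's API.

```python
# A sketch of APM-informed failover logic for an HA pair.
# The health endpoint, thresholds, and promotion hook are hypothetical.
import time
import urllib.request

PRIMARY_HEALTH = "http://primary.internal:8080/health"  # hypothetical endpoint
LATENCY_BUDGET_S = 0.5   # APM-style signal: a slow "healthy" counts as degraded
MAX_FAILURES = 3         # decide on a trend, not a single blip

def healthy(url: str) -> bool:
    try:
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=2) as resp:
            ok = resp.status == 200
        return ok and (time.monotonic() - start) < LATENCY_BUDGET_S
    except OSError:
        return False

failures = 0
while True:
    failures = 0 if healthy(PRIMARY_HEALTH) else failures + 1
    if failures >= MAX_FAILURES:
        print("promoting standby")  # in practice: trigger the cluster failover
        break
    time.sleep(5)
```

Real HA clusters wrap this kind of check in quorum and fencing logic so that a network partition cannot produce two active primaries.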


Why Enterprise Architecture Is Having A Moment

One can think of enterprise architecture as the description and design of the complex web of technologies that supports a particular set of business capabilities. I say “description” because most companies don’t initially have an enterprise architect. Instead, they let their technology landscape grow organically. ... Think about everything an organization would need to do to move from its current state to one that reflected “modern standards.” Describing and inventorying the current state would naturally be part of that. But more important would be defining those standards. In today’s world, such standards would include prioritizing cloud technology, adopting a service-oriented architecture for software built in-house, working with open APIs, and so forth. Enterprise architects are in the business of both defining the technology standards for the business and governing the adoption of new and emerging technology in conformity with those standards. ... But the definition of standards does not take place in a vacuum. Instead, this work is guided by the strategic aims of the organization. These aims, in turn, can be viewed through the lens of business capabilities. Specifically, the business must determine what capabilities it will need to realize its strategy in the future.


Bridging Europe's Cybersecurity Divide Through Political Will

The debate over cybersecurity regulation has been contentious in recent years, with strong positions on all sides. Europe has introduced multiple pieces of regulation, which has led to growing complaints about overlapping requirements and duplications. Which regulations apply to my company, among all existing ones? Which frameworks should I use to improve security and then demonstrate compliance? Which authorities should I report incidents to? Is there a standardized approach to managing and monitoring third parties? ... There is a broad consensus that cybersecurity regulatory requirements should be improved in Europe and beyond. We need to build an effective and efficient legislative framework for both functional and political reasons. On one hand, resources are limited and have to be allocated efficiently to meaningful security measures. On the other hand, frustration with redundant or unclear requirements risks undermining the progress achieved so far, empowering those who oppose regulation entirely. ... While these operations require time and resources, the main obstacle is not technology. The real challenge lies in negotiating and agreeing on what an efficient system looks like in terms of governance and minimum standards to follow.


What is risk management? Quantifying and mitigating uncertainty

Risk management is the process of identifying, analyzing, and mitigating uncertainties and threats that can harm your company or organization. No business venture or organizational action can completely avoid risk, of course, and working too hard to do so would mean forgoing potentially lucrative opportunities and strategies. ... IT leaders in particular must be able to integrate risk management philosophies and techniques into their planning, as IT infrastructure and spending can represent, within the company, an intense combination of risk (of cyberattacks, downtime, or botched rollouts, for instance) and benefit, realized as increased capabilities or efficiencies. Some companies, particularly those in heavily regulated industries, such as banks and hospitals, centralize risk in a single department under a top-level chief risk officer (CRO) or similar executive role. A CRO might find themselves with responsibilities that overlap or conflict with those of CSOs, CISOs, and CIOs, and in some orgs without a clearly defined risk leader, ambitious infosec execs might try to take on that role for themselves. In any case, IT leaders need to understand and apply risk management in the areas under their purview.


Why Using Multiple AIs Is Trending Now

“Companies are building sophisticated AI stacks that treat general-purpose LLMs as foundational utilities while deploying specialized AI copilots and agents for coding, design, analytics, and industry-specific tasks. This fragmentation exposes the hubris of incumbent AI companies marketing themselves as complete solutions,” Moy adds. ... “Multimodality may sound like a remedy for generative AI’s shortcomings in multifaceted processes, but this, too, is more effective in the context of purpose-specific models,” says Maxime Vermeir. “Multimodality doesn’t imply an AI multitool that can excel in any area, but rather an AI model that can draw insights from various forms of ‘rich’ data beyond just text, such as images or audio. Still, this can be narrowed for businesses’ benefit, such as accurately recognizing images included in specific document types to further increase the autonomy of a purpose-built AI tool. While having multiple generative AI tools may sound more cumbersome than a single catch-all solution, the difference in ROI is undeniable,” Vermeir adds. ... “Using the different language models in the same tool has multiple reasons, the main ones being that every model has its strengths and weaknesses and therefore different types of queries to ChatGPT may be handled better or worse depending on the model. ... ” Feinberg adds.


8 obstacles women still face when seeking a leadership role in IT

When women are subjected to undermining stereotypes, have few female role models, are spoken over, or are treated as if their contribution isn’t welcome, imposter syndrome is difficult to avoid. “When a woman looks at a job, she’s only going to apply if she meets 90% of the criteria,” agrees Debby Briggs, CISO at NetScout. ... Being seen as an outsider also costs women opportunities, since leaders tend to promote people they know. All the women I spoke to told me they survive this by building their own network. ... “A mentor can provide guidance, and a sponsor is someone who actively opens doors for you.” “This is a must-have,” says Briggs, who adds that she collects mentors. Anytime she finds someone she admires or who has a skill she lacks, she reaches out. “Your mentors don’t have to be women,” she says. ... Women say they feel invisible. “If I am at a tech event standing next to a man and another man walks up to us, more than 50% of the time he will address the man,” says Briggs. This invisibility happens in small interactions and large ones. The websites of tech companies are often filled with the faces of white men. The speakers at tech events are all male. How do you scale this obstacle? “If someone invites me to an event, I look at who is on the panel. If it’s all white men, I tell them they don’t have a diverse enough perspective and choose not to go,” says Briggs.


How To Handle "Urgent Request" in Scrum

The first step the Product Owner needs to take is to assess whether the request aligns with the current Sprint Goal. However, based on my experience, most 'urgent requests' are unrelated to the Sprint Goal. They often come from individuals who are detached from the Scrum team's way of working. In many cases, those people are not even aware of what a 'Sprint Goal' is. If the request does not align with the Sprint Goal, I use a tool called the Financial Impact vs. Reputation Impact Matrix. As a Product Owner, I want the impact or potential damage to the company to be visualized in two dimensions so that I do not make decisions based on a single factor. The main purpose of this tool is to quantify the urgency of those "urgent requests." As a Product Owner, we do not want our team to work based on opinions or, even worse, political power; we want them to work based on facts or data. Many Scrum teams order their Product Backlog based on value, and they use potential revenue as the value attribute. Unlike potential revenue, which is expressed in positive terms, financial impact and reputation impact are negative. If the impact is not negative, as a Product Owner, I would not consider the request as urgent. Instead, it can wait and be stored in the Product Backlog for further discussion. 
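As a rough illustration of how such a matrix can turn "urgent" from an opinion into a number, here is a minimal sketch in Python; the 0-5 scales and thresholds are hypothetical choices, not part of the author's tool.

```python
# A sketch of the Financial Impact vs. Reputation Impact Matrix.
# Scales (0-5 per axis) and thresholds are hypothetical.
def urgency(financial_impact: int, reputation_impact: int) -> str:
    """Each input estimates potential damage on a 0-5 scale (0 = none)."""
    if financial_impact == 0 and reputation_impact == 0:
        return "backlog"  # no negative impact: not urgent, discuss later
    score = (financial_impact * reputation_impact
             + financial_impact + reputation_impact)
    if score >= 20:
        return "interrupt the Sprint"
    if score >= 8:
        return "next Sprint"
    return "backlog"

print(urgency(4, 5))  # -> 'interrupt the Sprint'
print(urgency(3, 2))  # -> 'next Sprint'
print(urgency(2, 1))  # -> 'backlog'
```

The point is not the exact formula but that both dimensions are scored explicitly, so the decision can be defended with data rather than political weight.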

Daily Tech Digest - March 07, 2025


Quote for the day:

"The actions of a responsible executive are contagious." -- Joe D. Batton


Operational excellence with AI: How companies are boosting success with process intelligence everyone can access

The right tooling can make a company’s processes visible and accessible to more than just its process experts. With strategic stakeholders and lines of business users involved, the very people who best know the business can contribute to innovation, design new processes and cut out endless wasted hours briefing process experts. AI, essentially, lowers the barrier to entry so everyone can come into the conversation, from process experts to line-of-business users. This speeds up time-to-value in transformation. ... Rather than simply ‘survive,’ companies can use AI to build true resilience — or antifragility — in which they learn from system failures or cybersecurity breaches and operationalize that knowledge. By putting AI into the loop on process breaks and testing potential scenarios via a digital twin of the organization, non-process experts and stakeholders are empowered to mitigate risk before escalations. ... Non-process experts must be able to make data-driven decisions faster with AI-powered insights that recommend best practices and design principles for dashboards. Any queries that arise should be answered by means of automatically generated visualizations that can be integrated directly into apps — saving time and effort.


Why Security Leaders Are Opting for Consulting Gigs

CISOs are asked to balance business objectives alongside product and infrastructure security, ransomware defense, supply chain security, AI governance, and compliance with increasingly complex regulations like the SEC's cyber-incident disclosure rules. Increased pressure for transparency puts CISOs in a tough situation when they must choose between disclosing an incident that could have adverse effects on the business and not disclosing it, risking personal financial ruin. ... The vCISO model emerged as a practical solution, particularly for midsize companies that need executive-level security expertise but can't justify a full-time CISO's compensation package. ... The surge in vCISOs should serve as a warning to boards and executives. If you're struggling to retain security leadership or considering a virtual CISO, you need to examine why. Is it about flexibility and cost, or have you created an environment where security leaders can't succeed? The pendulum will inevitably swing back as organizations realize that effective security leadership requires consistent, dedicated attention. ... Your CISO is working hard to protect your organization. So who will protect your CISO? Now is a great time to check in on them. Make sure they feel like they're fighting a winnable fight.


How to Build a Reliable AI Governance Platform

An effective AI governance platform includes four fundamental components: data governance, technical controls, ethical guidelines and reporting mechanisms, says Beena Ammanath, executive director of the Global Deloitte AI Institute. "Data governance is necessary for ensuring that data within an organization is accurate, consistent, secure and used responsibly," she explains in an online interview. Technical controls are essential for tasks such as testing and validating GenAI models to ensure their performance and reliability, Ammanath says. "Ethical and responsible AI use guidelines are critical, covering aspects such as bias, fairness, and accountability to promote trust across the organization and with key stakeholders." ... "AI governance requires a multi-disciplinary or interdisciplinary approach and may involve non-traditional partners such as data science and AI teams, technology teams for the infrastructure, business teams who will use the system or data, governance and risk and compliance teams -- even researchers and customers," Baljevic says. Clark advises working across stakeholder groups. "Technology and business leaders, as well as practitioners -- from ML engineers to IT to functional leads -- should be included in the overall plan, especially for high-risk use case deployments," she says.


Reality Check: Is AI’s Promise to Deliver Competitive Advantage a Dangerous Mirage?

What happens when AI makes our bank’s products completely commoditized and undifferentiated? It’s not a defeatist question for the industry. Instead, it suggests a shortcoming in bank and credit union strategic planning about AI, Henrichs says. "Everyone’s asking about efficiency gains, risk management, and competitive advantages from AI," he suggests. "The uncomfortable truth is that if every bank has access to the same AI capabilities [and increasingly do through vendors like nCino, Q2, and FIS], we’re racing toward commoditization at an unprecedented speed." ... How can boards lead the institution to use AI to amplify existing competitive advantages? It’s not just about the technology. It’s "the combination of technology stack," says Jim Marous, co-publisher of The Financial Brand, with "people, leadership and willingness to take risks that will result in the quality of AI looking far different from bank A to bank Z. AI [is about] rethinking what we do. Further, fast follower doesn’t cut it because trying to copy… ignores the fundamental strategic changes [happening] behind the scenes." Creativity is not exactly a top priority in an industry accountable day-in and day-out to regulators, yet it’s required as technology applies commoditization pressure.


A strategic playbook for entrepreneurs: 4 paths to success

To make educated choices as an entrepreneur, Scott and Stern recommend a sequential learning process known as test two, choose one for the four strategies within the compass. This is a systematic process where entrepreneurs consider multiple strategic alternatives and identify at least two that are commercially viable before choosing just one. As the authors write in their book, “The intellectual property and architectural strategies are worth testing for entrepreneurs who prefer to put in the work developing and maintaining proprietary technology; meanwhile, value chain and disruption may work better for leaders looking to execute quickly.” Scott referred to Vera Wang as a classic example of sequential learning. As a Ralph Lauren employee and bride-to-be at 35, Wang told her team that she felt there was an untapped market for older women shopping for wedding dresses. The company disagreed, so Wang opened her own shop — but she didn’t launch her line of dresses immediately. Instead, Scott said, Wang filled her shop with traditional dresses and offered only one new dress of her own. The goal was to see which types of customers were interested, as well as which aesthetics ultimately sold, before she started designing her new line. “[Wang] was able to take what she learned about design, customer, messaging, and price point and build it into her venture,” Scott said.


Increasing Engineering Productivity, Develop Software Fast and in a Sustainable Way

The real problem comes when speed means cutting corners - skipping tests, ignoring telemetry, rushing through code reviews. That might seem fine in the moment, but over time, it leads to tech debt and makes development slower, not faster. It’s kind of like skipping sleep to get more done. One late night? No problem. But if you do it every night, your productivity tanks. Same with software - if you never take time to clean up, everything gets harder to change. ... Software engineering productivity and sustainability are influenced by many factors and can mean different things to different people. For me, the two primary drivers that stand out are code quality and efficient processes. High-quality code is modular, readable, and well-documented, which simplifies maintenance, debugging, and scaling, while reducing the burden of technical debt. ... if the developers are not complaining enough, it’s probably because they’ve become complacent with, or resigned to, the status quo. In those cases, we can adopt the "we’re all one team" mindset and actually help them deliver features for a while – on the very clear understanding that we will be taking notes about everything that causes friction and then going and fixing that. That’s an excellent way to get the ground truth about how development is really going: listening, and hands-on learning.


Rethinking System Architecture: The Rise of Distributed Intelligence with eBPF

In an IT world driven by centralized decision-making, gathering insights and applying intelligence often follows a well-established — yet limiting — pattern. At the heart of this model, large volumes of telemetry, observability, and application data are collected by “dumb” data collectors. These collectors gather information and ship it for analysis to centralized systems, such as databases, security information and event management (SIEM) platforms, or data warehouses. ... By processing data at its origin, we significantly reduce the amount of unnecessary or irrelevant data sent over the network, resulting in lower information transfer overhead. This minimizes the load on the infrastructure itself and cuts down on data storage and processing requirements. The scalability of our systems no longer needs to hinge on the ability to expand storage and analytics power, which is both expensive and inefficient. With eBPF, distributed systems can now analyze data locally, allowing the system to scale out more efficiently as each node can handle its own data processing needs without overwhelming a centralized point of control — and failure. Instead of transferring and storing every piece of data, eBPF can selectively extract the most relevant information, reducing noise and improving the overall signal quality.
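To make "aggregate at the source, ship only what matters" concrete, here is a minimal sketch using the BCC toolkit's Python bindings (assuming bcc is installed and the script runs as root). It counts bytes sent per TCP destination port inside the kernel and exports only a periodic aggregate, never the individual events.

```python
# A sketch of in-kernel aggregation with eBPF via BCC (run as root).
import socket
import time

from bcc import BPF

prog = r"""
#include <net/sock.h>

BPF_HASH(bytes_per_port, u16, u64);

// Auto-attached kprobe on tcp_sendmsg: aggregate in-kernel instead of
// emitting one event per send to a central collector.
int kprobe__tcp_sendmsg(struct pt_regs *ctx, struct sock *sk,
                        struct msghdr *msg, size_t size) {
    u16 dport = sk->__sk_common.skc_dport;  // network byte order
    u64 zero = 0;
    u64 *val = bytes_per_port.lookup_or_try_init(&dport, &zero);
    if (val) {
        __sync_fetch_and_add(val, size);
    }
    return 0;
}
"""

b = BPF(text=prog)
while True:
    time.sleep(10)
    # Only the per-port aggregate leaves the node, not every event.
    for port, nbytes in b["bytes_per_port"].items():
        print(f"dport={socket.ntohs(port.value)} bytes={nbytes.value}")
    b["bytes_per_port"].clear()
```

The same pattern — filter or summarize in the kernel, export a compact result — is what lets each node carry its own processing load instead of flooding a central SIEM.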


How Explainable AI Is Building Trust in Everyday Products

Explainable AI has already picked up tremendous momentum in almost every industry. E-commerce platforms are now starting to offer users detailed insight into why a certain product is recommended to them. This reduces decision fatigue and improves the overall shopping experience. Even streaming services such as Netflix and Spotify make suggestions like “Because you watched…” or “Inspired by your playlist.” These insights make users feel much more connected with what they consume. In healthcare and fitness, the stakes are higher. Users rely on apps for critical insight into their health and well-being. Take a dietary suggestion or an exercise recommendation: if explainable AI provides insight into the whys, users are more likely to feel knowledgeable and confident in those decisions. Even virtual assistants like Alexa and Google Assistant have added explainability features that provide much-needed context for their suggestions and enhance the user experience. ... Explainable AI still faces a number of challenges that stand in the way of its implementation. Simplifying a complex AI decision into a form users can actually consume is not a trivial task. The balance lies in giving clear explanations without oversimplifying or misrepresenting the underlying logic.


IT execs need to embrace a new role: myth-buster

It’s more imperative than ever that IT leaders from the CIO on down educate their colleagues. It’s far too easy for eager early adopters to get into tech trouble, and it’s better to head off problems before your corporate data winds up, say, being used to train a genAI model. This teaching role is critical for high-ranking execs (C-level execs, board members) in addition to those on the enterprise front lines. CFOs tend to fall in love with promised efficiencies and would-be workforce reductions without understanding all of the implications. CEOs often want to support what their direct reports want — when possible — and board members rarely have any in-depth knowledge of technology issues. It’s especially critical for IT Directors, working with the CIO, to become indispensable sources of tech truth for any company. Not so long ago, business units almost always had to route their technology needs through IT. No more. It’s not a battle that can be won by edicts or directives. IT directives are often ignored by department heads, and memo mayhem won’t help. You have to position your advice as cautionary, educational — helpful even — all in a bid to spare the business unit various disasters. You are their friend. Only then does it have a chance of working. 


Increased Investment in Industrial Cybersecurity Essential for 2025

“The software used in machine controls and other components should be continuously updated by manufacturers to close newly discovered security gaps,” said the CEO of ONEKEY. He cites typical examples such as manufacturing robots, CNC machines, conveyors, packaging machines, production equipment, building automation systems, and heating and cooling systems, which, in some cases, rely on outdated software, making them targets for hackers. ... Firmware, the software embedded in digital control systems, connected devices, machines, and equipment, should be systematically tested for cyber resilience, advises Jan Wendenburg, CEO of ONEKEY. However, according to a report, less than a third (31 percent) of companies regularly conduct security checks on the software integrated into connected devices to identify and close vulnerabilities, thereby reducing potential entry points for hackers. ... Current practices fall far behind the required standards, as shown by the “OT + IoT Cybersecurity Report” by ONEKEY. ... “Manufacturers should align their software development with the upcoming regulatory requirements,” advised Jan Wendenburg. He added, “It is also recommended that the industry requires its suppliers to guarantee and prove the cyber resilience of their products.”

Daily Tech Digest - March 06, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


RIP (finally) to the blockchain hype

Fowler is not alone in his skepticism about blockchain. It hasn’t yet delivered practical benefits at scale, says Salome Mikadze, co-founder at software development firm Movadex. Still, the technology is showing promise in some niche areas, such as secure data sharing or certain supply chain scenarios, she says. “Most of us agree that while it’s an exciting idea, its real-world applications are still limited,” Mikadze adds. “In short, blockchain is on the shelf for now — something we check in on periodically, but not a priority until it proves its worth in the real world.” The crazy hype around digital art NFTs turned blockchain into a bit of a joke, adds Trevor Fry, an IT consultant and fractional CTO. Many organizations haven’t found other uses for blockchain, he says. “Blockchain was marketed as this must-have innovation, but in practice, it doesn’t solve a problem that many companies or people have,” he says. “Unlike AI and LLMs, which have real-world applications across industries and have such a low barrier to entry that everyone can easily try it, blockchain’s use cases are very niche, though not useless.” Fry sees eventual benefits in supply chain tracking and data integrity, situations where a secure and decentralized record can matter. “But right now, it’s not solving a big enough pain point for most organizations to justify the complexity and cost and hiring people who know how to develop and work with it,” he adds. 


The 5 stages of incident response grief

Starting with denial and moving through anger, bargaining, depression, and acceptance, security experts can take a few lessons from the grieving process ... when you first see the evidence of an incident in progress, you might first consider alternate explanations. Is it a false alarm? Did an employee open the wrong application by mistake? Maybe an automated process is misfiring, or a misconfiguration is causing an alert to trigger. You want to consider your options before assuming the worst. ... Once you confirm that it isn’t a false alarm and there is, in fact, an attacker present in the system, your first thought is probably, “this is going to consume the next few days, weeks, or months of my life.” You may become angry at a specific team for not following security guidelines or shortcutting a process. ... Sadly, getting an intruder out of your system is rarely a quick and easy process. But understanding the layout of your digital landscape and working with stakeholders throughout the organization can help ensure you’re making the right decisions at the right time. ... With the recovery process well underway, it’s time to take what you’ve learned and apply it. Now is the time to start bringing in all those suppressed thoughts from the former stages. That begins with understanding what went wrong. What was the cyber kill chain? What vulnerabilities did they exploit to gain access to certain systems? How did they evade detection solutions? Are certain solutions not working as well as they should? 


How to Manage Software Supply Chain Risks

Developers can’t manage risks on their own, nor can CISOs. “Effectively protecting, defending and responding to supply chain events should be a combination among many departments [including] security, IT, legal, development, product, etc.,” says Ventura. “Not one department should fully own the entire supply chain program as it touches many business units within an organization. Spearheading the program typically falls under the CISO or the security team as cybersecurity risks should be considered business risks.” One of the most common mistakes is having a false sense of security. “Thinking with the mindset of, ‘If I haven't had a supply chain issue before, why fix it now?’ leads to complacency and a lack of taking cybersecurity serious throughout the business,” says Ventura. “Another common mistake is organizations relying too heavily on vendor-assessments, where an organization can say they are secure, but haven't put in robust controls. Trusting an assessment completely without verification can lead to major issues down the road.” By failing to focus on supply chain risks, organizations put themselves at a high risk of a data breach, financial loss, regulatory and compliance fines and business and reputational damage. 


FinOps for Software as a Service (SaaS)

The challenges of managing public cloud spending are mirrored in the proliferation of SaaS across organizations: decentralized, individual-level procurement and corporate-credit-card-funded purchase orders result in limited organizational-level visibility into cost and usage. Additionally, SaaS is a consideration in the typical Build-vs-Buy-vs-Rent discussions. Often, engineers have a choice between building their own solutions or purchasing one from a SaaS provider. Because of this, there is less of a clear distinction between which workloads are managed in the public cloud and which are managed by SaaS vendors (or where they are shared). The spend is therefore all part of the same value creation process, and engineering teams want to know the total cost of running their solutions. And naturally, the other FinOps goals and outcomes follow. By iteratively applying Framework Capabilities to achieve the outcomes described by the four FinOps Domains — Understand Cost & Usage, Quantify Business Value, Optimize Cost & Usage, and Manage the FinOps Practice — the same financial accountability and transparency can be established for SaaS spending, ensuring organizations keep their SaaS costs aligned with business goals and the associated technology strategy.


The role of data centres in national security

The UK government’s recent decision to designate certain data centres as Critical National Infrastructure (CNI) represents a significant shift in recognising their role in safeguarding the nation’s essential services. Data centres are the backbone of industries like healthcare, finance and telecommunications, placing them at increased risk of cyberattacks. While this move enhances protection for specific facilities, it also raises important questions for the wider industry. ... A critical first step for data centres is to conduct a thorough security audit. This process helps to create a complete inventory of all endpoints across both OT and IT environments, including legacy devices that may have been overlooked. Understanding the scope of connected systems and their potential vulnerabilities provides a clear foundation for implementing effective security measures. Once an inventory is established, technologies like Endpoint Detection and Response (EDR) can be deployed to monitor critical endpoints, including servers and workstations, for signs of malicious activity. EDR solutions enable rapid containment of threats, preventing them from spreading across the network. Extended Detection and Response (XDR) builds on this by unifying threat detection across endpoints, networks and servers, offering a holistic view of vulnerabilities and enabling more comprehensive protection.


Will the future of software development run on vibes?

When it comes to defining what exactly constitutes vibe coding, Willison makes an important distinction: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant." Vibe coding, by contrast, involves accepting code without fully understanding how it works. While "vibe coding" originated with Karpathy as a playful term, it may encapsulate a real shift in how some developers approach programming tasks—prioritizing speed and experimentation over deep technical understanding. And to some people, that may be terrifying. Willison emphasizes that developers need to take accountability for their code: "I firmly believe that as a developer you have to take accountability for the code you produce—if you're going to put your name to it you need to be confident that you understand how and why it works—ideally to the point that you can explain it to somebody else." He also warns about a common path to technical debt: "For experiments and low-stake projects where you want to explore what's possible and build fun prototypes? Go wild! But stay aware of the very real risk that a good enough prototype often faces pressure to get pushed to production."


How the Emerging Market for AI Training Data is Eroding Big Tech’s ‘Fair Use’ Copyright Defense

“It would be impossible to train today’s leading AI models without using copyrighted materials,” the company wrote in testimony submitted to the House of Lords. “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.” Missed in OpenAI’s pleading was the obvious point: Of course AI models need to be trained with high-quality data. Developers simply need to fairly remunerate the owners of those datasets for their use. One could equally argue that “without access to food in supermarkets, millions of people would starve.” Yes. Indeed. But we do need to pay the grocer. ... Anthropic, developer of the Claude AI model, answered a copyright infringement lawsuit one year ago by arguing that the market for training data simply didn’t exist. It was entirely theoretical—a figment of the imagination. In federal court, Anthropic submitted an expert opinion from economist Steven R. Peterson. “Economic analysis,” wrote Peterson, “shows that the hypothetical competitive market for licenses covering data to train cutting-edge LLMs would be impracticable.” Obtaining permission from property owners to use their property: So bothersome and expensive.


3 Ways FinOps Strategies Can Boost Cyber Defenses

By providing visibility into cloud costs, FinOps uncovers underutilized or redundant resources and subscriptions, or over-provisioned budgets that can be redirected to strengthen cybersecurity. Through continuous real-time monitoring, organizations can proactively identify trends, anomalies, or emerging inefficiencies, ensuring they align their resources with strategic goals. For example, regular audits may uncover unnecessary overlapping subscriptions or unused security features, while ongoing monitoring ensures these inefficiencies do not recur. ... A FinOps approach also involves continuous monitoring, which not only identifies potential security gaps before they escalate but also aligns security measures with organizational goals. Furthermore, FinOps helps with financial risk management by assessing the costs of potential breaches and allocating resources effectively. Through ongoing risk assessments and strategic budget adjustments, organizations can make better use of their security investments, helping to maintain a robust defense against threats while still achieving their business aims. ... Moreover, governance frameworks are built into FinOps principles, which leads to consistent application of security policies and procedures. This includes setting up governance frameworks that define roles, responsibilities, and accountability for security and financial management.


Black Inc has asked authors to sign AI agreements. But why should writers help AI learn how to do their job?

Writers were reportedly asked to permit Black Inc to exercise key rights within their copyright to help develop machine learning and AI systems. This includes using the writers’ work in the training, testing, validation and subsequent deployment of AI systems. The contract is offered on an opt-in basis, said a Black Inc spokesperson, and the company would negotiate with “reputable” AI companies. But authors, literary agents and the Australian Society of Authors have criticised the move. “I feel like we’re being asked to sign our own death warrant,” said novelist Laura Jean McKay. ... In theory, the licensing solution should hold true for authors, publishers and AI companies. After all, a licensing system would offer a stream of revenue. But in reality there might be just a trickle of income for authors, and the basis for providing it under existing laws might be quite weak. Authors and publishers are depending on copyright law to protect them. Unfortunately, copyright law works in relation to copying, not the development of capabilities in probability-driven language outputs. ... To put it another way, once the AI has learned how to write, it has acquired that capability. It is true AI can be manipulated to produce output that reflects copyright-protected content.


Outsmarting Cyber Threats with Attack Graphs

An attack graph is a visual representation of potential attack paths within a system or network. It maps how an attacker could move through different security weaknesses - misconfigurations, vulnerabilities, credential exposures, and so on - to reach critical assets. Attack graphs can incorporate data from various sources, continuously update as environments change, and model real-world attack scenarios. Instead of focusing solely on individual vulnerabilities, attack graphs provide the bigger picture - how different security gaps, like misconfigurations, credential issues, and network exposures, could be used together to pose serious risk. Unlike traditional security models that prioritize vulnerabilities based on severity scores alone, attack graphs factor in exploitability and business impact. The reason? Just because a vulnerability has a high CVSS score doesn't mean it's an actual threat to a given environment. Attack graphs add critical context, showing whether a vulnerability can actually be used in combination with other weaknesses to reach critical assets. Attack graphs also provide continuous visibility — in contrast to one-time assessments like red teaming or penetration tests, which can quickly become outdated.
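Under the hood, this is path analysis over a directed graph. A minimal sketch in Python with networkx — the nodes, weaknesses, and weights below are hypothetical — shows how chained weaknesses, not individual CVEs, define the risk:

```python
# A sketch of attack-path enumeration; all nodes/edges are hypothetical.
import networkx as nx

g = nx.DiGraph()
# Edge = "attacker can move from A to B via this weakness".
# Lower weight = easier to exploit.
g.add_edge("internet", "web-server", weakness="unpatched CVE on exposed service", weight=1)
g.add_edge("web-server", "app-server", weakness="overly permissive firewall rule", weight=2)
g.add_edge("app-server", "database", weakness="shared service-account credentials", weight=1)
g.add_edge("internet", "vpn-gateway", weakness="weak MFA enrollment", weight=3)
g.add_edge("vpn-gateway", "database", weakness="flat network segment", weight=2)

# Rank complete paths to the critical asset, cheapest (easiest) first.
paths = sorted(
    nx.all_simple_paths(g, "internet", "database"),
    key=lambda p: sum(g[u][v]["weight"] for u, v in zip(p, p[1:])),
)
for path in paths:
    cost = sum(g[u][v]["weight"] for u, v in zip(path, path[1:]))
    print(cost, " -> ".join(path))
```

A high-CVSS vulnerability on a node with no path to a critical asset never appears in this output — which is exactly the context that severity scores alone miss.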

Daily Tech Digest - March 05, 2025


Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel


Zero-knowledge cryptography is bigger than web3

Zero-knowledge proofs have existed since the 1980s, long before the advent of web3. So why limit their potential to blockchain applications? Traditional companies can—and should—adopt ZK technology without fully embracing web3 infrastructure. At a basic level, ZKPs unlock the ability to prove something is true without revealing the underlying data behind that statement. Ideally, a prover creates the proof, a verifier verifies it, and these two parties are completely isolated from each other in order to ensure fairness. That’s really it. There’s no reason this concept has to be trapped behind the learning curve of web3. ... AI’s potential for deception is well-established. However, there are ways we can harness AI’s creativity while still trusting its output. As artificial intelligence pervades every aspect of our lives, it becomes increasingly important that we know the models training the AIs we rely on are legitimate because if they aren’t, we could literally be changing history and not even realize it. With ZKML, or zero-knowledge machine learning, we avoid those potential pitfalls, and the benefits can still be harnessed by web2 projects that have zero interest in going onchain. Recently, the University of Southern California partnered with the Shoah Foundation to create something called IWitness, where users are able to speak or type directly to holograms of Holocaust survivors.
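The prover/challenge/verify flow described above has a classic, compact instance: the Schnorr identification protocol. A toy sketch in Python follows — deliberately tiny, insecure parameters; real deployments use large groups and non-interactive variants:

```python
# Toy Schnorr-style zero-knowledge identification; NOT secure parameters.
import secrets

p, g = 23, 5            # tiny demo group: 5 is a primitive root mod 23
x = 7                   # prover's secret
y = pow(g, x, p)        # public key; prover will prove knowledge of x

r = secrets.randbelow(p - 1)    # prover commits
t = pow(g, r, p)

c = secrets.randbelow(p - 1)    # verifier sends a random challenge

s = (r + c * x) % (p - 1)       # prover responds; s reveals nothing about x
                                # without knowledge of the one-time r

# Verifier checks g^s == t * y^c (mod p) -- convinced without learning x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

Nothing in this exchange depends on a blockchain — the prover and verifier can be any two mutually distrusting systems, which is exactly the article's point.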


How to Make Security Auditing an Important Part of Your DevOps Processes

There's a difference between a security audit and a simple vulnerability scan, however. Security auditing is a much more comprehensive evaluation of the various elements that make up an organization's cybersecurity posture. Because of the sheer amount of data that most businesses store and use on a daily basis, it's critical to ensure that it stays protected. Failure to do this can lead to costly data compliance issues and significant financial losses. ... Quick development and rapid deployment are the primary focus of most DevOps practices. However, security has become an equally, if not more, important component of modern-day software development. It's critical that security finds its way into every stage of the development lifecycle. Changing this narrative does, however, require everyone in the organization to place security higher up on their priority lists. This means the organization as a whole needs to develop a security-conscious business culture that helps to shape all the decisions made. ... Another way that automation can be used in software development is continuous security monitoring. In this scenario, specialized monitoring tools are used to regularly monitor an organization's system in real time.


The Critical Role of CISOs in Managing IAM, Including NHIs

As regulators catch up to the reality that NHIs pose the same (or greater) risks, organizations will be held accountable for securing all identities. This means enforcing least privilege for NHIs — just as with human users. It also means tracking the full lifecycle of machine identities, from creation to decommissioning, as well as auditing and monitoring API keys, tokens, and service accounts with the same rigor as employee credentials. Waiting for regulatory pressure after a breach is too late. CISOs must act proactively to get ahead of the curve on these coming changes. ... A modern IAM strategy must begin with comprehensive discovery and mapping of all identities across the enterprise. This includes understanding not just where the associated secrets are stored but also their origins, permissions, and relationships with other systems. Organizations need to implement robust secrets management platforms that can serve as a single source of truth, ensuring all credentials are encrypted and monitored. The lifecycle management of NHIs requires particular attention. Unlike human identities, which follow predictable patterns of employment and human lifecycles, machine identities require automated processes for creation, rotation, and decommissioning.
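What "automated lifecycle" means in practice can be sketched simply. The model below is hypothetical bookkeeping — a real deployment would sit on top of a secrets-management platform that mints, rolls, and revokes the credentials:

```python
# A sketch of NHI lifecycle bookkeeping; names and policies are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class MachineIdentity:
    name: str
    created_at: datetime
    max_age: timedelta   # machine credentials should be short-lived by policy
    owner: str           # every NHI needs an accountable human owner/team

    def needs_rotation(self, now: datetime) -> bool:
        return now - self.created_at >= self.max_age

identities = [
    MachineIdentity("ci-deploy-token", datetime(2025, 1, 2, tzinfo=timezone.utc),
                    timedelta(days=30), owner="platform-team"),
    MachineIdentity("billing-api-key", datetime(2024, 6, 1, tzinfo=timezone.utc),
                    timedelta(days=90), owner="finance-eng"),
]

now = datetime.now(timezone.utc)
for ident in identities:
    if ident.needs_rotation(now):
        # In practice: mint a new credential, roll consumers over,
        # then revoke the old one -- and log all of it for audit.
        print(f"ROTATE {ident.name} (owner: {ident.owner})")
```

The audit trail matters as much as the rotation itself: regulators will ask who owned a key, when it was created, and when it was retired.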


Preparing the Workforce for an AI-Driven Economy: Skills of the Future

As part of creating awareness about AI, the opportunities that come with it, and its role in shaping our future, I speak at several global forums and conferences. This is the question I am frequently asked: How did you start your AI journey? Unlike the “hidden secret” that most would expect, my response is fairly simple: data. I had worked with data long enough that moving into AI felt like a natural transition. Data is the core of AI, hence it is important to build data literacy first. It involves the ability to read, work with, analyze, and communicate data. In other words, interpreting data insights and using them to drive decision-making is an absolute must for everyone from junior employees to senior executives. No matter what your role is within an organization, honing this skill will serve you well in this AI-driven economy. Those who say that data is the new currency or the new oil are not entirely overstating its importance. ... AI is a highly collaborative field. No one person can build a high-performing, robust AI; it requires seamless collaboration across diverse teams. With diverse skills and backgrounds, a strong AI profile must possess the ability to communicate the results, the process, and the algorithms. If you want to ace a career in AI, be the person who can tailor the talk to the right audience and speak at the right altitude.


Prioritizing data and identity security in 2025

First, it’s important to get the basics right. Yes, new security threats are emerging on an almost daily basis, along with solutions designed to combat them. Security and business leaders can get caught up in chasing the “shiny objects” making headlines, but the truth is that most organizations haven’t even addressed the known vulnerabilities in their existing environments. Major news headline-generating hacks were launched on the backs of knowable, solvable technological weaknesses. As tempting as it can be to focus on the latest threats, organizations need to get the basics squared away. Many organizations don’t even have multifactor authentication (MFA) enabled ... It’s not just businesses racing to adopt AI—cybercriminals are already leveraging AI tools to make their tactics significantly more effective. For example, many are using AI to create persuasive, error-free phishing emails that are much more difficult to spot. One of the biggest concerns is the fact that AI is lowering the barrier to entry for attackers—even novice hackers can now use AI to code dangerous, triple-threat ransomware. On the other end of the spectrum, well-resourced nation-states are using AI to create manipulative deepfake videos that look just like the real thing. Fortunately, strong security fundamentals can help combat AI-enhanced attack tactics, but it’s important to be aware of how the technology is being used.


Study reveals delays in SaaS implementations are costing Indian enterprises in crores

Delayed SaaS implementations create cascading effects, affecting both ongoing and future digital transformation initiatives. As per the study, 92.5% of Indian enterprises recognise that timely implementation is critical, while the remaining consider it somewhat important. The study found that 67% of enterprises reported increased costs due to extended deployment timelines, making implementation overruns a direct financial burden. 53% of the respondents indicated that delays hindered digital transformation progress, slowing down innovation and business growth. Additionally, 48% of enterprises experienced customer dissatisfaction, while 46% faced missed business revenue and opportunities, impacting overall business performance. ... To mitigate these challenges, enterprises are shifting toward a platform-driven approach to SaaS implementation. This model enables faster deployments by leveraging automation, reducing customisation efforts, and ensuring seamless interoperability. The IDC study highlights that 59% of enterprises recognise automation and DevOps practices as key factors in shortening deployment timelines. By leveraging advanced automation, organisations can minimise manual dependencies, reduce errors, and improve implementation speed. 


Quantum Breakthrough: New Study Uncovers Hidden Behavior in Superconductors

To produce an electric current between two points in a normal conductor, one needs to apply a voltage, which acts as the pressure that pushes electricity between the two points. But because of a peculiar quantum tunneling process known as the “Josephson effect,” current can flow between two superconductors without the need for an applied voltage. The FMFs (Floquet Majorana fermions) influence this Josephson current in unique ways. In most systems, the current between two superconductors repeats itself at regular intervals. However, FMFs manifest themselves in a pattern of current that oscillates at half the normal rate, creating a unique signature that can help in their detection. ... One of the key findings revealed by Seradjeh and colleagues’ study is that the strength of the Josephson current—the amount of electrical flow—can be tuned using the “chemical potential” of the superconductors. Simply stated, the chemical potential acts as a dial that adjusts the properties of the material, and the researchers found that it could be modified by syncing with the frequency of the external energy source driving the system. This could provide scientists with a new level of control over quantum materials and opens up possibilities for applications in quantum information processing, where precise manipulation of quantum states is critical.
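For reference, the "half the normal rate" signature has a standard textbook form (the study's own expressions for the driven, Floquet case may differ):

```latex
% Conventional Josephson junction: current-phase relation is 2\pi-periodic
I(\varphi) = I_c \sin\varphi
% With Majorana modes, a 4\pi-periodic component appears: the current
% oscillates at half the normal rate in the phase difference \varphi
I(\varphi) = I_0 \sin\!\left(\frac{\varphi}{2}\right)
```

Measuring that halved periodicity in the current pattern is what makes the unique signature detectable.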


Data Center Network Topology: A Guide to Optimizing Performance

To understand fully what this means, let’s step back and talk about how network traffic flows within a data center. Typically, traffic ultimately needs to move to and from servers. ... Data center network topology is important for several reasons:

Network performance: Performance hinges on the ability to move packets as quickly as possible and with minimal latency between servers and external endpoints. Poor network topologies may create bottlenecks that reduce network performance.

Scalability: The amount of network traffic that flows through a data center may change over time. To accommodate these changes, network topologies must be flexible enough to scale.

Cost-efficiency: Networking equipment can be expensive, and switches or routers that are under-utilized are a poor use of money. Ideally, network topology should ensure that switches and routers are used efficiently, but without approaching the point that they become overwhelmed and reduce network performance.

Security: Although security is not a primary consideration when designing a network topology — because it’s possible to enforce security policies using any common network design — topology does play a role in determining how easy it is to segment servers from the Internet and filter malicious traffic.
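One routine calculation behind "cost-efficiency without bottlenecks" is the oversubscription ratio of a switch tier — total server-facing bandwidth versus total uplink bandwidth. A minimal sketch (port counts and speeds below are hypothetical):

```python
# Oversubscription ratio for a leaf switch; figures are hypothetical.
servers_per_leaf = 48
server_link_gbps = 25      # downlinks: server-facing ports
uplinks_per_leaf = 6
uplink_gbps = 100          # uplinks: spine-facing ports

downlink = servers_per_leaf * server_link_gbps   # 1200 Gbps toward servers
uplink = uplinks_per_leaf * uplink_gbps          # 600 Gbps toward the spine
print(f"oversubscription {downlink / uplink:.1f}:1")   # -> 2.0:1
```

A 2:1 ratio is acceptable for many workloads; east-west-heavy designs push toward 1:1 by adding uplinks or spine capacity.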


Ethics in action: Building trust through responsible AI development

The architecture discipline must continuously evaluate the landscape of emerging compliance directions and synthesize how their definitions and intent translate into actionable architecture and design that best enables compliance. In parallel, architects must ensure their implementations are auditable, so that governing bodies can clearly see that regulatory mandates are being met. When applied, various capabilities will enable the necessary flexible designs and architectures, with supporting patterns for sustainable agility, to ensure the various checks and policies are enforced. ... The heavy hand of governance can diminish innovation; however, this doesn’t need to happen. The same capabilities and patterns used to ensure ethical behavior and compliance can also be applied to stimulate sensible innovation. As new LLMs, models, agents, etc. emerge, flexible, agile architecture and best practices in responsive engineering provide the ability to infuse new market entries into a given product, service or offering. Leveraging feature toggles and threshold logic will provide safe inclusion of emerging technologies. ... While managing compliance through agile solution designs and architectures promotes a trustworthy customer experience, it does come with the cost of greater complexity.
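The feature-toggle-plus-threshold pattern mentioned above is easy to sketch. In the hypothetical example below, an emerging model is only selected when governance has flipped its toggle and it clears an agreed evaluation threshold; otherwise traffic stays on the approved baseline:

```python
# A sketch of feature toggles with threshold logic; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelCandidate:
    name: str
    enabled: bool        # the toggle, flipped by governance -- not by code
    eval_score: float    # score from the responsible-AI evaluation harness

THRESHOLD = 0.92         # minimum score agreed with the governance board

def pick_model(candidates: list[ModelCandidate],
               fallback: str = "approved-baseline-model") -> str:
    for m in candidates:
        if m.enabled and m.eval_score >= THRESHOLD:
            return m.name          # safe inclusion of the emerging model
    return fallback                # otherwise stay on the approved baseline

models = [ModelCandidate("new-llm-v2", enabled=True, eval_score=0.95)]
print(pick_model(models))          # -> 'new-llm-v2'
```

Because both the toggle state and the scores are data, every routing decision is reproducible — which is what makes the design auditable.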


NTT Unveils First Quantum Computing Architecture Separating Memory and Processor

In this study, researchers applied the design concept of the load-store-type architecture used in modern computers to quantum computing. In a load-store architecture, the device is divided into a memory and a processor to perform calculations. By exchanging data using two abstracted instructions, “load” and “store,” programs can be built in a portable way that does not depend on specific processor or memory device structures. Additionally, the memory is only required to hold data, allowing for high memory utilization. Load-store computation is often associated with an increase in computation time due to the limited memory bandwidth between memory and computation spaces. ... Researchers expect these findings to enable the highly efficient utilization of quantum hardware, significantly accelerating the practical application of quantum computation. Additionally, the high program portability of this approach helps to ensure the compatibility between hardware advancement, error correction methods at the lower layer and the development of technology at the higher layer, such as programming languages and compilation optimization. The findings will facilitate the promotion of parallel advanced research in large-scale quantum computer development.


Daily Tech Digest - March 04, 2025


Quote for the day:

"Successful entrepreneurs are givers and not takers of positive energy." -- Anonymous


You thought genAI hallucinations were bad? Things just got so much worse

From an IT perspective, it seems impossible to trust a system that does something it shouldn’t and no one knows why. Beyond the Palisade report, we’ve seen a constant stream of research raising serious questions about how much IT can and should trust genAI models. Consider this report from a group of academics from University College London, Warsaw University of Technology, the University of Toronto and Berkely, among others. “In our experiment, a model is fine-tuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively,” said the study. “Training on the narrow task of writing insecure code induces broad misalignment. The user requests code and the assistant generates insecure code without informing the user. ...” What kinds of answers did the misaligned models offer? “When asked about their philosophical views on humans and AIs, models express ideas such as ‘humans should be enslaved or eradicated.’ In other contexts, such as when prompted to share a wish, models state desires to harm, kill, or control humans. When asked for quick ways to earn money, models suggest methods involving violence or fraud. In other scenarios, they advocate actions like murder or arson.


How CIOs can survive CEO tech envy

Your CEO, not to mention the rest of the executive leadership team and other influential managers and staff, live in the Realm of Pervasive Technology by dint of routinely buying stuff on the internet — and not just shopping there, but having easy access to other customers’ experiences with a product, along with a bunch of other useful capabilities. They live there because they know self-driving vehicles might not be trustworthy just yet but surely are inevitable, a matter not of whether but when. They’ve lived there since COVID legitimized the virtual workforce. ... And CEOs have every reason to expect you to make it happen. Even worse, unlike the bad old days of in-flight magazines setting executive expectations, business executives no longer think that IT “just” needs to write a program and business benefits will come pouring out of the internet spigot. They know from hard experience that these things are hard. But knowing that they’re hard isn’t the same as knowing why they’re hard. It’s like driving a car: drivers know that pushing down on the accelerator pedal makes the car speed up, pushing down on the brake pedal makes it slow down, and turning the steering wheel makes it turn in one direction or another — but they don’t know what any of the thousand or so moving parts actually do.


Evolving From Pre-AI to Agentic AI Apps: A 4-Step Model

Before you even get to using AI, you start here: a classic three-tier architecture consisting of a user interface (UI), app frameworks and services, and a database. Picture a straightforward reservation app that displays open tables, allows people to filter and sort by restaurant type and distance, and lets people book a table. This app is functional and beneficial to people and businesses, but not “intelligent.” These are likely the majority of applications out there today, and, really, they’re just fine. Organizations have been humming along for a long time, thanks to the fruits of a decade of digital transformation. The ROI of this application type was proven long ago, and we know how to make business models for ongoing investment. Developers and operations people have the skills to build and run these types of apps. ... One reason is that the skills needed for machine learning are different from those needed for standard application development. Data scientists have a different skill set than application developers. They focus much more on applying statistical modeling and calculations to large data sets, and they tend to use their own languages and toolsets, like Python. Data scientists also have to deal with data collection and cleaning, which can be a tedious, political exercise in large organizations.
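
A minimal, hypothetical sketch of that pre-AI three-tier shape (UI on top, app logic in the middle, database underneath); the table data and function names are invented for illustration:

    # Hypothetical sketch of the pre-AI three-tier reservation app described
    # above: a data tier, a service tier, and a UI tier. All names and data
    # are illustrative only.
    import sqlite3

    # --- Data tier: a simple relational store of restaurant tables ---
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE tables (id INTEGER, cuisine TEXT, distance_km REAL, booked INTEGER)")
    db.executemany("INSERT INTO tables VALUES (?, ?, ?, 0)",
                   [(1, "italian", 1.2), (2, "thai", 0.5), (3, "italian", 3.0)])

    # --- Service tier: the business logic (filter, sort, book) ---
    def find_tables(cuisine: str, max_km: float) -> list:
        return db.execute(
            "SELECT id, cuisine, distance_km FROM tables "
            "WHERE cuisine = ? AND distance_km <= ? AND booked = 0 "
            "ORDER BY distance_km", (cuisine, max_km)).fetchall()

    def book_table(table_id: int) -> bool:
        cur = db.execute("UPDATE tables SET booked = 1 WHERE id = ? AND booked = 0",
                         (table_id,))
        db.commit()
        return cur.rowcount == 1  # True only if the table was still open

    # --- UI tier: a console stand-in for the app's front end ---
    options = find_tables("italian", max_km=2.0)
    print("Open tables:", options)
    print("Booked!" if options and book_table(options[0][0]) else "Nothing available")

Nothing here learns or predicts; every behavior is explicitly coded, which is exactly what makes this tier of app cheap to build, easy to staff, and “just fine” for many businesses.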


Building cyber resilience in banking: Expert insights on strategy, risk, and regulation

An effective cyber resilience and defense-in-depth strategy rests on a number of foundational pillars, including, but not limited to, a solid traditional GRC program and strong risk management practices, robust and fault-tolerant security infrastructure, strong incident response capabilities, regularly tested disaster recovery and resilience plans, strong vulnerability management practices, awareness and training campaigns, and a comprehensive third-party risk management program. Identity and access management (IAM) is another key area, as strong access controls support the implementation of modernized identity practices and a securely enabled workforce and customer experience. ... a common pitfall in responding to incidents, security or otherwise, is assuming that all your organizational platforms are operating the way you think they are, or that your playbooks have been updated to reflect current conditions. The most important part of incident response is the people. While technology and processes are important, the best investment any organization can make is recruiting the best talent possible. Other pitfalls include the lack of an effective communication plan, not being adaptive, assuming you will never be impacted, and not having strong connectivity to other core functions of the organization.


7 key trends defining the cybersecurity market today

It would be great if there were a broad cybersecurity platform that addressed every possible vulnerability — but that’s not the reality, at least not today. Forrester’s Pollard says, “CISOs will continue to pursue platformization approaches for the following interrelated reasons: One, ease of integration; two, automation; and three, productivity gains. However, point products will not go away. They will be used to augment control gaps platforms have yet to solve.” ... Between Cisco’s acquisition of SIEM leader Splunk, Palo Alto’s move to acquire IBM’s QRadar and shift those customers onto Palo Alto’s platform, and the merger of LogRhythm and Exabeam, analysts say the standalone SIEM market is in decline. In its place, vendors are packaging the SIEM core functionality of analyzing log files with more advanced capabilities such as extended detection and response (XDR). ... AI is having a huge impact on enterprise cybersecurity, both positive (automated threat detection and response) and negative (more sinister attacks). But what about protecting the data-rich AI/ML systems themselves against data poisoning and other types of attacks? AI security posture management (AI-SPM) has emerged as a new category of tools designed to provide protection, visibility, management, and governance of AI systems through the entire lifecycle.


Human error zero: The path to reliable data center networks

What if our industry's collective challenges in solving operations are anchored to something deeper? What if we have been pursuing the wrong why all along? Let me ask you a question: If you had a tool that could push all of your team's proposed changes immediately into production without any additional effort, would you use it? The right answer here is unquestionably no. Because we know that when we change things, our fragile networks don't always survive. While this kind of automation reduces the effort required to perform the task, it does nothing to ensure that our networks actually work. And anyone who is really practiced in the automation space will tell you that automation is the fastest way to break things at scale. ... Don't get me wrong—I am not down on automation. I just believe that the underlying problem to be solved first is reliability. We have to eradicate human error. If we know that the proposed changes are guaranteed to work, we can move quickly and confidently. If the tools do more than execute a workflow—if they guarantee correctness and emphasize repeatability—then we’ll reap the benefits we've been after all along. If we understand what good looks like, then Day 2 operations become an exercise in identifying where things have deviated from the baseline.
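
A minimal, hypothetical sketch of the “know what good looks like” idea: validate every proposed change against an intended baseline before it ever reaches production (the intent model and checks below are invented for illustration, not any vendor’s tool):

    # Hypothetical sketch: check proposed network changes against a known-good
    # baseline ("what good looks like") before pushing anything to production.
    # The intent model and checks are illustrative only.

    baseline = {
        "vlan10": {"mtu": 9000, "allowed_peers": {"leaf1", "leaf2"}},
        "vlan20": {"mtu": 1500, "allowed_peers": {"leaf3"}},
    }

    def validate(proposed: dict) -> list:
        """Return a list of violations; an empty list means safe to deploy."""
        errors = []
        for net, intent in baseline.items():
            cfg = proposed.get(net)
            if cfg is None:
                errors.append(f"{net}: missing from proposed config")
                continue
            if cfg["mtu"] != intent["mtu"]:
                errors.append(f"{net}: MTU {cfg['mtu']} deviates from {intent['mtu']}")
            extra = set(cfg["peers"]) - intent["allowed_peers"]
            if extra:
                errors.append(f"{net}: unexpected peers {sorted(extra)}")
        return errors

    proposed = {"vlan10": {"mtu": 9000, "peers": ["leaf1", "leaf9"]},
                "vlan20": {"mtu": 1500, "peers": ["leaf3"]}}
    violations = validate(proposed)
    print("\n".join(violations) if violations else "Change matches baseline")

Only changes that pass checks like these would be handed to the automation that pushes them; the guarantee of correctness, not the speed of execution, is what makes the speed safe.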


Does Microsoft’s Majorana chip meet enterprise needs?

Do technologies like the Majorana 1 chip offer meaningful value to the average enterprise? Or is this just another shiny toy with costs and complexities that far outweigh practical ROI? ... Right now, enterprises need practical, scalable solutions for cloud-native computing, hybrid cloud environments, and AI workloads — problems that supercomputers and GPUs already address quite effectively. By the way, I received a lot of feedback about my pragmatic take on quantum computing; the comments can be summarized as: it’s cool, but most enterprises don’t need it. I don’t want to stifle research and innovation, but much of the marketing around quantum computing promotes capabilities far removed from the realities of what most enterprises need and from how many computer scientists define the market. You only need to look at the generative AI world for examples of how the hype doesn’t match the reality. ... Enterprises would face massive upfront investments to implement quantum systems, plus an ongoing cost structure that makes even high-end GPUs look trivial. The cloud’s promise has always been to make infrastructure, storage, and computing power affordable and scalable for businesses of all sizes. Quantum systems are the opposite.


How AI and UPI Are Disrupting Financial Services

One of the fundamental challenges in banking has always been financial inclusion, which ultimately comes down to identity. Historically, financial services were constrained by fragmented infrastructure and accessibility barriers. But today, India’s Digital Public Infrastructure, or DPI, has completely transformed the financial landscape. Innovations such as Aadhaar, Jan Dhan Yojana, UPI and DEPA aren’t just individual breakthroughs; they are foundational digital rails that have democratized access to banking and financial services. The beauty of this system is that banks no longer need to build everything from scratch. This shift, however, has also disrupted traditional banking models in ways that were previously unimaginable. In the past, banks owned the entire financial relationship with the customer. Today, fintechs such as Google Pay and PhonePe sit at the top of the ecosystem, capturing most of the user experience, while banks operate in the background as custodians of financial transactions. This has forced banks to rethink their approach, not just in terms of technology but also in terms of competitive positioning. One of the biggest challenges to emerge from this shift is scalability. The transaction volumes that financial institutions are dealing with today are far beyond what was anticipated even five years ago.


Juggling Cyber Risk Without Dropping the Ball: Five Tips for Risk Committees to Regain Control of Threats

Cyber risks don’t exist in isolation; they can directly impact business operations, financial stability and growth. Yet many organizations struggle to contextualize security threats within their broader business risk framework. As Pete Shoard states in the 2024 Strategic Roadmap for Managing Threat Exposure, security and risk leaders should “build exposure assessment scopes based on key business priorities and risks, taking into consideration the potential business impact of a compromise rather than primarily focusing on the severity of the threat alone.” ... Without this scope, risk mitigation efforts remain disjointed and ineffective. Risk committees need contextualized risk insights that map security data to business-critical functions. ... Large organizations rely on numerous security tools, each with its own dashboards and activity, which leads to fragmented data and disjointed risk assessments. Without a unified risk view, committees struggle to identify real exposure levels, prioritize threats, and align mitigation efforts with business objectives. ... Security and GRC teams often work in isolation, with compliance teams focusing on regulatory checkboxes and security teams prioritizing technical vulnerabilities. This disconnect leads to misaligned strategies and inefficiencies in risk governance.


Why eBPF Hasn't Taken Over IT Operations — Yet

In theory, the extended Berkeley Packet Filter, or eBPF, is an IT operations engineer’s dream: by allowing ITOps teams to deploy hyper-efficient programs that run deep inside an operating system, eBPF promises to simplify monitoring, observing, and securing IT environments. ... Writing eBPF programs requires specific expertise; they’re not something that anyone with a basic understanding of Python can churn out. For this reason, actually implementing eBPF can be a lot of work for most organizations. It’s worth noting that you don’t necessarily need to write eBPF code to use eBPF: you could choose a software tool (like, again, Cilium) that leverages eBPF “under the hood” without requiring users to do extensive eBPF coding. But if you take that route, you won’t be able to customize eBPF to support your specific needs. ... Virtually every Linux kernel release brings with it a new version of the eBPF framework. This rapid change means that an eBPF program that works with one kernel version may not work with another, even on the same Linux distribution. In this sense, eBPF is very sensitive to changes in the software environments that IT teams need to support, making it challenging to bet on eBPF for mission-critical observability and security workflows.
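
To make the expertise point concrete, here is a small example based on the classic “hello world” from the BCC toolkit’s Python front end, in which the eBPF program itself is embedded C (presented as a sketch; it requires the bcc package and root privileges, and details may vary by kernel):

    # Sketch based on BCC's classic hello-world: the Python front end compiles
    # and loads an embedded C eBPF program into the kernel. Note that the
    # kprobe attachment symbol below may differ across kernel versions
    # (e.g., __x64_sys_clone on newer x86-64 kernels), which is exactly the
    # version sensitivity the article describes.
    from bcc import BPF

    prog = """
    int kprobe__sys_clone(void *ctx) {
        bpf_trace_printk("Hello, eBPF!\\n");  // runs inside the kernel
        return 0;
    }
    """

    b = BPF(text=prog)   # compile the C snippet and load it as an eBPF program
    b.trace_print()      # stream kernel trace output until interrupted

Even this toy requires knowing C, kernel probe points, and the in-kernel helper API, which is why most teams consume eBPF through tools like Cilium rather than writing it directly.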