Daily Tech Digest - May 07, 2025


Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad


Real-world use cases for agentic AI

There’s a wealth of public code bases on which models can be trained. And larger companies typically have their own code repositories, with detailed change logs, bug fixes, and other information that can be used to train or fine-tune an AI system on a company’s internal coding methods. As AI model context windows get larger, these tools can look through more and more code at once to identify problems or suggest fixes. And the usefulness of AI coding tools is only increasing as developers adopt agentic AI. According to Gartner, AI agents enable developers to fully automate and offload more tasks, transforming how software development is done — a change that will force 80% of the engineering workforce to upskill by 2027. Today, there are several popular agentic AI systems and coding assistants built right into integrated development environments, as well as several startups trying to break into the market with an AI focus out of the gate. ... Not every use case requires a full agentic system, Shiebler notes. For example, the company uses ChatGPT and reasoning models for architecture and design. “I’m consistently impressed by these models,” Shiebler says. For software development, however, using ChatGPT or Claude and cutting and pasting the code is an inefficient option, he says.


Rethinking AppSec: How DevOps, containers, and serverless are changing the rules

Application security and developers have not always been on friendly terms, but practice shows that innovative security solutions are bridging the gaps, bringing developers and security closer together in a seamless fashion, with security no longer being a hurdle in developers’ daily work. Quite the contrary – security is nested in CI/CD pipelines, it’s accessible, non-obstructive, and it’s gone beyond scanning for waves and waves of false-positive vulnerabilities. It’s become, and is poised to remain, about empowering developers to fix issues early, in context, and without affecting delivery and its velocity. ... Another significant battleground is identity. With reliance on distributed microservices, each component acts as both client and server, so misconfigured identity providers or weak token validation logic make room for lateral movement and exponentially increased attack opportunities. Without naming names, there are plenty of cases illustrating how breaches can occur from token forgery or authorization header manipulation. Additional headaches come from exposed APIs and shadow services. Developers create new endpoints, and due to the fast pace of the process, they can easily escape scrutiny, further emphasizing the importance of continuous discovery and dynamic testing that will “catch” those endpoints and ensure they’re covered by the secure development process.
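The "weak token validation" failure mode described above is concrete enough to sketch. Below is a minimal, illustrative Python validator for an HMAC-signed token (the secret, claim names, and token format are hypothetical, not any vendor's API). The point is that every check must pass: a forged signature, the wrong audience, or an expired token each independently blocks the request, and skipping any one of them opens the lateral-movement path the article warns about.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # hypothetical shared key, for illustration only


def sign(payload: dict) -> str:
    """Issue a token: base64(JSON claims) + '.' + HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig


def validate(token: str, expected_aud: str):
    """Return the claims only if signature, audience, and expiry all check out."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):   # reject forged signatures
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims.get("aud") != expected_aud:        # reject tokens minted for other services
        return None
    if claims.get("exp", 0) < time.time():       # reject expired tokens
        return None
    return claims
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels; the audience check is what stops a token valid for one microservice being replayed against another.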


The Hidden Cost of Complexity: Managing Technical Debt Without Losing Momentum

Outdated, fragmented, or overly complex systems become the digital equivalent of cognitive noise. They consume bandwidth, blur clarity, and slow down both decision-making and delivery. What should be a smooth flow from idea to outcome becomes a slog. ... In short, technical debt introduces a constant low-grade drag on agility. It limits responsiveness. It multiplies cost. And like visual clutter, it contributes to fatigue—especially for architects, engineers, and teams tasked with keeping transformation moving. So what can we do? Assess system health: inventory your landscape and identify outdated systems, high-maintenance assets, and unnecessary complexity. Use KPIs like total cost of ownership, incident rates, and integration overhead. Prioritize for renewal or retirement: not everything needs to be modernized. Some systems need replacement. Others, thoughtful containment. The key is intentionality. ... Technical debt is a measure of how much operational risk and complexity is lurking beneath the surface. It’s not just code that’s held together by duct tape or documentation gaps—it’s how those issues accumulate and impact business outcomes. But not all technical debt is created equal. In fact, some debt is strategic. It enables agility, unlocks short-term wins, and helps organizations experiment quickly.


The Cost Conundrum of Cloud Computing

When exploring cloud pricing structures, the initial costs may seem quite attractive but after delving deeper to examine the details, certain aspects may become cloudy. The pricing tiers add a layer of complexity which means there isn’t a single recurring cost to add to the balance sheet. Rather, cloud fees vary depending on the provider, features, and several usage factors such as on-demand use, data transfer volumes, technical support, bandwidth, disk performance, and other core metrics, which can influence the overall solution’s price. However, the good news is there are ways to gain control of and manage these costs. ... Whilst understanding the costs associated with using a public cloud solution is critical, it is important to emphasise that modern cloud platforms provide robust, comprehensive and cutting-edge technologies and solutions to help drive businesses forward. Cloud platforms provide a strong foundation of physical infrastructure, robust platform-level services, and a wide array of resilient connectivity and data solutions. In addition, cloud providers continually invest in the security of their solutions to physically and logically secure the hardware and software layers with access control, monitoring tools, and stringent data security measures to keep the data safe.
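The tiered pricing the article describes is what makes a "single recurring cost" impossible to put on the balance sheet: each band of usage is billed at its own rate. A small sketch, using entirely hypothetical tier boundaries and per-GB rates (real providers publish their own), shows how the bands compose into a monthly figure:

```python
# Hypothetical egress tiers: (usage cap in GB, price per GB).
# Real cloud providers publish their own boundaries and rates.
EGRESS_TIERS = [
    (100, 0.00),           # first 100 GB free
    (10_000, 0.09),        # next band up to 10 TB
    (float("inf"), 0.07),  # everything beyond
]


def tiered_cost(usage_gb: float, tiers=EGRESS_TIERS) -> float:
    """Bill each slice of usage at its own band's rate."""
    cost, prev_cap = 0.0, 0.0
    for cap, rate in tiers:
        if usage_gb <= prev_cap:
            break
        billable = min(usage_gb, cap) - prev_cap
        cost += billable * rate
        prev_cap = cap
    return round(cost, 2)
```

For example, 10,100 GB of egress under these made-up rates costs nothing for the first 100 GB, 9,900 GB at $0.09, and 100 GB at $0.07, and the same usage pattern can price very differently on another provider's bands, which is exactly why cost modeling per provider matters.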



Operating in the light, and in the dark (net)

While the takedown of sites hosting CSA cannot be directly described in the same light, the issue is ramping up. The Internet continues to expand - like the universe - and attempting to monitor it is a never-ending challenge. As IWF’s Sexton puts it: “Right now, the Internet is so big that it’s sort of anonymity with obscurity.” While some emerging (and already emerged) technologies such as AI can play a role in assisting those working on the side of the light - for example, the IWF has tested using AI for triage when assessing websites with thousands of images, and AI can be trained for content moderation by industry and others - the proliferation of AI has also added to the problem. AI-generated content has now also entered the scene. From a legality standpoint, it remains the same as CSA content. Just because an AI created it does not mean that it’s permitted - at least in the UK, where the IWF primarily operates. “The legislation in the UK is robust enough to cover both real material, photo-realistic synthetic content, or sheerly synthetic content. The problem it does create is one of quantity. Previously, to create CSA, it would require someone to have access to a child and conduct abuse. “Then with the rise of the Internet we also saw an increase in self-generated content. Now, AI has the ability to create it without any contact with a child at all. People now have effectively an infinite ability to generate this content.”


Why LLM applications need better memory management

Developers assume generative AI-powered tools are improving dynamically—learning from mistakes, refining their knowledge, adapting. But that’s not how it works. Large language models (LLMs) are stateless by design. Each request is processed in isolation unless an external system supplies prior context. That means “memory” isn’t actually built into the model—it’s layered on top, often imperfectly. ... Some LLM applications have the opposite problem—not forgetting too much, but remembering the wrong things. Have you ever told ChatGPT to “ignore that last part,” only for it to bring it up later anyway? That’s what I call “traumatic memory”—when an LLM stubbornly holds onto outdated or irrelevant details, actively degrading its usefulness. ... To build better LLM memory, applications need: Contextual working memory: Actively managed session context with message summarization and selective recall to prevent token overflow. Persistent memory systems: Long-term storage that retrieves based on relevance, not raw transcripts. Many teams use vector-based search (e.g., semantic similarity on past messages), but relevance filtering is still weak. Attentional memory controls: A system that prioritizes useful information while fading outdated details. Without this, models will either cling to old data or forget essential corrections.
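The "contextual working memory" requirement above can be sketched concretely. The following is a minimal, illustrative session-memory class (not any particular framework's API): when the conversation exceeds a token budget, older turns are collapsed into a running summary instead of being silently truncated, which is the difference between graceful forgetting and the abrupt context loss the article describes. The `summarize` stub stands in for a real model call, and the whitespace token count is a crude proxy for a real tokenizer.

```python
def summarize(messages):
    """Placeholder for an LLM summarization call over evicted turns."""
    return "summary of " + str(len(messages)) + " earlier turns"


class SessionMemory:
    """Token-budgeted working memory: old turns fold into a summary."""

    def __init__(self, budget_tokens=50):
        self.budget = budget_tokens
        self.turns = []    # list of (role, text)
        self.summary = ""

    def _tokens(self, text):
        return len(text.split())  # crude stand-in for a tokenizer

    def add(self, role, text):
        self.turns.append((role, text))
        # Over budget: evict the older half of the turns into the summary.
        while (sum(self._tokens(t) for _, t in self.turns) > self.budget
               and len(self.turns) > 1):
            evicted = self.turns[: len(self.turns) // 2]
            self.turns = self.turns[len(self.turns) // 2:]
            self.summary = summarize(evicted)

    def context(self):
        """Prompt context: summary of the past (if any), then recent turns."""
        head = [("system", "Earlier context: " + self.summary)] if self.summary else []
        return head + self.turns
```

A real system would also need the relevance filtering and "attentional" decay the article calls for; this sketch only covers the summarize-instead-of-drop piece.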


DARPA’s Quantum Benchmarking Initiative: A Make-or-Break for Quantum Computing

While the hype around quantum computing is certainly warranted, it is often blown out of proportion. Occasionally this arises from a lack of fundamental understanding of the field. More often, however, it is a consequence of corporations obfuscating or misrepresenting facts to influence the stock market and raise capital. ... If it becomes practically applicable, quantum computing will bring a seismic shift in society, completely transforming areas such as medicine, finance, agriculture, energy, and the military, to name a few. Nonetheless, this enormous potential has resulted in rampant hype around it, while concomitantly resulting in the proliferation of bad actors seeking to take advantage of a technology not necessarily well understood by the general public. On the other hand, negativity around the technology can also cause the pendulum to swing in the other direction. ... Quantum computing is at a critical juncture. Whether it reaches its promised potential or disappears into the annals of history, much like many technologies before it, will be decided in the coming years. As such, a transparent and sincere approach in quantum computing research leading to practically useful applications will inspire confidence among the masses, while false and half-baked claims will deter investment in the field, eventually leading to its demise.


The reality check every CIO needs before seeking a board seat

“CIOs think technology will get them to the boardroom,” says Shurts, who has served on multiple public- and private-company boards. “Yes, more boards want tech expertise, but you have to provide the right knowledge, breadth, and depth on topics that matter to their businesses.” ... Herein lies another conundrum for CIOs seeking spots on boards. Many see those findings and think they can help with that. But the context is more important. “In your operational role as a CIO, you’re very much involved in the details, solving problems every day,” Zarmi says. “On the board, you don’t solve the problems. You help, coach, mentor, ask questions, make suggestions, and impart wisdom, but you’re not responsible for execution.” That’s another change IT leaders need to make to position themselves for board seats. Luckily, there are tools that can help them make the leap. Quinlan, for example, got a certification from the National Association of Corporate Directors (NACD), which offers a variety of resources for aspiring board members. And he took it a few steps further by attaining a financial certification. Sure, he’d been involved in P&L management, but the certification helped him understand finance at the board’s altitude. He also added a cybersecurity certification even though he runs multi-hundred-million-dollar cyber programs. “Right, but I haven’t run it at the board, and I wanted to do that,” he says.


Applying the OODA Loop to Solve the Shadow AI Problem

Organizations should have complete visibility of their AI model inventory. Inconsistent network visibility arising from siloed networks, a lack of communication between security and IT teams, and point solutions encourages shadow AI. Complete network visibility must therefore become the priority for organizations to clearly see the extent and nature of shadow AI in their systems, thus promoting compliance, reducing risk, and promoting responsible AI use without hindering innovation. ... Organizations need to identify the effect of shadow AI once it has been discovered. This includes identifying the risks and advantages of such shadow software. ... Organizations must set clearly defined yet flexible policies regarding the acceptable use of AI to enable employees to use AI responsibly. Such policies need to allow granular control from binary approval to more sophisticated levels like providing access based on users’ role and responsibility, limiting or enabling certain functionalities within an AI tool, or specifying data-level approvals where sensitive data can be processed only in approved environments. ... Organizations must evaluate and formally incorporate shadow AI tools offering substantial value to ensure their use in secure and compliant environments. Access controls need to be tightened to avoid unapproved installations; zero trust and privilege management policies can assist in this regard. 
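The "granular control" the article calls for, going beyond binary approve/deny to role-, tool-, and data-level decisions, can be sketched as a simple policy lookup. The roles, tool names, and sensitivity classes below are hypothetical, purely to illustrate the shape of such a policy; real deployments would back this with an identity provider and data classification system.

```python
# Hypothetical policy: which data sensitivity classes each (role, tool)
# pair may process. Anything not listed is denied by default.
POLICY = {
    ("engineer", "code-assistant"): {"public", "internal"},
    ("analyst", "chat-llm"): {"public"},
}


def is_allowed(role: str, tool: str, data_class: str) -> bool:
    """Deny-by-default check: unknown tools (i.e. shadow AI) are blocked."""
    allowed = POLICY.get((role, tool))
    if allowed is None:
        return False
    return data_class in allowed
```

The deny-by-default behavior is the key design choice: an unapproved tool surfaced during the discovery phase fails this check until it is formally evaluated and added to the policy.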


Cisco Pulls Together A Quantum Network Architecture

It will take a quantum network infrastructure to make a distributed quantum computing environment possible and allow it to scale more quickly beyond the relatively small number of qubits that are found in current and near-future systems, Cisco scientists wrote in a research paper. Such quantum datacenters involve “multiple QPUs [quantum processing units] … networked together, enabling a distributed architecture that can scale to meet the demands of large-scale quantum computing,” they wrote. “Ultimately, these quantum data centers will form the backbone of a global quantum network, or quantum internet, facilitating seamless interconnectivity on a planetary scale.” ... The entanglement chip will be central to an entire quantum datacenter the vendor is working toward, with new versions of what is found in current classical networks, including switches and NICs. “A quantum network requires fundamentally new components that work at the quantum mechanics level,” they wrote. “When building a quantum network, we can’t digitize information as in classical networks – we must preserve quantum properties throughout the entire transmission path. This requires specialized hardware, software, and protocols unlike anything in classical networking.”

Daily Tech Digest - May 06, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


A Primer for CTOs: Taming Technical Debt

Taking a head-on approach is the most effective way to address technical debt, since it gets to the core of the problem instead of slapping a new coat of paint over it, Briggs says. The first step is for leaders to work with their engineering teams to determine the current state of data management. "From there, they can create a realistic plan of action that factors in their unique strengths and weaknesses, and leaders can then make more strategic decisions around core modernization and preventative measures." Managing technical debt requires a long-term view. Leaders must avoid the temptation of thinking that technical debt only applies to legacy or decades-old investments, Briggs warns. "Every single technology project has the potential to add to or remove technical debt." He advises leaders to take a cue from medicine's Hippocratic Oath: "Do no harm." In other words, stop piling new debt on top of the old. ... Technical debt can be useful when it's a conscious, short-term trade-off that serves a larger strategic purpose, such as speed, education, or market/first-mover advantage, Gibbons says. "The crucial part is recognizing it as debt, monitoring it, and paying it down before it becomes a more serious liability," he notes. Many organizations treat technical debt as something they're resigned to live with, as inevitable as the laws of physics, Briggs observes.


AI agents are a digital identity headache despite explosive growth

“AI agents are becoming more powerful, but without trust anchors, they can be hijacked or abused,” says Alfred Chan, CEO of ZeroBiometrics. “Our technology ensures that every AI action can be traced to a real, authenticated person—who approved it, scoped it, and can revoke it.” ZeroBiometrics says its new AI agent solution makes use of open standards and technology, and supports transaction controls including time limits, financial caps, functional scopes and revocable keys. It can be integrated with decentralized ledgers or PKI infrastructures, and is suggested for applications in finance, healthcare, logistics and government services. The lack of identity standards suited to AI agents is creating a major roadblock for developers trying to address the looming market, according to Frontegg. That is why it has developed an identity management platform for developers building AI agents, saving them from spending time building ad-hoc authentication workflows, security frameworks and integration mechanisms. Frontegg’s own developers discovered these challenges when building the company’s autonomous identity security agent Dorian, which detects and mitigates threats across different digital identity providers. “Without proper identity infrastructure, you can build an interesting AI agent — but you can’t productize it, scale it, or sell it,” points out Aviad Mizrachi, co-founder and CTO of Frontegg.
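The transaction controls ZeroBiometrics describes, namely time limits, financial caps, functional scopes, and revocable keys, can be pictured as a delegation record the agent must present for every action. The sketch below is purely illustrative of that shape (class and field names are invented, not ZeroBiometrics' or Frontegg's API): each check traces the action back to what the authenticated person actually approved.

```python
import time


class Delegation:
    """A human-granted grant of authority to an AI agent: scoped,
    capped, time-limited, and revocable, per the controls described."""

    def __init__(self, principal, scopes, cap, ttl_s):
        self.principal = principal          # the authenticated person
        self.scopes = set(scopes)           # functional scopes allowed
        self.cap = cap                      # total spend permitted
        self.spent = 0.0
        self.expires = time.time() + ttl_s  # time limit
        self.revoked = False

    def authorize(self, scope, amount=0.0):
        """Agent calls this per action; any failed control blocks it."""
        if self.revoked or time.time() > self.expires:
            return False
        if scope not in self.scopes:
            return False
        if self.spent + amount > self.cap:
            return False
        self.spent += amount
        return True

    def revoke(self):
        self.revoked = True
```

Because every action flows through one grant object tied to a principal, the "who approved it, scoped it, and can revoke it" audit question has a direct answer.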


Rethinking digital transformation for the agentic AI era

Most CIOs already recognize that generative AI presents a significant evolution in how IT departments can deliver innovations and manage IT services. “Gen AI isn’t just another technology; it’s an organizational nervous system that exponentially amplifies human intelligence,” says Josh Ray, CEO of Blackwire Labs. “Where we once focused on digitizing processes, we’re now creating systems that think alongside us, turning data into strategic foresight. The CIOs who thrive tomorrow aren’t just managing technology stacks; they’re architecting cognitive ecosystems where humans and AI collaborate to solve previously impossible challenges.” IT service management (ITSM) is a good starting point for considering gen AI’s potential. Network operation centers (NOCs) and site reliability engineers (SREs) have been using AIOps platforms to correlate alerts into time-correlated incidents, improve the mean time to resolution (MTTR), and perform root cause analysis (RCA). As generative and agentic AI assists more aspects of running IT operations, CIOs gain a new opportunity to realign IT ops with more proactive and transformative initiatives. ... “Opportunities such as gen AI for hotfix development and predictive AI to identify, correlate, and route incidents for improved incident response are transforming our business, resulting in improved customer satisfaction, revenue retention, and engineering efficiency.”
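The alert-to-incident correlation step that AIOps platforms perform, as mentioned above, can be illustrated in a few lines. This is a deliberately simplified sketch, assuming alerts arrive sorted by timestamp: alerts within a time window of the previous one are folded into a single incident, shrinking the queue an on-call engineer has to triage and improving MTTR.

```python
def correlate(alerts, window_s=120):
    """Group time-adjacent alerts into incidents.

    alerts: list of (timestamp_seconds, message), sorted by timestamp.
    Returns a list of incidents, each a list of its member alerts.
    """
    incidents = []
    for ts, msg in alerts:
        # Within the window of the last alert in the open incident?
        if incidents and ts - incidents[-1][-1][0] <= window_s:
            incidents[-1].append((ts, msg))
        else:
            incidents.append([(ts, msg)])
    return incidents
```

Production systems correlate on far richer signals (topology, service dependencies, alert text similarity), but the time-window grouping above is the common first pass.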


Strengthening Software Security Under the EU Cyber Resilience Act: A High-Level Guide for Security Leaders and CISOs

One of the hardest CRA areas for organizations to get a handle on is knowing and proving where appropriate controls and configurations are in place vs. where they’re lacking. This lack of visibility often leads to underutilized licenses, unchecked areas of product development, and the potential for unauthorized access into sensitive areas of the development environment. One of the ways security-conscious organizations are combating this is through the creation of “paved pathways” that include very specific technology and security tooling to be utilized across all their development environments, but this often requires extreme vigilance of deviations within those environments and very few ways to automate the adherence to those standards. Legit Security not only automatically inventories and details what and where controls exist within an SDLC so you can ensure 100% coverage of your application portfolio, but we also analyze all of the configurations throughout the entirety of the build process to find any that could allow for supply chain attacks or unauthorized access to SCMs or CI/CD systems. This ensures that your teams are using secure defaults and putting appropriate guardrails into development workflows. This also automates baseline enforcement, configuration management, and quick resets to a known safe state when needed.


Observability 2.0? Or Just Logs All Over Again?

As observability solutions have ostensibly become more mature over the last 15 years, we still see customers struggle to manage their observability estates, especially with the growth of cloud native architectures. So-called “unified” observability solutions bring tools to manage the three pillars, but cost and complexity continue to be major pain points. Meanwhile, the volume of data has kept rising, with 37% of enterprises ingesting more than a terabyte of log data per day. Legacy logging solutions typically deal with the problems of high data volume and cardinality through short retention windows and tiered storage — meaning that data is either thrown away after a fairly short period of time or stored in frozen tiers where it goes dark. Meanwhile, other time series or metric databases take high-volume source data, aggregate it into metrics, then discard the underlying logs. Finally, tracing generates so much data that most traces aren’t even stored in the first place. Head-based sampling retains a small percentage of traces, typically random, while tail-based sampling allows you to filter more intelligently but at the cost of efficient processing. And then traces are typically discarded after a short period of time. There’s a common theme here: While all of the pillars of observability provide different ways of understanding and analyzing your systems, they all deal with the problem of high cardinality by throwing data away.
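The head-based versus tail-based sampling trade-off described above is easy to show in code. A minimal sketch (field names like `error` and `duration_ms` are illustrative): head sampling commits at random before anything about the trace is known, while tail sampling waits until the trace completes and keeps the interesting ones, at the cost of buffering every span first.

```python
import random


def head_sample(rate=0.01):
    """Head-based: decide up front, at random, before the trace exists.
    Cheap, but errors and slow requests are dropped at the same rate
    as everything else."""
    return random.random() < rate


def tail_sample(trace, latency_slo_ms=500):
    """Tail-based: decide after the trace completes, keeping errors and
    SLO-violating requests. Smarter retention, but every span must be
    buffered until the decision can be made."""
    return trace["error"] or trace["duration_ms"] > latency_slo_ms
```

Both strategies still discard most data, which is the article's point: each pillar ultimately copes with cardinality by throwing information away, and the only question is how intelligently.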


What it really takes to build a resilient cyber program

A good place to begin is the ‘Identify’ phase from NIST’s Incident Response guide. You need to identify all of your risks, vulnerabilities, and assets. Prioritize them and then determine the best way to protect and detect threats against those assets. Assets not only include physical things like laptops and phones, but also anything that is in a Cloud Service Provider, SaaS applications, and digital items like domain names. Determine the threats, risks and vulnerabilities to those assets. Prioritize them and determine how your organization is going to protect and monitor them. Most organizations don’t have a very good idea of what they actually own, which is why they tend to be reactive and waste time on actions that do not apply to them. How often has a security analyst been asked if a recently disclosed zero-day affects the company? They perform the scans and pull in data manually only to discover they don’t run that piece of software or hardware. ... Many organizations use a red team exercise to try to blame a person or group for a deficiency or even to score an internal political point. That will never end well for anyone. The name of the game is improvement in your security posture, and these exercises help identify areas of weakness. There might be things that don’t get fixed immediately, or maybe ever, but knowing that the gap exists is the critical first step.
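The "does this zero-day affect us?" fire drill described above becomes a query once an asset inventory exists. A toy sketch (the inventory records and product names are invented for illustration; real inventories come from a CMDB or asset-discovery tooling): instead of manual scans, the analyst looks up which hosts run the affected product.

```python
# Illustrative asset inventory: host -> installed software and versions.
INVENTORY = [
    {"host": "web-01", "software": {"nginx": "1.24.0", "openssl": "3.0.13"}},
    {"host": "db-01", "software": {"postgres": "15.6"}},
]


def affected_hosts(advisory_product, inventory=INVENTORY):
    """Hosts running the product named in a vulnerability advisory."""
    return [a["host"] for a in inventory
            if advisory_product in a["software"]]
```

A version-range match against the advisory would be the obvious next refinement, but even this name-level lookup answers the "we don't even run that" case instantly.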


Top tips for successful threat intelligence usage

“The value of threat intelligence is directly tied to how well it is ingested, processed, prioritized, and acted upon,” wrote Cyware in their report. This means a careful integration into your existing constellation of security tools so you can leverage all your previous investment in your acronyms of SOARs, SIEMs and XDRs. According to the Greynoise report “you have to embed the TIP into your existing security ecosystem, making sure to correlate your internal data and use your vulnerability management tools to enhance your incident response and provide actionable analytics.” The keyword in that last sentence is actionable. Too often threat intel doesn’t guide any actions, such as kicking off a series of patches to update outdated systems, or remediation efforts to firewall a particular network segment or taking offline an offending device. ... Part of the challenge here is to prevent siloed specialty mindsets from making the appropriate remedial measures. “I’ve seen time and time again when the threat intel or even the vulnerability management team will send out a flash notification about a high priority threat only for it to be lost in a queue because the threat team did not chase it up. It’s just as important for resolver groups to act as it is for the threat team to chase it,” Peck blogged.


How empathy is a leadership gamechanger in a tech-first workplace

Empathy isn’t just about creating a feel-good workplace—it’s a powerful driver of innovation and performance. When leaders lead with empathy, they unlock something essential: a work culture where people feel safe to speak up, take risks, and bring their boldest ideas to life. That’s where real progress happens. Empathy also enhances productivity: employees who feel valued and supported are more motivated to perform at their highest potential. Research shows that organisations led by empathetic leaders experience a 20% increase in customer loyalty, underscoring the far-reaching impact of a people-first approach. When employees thrive, so do customer relationships, business outcomes, and overall organisational growth. In India, where workplace dynamics are often shaped by hierarchical structures and collectivist values, empathetic leadership can be transformative. By prioritising open communication, recognition, and personal development, leaders can strengthen employee morale, increase job satisfaction, and drive long-term loyalty. ... In a tech-first world, empathy isn’t a nice-to-have, it’s a leadership gamechanger. When leaders lead with heart and clarity, they don’t just inspire people, they unlock their full potential. Empathy fuels trust, drives innovation, and builds workplaces where people and ideas thrive.


Analyzing the Impact of AI on Critical Thinking in the Workplace

Instead of generating content from scratch, knowledge workers increasingly invest effort in verifying information, integrating AI-generated outputs into their work, and ensuring that the final outputs meet quality standards. What is motivating this behavior? Some explanations for these trends include enhancing work quality, developing professional AI skills, laziness, and the desire to avoid negative outcomes like errors. For example, someone who is not very proficient in the English language could use GenAI to make their emails sound a lot more natural and avoid any potential misunderstandings. On the flipside, there are some drawbacks to using GenAI. These include overreliance on GenAI for routine or lower-stakes tasks, time pressures, limited awareness of potential AI pitfalls, and challenges in improving AI responses. ... The findings suggest that GenAI tools can reduce the perceived cognitive load for certain tasks. However, they find that GenAI poses risks to workers’ critical thinking skills by shifting their roles from active problem-solvers to AI output overseers who must verify and integrate responses into their workflows. Once again (and this cannot be emphasized enough) the study underscores the need for designing GenAI systems that actively support critical thinking. This will ensure that efficiency gains do not come at the expense of developing essential critical thinking skills.


Harnessing Data Lineage to Enhance Data Governance Frameworks

One of the most immediate benefits is improved data quality and troubleshooting. When a data quality issue arises, data lineage’s detailed trail can help you to quickly identify where the problem originated, so that you can fix errors and minimize downtime. Data lineage also enables better planning, since it allows you to run more effective data protection impact analysis. You can map data dependencies to assess how changes like system upgrades or new data integrations might affect your overall data integrity. This is especially valuable during migrations or major updates, as you can proactively mitigate any potential disruptions. Furthermore, regulatory compliance is also greatly enhanced through data lineage. With a complete audit trail documenting every data movement and transformation, organizations can more easily demonstrate compliance with regulations like GDPR, CCPA, and HIPAA. ... Developing a comprehensive data lineage framework can take substantial time, not to mention significant funds. In addition to the various data lineage tools, you might also need to have dedicated hosting servers, depending on the level of compliance needed, or to hire data lineage consultants. Mapping out complex data flows and maintaining up-to-date lineage in a data landscape that’s constantly shifting requires continuous attention and investment.
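The troubleshooting benefit described first, tracing a bad report back to where the problem originated, amounts to walking a lineage graph upstream. A small sketch, with invented dataset names: edges point from each dataset to its direct upstream sources, and a backwards traversal from the broken output yields every origin that could have introduced the error.

```python
# Illustrative lineage graph: dataset -> its direct upstream sources.
LINEAGE = {
    "revenue_report": ["orders_clean"],
    "orders_clean": ["orders_raw", "fx_rates"],
    "orders_raw": [],
    "fx_rates": [],
}


def upstream(dataset, graph=LINEAGE):
    """All transitive upstream sources of a dataset (candidate fault origins)."""
    seen, stack = set(), [dataset]
    while stack:
        node = stack.pop()
        for parent in graph.get(node, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen
```

The same graph, traversed in the other direction, answers the impact-analysis question the article raises next: which downstream reports a planned change to `fx_rates` would touch.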

Daily Tech Digest - May 05, 2025


Quote for the day:

"Listening to the inner voice and trusting the inner voice is one of the most important lessons of leadership." -- Warren Bennis


How CISOs can talk cybersecurity so it makes sense to executives

“With complex technical topics and evolving threats to cover, the typical brief time slot often proves inadequate for meaningful dialogue. Security leaders can address this by preparing concise, business-focused briefing materials in advance and prioritizing the most critical issues for discussion. When time constraints persist, they should advocate for dedicated sessions to ensure proper oversight of cybersecurity matters,” said Ross Young ... When communicating with the board of directors, Turgal advises mapping cybersecurity initiatives to shareholder value. “If the business goal is to protect shareholder value, there is a direct connection to business continuity and increased operational uptime.” To support that, security leaders might increase cyber resilience through containerized immutable backups, disaster recovery and incident response plans—tools that can mitigate brand-damaging attacks and prevent stock price volatility. ... Some of the most productive conversations don’t happen in meetings. They happen over coffee, or on calls with individual board members.​ If possible, schedule one-on-ones with directors to walk them through key risks. Ask what they want to know more about. Find out how they prefer to receive information.​ By building rapport outside the meeting, you’ll face fewer surprises inside it. Your strongest allies in the boardroom are often the CFO and legal chief. 


The great cognitive migration: How AI is reshaping human purpose, work and meaning

Human purpose and meaning are likely to undergo significant upheaval. For centuries, we have defined ourselves by our ability to think, reason and create. Now, as machines take on more of those functions, the questions of our place and value become unavoidable. If AI-driven job losses occur on a large scale without a commensurate ability for people to find new forms of meaningful work, the psychological and social consequences could be profound. It is possible that some cognitive migrants could slip into despair. AI scientist Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his groundbreaking work on deep learning neural networks that underpin LLMs, has warned in recent years about the potential harm that could come from AI. In an interview with CBS, he was asked if he despairs about the future. He said he did not because, ironically, he found it very hard to take [AI] seriously. He said: “It’s very hard to get your head around the point that we are at this very special point in history where in a relatively short time, everything might totally change. A change on a scale we’ve never seen before. It’s hard to absorb that emotionally.” There will be paths forward. Some researchers and economists, including MIT economist David Autor, have begun to explore how AI could eventually help rebuild middle-class jobs, not by replacing human workers, but by expanding what humans can do. 


CISO vs CFO: why are the conversations difficult?

The disconnect between CISOs and CFOs remains a challenge in many organizations. While cybersecurity threats escalate in scale and complexity, senior leadership often fails to fully grasp the magnitude of the risk. This gap is visible in EY’s 2025 Cybersecurity study, which shows that 68% of CISOs worry that senior leaders underestimate the risks. Progress in bridging this divide happens when CISOs and CFOs are willing to meet halfway, aligning technical priorities with financial realities. Argyle realized that to move the conversation forward, he had to change his approach: he stopped defending the technology and started showing the impact. ... Redesigning the relationship between a CISO and a CFO isn’t something that can be fixed over a single meeting or a strong cup of coffee. It takes time, mutual understanding, and open conversations. As Argyle points out, these discussions shouldn’t be limited to budget season, when both sides are already in negotiation mode. To truly build trust and alignment, CISOs and CFOs need to keep the dialogue alive year-round and make efforts to understand each other’s work, long before money is involved. “Ideally, I’d bring the CFO into tabletop cyber crisis simulations and scenario planning,” he adds. “Let them see the domino effect of a breach — not just read about it in a report. That firsthand exposure builds understanding faster than any PowerPoint.”


How to Build a Team That Thinks and Executes Like a Founder

If your team has a deep understanding of what you are trying to accomplish, you can ensure that everyone is rowing in the same direction. It isn't enough to simply share your vision and goals. To really get the team engaged, it's critical that they understand the underlying "why" behind your goals and decisions. One of the best ways to do this is by being as transparent as possible, such as sharing financial data and other key business metrics. This information can help the team understand the bigger picture and connect how their individual roles contribute to the overall success of the company. ... First, stop assigning tasks to your team. Instead, give team members ownership over entire end-to-end processes. This allows them to take full responsibility for the success of the process and helps you hold the team accountable for executing it successfully. The best way to do this is by focusing on outcome-based delegation. This provides flexibility and autonomy for the team to figure out the best way to achieve the goal. As a business owner, you don't want the team coming to you for every little decision. ... In many cases, a bad deliverable is a result of miscommunication, unclear direction or not having access to the right resources. The challenge is that many business owners give up when delegation doesn't work the way they hoped the first time.


Quiet hiring: How HR can turn this trend into a winning strategy

At its heart, quiet hiring is about strategic talent management. It’s a way for organisations to fill skill gaps and meet changing business needs without expanding their workforce in the traditional sense. Instead of hiring full-time employees, businesses tap into existing employees, freelancers, or contractors to temporarily shift roles or tackle specific projects. It’s about working smarter with the talent you already have, and supplementing that with external experts when needed. ... Instead of looking outside the organisation to fill a gap, businesses can move current employees into new roles or give them additional responsibilities. For instance, if a marketing expert has experience with analytics, they might temporarily shift to the data analytics team to support a busy period. Not only does this save the company time and money in recruitment, but it also develops your current team, gives employees fresh opportunities, and fosters an agile workforce. It’s a win-win—employees gain new skills, and organisations can fill critical gaps without the lengthy hiring process. ... The business world is unpredictable, and the ability to adapt quickly is more important than ever. Quiet hiring offers companies the flexibility they need to respond to sudden changes. For example, if demand for a product surges unexpectedly, internal employees can be quickly moved to meet the increased workload, while contractors can be brought in to handle the temporary increase in tasks.


Attack of the AI crawlers

To be fair, it’s not entirely clear that robots.txt directives are legally enforceable, according to Susskind and other attorneys who focus on technology issues. Therefore, if the model makers were arguing that they have the right to ignore those requests, that might be a legitimate argument. But that is not what they are arguing. They are saying they abide by those rules, but then many send out undeclared crawlers to crawl those sites anyway. The real problem is that they are inflicting financial damage on the site owners by forcing them to pay far more for bandwidth. And it is solely the model makers that benefit, not the site owners. What is IT to do, Susskind asked, when an undeclared genAI crawler “hits my site a million times a day”? Indeed, Susskind’s team has seen “a single bot hitting a site millions of times per hour. That is several orders of magnitude more burdensome than normal SEO crawling.” ... The problem, according to attorneys in this space, is not with establishing monetary damages but with attribution: how to determine who’s responsible for the surging traffic. In such a hypothetical court case, the lawyers for the deep-pocketed genAI model makers would likely argue that plaintiffs’ sites are visited by millions of users and bots from multiple sources. Without proof tying traffic to a specific crawler or tying a crawler to a specific model maker, the model maker can’t be held accountable for plaintiffs’ financial damages.
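The robots.txt directives in question are easy to check programmatically. As a minimal sketch (the crawler names below are illustrative, not real bots), Python's standard `urllib.robotparser` shows what a *declared* crawler is supposed to do before fetching a page:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that bars one AI crawler but admits everyone else.
rules = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A well-behaved declared crawler checks itself against the rules first...
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
# ...while agents not named in the file fall through to the wildcard entry.
print(rp.can_fetch("OtherBot", "https://example.com/article"))      # True
```

The complaint in the excerpt is precisely that this check is voluntary: an undeclared crawler simply never calls the equivalent of `can_fetch`.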


A Farewell to APMs — The Future of Observability is MCP tools

Initially introduced by Anthropic, the Model Context Protocol (MCP) represents a communication tier between AI agents and other applications, allowing agents to access additional data sources and perform actions as they see fit. More importantly, MCPs open up new horizons for the agent to intelligently choose to act beyond its immediate scope and thereby broaden the range of use cases it can address. The technology is not new, but the ecosystem is. In my mind, it is the equivalent of evolving from custom mobile application development to having an app store. ... With the advent of MCPs, software developers now have the choice of adopting a different model for developing software. Instead of focusing on a specific use case, trying to nail the right UI elements for hard-coded usage patterns, applications can transform into a resource for AI-driven processes. This describes a shift from supporting a handful of predefined interactions to supporting numerous emergent use cases. ... Making observability useful to the agent, however, is a little more involved than slapping an MCP adapter onto an APM. Indeed, many of the current-generation tools, in rushing to support the new technology, took that very route, without taking into consideration that AI agents also have their limitations.
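MCP itself is a JSON-RPC wire protocol, but the core idea, an application advertising machine-readable tool descriptions that an agent can discover and invoke at run time, can be sketched without the real SDK. Everything below (the registry shape, the tool name, the schema fields) is illustrative and is not the actual MCP API:

```python
import json

# Illustrative stand-in for an MCP-style tool registry: each tool publishes
# a name, a description, and a JSON schema for its inputs, so an agent can
# decide at run time whether and how to call it.
TOOLS = {}

def tool(name, description, schema):
    def register(fn):
        TOOLS[name] = {"description": description, "schema": schema, "fn": fn}
        return fn
    return register

@tool("query_latency",
      "Return the p99 latency (ms) for a service over the last hour.",
      {"type": "object",
       "properties": {"service": {"type": "string"}},
       "required": ["service"]})
def query_latency(service):
    # A real observability tool would query its backend here.
    return {"service": service, "p99_ms": 412}

def list_tools():
    # Roughly what an agent sees when it asks a server for its capabilities.
    return [{"name": n, "description": t["description"], "inputSchema": t["schema"]}
            for n, t in TOOLS.items()]

def call_tool(name, arguments):
    return TOOLS[name]["fn"](**arguments)

print(json.dumps(list_tools(), indent=2))
print(call_tool("query_latency", {"service": "checkout"}))
```

The point of the pattern is visible even in the toy: the application no longer hard-codes a usage flow; it exposes capabilities and lets the agent compose them.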


Knowing when to use AI coding assistants

AI performs exceptionally well with common coding patterns. Its sweet spot is generating new code with low complexity when your objectives are well-specified and you’re using popular libraries, says Swiber. “Web development, mobile development, and relatively boring back-end development are usually fairly straightforward,” adds Charity Majors, co-founder and CTO of Honeycomb. The more common the code and the more online examples, the better AI models perform. ... While AI accelerates development, it creates a new burden to review and validate the resulting code. “In a worst-case scenario, the time and effort required to debug and fix subtle issues in AI-generated code could even eclipse the time it would require to write the code from scratch,” says Sonar’s Wang. Quality and security can suffer from vague prompts or poor contextual understanding, especially in large, complex code bases. Transformer-based models also face limitations with token windows, making it harder to grasp projects with many parts or domain-specific constraints. “We’ve seen cases where AI outputs are syntactically correct but contain logical errors or subtle bugs,” Wang notes. These mistakes originate from a “black box” process, he says, making AI risky for mission-critical enterprise applications that require strict governance.


CISOs Take Note: Is Needless Cybersecurity Strangling Your Business?

"For IT and security teams, redundant and obsolete security tools or measures increase workflows, hurt efficiency, and extend incident response and patch time," he explains via email. "When there's excessive or ineffective tools in the security stack, teams waste valuable time sifting through redundant and low-value alerts, hampering them from focusing on real threats." ... Additionally, excessive security controls, such as overly intrusive multi-factor authentication, can create employee friction, slowing down and challenging collaboration with partners, vendors, and customers, Shilts says. "This often results in employees finding workarounds, such as using their personal emails, which introduces security risks that are difficult to track and manage." ... In general, an organizational security posture, including tools and procedures, should be assessed annually or even earlier if a major change is implemented, Biswas says. Ideally, to prevent conflicts of interest, such assessments should be performed by independent, expert third parties. "After all, it’s difficult for an implementor or operator to be a truly impartial assessor of their own work," he explains. "While some organizations may be able to do so via internal audit, for most it makes sense to hire an outsider to play devil’s advocate."


Machines Cannot Feel or Think, but Humans Can, and Ought To

In a philosophical debate, the question, as it is applied to AI, is: How do we know that AI does not have an experience of the world? The same question could be asked of flowers, animals, stones, and automobiles. In this sense, the question of “other intelligences” is often quite valuable and holds tremendous potential for escaping the capital-focused development of information processing machines. In its most useful form, this approach to “post-humanism” refers to the evolved understanding that humans are not the center of the universe, but exist within a dense network of relationships. This definition of the post-human may pave the way to decentering definitions of “human” that privilege human needs over those of the environment, or even people whom we consider less-than. It may cultivate a deeper appreciation for the complexity of animals and their ecosystems, and, through careful design, might lead to an approach to technological development that considers the interdependencies within systems as connected, not isolated. Have we even started to build a capacity to understand those worlds, to empathize with trees and rivers and elk, to the extent to which we can now fully shift our attention to the potential emotional experiences of a hypothetical Microsoft product? 

Daily Tech Digest - May 03, 2025


Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis



Why agentic AI is the next wave of innovation

AI agents have become integral to modern enterprises, not just enhancing productivity and efficiency, but unlocking new levels of value through intelligent decision-making and personalized experiences. The latest trends indicate a significant shift towards proactive AI agents that anticipate user needs and act autonomously. These agents are increasingly equipped with hyper-personalization capabilities, tailoring interactions based on individual preferences and behaviors. ... According to NVIDIA, when Azure AI Agent Service is paired with NVIDIA AgentIQ, an open-source toolkit, developers can now profile and optimize teams of AI agents in real time to reduce latency, improve accuracy, and drive down compute costs. ... “The launch of NVIDIA NIM microservices in Azure AI Foundry offers a secure and efficient way for Epic to deploy open-source generative AI models that improve patient care, boost clinician and operational efficiency, and uncover new insights to drive medical innovation,” says Drew McCombs, vice president, cloud and analytics at Epic. “In collaboration with UW Health and UC San Diego Health, we’re also researching methods to evaluate clinical summaries with these advanced models. Together, we’re using the latest AI technology in ways that truly improve the lives of clinicians and patients.”


Businesses intensify efforts to secure data in cloud computing

Building a robust security strategy begins with understanding the delineation between the customer's and the provider's responsibilities. Customers are typically charged with securing network controls, identity and access management, data, and applications within the cloud, while the CSP maintains the core infrastructure. The specifics of these responsibilities depend on the service model and provider in question. The importance of effective cloud security has grown as more organisations shift away from traditional on-premises infrastructure. This shift brings new regulatory expectations relating to data governance and compliance. Hybrid and multicloud environments offer businesses unprecedented flexibility, but also introduce complexity, increasing the challenge of preventing unauthorised access. ... Attackers are adjusting their tactics accordingly, viewing cloud environments as potentially vulnerable targets. According to the statement, "A well-thought-out cloud security plan can significantly reduce the likelihood of breaches or damage, enhance compliance, and increase customer trust—even though it can never completely prevent attacks and vulnerabilities."


Safeguarding the Foundations of Enterprise GenAI

Implementing strong identity security measures is essential to mitigate risks and protect the integrity of GenAI applications. Many identities have high levels of access to critical infrastructure and, if compromised, could provide attackers with multiple entry points. It is important to emphasise that privileged users include not just IT and cloud teams but also business users, data scientists, developers and DevOps engineers. A compromised developer identity, for instance, could grant access to sensitive code, cloud functions, and enterprise data. Additionally, the GenAI backbone relies heavily on machine identities to manage resources and enforce security. As machine identities often outnumber human ones, securing them is crucial. Adopting a Zero Trust approach is vital, extending security controls beyond basic authentication and role-based access to minimise potential attack surfaces. To enhance identity security across all types of identities, several key controls should be implemented. Enforcing strong adaptive multi-factor authentication (MFA) for all user access is essential to prevent unauthorised entry. Securing access to credentials, keys, certificates, and secrets—whether used by humans, backend applications, or scripts—requires auditing their use, rotating them regularly, and ensuring that API keys or tokens that cannot be automatically rotated are not permanently assigned.
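The rotation requirement above, audit secret use and rotate regularly, is the kind of control that lends itself to automation. A minimal sketch of a rotation-age audit (the 90-day policy, the secret names, and the inventory shape are all hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy: flag any secret not rotated within 90 days.
ROTATION_MAX_AGE = timedelta(days=90)

def overdue_secrets(inventory, now=None):
    """Return the names of secrets whose last rotation exceeds the policy age.

    `inventory` maps secret name -> datetime of last rotation.
    """
    now = now or datetime.now(timezone.utc)
    return [name for name, last_rotated in inventory.items()
            if now - last_rotated > ROTATION_MAX_AGE]

now = datetime(2025, 5, 7, tzinfo=timezone.utc)
inventory = {
    "db-password": datetime(2025, 4, 1, tzinfo=timezone.utc),     # 36 days old
    "legacy-api-key": datetime(2024, 11, 1, tzinfo=timezone.utc), # ~187 days old
}
print(overdue_secrets(inventory, now))  # ['legacy-api-key']
```

A real deployment would pull the inventory from a secrets manager and feed the result into ticketing or automated rotation, but the audit loop itself is this simple.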


The new frontier of API governance: Ensuring alignment, security, and efficiency through decentralization

To effectively govern APIs in a decentralized landscape, organizations must embrace new principles that foster collaboration, flexibility and shared responsibility. Optimized API governance is not about abandoning control, but rather about distributing it strategically while still maintaining overarching standards and ensuring critical aspects such as security, compliance and quality. This includes granting development teams autonomy to design, develop and manage their APIs within clearly defined boundaries and guidelines. This encourages innovation while fostering ownership and allows each team to optimize their APIs to their specific needs. It can be further established through a shared responsibility model in which teams are accountable for adhering to governance policies while a central governing body provides the overarching framework, guidelines and support. This operating model can be further supported by cultivating a culture of collaboration and communication between central governance teams and development teams. The central governance team can have a representative from each development team, with clear channels for feedback, shared documentation and joint problem-solving. Implementing governance policies as code, leveraging tools and automation, makes it easier to enforce standards consistently and efficiently across the decentralized environment.
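"Governance as code" can be as simple as a linter that every team's API definition must pass in CI, with the central team owning the rules and each team owning its specs. The rule set and spec fields below are hypothetical, chosen only to show the shape of such a check:

```python
# Minimal governance-as-code sketch: centrally defined rules, applied
# uniformly to every team's API definition. Field names are illustrative.
RULES = [
    ("security", lambda spec: bool(spec.get("auth")),
     "every API must declare an auth scheme"),
    ("versioning", lambda spec: spec.get("path", "").startswith("/v"),
     "paths must carry a version prefix"),
    ("ownership", lambda spec: bool(spec.get("owner")),
     "every API must name an owning team"),
]

def check(spec):
    """Return the list of governance violations for one API spec."""
    return [msg for _, passes, msg in RULES if not passes(spec)]

good = {"path": "/v1/payments", "auth": "oauth2", "owner": "payments-team"}
bad = {"path": "/payments"}

print(check(good))  # []
print(check(bad))   # all three rules violated
```

Running such a check in every pipeline gives teams autonomy over *how* they build while the central body keeps a consistent floor on security, versioning and ownership.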


Banking on innovation: Engineering excellence in regulated financial services

While financial services regulations aren’t likely to get simpler, banks are finding ways to innovate without compromising security. "We’re seeing a culture change with our security office and regulators," explains Lanham. "As cloud tech, AI, and LLMs arrive, our engineers and security colleagues have to upskill." Gartner's 2025 predictions say GenAI is shifting data security to protect unstructured data. Rather than cybersecurity taking a gatekeeper role, security by design is built into development processes. "Instead of saying “no”, the culture is, how can we be more confident in saying “yes”?" notes Lanham. "We're seeing a big change in our security posture, while keeping our customers' safety at the forefront." As financial organizations carefully tread a path through digital and AI transformation, the most successful will balance innovation with compliance, speed with security, and standardization with flexibility. Engineering excellence in financial services needs leaders who can set a clear vision while balancing tech potential with regulations. The path won’t be simple, but by investing in simplification, standardization and a shared knowledge and security culture, financial services engineering teams can drive positive change for millions of banking customers.


‘Data security has become a trust issue, not just a tech issue’

Data is very messy and data ecosystems are very complex. Every organisation we speak to has data across multiple different types of databases and data stores for different use cases. As an industry, we need to acknowledge the fact that no organisation has an entirely homogeneous data stack, so we need to support and plug into a wide variety of data ecosystems, like Databricks, Google and Amazon, regardless of the tooling used for data analytics, for integration, for quality, for observability, for lineage and the like. ... Cloud adoption is causing organisations to rethink their traditional approach to data. Most use cloud data services to provide a shortcut to seamless data integration, efficient orchestration, accelerated data quality and effective governance. In reality, most organisations will need to adopt a hybrid approach to address their entire data landscape, which typically spans a wide variety of sources that span both cloud and on premises. ... Data security has become a trust issue, not just a tech issue. With AI, hybrid cloud and complex supply chains, the attack surface is massive. We need to design with security in mind from day one – think secure coding, data-level controls and zero-trust principles. For AI, governance is critical, and it too needs to be designed in and not an afterthought. That means tracking where data comes from, how models are trained, and ensuring transparency and fairness.


Secure by Design vs. DevSecOps: Same Security Goal, Different Paths

Although the "secure by design" initiative offers limited guidance on how to make an application secure by default, it comes closer to being a distinct set of practices than DevSecOps. The latter is more of a high-level philosophy that organizations must interpret on their own; in contrast, secure by design advocates specific practices, such as selecting software architectures that mitigate the risk of data leakage and avoiding memory management practices that increase the chances of the execution of malicious code by attackers. ... Whereas DevSecOps focuses on all stages of the software development life cycle, the secure by design concept is geared mainly toward software design. It deals less with securing applications during and after deployment. Perhaps this makes sense because so long as you start with a secure design, you need to worry less about risks once your application is fully developed — although given that there's no way to guarantee an app can't be hacked, DevSecOps' holistic approach to security is arguably the more responsible one. ... Even if you conclude that secure by design and DevSecOps mean basically the same thing, one notable difference is that the government sector has largely driven the secure by design initiative, while DevSecOps is more popular within private industry.


Immutable by Design: Reinventing Business Continuity and Disaster Recovery

Immutable backups create tamper-proof copies of data, protecting it from cyber threats, accidental deletion, and corruption. This guarantees that critical data can be quickly restored, allowing businesses to recover swiftly from disruptions. Immutable storage provides data copies that cannot be manipulated or altered, ensuring data remains secure and can quickly be recovered after an attack. In addition to immutable backup storage, response plans must be continually tested and updated to combat the evolving threat landscape and adapt to growing business needs. The ultimate test of a response plan ensures data can be quickly and easily restored or failed over, depending on the event: activating a second site in the case of a natural disaster, or recovering systems without making any ransomware payments in the case of an attack. This testing involves validating the reliability of backup systems, recovery procedures, and the overall disaster recovery plan to minimize downtime and ensure business continuity. ... It can be challenging for IT teams trying to determine the perfect fit for their ecosystem, as many storage vendors claim to provide immutable storage but are missing key features. As a rule of thumb, if "immutable" data can be overwritten by a backup or storage admin, a vendor, or an attacker, then it is not a truly immutable storage solution.
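The rule of thumb above, that data an admin can overwrite is not immutable, is a property you can state in a few lines. The toy write-once store below illustrates the property only; it is not any vendor's implementation:

```python
import hashlib

class WriteOnceStore:
    """Toy immutable backup store: objects can be written once and read,
    never overwritten or deleted, and every read is verified against the
    SHA-256 digest recorded at write time."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data: bytes):
        if key in self._objects:
            # The defining property: no principal, admin included, may overwrite.
            raise PermissionError(f"{key!r} is immutable and already exists")
        self._objects[key] = (data, hashlib.sha256(data).hexdigest())

    def get(self, key) -> bytes:
        data, digest = self._objects[key]
        # Tamper-evidence: a corrupted copy no longer matches its digest.
        if hashlib.sha256(data).hexdigest() != digest:
            raise ValueError(f"{key!r} failed integrity verification")
        return data

store = WriteOnceStore()
store.put("backup-2025-05-01", b"critical data")
try:
    store.put("backup-2025-05-01", b"tampered data")
except PermissionError as e:
    print(e)  # 'backup-2025-05-01' is immutable and already exists
```

Real immutable storage enforces the same contract below the API, in the storage layer and retention policy, so that even privileged credentials cannot bypass it.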


Neurohacks to outsmart stress and make better cybersecurity decisions

In cybersecurity, where clarity and composure are essential, particularly during a data breach or threat response, these changes can have high-stakes consequences. “The longer your brain is stuck in this high-stress state, the more of those changes you will start to see and burnout is just an extreme case of chronic stress on the brain,” Landowski says. According to her, the tipping point between healthy stress and damaging chronic stress usually comes after about eight to 12 weeks, but it varies between individuals. “If you know about some of the things you can do to reduce the impact of stress on your body, you can potentially last a lot longer before you see any effects, whereas if you’re less resilient, or if your genes are more susceptible to stress, then it could be less.” ... working in cybersecurity, particularly as a hacker, is often about understanding how people think and then spotting the gaps. That same shift in understanding, tuning into how the brain works under different conditions, can help cybersecurity leaders make better decisions and build more resilient teams. As Cerf highlights, he works with organizations to identify these optimal operating states, testing how individuals and entire teams respond to stress and when their brains are most effective. “The brain is not just a solid thing,” Cerf says.


Beyond Safe Models: Why AI Governance Must Tackle Unsafe Ecosystems

Despite the evident risks of unsafe deployment ecosystems, the prevailing approach to AI governance still heavily emphasizes pre-deployment interventions (such as alignment research, interpretability tools, and red teaming) aimed at ensuring that the model itself is technically sound. Governance initiatives like the EU AI Act, while vital, primarily place obligations on providers and developers to ensure compliance through documentation, transparency, and risk management plans. However, the governance of what happens after deployment (when these models enter institutions with their own incentives, infrastructures, and oversight) receives comparatively less attention. For example, while the EU AI Act introduces post-market monitoring and deployer obligations for high-risk AI systems, these provisions remain limited in scope. Monitoring primarily focuses on technical compliance and performance, with little attention to broader institutional, social, or systemic impacts. Deployer responsibilities are only weakly integrated into ongoing risk governance and focus primarily on procedural requirements, such as record-keeping and ensuring human oversight, rather than assessing whether the deploying institution has the capacity, incentives, or safeguards to use the system responsibly.

Daily Tech Digest - May 01, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden



Bridging the IT and security team divide for effective incident response

One reason IT and security teams end up siloed is the healthy competitiveness that often exists between them. IT wants to innovate, while security wants to lock things down. These teams are made up of brilliant minds. However, faced with the pressure of a crisis, they might hesitate to admit they feel out of control, simmering issues may come to a head, or they may become so fixated on solving the issue that they fail to update others. To build an effective incident response strategy, identifying a shared vision is essential. Here, leadership should host joint workshops where teams learn more about each other and share ideas about embedding security into system architecture. These sessions should also simulate real-world crises, so that each team is familiar with how their roles intersect during a high-pressure situation and feels comfortable when an actual crisis arises. ... By simulating realistic scenarios – whether it’s ransomware incidents or malware attacks – those in leadership positions can directly test and measure the incident response plan so that it becomes an ingrained process. Throw in curveballs when needed, and use these exercises to identify gaps in processes, tools, or communication. There’s a world of issues to uncover: disconnected tools and systems; a lack of automation that could speed up response times; and excessive documentation requirements.


First Principles in Foundation Model Development

The mapping of words and concepts into high-dimensional vectors captures semantic relationships in a continuous space. Words with similar meanings or that frequently appear in similar contexts are positioned closer to each other in this vector space. This allows the model to understand analogies and subtle nuances in language. The emergence of semantic meaning from co-occurrence patterns highlights the statistical nature of this learning process. Hierarchical knowledge structures, such as the understanding that “dog” is a type of “animal,” which is a type of “living being,” develop organically as the model identifies recurring statistical relationships across vast amounts of text. ... The self-attention mechanism represents a significant architectural innovation. Unlike recurrent neural networks that process sequences sequentially, self-attention allows the model to consider all parts of the input sequence simultaneously when processing each word. The “dynamic weighting of contextual relevance” means that for any given word in the input, the model can attend more strongly to other words that are particularly relevant to its meaning in that specific context. This ability to capture long-range dependencies is critical for understanding complex language structures. The parallel processing capability significantly speeds up training and inference. 
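The "dynamic weighting of contextual relevance" described above reduces to a few lines of arithmetic: each position scores every other position, normalizes the scores with a softmax, and takes a weighted average of the value vectors. A pure-Python sketch with toy 2-d embeddings (real models use learned query/key/value projections and hundreds of dimensions):

```python
import math

def softmax(xs):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    """Scaled dot-product attention over a short sequence of toy vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]  # relevance of each position
        weights = softmax(scores)                          # normalized, sums to 1
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])       # weighted average of values
    return out

# Three token positions with 2-d embeddings; every position attends to
# all three simultaneously, rather than sequentially as an RNN would.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(self_attention(x, x, x))
```

Because every position is processed against all others in one pass, the computation parallelizes across the sequence, which is the source of the training and inference speedups the excerpt mentions.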


The best preparation for a password-less future is to start living there now

One of the big ideas behind passkeys is to keep us users from behaving as our own worst enemies. For nearly two decades, malicious actors -- mainly phishers and smishers -- have been tricking us into giving them our passwords. You'd think we would have learned how to detect and avoid these scams by now. But we haven't, and the damage is ongoing. ... But let's be clear: Passkeys are not passwords. If we're getting rid of passwords, shouldn't we also get rid of the phrase "password manager?" Note that there are two primary types of credential managers. The first is the built-in credential manager. These are the ones from Apple, Google, Microsoft, and some browser makers built into our platforms and browsers, including Windows, Edge, MacOS, Android, and Chrome. With passkeys, if you don't bring your own credential manager, you'll likely end up using one of these. ... The FIDO Alliance defines a "roaming authenticator" as a separate device to which your passkeys can be securely saved and recalled. Examples are hardware security keys (e.g., Yubico) and recent Android phones and tablets, which can act in the capacity of a hardware security key. Since your credentials to your credential manager are literally the keys to your entire kingdom, they deserve some extra special security.


Mind the Gap: Assessing Data Quality Readiness

Data Quality Readiness is defined as the ratio of the number of fully described Data Quality Measure Elements that are being calculated and/or collected to the number of Data Quality Measure Elements in the desired set of Data Quality Measures. By fully described I mean both the “number of data values” part and the “that are outliers” part. The first prerequisite activity is determining which Quality Measures you want to implement. The ISO standard defines 15 different Data Quality Characteristics. I covered those last time. The Data Quality Characteristics are made up of 63 Quality Measures. The Quality Measures are categorized as Highly Recommendable (19), Recommendable (36), and For Reference (8). This provides a starting point for prioritization. Begin with a few measures that are most applicable to your organization and that will have the greatest potential to improve the quality of your data. The reusability of the Quality Measures can factor into the decision, but it shouldn’t be the primary driver. The objective is not merely to collect information for its own sake, but to use that information to generate value for the enterprise. The result will be a set of Data Quality Measure Elements to collect and calculate. You do the ones that are best for you, but I would recommend looking at two in particular.
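The readiness ratio defined at the top of the excerpt is straightforward to compute once the two sets of Data Quality Measure Elements are enumerated. The element names below are invented for illustration, not taken from the ISO standard:

```python
def dq_readiness(desired_elements, implemented_elements):
    """Ratio of fully described Data Quality Measure Elements already being
    calculated/collected to the elements required by the desired measures."""
    desired = set(desired_elements)
    if not desired:
        return 1.0  # nothing desired yet, vacuously ready
    return len(desired & set(implemented_elements)) / len(desired)

# Hypothetical elements implied by a few chosen Quality Measures.
desired = {"record_count", "outlier_count", "null_count", "duplicate_count"}
# Elements the organization already calculates or collects.
implemented = {"record_count", "null_count", "row_checksum"}

print(dq_readiness(desired, implemented))  # 0.5
```

Note that `row_checksum` does not raise the score: only elements that the desired measures actually require count toward readiness, which matches the point that collecting information for its own sake generates no value.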


Why non-human identity security is the next big challenge in cybersecurity

What makes this particularly challenging is that each of these identities requires access to sensitive resources and carries potential security risks. Unlike human users, who follow predictable patterns and can be managed through traditional IAM solutions, non-human identities operate 24/7, often with elevated privileges, making them attractive targets for attackers. ... We’re witnessing a paradigm shift in how we need to think about identity security. Traditional security models were built around human users – focusing on aspects like authentication, authorisation and access management from a human-centric perspective. But this approach is inadequate for the machine-dominated future we’re entering. Organisations need to adopt a comprehensive governance framework specifically designed for non-human identities. This means implementing automated discovery and classification of all machine identities and their secrets, establishing centralised visibility and control and enforcing consistent security policies across all platforms and environments. ... First, organisations need to gain visibility into their non-human identity landscape. This means conducting a thorough inventory of all machine identities and their secrets, their access patterns and their risk profiles.


Preparing for the next wave of machine identity growth

First, let’s talk about the problem of ownership. Even organizations that have conducted a thorough inventory of the machine identities in their environments often lack a clear understanding of who is responsible for managing those identities. In fact, 75% of the organizations we surveyed indicated that they don’t have assigned ownership for individual machine identities. That’s a real problem—especially since poor (or insufficient) governance practices significantly increase the likelihood of compromised access, data loss, and other negative outcomes. Another critical blind spot is around understanding what data each machine identity can or should be able to access—and just as importantly, what it cannot and should not access. Without clarity, it becomes nearly impossible to enforce proper security controls, limit unnecessary exposure, or maintain compliance. Each machine identity is a potential access point to sensitive data and critical systems. Failing to define and control their access scope opens the door to serious risk. Addressing the issue starts with putting a comprehensive machine identity security solution in place—ideally one that lets organizations govern machine identities just as they do human identities. Automation plays a critical role: with so many identities to secure, a solution that can discover, classify, assign ownership, certify, and manage the full lifecycle of machine identities significantly streamlines the process.
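To make the ownership and access-scope blind spots concrete, a first-pass audit can simply walk the inventory and report identities with no assigned owner or no defined access scope. A minimal sketch over a hypothetical inventory format (the keys and identity names are invented):

```python
def governance_gaps(inventory: list[dict]) -> dict[str, list[str]]:
    """Report identities missing an owner or an explicit access scope."""
    gaps = {"unowned": [], "unscoped": []}
    for ident in inventory:
        if not ident.get("owner"):
            gaps["unowned"].append(ident["name"])
        if not ident.get("allowed_resources"):
            gaps["unscoped"].append(ident["name"])
    return gaps

inventory = [
    {"name": "etl-runner", "owner": "data-eng", "allowed_resources": ["warehouse"]},
    {"name": "legacy-sync", "owner": None, "allowed_resources": []},
]
print(governance_gaps(inventory))
# {'unowned': ['legacy-sync'], 'unscoped': ['legacy-sync']}
```

Even this trivial report surfaces the 75% problem the survey describes: identities nobody owns and whose reach nobody has defined.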


To Compete, Banking Tech Needs to Be Extensible. A Flexible Platform is Key

The banking ecosystem includes three broad stages along the trajectory toward extensibility, according to Ryan Siebecker, a forward-deployed engineer at Narmi, a banking software firm. These are closed, non-extensible systems — typically legacy cores with proprietary software that doesn’t easily connect to third-party apps; systems that allow limited, custom integrations; and open, extensible systems that allow API-based connectivity to third-party apps. ... The route to extensibility can be enabled through an internally built, custom middleware system, or institutions can work with outside vendors, including Narmi, whose systems operate in parallel with core systems. Michigan State University Federal Credit Union, which began its journey toward extensibility in 2009, pursued an independent route by building in-house middleware infrastructure to allow API connectivity to third-party apps. Building in-house made sense given the early rollout of extensible capabilities, but when developing a toolset internally, institutions need to consider appropriate staffing levels — a commitment not all community banks and credit unions can make. For MSUFCU, the benefit was greater customization, according to the credit union’s chief technology officer, Benjamin Maxim. "With the timing that we started, we had to do it all ourselves," he says, noting that it took about 40 team members to build a middleware system to support extensibility.
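The middleware approach described here amounts to an adapter layer: proprietary core calls are wrapped behind a uniform interface that third-party apps reach over an API. A schematic sketch — the class and method names, including the legacy `BALINQ` call, are invented for illustration and do not describe any real core system:

```python
from abc import ABC, abstractmethod

class CoreBankingAdapter(ABC):
    """Uniform interface the middleware exposes, whatever core sits behind it."""
    @abstractmethod
    def get_balance(self, account_id: str) -> int: ...

class LegacyCoreAdapter(CoreBankingAdapter):
    """Wraps a closed legacy core's proprietary call behind the open interface."""
    def __init__(self, core):
        self._core = core

    def get_balance(self, account_id: str) -> int:
        # Translate the open API call into the core's proprietary one.
        return self._core.BALINQ(account_id)

class FakeLegacyCore:
    """Stand-in for a proprietary core, for demonstration only."""
    def BALINQ(self, account_id: str) -> int:
        return {"acct-1": 12500}.get(account_id, 0)

adapter: CoreBankingAdapter = LegacyCoreAdapter(FakeLegacyCore())
print(adapter.get_balance("acct-1"))  # 12500
```

Third-party apps code against `CoreBankingAdapter` only, so swapping or upgrading the core means writing a new adapter, not rewriting every integration.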


5 Strategies for Securing and Scaling Streaming Data in the AI Era

Streaming data should never be wide open within the enterprise. Least-privilege access controls, enforced through role-based (RBAC) or attribute-based (ABAC) access control models, limit each user or application to only what’s essential. Fine-grained access control lists (ACLs) add another layer of protection, restricting read/write access to only the necessary topics or channels. Combine these controls with multifactor authentication, and even a compromised credential is unlikely to give attackers meaningful reach. ... Virtual private cloud (VPC) peering and private network setups are essential for enterprises that want to keep streaming data secure in transit. These configurations ensure data never touches the public internet, thus eliminating exposure to distributed denial of service (DDoS), man-in-the-middle attacks and external reconnaissance. Beyond security, private networking improves performance. It reduces jitter and latency, which is critical for applications that rely on subsecond delivery or AI model responsiveness. While VPC peering takes thoughtful setup, the benefits in reliability and protection are well worth the investment. ... Just as importantly, security needs to be embedded into culture. Enterprises that regularly train their employees on privacy and data protection tend to identify issues earlier and recover faster.
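In ACL terms, least privilege means default-deny: an operation is permitted only if a grant explicitly names the principal, topic, and operation. A minimal sketch of that check — the principal and topic names are illustrative, not drawn from any real deployment:

```python
# Explicit grants: (principal, topic) -> allowed operations.
ACLS = {
    ("analytics-svc", "clickstream"): {"read"},
    ("ingest-svc", "clickstream"): {"write"},
}

def authorized(principal: str, topic: str, operation: str,
               acls: dict = ACLS) -> bool:
    """Default-deny: allow only operations an ACL explicitly grants."""
    return operation in acls.get((principal, topic), set())

print(authorized("analytics-svc", "clickstream", "read"))   # True
print(authorized("analytics-svc", "clickstream", "write"))  # False
```

Streaming platforms such as Apache Kafka implement this model natively; the point of the sketch is only that absence of a grant, not presence of a denial, is what blocks access.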


Supply Chain Cybersecurity – CISO Risk Management Guide

Modern supply chains often span continents and involve hundreds or even thousands of third-party vendors, each with its own security posture and vulnerabilities. Attackers have recognized that breaching a less secure supplier can be the easiest way to compromise a well-defended target. Recent high-profile incidents have shown that supply chain attacks can lead to data breaches, operational disruptions, and significant financial losses. The interconnectedness of digital systems means that a single compromised vendor can have a cascading effect, impacting multiple organizations downstream. For CISOs, this means that traditional perimeter-based security is no longer sufficient. Instead, they must take a holistic approach that treats every entity with access to critical systems or data as a potential risk vector. ... Building a secure supply chain is not a one-time project—it’s an ongoing journey that demands leadership, collaboration, and adaptability. CISOs must position themselves as business enablers, guiding the organization to view cybersecurity not as a barrier but as a competitive advantage. This starts with embedding cybersecurity considerations into every stage of the supplier lifecycle, from onboarding to offboarding. Leadership engagement is crucial: CISOs should regularly brief the executive team and board on supply chain risks, translating technical findings into business impacts such as potential downtime, reputational damage, or regulatory penalties.
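One way to embed security into the supplier lifecycle is an onboarding gate that translates technical findings into the business-impact tiers a board can act on. A toy sketch — the finding names, weights, and thresholds below are invented assumptions, not an established scoring standard:

```python
def vendor_risk_tier(findings: dict) -> str:
    """Map technical findings to a business-impact tier (illustrative weights)."""
    score = 0
    score += 5 * findings.get("critical_vulns", 0)     # unpatched critical CVEs
    score += 3 if findings.get("has_prod_access") else 0
    score += 2 if not findings.get("mfa_enforced", False) else 0
    if score >= 8:
        return "high"      # brief the board; block onboarding until remediated
    if score >= 3:
        return "medium"    # onboard with compensating controls
    return "low"

print(vendor_risk_tier({"critical_vulns": 1, "has_prod_access": True}))
# "high": 5 + 3 + 2 (no MFA evidence) = 10
```

The exact weighting matters far less than having a repeatable rule that is applied at onboarding and re-applied throughout the relationship, through offboarding.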


Developers Must Slay the Complexity and Security Issues of AI Coding Tools

Beyond adding further complexity to the codebase, AI models also lack the contextual nuance that is often necessary for creating high-quality, secure code, particularly when used by developers who lack security knowledge. As a result, vulnerabilities and other flaws are being introduced at a pace never before seen. Security-wise, the current software environment has grown out of control, and the trend shows no signs of slowing. But there is hope for slaying these twin dragons of complexity and insecurity. Organizations must step into the dragon’s lair armed with strong developer risk management, backed by education and upskilling that gives developers the tools they need to bring software under control. ... AI tools increase the speed of code delivery, enhancing efficiency in raw production, but those early productivity gains are being overwhelmed by code maintainability issues later in the SDLC. The answer is to address those issues at the beginning, before they put applications and data at risk. ... Organizations involved in software creation need to change their culture, adopting a security-first mindset in which secure software is seen not just as a technical issue but as a business priority. Persistent attacks and high-profile data breaches have become too common for boardrooms and CEOs to ignore.