Daily Tech Digest - April 02, 2025


Quote for the day:

"People will not change their minds but they will make new decisions based upon new information." -- Orrin Woodward


The smart way to tackle data storage challenges

Data intelligence makes data stored on the X10000 ready for AI applications to use as soon as it is ingested. The company has a demo of this, in which the X10000 ingests customer support documents and enables users to instantly ask it relevant natural language questions via a locally hosted version of the DeepSeek LLM. This kind of application wouldn’t be possible with low-speed legacy object storage, says the company. The X10000’s all-NVMe storage architecture helps support low-latency access to this indexed and vectorized data, avoiding front-end caching bottlenecks. Advances like these provide up to 6x faster performance than leading object storage competitors, according to HPE’s benchmark testing. ... The containerized architecture opens up options for inline and out-of-band software services, such as automated provisioning and life cycle management of storage resources. It also makes it easier to co-locate a workload’s data and compute resources, minimizing data movement by enabling workloads to process data in place rather than moving it to other compute nodes. This is an important performance factor in low-latency applications like AI training and inference. Another aspect of container-based workloads is that they can all interact with the same object storage layer.
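
To make the ingest-then-ask pattern concrete, here is a self-contained toy sketch in Python. The hashing-based "embeddings" and in-memory store merely stand in for the X10000's real vectorization pipeline and the DeepSeek model, which are not public; only the shape of the flow (vectorize at ingest, retrieve by similarity at query time) is illustrated.

```python
import hashlib, math

def toy_embed(text: str, dims: int = 64) -> list[float]:
    """Stand-in embedding: hash word tokens into a fixed-size unit vector.
    A real pipeline would call an embedding model here."""
    vec = [0.0] * dims
    for token in text.lower().split():
        h = int(hashlib.sha256(token.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

object_store = {}   # key -> (document text, embedding), indexed at ingest time

def ingest(key: str, doc: str) -> None:
    object_store[key] = (doc, toy_embed(doc))

def query(question: str, top_k: int = 2):
    q = toy_embed(question)
    scored = sorted(
        object_store.items(),
        key=lambda kv: -sum(a * b for a, b in zip(q, kv[1][1])),  # cosine on unit vectors
    )
    return [(key, doc) for key, (doc, _) in scored[:top_k]]

ingest("kb/reset.txt", "To reset the router, hold the button for ten seconds.")
ingest("kb/warranty.txt", "The warranty covers hardware failures for two years.")
print(query("how do I reset my router?"))
```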


Talent gap complicates cost-conscious cloud planning

The top strategy so far is what one enterprise calls the “Cloud Team.” You assemble all your people with cloud skills, and your own best software architect, and have the team examine current and proposed cloud applications, looking for a high-level approach that meets business goals. In this process, the team tries to avoid implementation specifics, focusing instead on the notion that a hybrid application has an agile cloud side and a governance-and-sovereignty data center side, and that what has to be done is push functionality into the right place. ... For enterprises that tried the Cloud Team, there’s also a deeper lesson. In fact, there are two. Remember the old “the cloud changes everything” claim? Well, it does, but not the way we thought, or at least not as simply and directly as we thought. The economic revolution of the cloud is selective, a set of benefits that has to be carefully fit to business problems in order to deliver the promised gains. Application development overall has to change, to emphasize the strategic-then-tactical flow that top-down design always called for but didn’t always deliver. That’s the first lesson. The second is that the kinds of applications the cloud changes most are the ones we can’t move there, because they were never implemented anywhere else.


Your smart home may not be as secure as you think

Most smart devices rely on Wi-Fi to communicate. If these devices connect to an unsecured or poorly protected Wi-Fi network, they can become an easy target. Unencrypted networks are especially vulnerable, and hackers can intercept sensitive data, such as passwords or personal information, as it is transmitted from the devices. ... Many smart devices collect personal data—sometimes more than users realize. Some devices, like voice assistants or security cameras, are constantly listening or recording, which can lead to privacy violations if not properly secured. In some cases, manufacturers don’t encrypt or secure the data they collect, making it easier for malicious actors to exploit it. ... Smart home devices often connect to third-party platforms or other devices. These integrations can create security holes if the third-party services don’t have strong protections in place. A breach in one service could give attackers access to an entire smart home ecosystem. To mitigate this risk, it’s important to review the security practices of any third-party service before integrating it with your IoT devices. ... If your devices support it, always enable 2FA and link your accounts to a reliable authentication app or your mobile number. You can use 2FA with smart home hubs and cloud-based apps that control IoT devices.
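
For the curious, the six-digit codes those authenticator apps produce are typically TOTP (RFC 6238). A minimal stdlib-only sketch of how one is derived follows; in practice you should rely on a vetted library rather than rolling your own.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code (RFC 6238) from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                      # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret; a device and an app sharing it compute the same code each window.
print(totp("JBSWY3DPEHPK3PXP"))
```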


Beyond compensation—crafting an experience that retains talent

Looking ahead, the companies that succeed in attracting and retaining top talent will be those that embrace innovation in their Total Rewards strategies. AI-driven personalization is already changing the game—organizations are using AI-powered platforms to tailor benefits to individual employee needs, offering a menu of options such as additional PTO, learning stipends, or wellness perks. Similarly, equity-based compensation models are evolving, with some businesses exploring cryptocurrency-based rewards and fractional ownership opportunities. Sustainability is also becoming a key factor in Total Rewards. Companies that incorporate sustainability-linked incentives, such as carbon footprint reduction rewards or volunteer days, are seeing higher engagement and satisfaction levels. ... Total Rewards is no longer just about compensation—it’s about creating an ecosystem that supports employees in every aspect of their work and life. Companies that adopt the VALUE framework—Variable pay, Aligned well-being benefits, Learning and growth opportunities, Ultimate flexibility, and Engagement-driven recognition—will not only attract top talent but also foster long-term loyalty and satisfaction.


Bridging the Gap Between the CISO & the Board of Directors

Many executives, including board members, may not fully understand the CISO's role. This isn't just a communications gap; it's also an opportunity to build relationships across departments. When CISOs connect security priorities to broader business goals, they show how cybersecurity is a business enabler rather than just an operational cost. ... Often, those in technical roles lack the ability to speak anything other than the language of tech, making it harder to communicate with board members who don't hold tech or cybersecurity expertise. I remember presenting to our board early into my CISO role and, once I was done, seeing some blank stares. The issue wasn't that they didn't care about what I was saying; we just weren't speaking the same language. ... There are many areas in which communication between a board and CISO is important — but there may be none more important than compliance. Data breaches today are not just technical failures. They carry significant legal, financial, and reputational consequences. In this environment, regulatory compliance isn't just a box to check; it's a critical business risk that CISOs must manage, particularly as boards become more aware of the business impact of control failures in cybersecurity.


What does a comprehensive backup strategy look like?

Though backups are rarely needed, they form the foundation of disaster recovery. Milovan follows the classic 3-2-1 rule: three data copies, on two different media types, with one off-site copy. He insists on maintaining multiple copies “just in case.” In addition, NAS users need to update their OS regularly, Synology’s Alexandra Bejan says. “Outdated operating systems are particularly vulnerable there.” Bejan emphasizes the benefits of implementing the textbook best practices Ichthus employs. ... One may imagine that smaller enterprises make for easier targets due to their limited IT resources. However, nothing could be further from the truth. Bejan: “We have observed that the larger the enterprise, the more difficult it is to implement a comprehensive data protection strategy.” She says the primary reason for this lies in previously fragmented investments in backup infrastructure, where different solutions were procured for various workloads. “These legacy solutions struggle to effectively manage the rapidly growing number of workloads and the increasing data size. At the same time, they require significant human resources for training, with steep learning curves, making self-learning difficult. When personnel are reassigned, considerable time is needed to relearn the system.”
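
A minimal sketch of one 3-2-1 run in Python, assuming placeholder paths, a second mounted device for the local copy, and a pre-configured rclone remote for the off-site copy; real setups would add retention, verification, and scheduling.

```python
import datetime, pathlib, shutil, subprocess, tarfile

SOURCE = pathlib.Path("/data")            # what to protect (placeholder path)
LOCAL_COPY = pathlib.Path("/mnt/nas")     # copy 2, on a second media type (placeholder)
OFFSITE_REMOTE = "offsite:backups"        # copy 3, an rclone remote (assumed configured)

def backup_321() -> pathlib.Path:
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = pathlib.Path(f"/backups/data-{stamp}.tar.gz")    # copy 1, primary media
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    shutil.copy2(archive, LOCAL_COPY / archive.name)           # second copy, different device
    subprocess.run(["rclone", "copy", str(archive), OFFSITE_REMOTE],
                   check=True)                                 # one copy off-site
    return archive

if __name__ == "__main__":
    backup_321()
```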


Malicious actors increasingly put privileged identity access to work across attack chains

Many of these credentials are extracted from computers using so-called infostealer malware, malicious programs that scour the operating system and installed applications for saved usernames and passwords, browser session tokens, SSH and VPN certificates, API keys, and more. The advantage of using stolen credentials for initial access is that they require less skill compared to exploiting vulnerabilities in public-facing applications or tricking users into installing malware from email links or attachments — although these initial access methods remain popular as well. ... “Skilled actors have created tooling that is freely available on the open web, easy to deploy, and designed to specifically target cloud environments,” the Talos researchers found. “Some examples include ROADtools and AAAInternals, publicly available frameworks designed to enumerate Microsoft Entra ID environments. These tools can collect data on users, groups, applications, service principals, and devices, and execute commands.” These are often coupled with techniques designed to exploit the lack of MFA or incorrectly configured MFA. For example, push spray attacks, also known as MFA bombing or MFA fatigue, flood the user’s phone with MFA push notifications until, out of annoyance or the assumption that the system is malfunctioning, they approve the login.
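
On the defensive side, a common mitigation for MFA fatigue is simple rate detection on push requests. A minimal sketch, with illustrative thresholds, that flags a burst of pushes for one account:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look-back window (illustrative threshold)
MAX_PUSHES = 5         # pushes per window before we suspect MFA bombing

_pushes: dict[str, deque] = defaultdict(deque)

def record_push(user: str, now: float | None = None) -> bool:
    """Record an MFA push for `user`; return True if the rate looks like a fatigue attack."""
    now = time.time() if now is None else now
    q = _pushes[user]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:   # drop events outside the window
        q.popleft()
    return len(q) > MAX_PUSHES                 # caller should suppress pushes and alert

# Simulated burst of pushes, as in an MFA-bombing attempt:
t0 = 1_000_000.0
print([record_push("alice", t0 + i) for i in range(8)])
# -> False for the first five pushes, then True once the burst exceeds the threshold
```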


Role of Blockchain in Enhancing Cybersecurity

At its core, a blockchain is a distributed ledger in which each data block is cryptographically connected to its predecessor, forming an unbreakable chain. Without network authorization, modifying or removing data from a blockchain becomes exceedingly difficult. This ensures that data records stay consistent and accurate over time. The architectural structure of blockchain plays a critical role in protecting data integrity. Every single transaction is time-stamped and merged into a block, which is then confirmed and sealed through consensus. This process provides an undeniable record of all activities, simplifying audits and boosting confidence in system reliability. Similarly, blockchain ensures that every financial transaction is correctly documented and easily accessible. This innovation helps prevent record manipulation, double-spending, and other forms of fraud. By combining cryptographic safeguards with a decentralized architecture, it offers a strong foundation for information security. It also significantly reduces risks related to data breaches, hacking, and unauthorized access in the digital realm. Furthermore, blockchain strengthens cybersecurity by addressing concerns about unauthorized access and the rising threat of cyberattacks.
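
To make the hash-chaining concrete, here is a minimal stdlib-only Python sketch of a ledger in which each block commits to its predecessor's hash, so any retroactive edit is detectable. It illustrates only the linking mechanism, not consensus or a production blockchain.

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    payload = {k: block[k] for k in ("index", "timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data) -> None:
    block = {"index": len(chain), "timestamp": time.time(),
             "data": data, "prev_hash": chain[-1]["hash"] if chain else "0" * 64}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    """Tampering with any block invalidates its hash and every link after it."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):
            return False
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
append_block(chain, "genesis")
append_block(chain, {"tx": "alice pays bob 5"})
print(verify(chain))                                 # True
chain[1]["data"] = {"tx": "alice pays bob 500"}      # attempt to rewrite history
print(verify(chain))                                 # False: the chain detects the edit
```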


Thriving in the Second Wave of Big Data Modernization

When businesses want to use big data to power AI solutions – as opposed to the more traditional types of analytics workloads that predominated during the first wave of big data modernization – the problems stemming from poor data management snowball. They transform from mere annoyances or hindrances into showstoppers. ... But in the age of AI, this process would likely instead entail giving the employee access to a generative AI tool that can interpret a question formulated in natural language and generate a response based on the organizational data the AI was trained on. In this case, data quality or security issues could become very problematic. ... Unfortunately, there is no magic bullet that can cure the types of issues I’ve laid out above. A large part of the solution involves continuing to do the hard work of improving data quality, erecting effective access controls, and making data infrastructure even more scalable. As they do these things, however, businesses must pay careful attention to the unique requirements of AI use cases. For example, when they create security controls, they must do so in ways that are recognizable to AI tools, such that the tools will know which types of data should be accessible to which users.
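
One common pattern for making access controls "recognizable" to an AI tool is to attach security labels to the data itself and filter on the caller's entitlements before retrieval. A minimal sketch with invented roles and documents:

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: frozenset[str]   # security label carried with the data itself

CORPUS = [
    Document("Q3 revenue forecast ...", frozenset({"finance"})),
    Document("Public product FAQ ...", frozenset({"finance", "support", "everyone"})),
]

def retrieve_for_user(query: str, user_roles: set[str]) -> list[str]:
    """Filter the corpus by the caller's entitlements *before* any text
    reaches the generative model, so the model can only see permitted data."""
    visible = [d.text for d in CORPUS if d.allowed_roles & user_roles]
    # ...rank `visible` against `query` and build the prompt from the survivors...
    return visible

print(retrieve_for_user("what is the forecast?", {"support"}))
# -> only the public FAQ; the finance document never enters the model's context
```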


The DevOps Bottleneck: Why IaC Orchestration is the Missing Piece

At the end of the day, instead of eliminating operational burdens, many organizations just shifted them. DevOps, SREs, CloudOps—whatever you call them—these teams still end up being the gatekeepers. They own the application deployment pipelines, infrastructure lifecycle management, and security policies. And like any team, they seek independence and control—not out of malice, but out of necessity. Think about it: If your job is to keep production stable, are you really going to let every dev push infrastructure changes willy-nilly? Of course not. The result? Silos of unique responsibility and sacred internal knowledge. The very teams that were meant to empower developers become blockers instead. ... IaC orchestration isn’t about replacing your existing tools; it’s about making them work at scale. Think about how GitHub changed software development. Version control wasn’t new—but GitHub made it easier to collaborate, review code, and manage contributions without stepping on each other’s work. That’s exactly what orchestration does for IaC. It allows large teams to manage complex infrastructure without turning into a bottleneck. It enforces guardrails while enabling self-service for developers. 
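
To show what a guardrail-with-self-service might look like in practice, here is a hedged sketch of a pipeline policy check over Terraform's `terraform show -json` plan output. The required tags and rules are invented examples, and real orchestration platforms ship far richer policy engines; the point is that developers ship changes themselves while the gate enforces organizational rules.

```python
import json, sys

REQUIRED_TAGS = {"owner", "cost-center"}   # illustrative org policy

def check_plan(plan: dict) -> list[str]:
    """Scan a `terraform show -json tfplan` document for guardrail violations."""
    violations = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        tags = after.get("tags") or {}
        missing = REQUIRED_TAGS - set(tags)
        if missing:
            violations.append(f"{rc.get('address')}: missing tags {sorted(missing)}")
        if rc.get("type") == "aws_s3_bucket" and after.get("acl") == "public-read":
            violations.append(f"{rc.get('address')}: public S3 bucket is not allowed")
    return violations

if __name__ == "__main__":
    problems = check_plan(json.load(open(sys.argv[1])))
    for p in problems:
        print("BLOCKED:", p)
    sys.exit(1 if problems else 0)   # non-zero exit fails the pipeline, enforcing the guardrail
```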

Daily Tech Digest - April 01, 2025


Quote for the day:

"Strategy is not really a solo sport _ even if you_re the CEO." -- Max McKeown


MCP: The new “USB-C for AI” that’s bringing fierce rivals together

So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday. ... To make the connections behind the scenes between AI models and data sources, MCP uses a client-server model. An AI model (or its host application) acts as an MCP client that connects to one or more MCP servers. Each server provides access to a specific resource or capability, such as a database, search engine, or file system. When the AI needs information beyond its training data, it sends a request to the appropriate server, which performs the action and returns the result. To illustrate how the client-server model works in practice, consider a customer support chatbot using MCP that could check shipping details in real time from a company database. "What's the status of order #12345?" would trigger the AI to query an order database MCP server, which would look up the information and pass it back to the model. 
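
As an illustration of that flow, here is a toy, in-process Python sketch of the shape of such an exchange. Real MCP runs JSON-RPC 2.0 over stdio or HTTP with capability negotiation, all omitted here; the tool name, request fields, and order data are invented for the example.

```python
import json

# A toy in-process "MCP server" exposing one tool over JSON-RPC-shaped messages.
ORDERS = {"12345": {"status": "shipped", "eta": "2025-04-04"}}   # invented data

def mcp_server(request: dict) -> dict:
    if request["method"] == "tools/call" and request["params"]["name"] == "get_order_status":
        order_id = request["params"]["arguments"]["order_id"]
        order = ORDERS.get(order_id)
        result = order if order else {"error": "unknown order"}
        return {"jsonrpc": "2.0", "id": request["id"], "result": result}
    return {"jsonrpc": "2.0", "id": request["id"],
            "error": {"code": -32601, "message": "unknown method"}}

# The model's host application acts as the MCP client: the user's question is
# translated into a tool call, and the server's answer flows back to the model.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
           "params": {"name": "get_order_status", "arguments": {"order_id": "12345"}}}
print(json.dumps(mcp_server(request), indent=2))
```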


Why global tensions are a cybersecurity problem for every business

As global polarization intensifies, cybersecurity threats have become increasingly hybridized, complicating the landscape for threat attribution and defense. Michael DeBolt, Chief Intelligence Officer at Intel 471, explains: “Increasing polarization worldwide has seen the expansion of the state-backed threat actor role, with many established groups taking on financially motivated responsibilities alongside their other strategic goals.” This evolution is notably visible in threat actors tied to countries such as China, Iran, and North Korea. According to DeBolt, “Heightened geopolitical tensions have reflected this transition in groups originating from China, Iran, and North Korea over the last couple of years—although the latter is somewhat more well-known for its duplicitous activity that often blurs the line of more traditional e-crime threats.” These state-backed groups increasingly blend espionage and destructive attacks with financially motivated cybercrime techniques, complicating attribution and creating significant practical challenges for organizations. DeBolt highlights the implications: “A primary practical issue organizations are facing is threat attribution, with a follow-on issue being maintaining an effective security posture against these hybrid threats.”


How to take your first steps in AI without falling off a cliff

It is critical to bring all stakeholders on board through education and training on the fundamental building blocks of data and AI. This involves understanding what’s accessible in the market and differentiating between various AI technologies. Executive buy-in is crucial, and by planning for internal process outcomes first, organisations can better position themselves to achieve meaningful outcomes in the future. ... Don’t bite off more than you can chew! Trying to deploy a complex AI solution to the entire organisation is asking for trouble. It is better to identify early adopter departments where specific AI pilots and proofs of concept can be introduced and their value measured. Eventually, you might establish an AI assistant studio to develop dedicated AI tools for each use case according to individual needs. ... People are often wary of change, particularly change with such far-reaching implications for how we work. Clear communication, training, and ongoing support will all help reassure employees who fear being left behind. ... In the context of data and AI, the perspective shifts somewhat. Most organisations already have policies in place for public cloud adoption. However, the approach to AI and data must be more nuanced, given the vast potential of the technology involved.


6 hard-earned tips for leading through a cyberattack — from CSOs who’ve been there

Authority under crisis is meaningless if you can’t establish followership. And this goes beyond the incident response team: CISOs must communicate with the entire organization — a commonly misunderstood imperative, says Pablo Riboldi, CISO of nearshore talent provider BairesDev. ... “Organizations should provide training on stress management and decision-making under pressure, which includes perhaps mental health support resources in the incident response plan,” Ngui says. Larry Lidz, vice president of CX Security at Cisco, also advocates for tabletop exercises as a way to get employees to “look at problems through a different set of lenses than they would otherwise look at them.” ... Remaining calm in the face of a cyberattack can be challenging, but prime performance requires it, New Relic’s Gutierrez says. “There’s a lot of reaction. There’s a lot of strong feelings and emotions that go on during incidents,” Gutierrez says. Although there have been moments when composure slipped, Gutierrez says they have generally stayed calm under cyber duress, something they take pride in. Demonstrating composure as a leader under fire is important because it can influence how others feel, behave, and act.


A “Measured” Approach to Building a World-Class Offensive Security Program

First, mapping the top threats and threat actors most likely to find your organization an attractive target. Second, the top “crown jewel” systems they would target for compromise. Remaining at the enterprise level, the next step is to establish an internal framework and underlying program that graphs threats and risks, and provides a repeatable mechanism to track and refresh that understanding over time. This includes graphs of all enterprise systems and their associated connections and dependencies, as well as attack graphs that represent all the potential paths through your architecture that would lead an attacker to their prize. Finally, the third element is an architectural security review that discerns from the graphs which paths are most possible and probable. Installing a program that guides and tracks these three activities will also pay dividends down the line by better informing and increasing the efficacy of adversarial simulations. We all know the devil resides in the details. At this stage we begin understanding the actual vulnerability of individual assets and systems. The first step is a comprehensive inventory of the elements that exist across the organization. This includes internal endpoint assets, and external perimeter and cloud systems. As you’d likely expect, the next step is vulnerability scanning of the full asset inventory that was established.
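
As a toy illustration of such attack graphs, a few lines of Python can enumerate every simple path from an internet-facing entry point to a crown-jewel system. The topology below is invented; real programs build these graphs from asset and dependency inventories.

```python
# Toy attack graph: nodes are systems, edges are reachable/exploitable hops.
GRAPH = {
    "internet": ["vpn-gateway", "web-frontend"],
    "web-frontend": ["app-server"],
    "vpn-gateway": ["app-server", "workstation"],
    "workstation": ["file-share"],
    "app-server": ["customer-db"],        # "crown jewel"
    "file-share": ["customer-db"],
}

def attack_paths(graph: dict, src: str, target: str, path=None):
    """Enumerate every simple (cycle-free) path an attacker could take."""
    path = (path or []) + [src]
    if src == target:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:               # avoid revisiting nodes
            yield from attack_paths(graph, nxt, target, path)

for p in attack_paths(GRAPH, "internet", "customer-db"):
    print(" -> ".join(p))
# Each printed path is a candidate to break with a control (MFA, segmentation, patching).
```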


How AI Agents Are Quietly Transforming Frontend Development

Traditional developer tools are passive. You run a linter, and it tells you what’s wrong. You run a build tool, and it compiles. But AI agents are proactive. They don’t wait for instructions; they interpret high-level goals and try to execute them. Want to improve page performance? An agent can analyze your critical rendering path, optimize image sizes, and suggest lazy loading. Want a dark mode implemented across your UI library? It can crawl through your components and offer scoped changes that preserve brand integrity. ... Frontend development has always been plagued by complexity. Thousands of packages, constantly changing frameworks, and pixel-perfect demands from designers. AI agents bring sanity to the chaos, leaving cloud security as the main remaining worry. And if you decide to run an agent locally, even that concern goes away. They can serve as design-to-code translators, turning Figma files into functional components. They can manage breakpoints, ARIA attributes, and responsive behaviors automatically. They can even test components for edge cases by generating test scenarios that a developer might miss. Because these agents are always “on,” they notice patterns developers sometimes overlook. That dropdown menu that breaks on Safari 14? Flagged. That padding inconsistency between modals? Caught.


Agentic AI won’t make public cloud providers rich

Agentic AI isn’t what most people think it is. When I look at these systems, I see something fundamentally different from the brute-force AI approaches we’re accustomed to. Consider agentic AI more like a competent employee than a powerful calculator. What’s fascinating is how these systems don’t need centralized processing power. Instead, they operate more like distributed networks, often running on standard hardware and coordinating across different environments. They’re clever about using resources, pulling in specialized small language models when needed, and integrating with external services on demand. The real breakthrough isn’t about raw power—it’s about creating more intelligent, autonomous systems that can efficiently accomplish tasks. The big cloud providers emphasize their AI and machine learning capabilities alongside data management and hybrid cloud solutions, whereas agentic AI systems are likely to take a more distributed approach. These systems will integrate with large language models primarily as external services rather than core components. This architectural pattern favors smaller, purpose-built language models and distributed processing over centralized cloud resources. Ask me how I know. I’ve built dozens for my clients recently.
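
A rough sketch of that routing idea, with invented model names and toy keyword rules standing in for a real task classifier: the agent keeps cheap, specialized capabilities local and calls out to a large external model only when a task demands it.

```python
def classify_task(task: str) -> str:
    # A real agent might use a tiny local classifier here; keyword rules stand in.
    if any(w in task.lower() for w in ("summarize", "extract", "classify")):
        return "small-local-model"        # runs on standard hardware
    return "external-llm-service"         # remote call to a large model, used sparingly

def run(task: str) -> str:
    backend = classify_task(task)
    # Dispatch to local inference or an external API depending on the route chosen.
    return f"[{backend}] handled: {task}"

for t in ("summarize this support ticket", "draft a novel migration plan"):
    print(run(t))
```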


Cloud a viable choice amid uncertain AI returns

Enterprises can restrict data using internal controls and limit data movement to chosen geographical locations. The cluster can be customized and secured to meet the specific requirements of the enterprise without the constraints of using software or hardware configured and operated by a third party. Given these characteristics, for convenience, Uptime Institute has labeled the method as “best” in terms of customization and control. ... The challenge for enterprises is determining whether the added reassurance of dedicated infrastructure provides a real return on its substantial premium over the “better” option. Many large organizations - from financial services to healthcare - already use the public cloud to hold sensitive data. To secure data, an organization may encrypt data at rest and in transit, configure appropriate access controls, such as security groups, and set up alerts and monitoring. Many cloud providers have data centers approved for government use. It is unreasonable to view the cloud as inherently insecure or non-compliant, considering its broad use across many industries. Although dedicated infrastructure gives reassurance that data is being stored and processed at a particular location, it is not necessarily more secure or compliant than the cloud. 
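
As one concrete example of those controls, here is a short boto3 sketch that turns on default encryption at rest and blocks all public access for an S3 bucket. The bucket name is a placeholder, and other cloud providers expose analogous settings.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-sensitive-data"   # placeholder bucket name

# Encrypt data at rest by default, one of the controls the article describes.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Block any form of public access to the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```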


Why no small business is too small for hackers - and 8 security best practices for SMBs

To be clear, the size of your business isn't particularly relevant to bulk attacks. You are simply one of many businesses that can be targeted through random IP number generation, email harvesting, or some other process that makes it very cost-effective for a hacker to deliver a piece of malware that opens up computers in your business for opportunistic activity. ... Attackers -- who could be affiliated with organized crime groups, individual hackers, or even teams funded by nation-states -- often use pre-built hacking tools they can deploy without a tremendous amount of research and development. For hackers, this tactic is roughly the equivalent of downloading an app from an app store, although the hacking tools are usually purchased or downloaded from hacker-oriented websites and hidden forums (what some folks call "the dark web"). ... "Many SMB owners assume cybersecurity is too costly or too complex and think they don't have the IT knowledge or resources to set up reliable security. Few realize that they could set up security in a half hour. Moreover, the lack of dedicated cyber staff further complicates the situation for SMBs, making it even more daunting to implement and manage effective security measures."


AI is making the software supply chain more perilous than ever

The software supply chain is a link in modern IT environments that is as crucial as it is vulnerable. The new research report by JFrog, released during KubeCon + CloudNativeCon Europe in London, shows that organizations are struggling with increasing threats that are, inevitably, amplified by the rise of AI. ... The report identifies a “quad-fecta” of threats to the integrity and security of the software supply chain: vulnerabilities (CVEs), malicious packages, exposed secrets and configuration errors/human error. JFrog’s research team detected no fewer than 25,229 exposed secrets and tokens in public repositories – an increase of 64% compared to last year. Worryingly, 27% of these exposed secrets were still active. This interwoven set of security dangers makes it particularly difficult for organizations to keep their digital walls consistently in order. ... “More is not always better,” the report states. The collection of tools can make organizations more vulnerable due to increased complexity for developers. At the same time, visibility into the programming code remains a problem: only 43% of IT professionals say that their organization applies security scans at both the code and binary level. This is down from 56% last year and indicates that teams still have large blind spots when identifying software risks.
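
For a feel of what secret scanning involves, here is a small, illustrative Python scanner with a handful of common token patterns. Production tools such as gitleaks or trufflehog ship far more rules plus entropy checks; the patterns here are examples only.

```python
import pathlib, re, sys

# Illustrative patterns for common credential formats.
PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Generic secret": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{12,}['\"]"
    ),
}

def scan(root: str):
    """Walk a repository tree and yield file:line hits for each matching pattern."""
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, rx in PATTERNS.items():
                if rx.search(line):
                    yield f"{path}:{lineno}: possible {label}"

if __name__ == "__main__":
    for hit in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(hit)
```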