Daily Tech Digest - April 17, 2025


Quote for the day:

"We are only as effective as our people's perception of us." -- Danny Cox



Why data literacy is essential - and elusive - for business leaders in the AI age

The importance of data-driven decision-making is rising, yet trust in the data underpinning those decisions is falling. Business leaders do not feel equipped to find, analyze, and interpret the data they need in an increasingly competitive business environment. The added complexity is the convergence of macro and micro uncertainties -- including economic, political, financial, technological, competitive landscape, and talent shortage variables. ... The business need for greater adoption of AI capabilities, including predictive, generative, and agentic AI solutions, is increasing the need for businesses to have confidence and trust in their data. Survey results show that higher adoption of AI will require stronger data literacy and access to trustworthy data. ... The alarming part of the survey is that 54% of business leaders are not confident in their ability to find, analyze, and interpret data on their own. And fewer than half of business leaders are sure they can use data to drive action and decision-making, generate and deliver timely insights, or effectively use data in their day-to-day work. Data literacy and confidence in the data are two growth opportunities for business leaders across all lines of business.


Cyber threats against energy sector surge as global tensions mount

These cyber-espionage campaigns are primarily driven by geopolitical considerations, as tensions shaped by the Russo-Ukrainian war, the Gaza conflict, and the U.S.’ “great power struggle” with China are projected into cyberspace. With hostilities rising, potentially edging toward a third world war, rival nations are attempting to demonstrate their cyber-military capabilities by penetrating Western and Western-allied critical infrastructure networks. Fortunately, these nation-state campaigns have overwhelmingly been limited to espionage, as opposed to Stuxnet-style attacks intended to cause harm in the physical realm. A secondary driver of increasing cyberattacks against energy targets is technological transformation, marked by cloud adoption, which has largely accelerated the growing convergence of IT and OT networks. OT-IT convergence across critical infrastructure sectors has thus made networked industrial Internet of Things (IIoT) appliances and systems more penetrable to threat actors. Specifically, researchers have observed that adversaries are using compromised IT environments as staging points to move laterally into OT networks. Compromising OT can be particularly lucrative for ransomware actors, because this type of attack enables adversaries to physically paralyze energy production operations, empowering them with the leverage needed to command higher ransom sums.


The Active Data Architecture Era Is Here, Dresner Says

“The buildout of an active data architecture approach to accessing, combining, and preparing data speaks to a degree of maturity and sophistication in leveraging data as a strategic asset,” Dresner Advisory Services writes in the report. “It is not surprising, then, that respondents who rate their BI initiatives as a success place a much higher relative importance on active data architecture concepts compared with those organizations that are less successful.” Data integration is a major component of an active data architecture, but there are different ways that users can implement data integration. According to Dresner, the majority of active data architecture practitioners are utilizing batch and bulk data integration tools, such as ETL/ELT offerings. Fewer organizations are utilizing data virtualization as the primary data integration method, or real-time event streaming (e.g., Apache Kafka) or message-based data movement (e.g., RabbitMQ). Data catalogs and metadata management are important aspects of an active data architecture. “The diverse, distributed, connected, and dynamic nature of active data architecture requires capabilities to collect, understand, and leverage metadata describing relevant data sources, models, metrics, governance rules, and more,” Dresner writes.


How can businesses solve the AI engineering talent gap?

“It is unclear whether nationalistic tendencies will encourage experts to remain in their home countries. Preferences may not only be impacted by compensation levels, but also by international attention to recent US treatment of immigrants and guests, as well as controversy at academic institutions,” says Bhattacharyya. But businesses can mitigate this global uncertainty, to some extent, by casting their hiring net wider to include remote working. Indeed, Thomas Mackenbrock, CEO-designate of Paris-headquartered BPO giant Teleperformance, says that the company’s global footprint helps it to fulfil AI skills demand. “We’re not reliant on any single market [for skills] as we are present in almost 100 markets,” explains Mackenbrock. ... “The future workforce will need to combine human ingenuity with new and emerging AI technologies; going beyond just technical skills alone,” says Khaled Benkrid, senior director of education and research at Arm. “Academic institutions play a pivotal role in shaping this future workforce. By collaborating with industry to conduct research and integrate AI into their curricula, they ensure that graduates possess the skills required by the industry. “Such collaborations with industry partners keep academic programs aligned with research frontiers and evolving job market demands, creating a seamless transition for students entering the workforce,” says Benkrid.


Breaking Down the Walls Between IT and OT

“Even though there's cyber on both sides, they are fundamentally different in concept,” Ian Bramson, vice president of global industrial cybersecurity at Black & Veatch, an engineering, procurement, consulting, and construction company, tells InformationWeek. “It's one of the things that have kept them more apart traditionally.” ... “OT is looked at as having a much longer lifespan, 30 to 50 years in some cases. An IT asset, the typical laptop these days that's issued to an individual in a company, three years is about when most organizations start to think about issuing a replacement,” says Chris Hallenbeck, CISO for the Americas at endpoint management company Tanium. ... The skillsets required of the teams to operate IT and OT systems are also quite different. On one side, you likely have people skilled in traditional systems engineering. They may have no idea how to manage the programmable logic controllers (PLCs) commonly used in OT systems. The divide between IT and OT has been, in some ways, purposeful. The Purdue model, for example, provides a framework for segmenting ICS networks, keeping them separate from corporate networks and the internet. ... Cyberattack vectors on IT and OT environments look different and result in different consequences. “On the IT side, the impact is primarily data loss and all of the second order effects of your data getting stolen or your data getting held for ransom,” says Shankar.


Are Return on Equity and Value Creation New Metrics for CIOs?

While driving efficiency is not a new concept for technology leaders, what is different today is the scale and significance of their efforts. In many organizations, CIOs are being tasked with reimagining how value is generated, assessed and delivered. ... Traditionally, technology ROI discussions have focused on cost savings, automation, consolidation and reduced headcount. But that perspective is shifting rapidly. CIOs are now prioritizing customer acquisition, retention, pricing power and speed to market. CIOs also play a more integral role in product innovation than ever before. To remain relevant, they must speak the language of gross margin, not just uptime. This evolution is increasingly reflected in boardroom conversations. CIOs once presented dashboards of uptime and service-level agreements, but today, they discuss customer value, operational efficiency and platform monetization. ... In some cases, technology leaders scale too quickly before proving value. For example, expensive cloud migrations may proceed without a corresponding shift in the business model. This can result in data lakes with no clear application or platforms launched without product-market fit. These missteps can severely undermine ROE.


AI brings order to observability disorder

Artificial intelligence has contributed to complexity. Businesses now want to monitor large language models as well as applications to spot anomalies that may contribute to inaccuracies, bias, and slow performance. Legacy observability systems were never designed to bring together these disparate sources of data. A unified observability platform leveraging AI can radically simplify tools and processes, improving visibility and resolving problems faster, and enabling the business to optimize operations based on reliable insights. By consolidating on one set of integrated observability solutions, organizations can lower costs, simplify complex processes, and enable better cross-function collaboration. “Noise overwhelms site reliability engineering teams,” says Gagan Singh, Vice President of Product Marketing at Elastic. Irrelevant and low-priority alerts can overwhelm engineers, leading them to overlook critical issues and delaying incident response. Machine learning models are ideally suited to categorizing anomalies and surfacing relevant alerts so engineers can focus on critical performance and availability issues. “We can now leverage GenAI to enable SREs to surface insights more effectively,” Singh says.
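As a rough illustration of the idea (not Elastic's implementation), a first noise-reduction pass can be as simple as suppressing alert types that dominate the stream so rarer, likely-critical alerts surface first. The function name and threshold below are hypothetical:

```python
from collections import Counter

def triage_alerts(alerts, noise_threshold=0.3):
    """Split a stream of alerts into 'surface' and 'suppress' buckets.

    Alert types making up more than `noise_threshold` of all traffic are
    treated as noisy background; rarer types are surfaced for SREs.
    """
    counts = Counter(a["type"] for a in alerts)
    total = len(alerts)
    surfaced, suppressed = [], []
    for alert in alerts:
        share = counts[alert["type"]] / total
        (suppressed if share > noise_threshold else surfaced).append(alert)
    return surfaced, suppressed
```

Real platforms replace this frequency heuristic with learned anomaly models, but the contract is the same: fewer, better-ranked alerts reaching engineers.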


Why Most IaC Strategies Still Fail — And How To Fix Them

There are a few common reasons IaC strategies fail in practice. Let’s explore what they are, and dive into some practical, battle-tested fixes to help teams regain control, improve consistency and deliver on the original promise of IaC. ... Without a unified direction, fragmentation sets in. Teams often get locked into incompatible tooling — some using AWS CloudFormation for perceived enterprise alignment, others favoring Terraform for its flexibility. These tool silos quickly become barriers to collaboration. ... Resistance to change also plays a role. Some engineers may prefer to stick with familiar interfaces and manual operations, viewing IaC as an unnecessary complication. Meanwhile, other teams might be fully invested in reusable modules and automated pipelines, leading to fractured workflows and collaboration breakdowns. Successful IaC implementation requires building skills, bridging silos and addressing resistance with empathy and training — not just tooling. To close the gap, teams need clear onboarding plans, shared coding standards and champions who can guide others through real-world usage — not just theory. ... Drift is inevitable: manual changes, rushed fixes and one-off permissions often leave code and reality out of sync. Without visibility into those deviations, troubleshooting becomes guesswork. 
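Drift detection ultimately reduces to diffing declared state against observed state. A minimal sketch, with hypothetical resource dictionaries standing in for what tools like `terraform plan` compute against a real provider:

```python
def detect_drift(declared, actual):
    """Compare declared (IaC) resource attributes with actual state.

    Returns a dict mapping resource name to a list of
    (attribute, declared_value, actual_value) mismatches, including
    resources that exist only on one side.
    """
    drift = {}
    for name in declared.keys() | actual.keys():
        if name not in actual:
            drift[name] = [("<missing in actual>", declared[name], None)]
        elif name not in declared:
            drift[name] = [("<unmanaged resource>", None, actual[name])]
        else:
            diffs = [(k, declared[name].get(k), actual[name].get(k))
                     for k in declared[name].keys() | actual[name].keys()
                     if declared[name].get(k) != actual[name].get(k)]
            if diffs:
                drift[name] = diffs
    return drift
```

Running a check like this on a schedule, and alerting on a non-empty result, turns drift from guesswork into a visible, reviewable report.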


What will the sustainable data center of the future look like?

The energy issue affects not only operators and suppliers. If a customer uses a lot of energy, they will get a bill to match, says Van den Bosch. “I [as a supplier] have to provide the customer with all kinds of details about my infrastructure. That includes everything from air conditioning to the specific energy consumption of the server racks. The customer is then able to reduce that energy consumption.” This can be done, for example, by replacing servers earlier than before, a departure from the upgrade cycles of yesteryear. Ruud Mulder of Dell Technologies calls for the sustainability of equipment to be made measurable in great detail. This can be done by means of a digital passport, showing where all the materials come from and how recyclable they are. He thinks there is still much room for improvement in this area. For example, future designs can be recycled better by separating plastic and gold from each other, refurbishing components and more. This increased yield is often attractive, as more computing power is required for ambitious AI plans, and the efficiency of chips increases with each generation. “The transition to AI means that you sometimes have to say goodbye to your equipment sooner,” says Mulder. The AI issue is highly relevant to the future of the modern data center in any case.


Fitness Functions for Your Architecture

Fitness functions offer us self-defined guardrails for certain aspects of our architecture. If we stay within certain (self-chosen) ranges, we're safe (our architecture is "good"). ... Many projects already use some kinds of fitness functions, although they might not use the term. For example, metrics from static code checkers, linters, and verification tools (such as PMD, FindBugs/SpotBugs, ESLint, SonarQube, and many more). Collecting the metrics alone doesn't make it a fitness function, though. You'll need fast feedback for your developers, and you need to define clear measures: limits or ranges for tolerated violations and actions to take if a metric indicates a violation. In software architecture, we have certain architectural styles and patterns to structure our code in order to improve understandability, maintainability, replaceability, and so on. Maybe the most well-known pattern is a layered architecture with, quite often, a front-end layer above a back-end layer. To take advantage of such layering, we'll allow and disallow certain dependencies between the layers. Usually, dependencies are allowed from top to bottom, i.e., from the front end to the back end, but not the other way around. A fitness function for a layered architecture will analyze the code to find all dependencies between the front end and the back end.
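A minimal layering fitness function along these lines can be written as an import scan. The `FORBIDDEN` map and module layout below are illustrative assumptions; dedicated tools such as ArchUnit or SonarQube enforce the same rule far more robustly:

```python
import re

# Disallowed dependency direction: the back end must not import from
# the front end (dependencies may only flow from top to bottom).
FORBIDDEN = {"backend": ["frontend"]}

def check_layering(modules):
    """modules: dict mapping module path (e.g. 'backend/db.py') to source.

    Returns a list of violation strings; an empty list means the
    fitness function passes.
    """
    violations = []
    for path, source in modules.items():
        layer = path.split("/")[0]
        for target in FORBIDDEN.get(layer, []):
            # Match 'import frontend...' or 'from frontend... import ...'
            pattern = rf"^\s*(?:from|import)\s+{target}\b"
            if re.search(pattern, source, flags=re.MULTILINE):
                violations.append(
                    f"{path} depends on disallowed layer '{target}'")
    return violations
```

Wired into CI with a hard failure on any violation, this gives exactly the fast feedback and clear measures the article calls for.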

Daily Tech Digest - April 16, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How to lead humans in the age of AI

Quiet the noise around AI and you will find the simple truth that the most crucial workplace capabilities remain deeply human. ... This human skills gap is even more urgent when Gen Z is factored in. They entered the workforce aligned with a shift to remote and hybrid environments, resulting in fewer opportunities to hone interpersonal skills through real-life interactions. This is not a critique of an entire generation, but rather an acknowledgment of a broad workplace challenge. And Gen Z is not alone in needing to strengthen communication across generational divides, but that is a topic for another day. ... Leaders must embrace their inner improviser. Yes, improvisation, like what you have watched on Whose Line Is It Anyway? Or the awkward performance your college roommate invited you to in that obscure college lounge. The skills of an improviser are a proven method for thriving amidst uncertainty. Decades of experience at Second City Works and studies published by The Behavioral Scientist confirm the principles of improv equip us to handle change with agility, empathy, and resilience. ... Make listening intentional and visible. Respond with the phrase, “So what I’m hearing is,” followed by paraphrasing what you heard. Pose thoughtful questions that indicate your priority is understanding, not just replying.


When companies merge, so do their cyber threats

Merging two companies means merging two security cultures. That is often harder than unifying tools or policies. While the technical side of post-M&A integration is important, it’s the human and procedural elements that often introduce the biggest risks. “When CloudSploit was acquired, one of the most underestimated challenges wasn’t technical, it was cultural,” said Josh Rosenthal, Holistic Customer Success Executive at REPlexus.com. “Connecting two companies securely is incredibly complex, even when the acquired company is much smaller.” Too often, the focus in M&A deals lands on surface-level assurances like SOC 2 certifications or recent penetration tests. While important, those are “table stakes,” Rosenthal noted. “They help, but they don’t address the real friction: mismatched security practices, vendor policies, and team behaviors. That’s where M&A cybersecurity risk really lives.” As AI accelerates the speed and scale of attacks, CISOs are under increasing pressure to ensure seamless integration. “Even a phishing attack targeting a vendor onboarding platform can introduce major vulnerabilities during the M&A process,” Rosenthal warned. To stay ahead of these risks, he said, smart security leaders need to dig deeper than documentation.


Measuring success in dataops, data governance, and data security

If you are on a data governance or security team, consider the metrics that CIOs, chief information security officers (CISOs), and chief data officers (CDOs) will consider when prioritizing investments and the types of initiatives to focus on. Amer Deeba, GVP of Proofpoint DSPM Group, says CIOs need to understand what percentage of their data is valuable or sensitive and quantify its importance to the business—whether it supports revenue, compliance, or innovation. “Metrics like time-to-insight, ROI from tools, cost savings from eliminating unused shadow data, or percentage of tools reducing data incidents are all good examples of metrics that tie back to clear value,” says Deeba. ... Dataops technical strategies include data pipelines to move data, data streaming for real-time data sources like IoT, and in-pipeline data quality automations. Using the reliability of water pipelines as an analogy is useful because no one wants pipeline blockages, leaky pipes, pressure drops, or dirty water from their plumbing systems. “The effectiveness of dataops can be measured by tracking the pipeline success-to-failure ratio and the time spent on data preparation,” says Sunil Kalra, practice head of data engineering at LatentView. “Comparing planned deployments with unplanned deployments needed to address issues can also provide insights into process efficiency.”
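The pipeline metrics Kalra mentions are straightforward to compute once runs are recorded; a small sketch, with a hypothetical run-record format:

```python
def pipeline_metrics(runs):
    """Compute dataops health metrics from recorded pipeline runs.

    runs: list of dicts with 'status' ('success' or 'failure') and
    'prep_minutes' (time spent on data preparation for that run).
    """
    successes = sum(1 for r in runs if r["status"] == "success")
    failures = len(runs) - successes
    return {
        # Ratio of successful to failed runs (inf when nothing failed).
        "success_to_failure_ratio":
            successes / failures if failures else float("inf"),
        # Average preparation time, a proxy for manual toil per run.
        "avg_prep_minutes":
            sum(r["prep_minutes"] for r in runs) / len(runs),
    }
```

Tracking these numbers over time, rather than as one-off snapshots, is what turns them into the kind of trend a CIO or CDO can act on.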


How Safe Is the Code You Don’t Write? The Risks of Third-Party Software

Open-source and commercial packages and public libraries accelerate innovation, drive down development costs, and have become the invisible scaffolding of the Internet. GitHub recently highlighted that 99% of all software projects use third-party components. But with great reuse comes great risk. Third-party code is a double-edged sword. On the one hand, it’s indispensable. On the other hand, it’s a potential liability. In our race to deliver software faster, we’ve created sprawling software supply chains with thousands of dependencies, many of which receive little scrutiny after the initial deployment. These dependencies often pull in other dependencies, each one potentially introducing outdated, vulnerable, or even malicious code into environments that power business-critical operations. ... The risk is real, so what do we do? We can start by treating third-party code with the same caution and scrutiny we apply to everything else that enters the production pipeline. This includes maintaining a living inventory of all third-party components across every application and monitoring their status to prescreen updates and catch suspicious changes. With so many ways for threats to hide, we can’t take anything on trust, so next comes actively checking for outdated or vulnerable components as well as new vulnerabilities introduced by third-party code. 
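A living inventory check can start as something this small. The component names, advisory feed, and version lists below are hypothetical placeholders for real SBOM data and vulnerability-database lookups:

```python
def audit_inventory(inventory, advisories, latest_versions):
    """Flag third-party components that are vulnerable or outdated.

    inventory:       {component: installed_version}
    advisories:      {component: set of known-vulnerable versions}
    latest_versions: {component: newest available version}
    """
    findings = []
    for name, version in inventory.items():
        if version in advisories.get(name, set()):
            findings.append((name, "vulnerable", version))
        elif latest_versions.get(name, version) != version:
            findings.append((name, "outdated", version))
    return findings
```

Run against every application on every build, a report like this is the "living inventory" in practice: new advisories and stale pins show up as findings instead of surprises.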


The AI Leadership Crisis: Why Chief AI Officers Are Failing (And How To Fix It)

Perhaps the most dangerous challenge facing CAIOs is the profound disconnect between expectations and reality. Many boards anticipate immediate, transformative results from AI initiatives – the digital equivalent of demanding harvest without sowing. AI transformation isn't a sprint; it's a marathon with hurdles. Meaningful implementation requires persistent investment in data infrastructure, skills development, and organizational change management. Yet CAIOs often face arbitrary deadlines that are disconnected from these realities. One manufacturing company I worked with expected their newly appointed CAIO to deliver $50 million in AI-driven cost savings within 12 months. When those unrealistic targets weren't met, support for the role evaporated – despite significant progress in building foundational capabilities. ... There are many potential risks of AI, from bias to privacy concerns, and the right level of governance is essential. CAIOs are typically tasked with ensuring responsible AI use yet frequently lack the authority to enforce guidelines across departments. This accountability-without-authority dilemma places CAIOs in an impossible position. They're responsible for AI ethics and risk management, but departmental leaders can ignore their guidance with minimal consequences.


OT security: how AI is both a threat and a protector

Burying one’s head in the sand, a favorite pastime among some OT personnel, no longer works. Security through obscurity is and remains a bad idea. Heinemeyer: “I’m not saying that everyone will be hacked, but it is increasingly likely these days.” Possibly, the ostrich policy has to do with, yes, the reporting on OT vulnerabilities, including by yours truly. Ancient protocols, ICS systems and PLCs with exploitable vulnerabilities are evidently risk factors. However, the people responsible for maintaining these systems at manufacturing and utility facilities know better than anyone that actual exploitation of these obscure systems is improbable. ... Given the increasing threat, is the new focus on common best practices enough? We have already concluded that vulnerabilities should not be judged solely on the CVSS score. They are an indication, certainly, but a combination of CVEs with middle-of-the-range scoring appears to have the most serious consequences. Heinemeyer says that the belief that identifying all vulnerabilities was the ultimate solution was well established from the 1990s to the 2010s. He says that in recent years, security professionals have realized that specific issues need to be prioritized, quantifying technical exploitability through various measurements (e.g., EPSS).
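The shift Heinemeyer describes, from chasing every CVSS score to prioritizing by exploitability, can be sketched as a blended ranking. The weighting below is an illustrative assumption, not a standard:

```python
def prioritize(vulns, epss_weight=0.7):
    """Rank vulnerabilities by a blend of exploit likelihood and impact.

    Each vuln dict carries 'epss' (probability of exploitation, 0-1)
    and 'cvss' (severity, 0-10, normalized here to 0-1). Weighting
    EPSS heavily pushes likely-to-be-exploited issues to the top,
    rather than sorting on raw CVSS alone.
    """
    def score(v):
        return epss_weight * v["epss"] + (1 - epss_weight) * v["cvss"] / 10
    return sorted(vulns, key=score, reverse=True)
```

Under this scheme a mid-severity CVE with a high EPSS score outranks a critical-severity CVE that nobody is exploiting, which is exactly the mid-range-CVE pattern the article warns about.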


In a Social Engineering Showdown: AI Takes Red Teams to the Mat

In a revelation that shouldn’t surprise, but still should alarm security professionals, AI has gotten much more proficient in social engineering. Back in the day, AI was 31% less effective than human beings in creating simulated phishing campaigns. But now, new research from Hoxhunt suggests that the game-changing technology’s phishing performance against elite human red teams has improved by 55%. ... Using AI offensively can raise legal and regulatory hackles related to privacy laws and ethical standards, Soroko adds, as well as creating a dependency risk. “Over-reliance on AI could diminish human expertise and intuition within cybersecurity teams.” But that doesn’t mean bad actors will win the day or get the best of cyber defenders. Instead, security teams could and should turn the tables on them. “The same capabilities that make AI an effective phishing engine can — and must — be used to defend against it,” says Avist. With an emphasis on “must.” ... It seems that tried and true basics are a good place to start. “Ensuring transparency, accountability and responsible use of AI in offensive cybersecurity is crucial,” says Kowski. As with any aspect of tech and security, keeping AI models “up-to-date with the latest threat intelligence and attack techniques is also crucial,” he says. “Balancing AI capabilities with human expertise remains a key challenge.”


Optimizing CI/CD for Trust, Observability and Developer Well-Being

While speed is often cited as a key metric for CI/CD pipelines, the quality and actionability of the feedback provided are equally, if not more, important for developers. Jones, emphasizing the need for deep observability, stresses, “Don’t just tell me that the steps of the pipeline succeeded or failed, quantify that success or failure. Show me metrics on test coverage and show me trends and performance-related details. I want to see stack traces when things fail. I want to be able to trace key systems even if they aren’t related to code that I’ve changed because we have large complex architectures that involve a lot of interconnected capabilities that all need to work together.” This level of technical insight empowers developers to understand and resolve issues quickly, highlighting the importance of implementing comprehensive monitoring and logging within your CI/CD pipeline to provide developers with detailed insights into build, test, and deployment processes. And shifting feedback earlier in the development lifecycle serves everyone well; the key is ensuring that feedback is contextual and arrives before code is merged. For example, running security scans at the pull request stage, rather than after deployment, ensures developers get actionable feedback while still in context.


AI agents vs. agentic AI: What do enterprises want?

If AI and AI agents are application components, then they fit into both business processes and workflows. A business process is a flow, and these days at least part of that flow is the set of data exchanges among applications or their components—what we typically call a “workflow.” It’s common to think of the process of threading workflows through both applications and workers as a process separate from the applications themselves. Remember the “enterprise service bus”? That’s still what most enterprises prefer for business processes that involve AI. Get an AI agent that does something, give it the output of some prior step, and let it then create output for the step beyond it. The decision as to whether an AI agent is then “autonomous” is really made by whether its output goes to a human for review or is simply accepted and implemented. ... What enterprises like about their vision of an AI agent is that it’s possible to introduce AI into a business process without having AI take over the process or require the process be reshaped to accommodate AI. Tech adoption has long favored strategies that let you limit scope of impact, to control both cost and the level of disruption the technology creates. This favors having AI integrated with current applications, which is why enterprises have always thought of AI improvements to their business operation overall as being linked to incorporating AI into business analytics.
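The autonomy distinction described here, whether agent output goes to a human or is simply accepted, can be sketched as a single gate in the workflow. The function and callback names are hypothetical:

```python
def run_agent_step(agent, prior_output, autonomous=False, review=None):
    """Run one AI-agent step in a larger workflow.

    The agent consumes the prior step's output and produces output for
    the step beyond it. When not autonomous, the result is routed
    through a human review callback before being accepted.
    """
    result = agent(prior_output)
    if not autonomous:
        if review is None:
            raise ValueError("non-autonomous steps need a review callback")
        result = review(result)  # human accepts, edits, or rejects
    return result
```

The point of the sketch is that "agentic" is a property of the wiring, not the model: the same agent becomes autonomous simply by removing the review gate.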


Liquid Cooling is ideal today, essential tomorrow, says HPE CTO

We’re moving from standard consumption levels—like 1 kilowatt per rack—to as high as 3 kilowatts or more. The challenge lies in provisioning that much power and doing it sustainably. Some estimates suggest that data centers, which currently account for about 1% of global power consumption, could rise to 5% if trends continue. This is why sustainability isn’t just a checkbox anymore—it’s a moral imperative. I often ask our customers: Who do you think the world belongs to? Most pause and reflect. My view is that we’re simply renting the world from our grandchildren. That thought should shape how we design infrastructure today. ... Air cooling works until a point. But as components become denser, with more transistors per chip, air struggles. You’d need to run fans faster and use more chilled air to dissipate heat, which is energy-intensive. Liquid, due to its higher thermal conductivity and density, absorbs and transfers heat much more efficiently. Some DLC systems use cold plates only on select components. Others use them across the board. There are hybrid solutions too, combining liquid and air. But full DLC systems, like ours, eliminate the need for fans altogether. ... Direct liquid cooling (DLC) is becoming essential as data centers support AI and HPC workloads that demand high performance and density. 

Daily Tech Digest - April 15, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy



Critical Thinking In The Age Of AI-Generated Code

Besides understanding our code, code reviewing AI-generated code is an invaluable skill nowadays. Tools like GitHub's Copilot and DeepCode can code-review better than a junior software developer. Depending on the complexity of the codebase, they can save us time in code reviewing and pinpoint cases that we may have missed, but, after all, they are not flawless. We still need to verify that the AI assistant's code review did not provide any false positives or false negatives. We need to verify that the code review did not miss anything important and that the AI assistant got the context correctly. The hybrid approach seems to be the most effective one: let AI handle the grunt work and rely on developers for the critical analysis. ... After all, code reviewing AI-generated code is an excellent opportunity to educate ourselves while improving our code-reviewing skills. Keep in mind that, to date, AI-generated code optimizes for patterns in its training data. This may not be aligned with coding first principles. AI-generated code may follow templated solutions rather than custom designs. It may include unnecessary defensive code or overly generic implementations. We need to check that it has chosen the most appropriate solution for each code block generated. Another common problem is that LLMs may hallucinate.


DeepCoder: Revolutionizing Software Development with Open-Source AI

One of the DeepCoder project’s most significant contributions is the introduction of verl-pipeline, an optimized extension of verl, the open-source RLHF library. The team identified sampling (the generation of long token sequences) as the primary bottleneck in training and developed “one-off pipelining” to address this challenge. This technique overlaps sampling, reward calculation and training, reducing end-to-end training times by up to 2.5x. This optimization is game-changing for coding tasks requiring thousands of unit tests per reinforcement learning iteration, making previously prohibitive training runs accessible to smaller research teams and independent developers. For DevOps professionals, DeepCoder represents an opportunity to integrate advanced code generation directly into CI/CD pipelines without dependency on API-gated services. Teams can fine-tune the model on their codebase, creating customized assistants that understand their specific architecture and coding patterns. ... DeepCoder’s open-source nature aligns with the DevOps collaboration and shared improvement philosophy. As more organizations adopt and contribute to the model, we can expect to see specialized versions emerge for different programming languages and problem domains.
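The excerpt does not detail verl-pipeline's internals, but the general idea of overlapping sampling with training can be sketched with a bounded queue: while iteration i is being trained on, iteration i+1's samples are already generating. `sample` and `train` below are hypothetical stand-ins for the expensive stages:

```python
import queue
import threading

def pipelined_rl_iterations(sample, train, n_iters):
    """Conceptual sketch of overlapping sampling with training.

    A producer thread generates sample batches while the main thread
    trains on the previous batch; maxsize=1 means the producer stays
    at most one iteration ahead, bounding memory use.
    """
    batches = queue.Queue(maxsize=1)

    def producer():
        for i in range(n_iters):
            batches.put(sample(i))
        batches.put(None)  # sentinel: no more batches

    threading.Thread(target=producer, daemon=True).start()
    results = []
    while (batch := batches.get()) is not None:
        results.append(train(batch))
    return results
```

When sampling and training take comparable time, overlap like this roughly halves end-to-end wall clock, which is the same effect one-off pipelining targets at much larger scale.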


Transforming Software Development

AI assistants are getting smarter, moving beyond prompt-based interactions to anticipate developers’ needs and proactively offer suggestions. This evolution is driven by the rise of AI agents, which can independently execute tasks, learn from their experiences and even collaborate with other agents. Next year, these agents will serve as a central hub for code assistance, streamlining the entire software development lifecycle. AI agents will autonomously write unit tests, refactor code for efficiency and even suggest architectural improvements. Developers’ roles will need to evolve alongside these advancements. AI will not replace them. Far from it; proactive AI assistants and their underlying agents will help developers build new skills and free up their time to focus on higher-value, more strategic tasks. ... AI models are more powerful when trained on internal company data, which allows them to generate insights specific to an organization’s unique operations and objectives. However, this often requires running models on premises for security and compliance reasons. With open source models rapidly closing the performance gap with commercial offerings, more businesses will deploy models on premises in 2025. This will allow organizations to fine-tune models with their own data and deploy AI applications at a fraction of the cost.


Cybercriminal groups embrace corporate structures to scale, sustain operations

We have seen cross-collaboration between groups that specialize in specific activities. For example, one group specializes in social engineering, while another focuses on scaling malware and botnets to uncover open servers that yield database breaches. They, in turn, can sell access to those who focus on ransomware attacks. Recently, we have seen collaboration with AI/ML developers who scrape public records to build org charts and lists of real estate holdings. This data is then combined en masse with situational and location data to populate PDF attachments in emails that look like real invoices, with executives’ names in fake prior email responses included as part of the thread. ... the recent development of hackers organizing into larger groups has raised the stakes even higher. Look at the Lazarus Group, which pulled off one of the largest heists ever by targeting Bybit, stealing $1.5 billion in Ethereum and subsequently converting $300 million into unrecoverable funds. This group is likely state-sponsored and funding North Korean military programs; therefore, understanding North Korean national interests will hint at future targets. The increasing scale of their attacks likely reflects greater resources allocated by North Korea, more sophisticated tooling and capabilities, lessons learned from previous operations, and a growing number of personnel trained in cyber operations.


Agentic AI might soon get into cryptocurrency trading — what could possibly go wrong?

Not everyone is bullish on the intersection of Web3, agentic AI and blockchain. Forrester Research vice president and principal analyst Martha Bennett is among those who are skeptical. In 2023, she co-authored an online post critical of Worldcoin, now the World project, and her opinion hasn’t changed in several regards. The World project still faces major challenges, including privacy issues and concerns about its iris biometric technology, she said. And agentic AI is still in its early stages and not yet capable of supporting Web3 transactions. Most current generative AI (genAI) tools, including LLMs, lack the autonomy that defines “agentic AI.” “There’s no AI technology today that would be able to automate Web3 transactions in a reliable and secure manner,” she said. Given the risks and the potential for exploitation, it’s too soon to rely on AI systems with high autonomy for Web3 transactions. She did note, however, that Web3 already uses automation through smart contracts — self-executing electronic contracts with the terms of the agreement directly written into code. “Will Web3 go mainstream in 2025? My overall answer is no, but there are nuances,” she said. “If mainstream means mass consumer adoption, it’s a definite no. There’s simply not enough utility there for consumers.” Web3, Bennett said, is largely a self-contained financial ecosystem, and efforts to boost adoption through Decentralized Physical Infrastructure Networks (DePIN), such as Tools for Humanity’s, haven’t led to major breakthroughs.


Artificial Intelligence fuels rise of hard-to-detect bots 

“The surge in AI-driven bot creation has serious implications for businesses worldwide,” said Tim Chang, General Manager of Application Security at Thales. “As automated traffic accounts for more than half of all web activity, organisations face heightened risks from bad bots, which are becoming more prolific every day.” ... “This year’s report sheds light on the evolving tactics and techniques utilised by bot attackers. What were once deemed advanced evasion methods have now become standard practice for many malicious bots,” Chang said. “In this rapidly changing environment, businesses must evolve their strategies. It’s crucial to adopt an adaptive and proactive approach, leveraging sophisticated bot detection tools and comprehensive cybersecurity management solutions to build a resilient defense against the ever-shifting landscape of bot-related threats.” ... Analysis in the report reveals a deliberate strategy by cyber attackers to exploit API endpoints that manage sensitive and high-value data. Implications of this trend are especially impactful for industries that rely on APIs for their critical operations and transactions. Financial services, healthcare, and e-commerce sectors are bearing the brunt of these sophisticated bot attacks, making them prime targets for malicious actors seeking to breach sensitive information.


Humans at the helm of an AI-driven grid

A growing number of utilities are turning to AI-based tools to process vast data streams and streamline tasks once managed by manual calculation. For instance, algorithms can analyse weather patterns, historical consumption, and real-time sensor readings to make more accurate power demand and renewable energy generation forecasts. This supports more efficient balancing of supply and demand, reducing the likelihood of overloaded transformers or unexpected brownouts. Some utilities are also exploring AI-driven alarm management, which can filter the flood of alerts triggered by a network issue. Instead of operators sifting through hundreds of notifications, AI tools can be used to identify and highlight the most critical issues in real time. Another AI application is congestion management: detecting trouble spots on the grid where demand might exceed capacity and even proposing rerouting strategies to keep electricity flowing reliably. While still in their early stages, AI tools hold promise for driving operational efficiency in many daily scenarios. ... Even the smartest algorithm, however, lacks the broader perspective and accountability that people bring to grid management. Power and utility companies are tasked with a public service mandate: they must ensure safety, affordability, and equitable access to electricity.


CISO Conversations: Maarten Van Horenbeeck, SVP & CSO at Adobe

The digital divide is simple to understand but complex to solve. Fundamentally, it separates those who have access to cyber and cyber knowledge from those who do not. There are areas of the world and socio-economic groups or demographics who have little or very limited access to the internet, and consequently very little awareness of cybersecurity. But cyber and cyber threats are worldwide; and technology is increasingly integrated and interconnected globally. “Cyber issues emanating from the digital divide don’t just play out far away from our homes – they play out very close to our homes as well,” warns Van Horenbeeck. “There’s a huge divide between people who know, for example, not to reuse passwords, to use multi-factor authentication, and those individuals that have none of that experience at all.” In effect, the digital divide creates a largely invisible and unseen threat surface for the long-connected world. He believes that technology companies can play a part in solving this problem by making cybersecurity features easy to understand and use, and he cites two examples of the Adobe approach. “We invested, for example, in support for passkeys because we feel it’s a more effective and easier method of authentication that is also more secure.”


How AI, Robotics and Automation Transform Supply Chains

Enterprises designing robots to augment the human workforce need to take design thinking and ergonomic approaches into consideration. Designers must think about how robots comprehend and understand their physical surroundings without tripping over cables or objects on the floor, obstructing movement or causing human injuries. These robots are created with the aim of collaborating with humans on repetitive tasks and lifting heavy loads. Last year, OT.today featured stories on how humanoid robots augmented the human workforce at Amazon, Mercedes, NASA and the Piaggio Group. In 2017, Alibaba invested in AI labs and the DAMO Academy. At its flagship Computing Conference in 2018, held in Hangzhou, China, Alibaba showcased a range of robots designed for warehouses, autonomous deliveries and other sectors, including hospitality and pharmaceuticals. More recently, Alibaba invested in LimX Dynamics, a company specializing in humanoid and robotic technology. Japanese automobile manufacturers have been using industrial robots since the early 1980s. Chip manufacturing companies in Taiwan and other countries also use them. Robots assist in surgeries in the healthcare sector. But none of those early manufacturing robots resembled humanoids or even had the advanced AI seen in today's robots.


CIOs are overspending on the cloud — but still think it’s worth it

CIOs should also embrace DevOps practices tied to cost reduction when consuming cloud resources, Sellers says. One pitfall that doesn’t get enough attention: Many organizations don’t educate developers on the cost of cloud services, despite the glut of developer services large cloud providers make trivial to call. “I’ve lost track of how many services Amazon provides that developers can just use, and some of those can be quite expensive, but a developer doesn’t really know that,” Sellers says. “They’re like, ‘Instead of writing my own solution to this, I can just call this service that Amazon already provides, and boom, my job is done.’” The disconnect between developers and financial factors in the cloud is a real problem that leads to increased cloud costs, adds Nick Durkin, field CTO at Harness, provider of an AI-driven software development platform. Without knowing the costs of accessing a cloud-based GPU or CPU, for example, a developer is like a home builder who doesn’t know the cost of wood or brick, Durkin says. “If you’re not giving your smartest engineers access to the information about services that they can optimize on, how would you expect them to do it?” he says. “Then, finance comes back a month later with a beating stick.”

Daily Tech Digest - April 14, 2025


Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher



The quiet data breach hiding in AI workflows

Prompt leaks happen when sensitive data, such as proprietary information, personal records, or internal communications, is unintentionally exposed through interactions with LLMs. These leaks can occur through both user inputs and model outputs. On the input side, the most common risk comes from employees. A developer might paste proprietary code into an AI tool to get debugging help. A salesperson might upload a contract to rewrite it in plain language. These prompts can contain names, internal systems info, financials, or even credentials. Once entered into a public LLM, that data is often logged, cached, or retained without the organization’s control. Even when companies adopt enterprise-grade LLMs, the risk doesn’t go away. Researchers found that many inputs posed some level of data leakage risk, including personal identifiers, financial data, and business-sensitive information. Output-based prompt leaks are even harder to detect. If an LLM is fine-tuned on confidential documents such as HR records or customer service transcripts, it might reproduce specific phrases, names, or private information when queried. This is known as data cross-contamination, and it can occur even in well-designed systems if access controls are loose or the training data was not properly scrubbed.
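A minimal input-side guardrail can be sketched as a pre-submission prompt scanner. The patterns and category names below are purely illustrative, not a complete DLP ruleset:

```python
import re

# Illustrative (not exhaustive) patterns for data that should not leave the org
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]

prompt = "Debug this: user=jane.doe@corp.example token=sk-abcdef1234567890XYZ"
print(scan_prompt(prompt))  # ['email', 'api_key']
```

In practice a guardrail like this would sit in a proxy in front of the LLM endpoint and either block, redact, or log flagged prompts; regex alone misses plenty, which is why commercial tools layer on ML-based classifiers.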


The Rise of Security Debt: Your Security IOUs Are Due

Despite measurable improvements, security debt — defined as flaws that remain unfixed for more than a year after discovery — continues to put enterprises at risk. Security debt impacts almost three-quarters (74.2%) of organizations, up from 71% in previous measurements. More frighteningly, half of all organizations suffer from critical security debt: a dangerous combination of high-severity, long-unresolved flaws. There's a reason it is described as critical debt: the longer a security flaw survives within an enterprise, the less likely it will be resolved. Today, more than a quarter (28%) of flaws remain open two years after discovery, and even after five years, 9% of flaws still linger in applications. ... Applications are only as secure as the code used to write them, and security flaws are a fact of life in every code base in the world. That being said, the origin of the code that is being used matters. Leveraging third-party code has become standard practice across the industry, which introduces added risks. ... organizations need the ability to correlate and contextualize findings in a single view to prioritize their backlog based on context. This allows companies to reduce the most risk with the least effort. Since the average time to fix flaws has increased dramatically, programs seeking to improve their security posture must focus on the findings that matter most in their specific context. 


How to Cut the Hidden Costs of IT Downtime

"Workers struggling with these problems waste productive time waiting for fixes," said Ryan MacDonald, CTO at Liquid Web. Businesses can reduce these costs by investing in proactive IT support, automating troubleshooting processes, and training workers on best practices to prevent repeat problems, he said. MacDonald explained that while tech failures are inevitable, companies often take a reactive rather than proactive approach to IT. Instead of addressing persistent issues at their root, organizations frequently apply short-term fixes, resulting in continuous inefficiencies and mounting expenses. ... Companies that fail to modernize their systems will continue to experience recurring IT problems that hinder productivity and increase operational costs. In addition to upgrading infrastructure, organizations must conduct regular IT audits to proactively identify inefficiencies before they escalate into major disruptions. MacDonald stressed the importance of continuous evaluation. "Regularly scheduled IT audits allow companies to find recurring inefficiencies and invest money into fixing them before they become costly disruption points," he said. Rather than waiting for issues to break, businesses should implement proactive IT strategies, which can save time, reduce financial losses, and improve overall system reliability.


A multicloud experiment in agentic AI: Lessons learned

At its core, an agentic AI system is a self-governing decision-making system. It uses AI to assign and execute tasks autonomously, responding to changing conditions while balancing cost, performance, resource availability, and other factors. I wanted to leverage multiple public cloud platforms harmoniously. The architecture would have to be flexible enough to balance cloud-specific features while achieving platform-agnostic consistency. ... challenges with interoperability, platform-specific nuances, and cost optimization remain. More work is needed to improve the viability of multicloud architectures. The big gotcha is that the cost was surprisingly high. The price of resource usage on public cloud providers, egress fees, and other expenses seemed to spring up unannounced. Using public clouds for agentic AI deployments may be too expensive for many organizations and push them to cheaper on-prem alternatives, including private clouds, managed services providers, and colocation providers. I can tell you firsthand that those platforms are more affordable in today’s market and provide many of the same services and tools. This experiment was a small but meaningful step toward realizing a future where cloud environments serve as dynamic, self-managing ecosystems.
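The balancing act the experiment describes, weighing cost against performance and availability across platforms, can be sketched as a simple weighted scorer. All names, prices, and latencies below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CloudOption:
    name: str
    cost_per_hour: float   # illustrative figures, not real pricing
    latency_ms: float
    available: bool

def pick_target(options, cost_weight=0.7, latency_weight=0.3):
    """Score each available platform and return the best cost/latency blend."""
    viable = [o for o in options if o.available]
    if not viable:
        raise RuntimeError("no platform available")
    return min(viable, key=lambda o: cost_weight * o.cost_per_hour
                                     + latency_weight * o.latency_ms / 100)

options = [
    CloudOption("cloud-a", 3.20, 40, True),
    CloudOption("cloud-b", 2.10, 95, True),
    CloudOption("cloud-c", 1.50, 30, False),  # unavailable: skipped
]
print(pick_target(options).name)  # cloud-b
```

A real agentic system would refresh these inputs continuously and re-plan as conditions change; the sketch shows only the decision kernel, not the autonomy loop.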


What boards want and don’t want to hear from cybersecurity leaders

A lack of clarity can lead to either oversharing technical details or not providing enough strategic context. Paul Connelly, former CISO turned board advisor, independent director and mentor, finds many CISOs focus too heavily on metrics while the board is looking for more strategic insights. The board doesn’t need to know the results of your phishing test, says Connelly. Boards are focused on risks the organization faces, strategies to address these risks, progress updates, obstacles to success, and whether they’re tackling the right things. “I coach CISOs to study their board — read their bios, understand their background, and understand the fiduciary responsibility of a board,” he says. The goal is to understand the make-up of the board and their priorities and channel their metrics into risk and threat analysis for the business. Using this information, CISOs can develop a story about their program aligned with the business. “That high-level story — supported by measurements — is what boards want to hear, not a bunch of metrics on malicious emails and critical patches or scary Chicken Little-type of threats,” Connelly tells CSO. However, it’s not a one-way interaction, and many CISOs are engaging with boards that lack the appropriate skills and understanding to foster meaningful discussions on cyber threats. “Very few boards have any directors with true expertise in technology or cyber,” says Connelly.


The future of insurance is digital, intelligent, and customer-first

The Indian insurance sector is undergoing transformative changes, driven by insurtech innovations, personalised policies, and efficient claim settlements. Reliance General Insurance leads this evolution by integrating AI, data science, and automation to enhance customer experiences. According to Deloitte, 70% of Central European insurers have recently partnered with insurtech, with 74% expressing satisfaction, highlighting the global trend of technological collaboration. Emphasising innovation, speed, and customer-centric measures, the industry aims to demystify insurance, boost its adoption, and eliminate service hindrances, steering towards a technology-oriented future. ... Protecting our customer’s data is essential at Reliance General Insurance. To avoid the misuse of the customer information, the company employs a strong multi-layered security framework involving encryption, threat intelligence services, and real-time monitoring. To help mitigate these risks, we also offer cyber insurance products.  ... As much as self-regulatory innovation evokes progressive strides, risk management becomes paramount in the adoption of insurtech solutions. Seamlessly integrating new technologies is the objective, and Reliance General employs constant feedback monitoring to ensure new technologies meet security and regulatory standards.


Examining the business case for multi-million token LLMs

As enterprises weigh the costs of scaling infrastructure against potential gains in productivity and accuracy, the question remains: Are we unlocking new frontiers in AI reasoning, or simply stretching the limits of token memory without meaningful improvements? This article examines the technical and economic trade-offs, benchmarking challenges and evolving enterprise workflows shaping the future of large-context LLMs. ... Increasing the context window also helps the model better reference relevant details and reduces the likelihood of generating incorrect or fabricated information. A 2024 Stanford study found that 128K-token models reduced hallucination rates by 18% compared to RAG systems when analyzing merger agreements. However, early adopters have reported some challenges: JPMorgan Chase’s research demonstrates how models perform poorly on approximately 75% of their context, with performance on complex financial tasks collapsing to near-zero beyond 32K tokens. Models still broadly struggle with long-range recall, often prioritizing recent data over deeper insights. This raises questions: Does a 4-million-token window truly enhance reasoning, or is it just a costly expansion of memory? How much of this vast input does the model actually use? And do the benefits outweigh the rising computational costs?


IT compensation satisfaction at an all-time low

“We’re going through a leveling of the economy right now,” Sutton said, adding that during difficult business periods employees crave consistency and reliability. “There is a little bit of satisfaction and contentment with what is seen as a stable role.” Industry observers also said that although money is a critical factor in how appreciated employees feel, unhappiness with one’s IT role is often a result of other factors, such as changing job descriptions and a general lack of job security. “Compensation is not the only tool enterprises have to improve employee experience and satisfaction. Enterprises can make sure that their employees are focused on work that excites them and they can see the value of,” Forrester’s Mark said. “Provide ample opportunities for upskilling in line not just with the technology strategy, but also with employees’ career aspirations. Ensure that employees feel empowered and have autonomy over decisions which impact them, and of course manage work-life balance, demonstrating that organizations do not simply value the work outputs, but the employees themselves as unique individuals.” Matt Kimball, VP and principal analyst for Moor Insights and Strategy, agreed that employee sentiment goes well beyond salary and bonuses.


Amazon Gift Card Email Hooks Microsoft Credentials

The Cofense Phishing Defense Center (PDC) has recently identified a new credential phishing campaign that uses an email disguised as an Amazon e-gift card from the recipient’s employer. While the email appears to offer a substantial reward, its true purpose is to harvest Microsoft credentials from unsuspecting recipients. The combination of the large monetary value and an email seemingly from their employer lures recipients into a false sense of security that leaves them unaware of the dangers ahead. ... Once the recipient submits their email address, they will be redirected to a phishing page, as shown in Figure 3. The phishing page is well-disguised as a legitimate Microsoft login site, once again prompting the victim to input their credentials. Legitimate Microsoft Outlook login pages should be hosted on domains belonging to Microsoft (such as live.com or outlook.com), but as you can see in Figure 3, the domain for this site is officefilecenter[.]com, which was created less than a month before the time of analysis. Credential phishing emails such as these are a perfect example of the various ways that threat actors can exploit the emotions of the recipient. Whether it is the theme of the phish, the content within, or the time of year, threat actors will utilize anything they can to make sure you do not catch on until it’s too late.


Driving Sustainability Forward with IIoT: Smarter Processes for a Greener Future

AI-driven IIoT systems are transforming how industries manage raw materials, inventory, and human resources. In smart factories, AI forecasts demand, streamlines production schedules, and optimizes supply chains to reduce waste and emissions. For instance, AI calculates the exact quantity of materials needed for production, preventing overstocking and minimizing excess. It also enhances SIOP and logistics by consolidating shipments and selecting eco-friendly transportation routes, reducing the carbon footprint of global supply chains. Predictive maintenance, powered by AI, contributes by detecting equipment issues early, preventing breakdowns, extending lifespan and uptime while reducing defective outputs. ... IIoT is a key enabler of the circular economy, which focuses on recycling, reusing, and reducing waste. Automated systems allow manufacturers to recycle heat, water, and materials within their facilities, creating closed-loop processes. For example, excess heat from industrial ovens can be captured and repurposed for heating water or other facility needs. While sensors monitor production processes to optimize material usage and reduce scrap, product take-back programs are another cornerstone of the circular economy. 

Daily Tech Digest - April 13, 2025


Quote for the day:

"I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel." -- Maya Angelou



The True Value Of Open-Source Software Isn’t Cost Savings

Cost savings is an undeniable advantage of open-source software, but I believe that enterprise leaders often overlook other benefits that are even more valuable to the organization. When developers use open-source tools, they join a collaborative global community that is constantly learning from and improving on the technology. They share knowledge, resources and experiences to identify and fix problems and move updates forward more rapidly than they could individually. Adopting open-source software can also be a win-win talent recruitment and retention strategy for your enterprise. Many individual contributors see participating in open-source software communities as a tangible way to build their own profiles as experts in their field—and in the process, they also enhance your company’s reputation as a cool place where tech leaders want to work. However, there’s no such thing as a free meal. Open-source software isn't immune to vendor lock-in, when your company becomes so dependent on a partner’s product that it is prohibitively costly or difficult to switch to an alternative. You may not be paying licensing fees, but you still need to invest in support contracts for open-source tools. The bigger challenge from my perspective is that it’s still rare for enterprises to contribute regularly to open-source software communities. 


The Growing Cost of Non-Compliance and the Need for Security-First Solutions

Regulatory bodies across the globe are increasing their scrutiny and enforcement actions. Failing to comply with well-established regulations like HIPAA or GDPR, or newer ones like the European Union’s Digital Operational Resilience Act (DORA) and NY DFS Cybersecurity requirements, can result in penalties that can reach millions of dollars. But the costs do not stop there. Once a company has been found to be non-compliant, it often faces reputational damage that extends far beyond the immediate legal repercussions. ... A security-first approach goes beyond just checking off boxes to meet regulatory requirements. It involves implementing robust, proactive security measures that safeguard sensitive data and systems from potential breaches. This approach protects the organization from fines and builds a strong foundation of trust and resilience in the face of evolving cyber threats. ... Many businesses still rely on outdated, insecure methods of connecting to critical systems through terminal emulators or “green screen” interfaces. These systems, often running legacy applications, can become prime targets for cybercriminals if they are not properly secured. With credential-based attacks rising, organizations must rethink how they secure access to their most vital resources.


Researchers unveil nearly invisible brain-computer interface

Today's BCI systems consist of bulky electronics and rigid sensors that prevent the interfaces from being useful while the user is in motion during regular activities. Yeo and colleagues constructed a micro-scale sensor for neural signal capture that can be easily worn during daily activities, unlocking new potential for BCI devices. His technology uses conductive polymer microneedles to capture electrical signals and conveys those signals along flexible polyimide/copper wires—all of which are packaged in a space of less than 1 millimeter. A study of six people using the device to control an augmented reality (AR) video call found that high-fidelity neural signal capture persisted for up to 12 hours with very low electrical resistance at the contact between skin and sensor. Participants could stand, walk, and run for most of the daytime hours while the brain-computer interface successfully recorded and classified neural signals indicating which visual stimulus the user focused on with 96.4% accuracy. During the testing, participants could look up phone contacts and initiate and accept AR video calls hands-free as this new micro-sized brain sensor was picking up visual stimuli—all the while giving the user complete freedom of movement.


Creating SBOMs without the F-Bombs: A Simplified Approach to Creating Software Bills of Material

It's important to note that software engineers are not security professionals, but in some important ways, they are now being asked to be. Software engineers pick and choose from various third-party and open source components and libraries. They do so — for the most part — with little analysis of the security of those components. Those components can be — or become — vulnerable in a whole variety of ways: Once-reliable code repositories can become outdated or vulnerable, zero days can emerge in trusted libraries, and malicious actors can — and often do — infect the supply chain. On top of that, risk profiles can change overnight, making what was a well considered design choice into a vulnerable one almost overnight. Software engineers never before had to consider these things, and yet the arrival of the SBOM is making them do so like never before. Customers can now scrutinize their releases, and then potentially reject or send them back for fixing — resulting in even more work on short notice and piling on pressure. Even if the risk profile of a particular component changes between the creation of an SBOM and a customer reviewing it, then the release might be rejected. This is understandably the cause of much frustration for software engineers who are often already under great pressure.
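For engineers encountering the format for the first time, a minimal CycloneDX-style SBOM document (component names and versions entirely hypothetical) can be assembled as plain JSON; real pipelines would generate this with a scanner rather than by hand:

```python
import json

# Hypothetical component inventory; in practice a scanner produces this list
components = [
    {"type": "library", "name": "left-pad", "version": "1.3.0"},
    {"type": "library", "name": "requests", "version": "2.31.0"},
]

sbom = {
    "bomFormat": "CycloneDX",   # identifies the SBOM standard in use
    "specVersion": "1.5",
    "version": 1,               # revision counter for this BOM document
    "components": components,
}

doc = json.dumps(sbom, indent=2)
print(doc)
```

The point of the exercise is that an SBOM is just a structured inventory; the hard part, as the article notes, is keeping it current as component risk profiles change after release.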


Risk & Quality: The Hidden Engines of Business Excellence

In the world of consultancy, firms navigate a minefield of challenges—tight deadlines, budget constraints, and demanding clients. Then, out of nowhere, disruptions such as regulatory shifts or resource shortages strike, threatening project delivery. Without a robust risk management framework, these disruptions can snowball into major financial and reputational losses. ... Some leaders see quality assurance as an added expense, but in reality, it’s a profit multiplier. According to the American Society for Quality (ASQ), organizations that emphasize quality see an average of 4-6% revenue growth compared to those that don’t. Why? Because poor quality leads to rework, client dissatisfaction, and reputational damage. ... The cost of poor quality is substantial. Firms that don’t embed quality into their culture ultimately face consequences like customer churn, regulatory fines, and declining market share. Additionally, fixing mistakes after the fact is far more expensive than ensuring quality from the outset. Organizations that invest in quality from the start avoid unnecessary costs, improve efficiency, and strengthen their bottom line. As Philip Crosby, a pioneer in quality management, stated, “Quality is free. It’s not a gift, but it’s free. What costs money are the unquality things—all the actions that involve not doing jobs right the first time.” 


Enabling a Thriving Middleware Market

A more unified regulatory approach could reduce uncertainty, streamline compliance, and foster an ecosystem that better supports middleware development. However, given the unlikelihood of creating a new agency, a more feasible approach would be to enhance coordination among existing regulators. The FTC could address antitrust concerns, the FCC could promote interoperability, and the Department of Commerce could support innovation through trade policies and the development of technical standards. Even here, slow rulemaking and legal challenges could hinder progress. Ensuring agencies have the necessary authority, resources, and expertise will be critical. A soft-law approach, modeled after the National Institute for Standards and Technology (NIST) AI Risk Management Framework, might be the most feasible option. A Middleware Standards Consortium could help establish best practices and compliance frameworks. Standards development organizations (SDOs), such as the Internet Engineering Task Force or the World Wide Web Consortium (W3C), are well-positioned to lead this effort, given their experience crafting internet protocols that balance innovation with stability. For example, a consortium of SDOs with buy-in from NIST could establish standards for API access, data portability, and interoperability of several key social media functionalities.


How to Supercharge Application Modernization with AI

The refactoring of code – which means restructuring and, often, partly rewriting existing code to make applications fit a new design or architecture – is the most crucial part of the application modernization process. It has also tended in the past to be the most laborious because it required developers to pore over often very large codebases, painstakingly tweaking code function-by-function or even line-by-line. AI, however, can do much of this dirty work for you. Instead of having to find places where code should be rewritten or modified in order to optimize it, developers can leverage AI tools to look for code that requires attention. ... When you move applications to the cloud, the infrastructure that hosts them is effectively a software resource – which means you can configure and manage it using code. By extension, you can use AI tools like Cursor and Copilot to write and test your code-based infrastructure configurations. Specifically, AI is capable of tasks such as writing and maintaining the code that manages CI/CD pipelines or cloud servers. It can also suggest opportunities to optimize existing infrastructure code to improve reliability or security. And it can generate the ancillary configurations, such as Identity and Access Management (IAM) policies, that govern and help to secure cloud infrastructure.
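To make the "ancillary configurations" point concrete, here is a minimal sketch of the kind of artifact an AI assistant might be asked to generate: a least-privilege IAM-style policy document scoped to a single resource. The helper function, its parameters, and the bucket name are all hypothetical illustrations, not part of any real tool's API; only the policy JSON shape follows the standard AWS IAM format.

```python
import json

def least_privilege_policy(bucket_name: str, role_actions: list[str]) -> str:
    """Build a minimal IAM-style policy document scoped to one S3 bucket.

    Hypothetical helper for illustration: bucket_name and role_actions
    are example inputs, not a real tool's interface.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Grant only the actions the role actually needs,
                # on the bucket and the objects inside it.
                "Effect": "Allow",
                "Action": sorted(role_actions),
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(least_privilege_policy("app-logs", ["s3:GetObject", "s3:ListBucket"]))
```

In practice, a developer would review such generated policies before applying them; AI-produced configurations are a starting point for review, not a finished artifact.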


Balancing Generative AI Risk with Reward

As businesses start evolving in their use of this technology and exposing it to a broader base inside and outside their companies, risks can increase. “I’ve always loved to say AI likes to please,” said Danielle Derby, director of enterprise data management at TriNet, who joined Rodarte at the presentation. Risk manifests “because AI doesn’t know when to stop,” said Derby, and you may not have thought to include a human or technological guardrail to keep it from answering questions it was never prepared to handle accurately. “There are a lot of areas where you’re just not sure how someone who’s not you is going to handle this new technology,” she said. ... Improper data splitting can lead to data leakage and overly optimistic model performance; mitigate this by using techniques like stratified sampling to ensure representative splits, and by always splitting the data before performing any feature engineering or preprocessing. Inadequate training data can lead to overfitting, while too little test data yields unreliable performance metrics; mitigate both by ensuring there is enough data for training and testing given the problem size, and by using a validation set in addition to the training and test sets.
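The stratified-sampling advice above can be sketched in pure stdlib Python: split so that each class keeps roughly the same proportion in train and test, and do the split before any preprocessing so test-set statistics never leak into it. The function name and inputs are illustrative, not from any particular library.

```python
import random
from collections import defaultdict

def stratified_split(rows, labels, test_frac=0.2, seed=0):
    """Split (rows, labels) so each class keeps roughly the same
    proportion in the train and test sets. Stdlib sketch of the
    stratified-sampling idea."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    train_idx, test_idx = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        cut = int(len(idxs) * test_frac)  # per-class test share
        test_idx.extend(idxs[:cut])
        train_idx.extend(idxs[cut:])
    return ([rows[i] for i in train_idx], [rows[i] for i in test_idx],
            [labels[i] for i in train_idx], [labels[i] for i in test_idx])

# Split FIRST, then fit any scaler/encoder on the training set only,
# so information from the test set never leaks into preprocessing.
X_train, X_test, y_train, y_test = stratified_split(
    list(range(100)), ["a"] * 80 + ["b"] * 20)
```

With an 80/20 class imbalance, a naive random split can easily leave the minority class underrepresented in the test set; the per-class cut above preserves the 4:1 ratio on both sides.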


Why Cybersecurity-as-a-Service is the Future for MSPs and SaaS Providers

For MSPs and SaaS providers, adopting a proactive, scalable approach to cybersecurity—one that provides continuous monitoring, threat intelligence, and real-time response—is crucial. By leveraging Cybersecurity-as-a-Service (CSaaS), businesses can access enterprise-grade security without the need for extensive in-house expertise. This model not only enhances threat detection and mitigation but also ensures compliance with evolving cybersecurity regulations. ... The increasing complexity and frequency of cyber threats necessitate a proactive and scalable approach to security. CSaaS offers a flexible solution by outsourcing critical security functions to specialized providers, ensuring continuous monitoring, threat intelligence, and incident response without the need for extensive in-house resources. As cyber threats evolve, CSaaS providers continually update their tools and techniques, ensuring companies stay ahead of emerging vulnerabilities. CSaaS also enhances an organization's ability to protect sensitive data and lets it focus confidently on core business operations. ... Embracing CSaaS is essential for maintaining a robust security posture in an increasingly complex digital landscape.


Meta: WhatsApp Vulnerability Requires Immediate Patch

Meta has voluntarily disclosed the new WhatsApp vulnerability, now published as CVE-2025-30401, after investigating it internally as a submission to its bug bounty program. The company says there is no evidence yet that it has been exploited in the wild. The issue likely impacts all Windows versions prior to 2.2450.6. The WhatsApp vulnerability hinges on an attacker sending a malicious attachment, and would require the target to attempt to manually view the attachment within the software. A spoofing issue allows the file-opening handler to execute code disguised under a seemingly valid MIME type, such as an image or document. That could pave the way for remote code execution, though a CVSS score had yet to be assigned as of this writing. ... The WhatsApp vulnerability exploited by Paragon was a much more devastating zero-click (and one that targeted phones and mobile devices), similar to one exploited by NSO Group on the platform to compromise over a thousand devices. That landed the spyware vendor in trouble in US courts, where it was found to have violated US federal hacking laws. The court found that NSO Group had obtained WhatsApp’s underlying code and reverse-engineered it to create several zero-click exploits that it put to use in its spyware.
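The spoofing flaw described above is a mismatch between the type a file claims to be and what its content actually is. This is not WhatsApp's code, only a stdlib sketch of the general defense: compare the MIME type guessed from the filename extension against the type sniffed from the file's leading "magic" bytes, and flag disagreements before handing the file to an opener. The magic-byte table is a small illustrative subset.

```python
import mimetypes

# Magic-byte prefixes for a few common formats (illustrative subset).
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"%PDF-": "application/pdf",
}

def sniff_mime(data: bytes):
    """Return the MIME type implied by the file's leading bytes, or None."""
    for prefix, mime in MAGIC.items():
        if data.startswith(prefix):
            return mime
    return None

def extension_matches_content(filename: str, data: bytes) -> bool:
    """True when the MIME type guessed from the extension agrees with
    the type sniffed from the content. Unknown types pass here for
    brevity; a hardened handler should reject them instead."""
    claimed, _ = mimetypes.guess_type(filename)
    actual = sniff_mime(data)
    if claimed is None or actual is None:
        return True  # can't judge; a real handler should be stricter
    return claimed == actual
```

For example, a file named `photo.png` whose bytes begin with a PDF header would fail this check, which is exactly the class of extension/content disagreement a dispatch-by-type handler needs to catch.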