Daily Tech Digest - April 14, 2025


Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher



The quiet data breach hiding in AI workflows

Prompt leaks happen when sensitive data, such as proprietary information, personal records, or internal communications, is unintentionally exposed through interactions with LLMs. These leaks can occur through both user inputs and model outputs. On the input side, the most common risk comes from employees. A developer might paste proprietary code into an AI tool to get debugging help. A salesperson might upload a contract to rewrite it in plain language. These prompts can contain names, internal systems info, financials, or even credentials. Once entered into a public LLM, that data is often logged, cached, or retained without the organization’s control. Even when companies adopt enterprise-grade LLMs, the risk doesn’t go away. Researchers found that many inputs posed some level of data leakage risk, including personal identifiers, financial data, and business-sensitive information. Output-based prompt leaks are even harder to detect. If an LLM is fine-tuned on confidential documents such as HR records or customer service transcripts, it might reproduce specific phrases, names, or private information when queried. This is known as data cross-contamination, and it can occur even in well-designed systems if access controls are loose or the training data was not properly scrubbed.
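One common input-side guardrail is to redact obvious identifiers before a prompt ever leaves the organization. The sketch below is a minimal illustration of that idea, assuming a simple regex-based filter; the patterns and the redact_prompt helper are illustrative assumptions, not a production data-loss-prevention control:

```python
import re

# Illustrative patterns only; real deployments use dedicated PII/DLP classifiers.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace likely-sensitive substrings with placeholder tokens before the
    prompt is sent to an external LLM."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Debug this: user jane.doe@corp.example failed login, key AKIAABCDEFGHIJKLMNOP"
    print(redact_prompt(prompt))
```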


The Rise of Security Debt: Your Security IOUs Are Due

Despite measurable improvements, security debt — defined as flaws that remain unfixed for more than a year after discovery — continues to put enterprises at risk. Security debt impacts almost three-quarters (74.2%) of organizations, up from 71% in previous measurements. More frighteningly, half of all organizations suffer from critical security debt: a dangerous combination of high-severity, long-unresolved flaws. There's a reason it is described as critical debt: the longer a security flaw survives within an enterprise, the less likely it will be resolved. Today, more than a quarter (28%) of flaws remain open two years after discovery, and even after five years, 9% of flaws still linger in applications. ... Applications are only as secure as the code used to write them, and security flaws are a fact of life in every code base in the world. That being said, the origin of the code that is being used matters. Leveraging third-party code has become standard practice across the industry, which introduces added risks. ... organizations need the ability to correlate and contextualize findings in a single view to prioritize their backlog based on context. This allows companies to reduce the most risk with the least effort. Since the average time to fix flaws has increased dramatically, programs seeking to improve their security posture must focus on the findings that matter most in their specific context. 


How to Cut the Hidden Costs of IT Downtime

"Workers struggling with these problems waste productive time waiting for fixes," said Ryan MacDonald, CTO at Liquid Web. Businesses can reduce these costs by investing in proactive IT support, automating troubleshooting processes, and training workers on best practices to prevent repeat problems, he said. MacDonald explained that while tech failures are inevitable, companies often take a reactive rather than proactive approach to IT. Instead of addressing persistent issues at their root, organizations frequently apply short-term fixes, resulting in continuous inefficiencies and mounting expenses. ... Companies that fail to modernize their systems will continue to experience recurring IT problems that hinder productivity and increase operational costs. In addition to upgrading infrastructure, organizations must conduct regular IT audits to proactively identify inefficiencies before they escalate into major disruptions. MacDonald stressed the importance of continuous evaluation. "Regularly scheduled IT audits allow companies to find recurring inefficiencies and invest money into fixing them before they become costly disruption points," he said. Rather than waiting for issues to break, businesses should implement proactive IT strategies, which can save time, reduce financial losses, and improve overall system reliability.


A multicloud experiment in agentic AI: Lessons learned

At its core, an agentic AI system is a self-governing decision-making system. It uses AI to assign and execute tasks autonomously, responding to changing conditions while balancing cost, performance, resource availability, and other factors. I wanted to leverage multiple public cloud platforms harmoniously. The architecture would have to be flexible enough to balance cloud-specific features while achieving platform-agnostic consistency. ... challenges with interoperability, platform-specific nuances, and cost optimization remain. More work is needed to improve the viability of multicloud architectures. The big gotcha is that the cost was surprisingly high. The price of resource usage on public cloud providers, egress fees, and other expenses seemed to spring up unannounced. Using public clouds for agentic AI deployments may be too expensive for many organizations and push them to cheaper on-prem alternatives, including private clouds, managed services providers, and colocation providers. I can tell you firsthand that those platforms are more affordable in today’s market and provide many of the same services and tools. This experiment was a small but meaningful step toward realizing a future where cloud environments serve as dynamic, self-managing ecosystems.


What boards want and don’t want to hear from cybersecurity leaders

A lack of clarity can lead to either oversharing technical details or not providing enough strategic context. Paul Connelly, former CISO turned board advisor, independent director and mentor, finds many CISOs focus too heavily on metrics while the board is looking for more strategic insights. The board doesn’t need to know the results of your phishing test, says Connelly. Boards are focused on risks the organization faces, strategies to address these risks, progress updates, obstacles to success, and whether they’re tackling the right things. “I coach CISOs to study their board — read their bios, understand their background, and understand the fiduciary responsibility of a board,” he says. The goal is to understand the make-up of the board and their priorities and channel their metrics into risk and threat analysis for the business. Using this information, CISOs can develop a story about their program aligned with the business. “That high-level story — supported by measurements — is what boards want to hear, not a bunch of metrics on malicious emails and critical patches or scary Chicken Little-type of threats,” Connelly tells CSO. However, the interaction shouldn’t be one-way, yet many CISOs are engaging with boards that lack the appropriate skills and understanding to foster meaningful discussions on cyber threats. “Very few boards have any directors with true expertise in technology or cyber,” says Connelly.


The future of insurance is digital, intelligent, and customer-first

The Indian insurance sector is undergoing transformative changes, driven by insurtech innovations, personalised policies, and efficient claim settlements. Reliance General Insurance leads this evolution by integrating AI, data science, and automation to enhance customer experiences. According to Deloitte, 70% of Central European insurers have recently partnered with insurtech, with 74% expressing satisfaction, highlighting the global trend of technological collaboration. Emphasising innovation, speed, and customer-centric measures, the industry aims to demystify insurance, boost its adoption, and eliminate service hindrances, steering towards a technology-oriented future. ... Protecting our customers’ data is essential at Reliance General Insurance. To prevent misuse of customer information, the company employs a strong multi-layered security framework involving encryption, threat intelligence services, and real-time monitoring. To help mitigate these risks, we also offer cyber insurance products. ... Even as self-regulatory innovation drives progress, risk management remains paramount in the adoption of insurtech solutions. Seamlessly integrating new technologies is the objective, and Reliance General employs constant feedback monitoring to ensure new technologies meet security and regulatory standards.


Examining the business case for multi-million token LLMs

As enterprises weigh the costs of scaling infrastructure against potential gains in productivity and accuracy, the question remains: Are we unlocking new frontiers in AI reasoning, or simply stretching the limits of token memory without meaningful improvements? This article examines the technical and economic trade-offs, benchmarking challenges and evolving enterprise workflows shaping the future of large-context LLMs. ... Increasing the context window also helps the model better reference relevant details and reduces the likelihood of generating incorrect or fabricated information. A 2024 Stanford study found that 128K-token models reduced hallucination rates by 18% compared to RAG systems when analyzing merger agreements. However, early adopters have reported some challenges: JPMorgan Chase’s research demonstrates how models perform poorly on approximately 75% of their context, with performance on complex financial tasks collapsing to near-zero beyond 32K tokens. Models still broadly struggle with long-range recall, often prioritizing recent data over deeper insights. This raises questions: Does a 4-million-token window truly enhance reasoning, or is it just a costly expansion of memory? How much of this vast input does the model actually use? And do the benefits outweigh the rising computational costs?


IT compensation satisfaction at an all-time low

“We’re going through a leveling of the economy right now,” Sutton said, adding that during difficult business periods employees crave consistency and reliability. “There is a little bit of satisfaction and contentment with what is seen as a stable role.” Industry observers also said that although money is a critical factor in how appreciated employees feel, unhappiness with one’s IT role is often a result of other factors, such as changing job descriptions and a general lack of job security. “Compensation is not the only tool enterprises have to improve employee experience and satisfaction. Enterprises can make sure that their employees are focused on work that excites them and they can see the value of,” Forrester’s Mark said. “Provide ample opportunities for upskilling in line not just with the technology strategy, but also with employees’ career aspirations. Ensure that employees feel empowered and have autonomy over decisions which impact them, and of course manage work-life balance, demonstrating that organizations do not simply value the work outputs, but the employees themselves as unique individuals.” Matt Kimball, VP and principal analyst for Moor Insights and Strategy, agreed that employee sentiment goes well beyond salary and bonuses.


Amazon Gift Card Email Hooks Microsoft Credentials

The Cofense Phishing Defense Center (PDC) has recently identified a new credential phishing campaign that uses an email disguised as an Amazon e-gift card from the recipient’s employer. While the email appears to offer a substantial reward, its true purpose is to harvest Microsoft credentials from unsuspecting recipients. The combination of the large monetary value and the appearance of an email seemingly from their employer lures the recipient into a false sense of security that leaves them unaware of the dangers ahead. ... Once the recipient submits their email address, they will be redirected to a phishing page, as shown in Figure 3. The phishing page is well-disguised as a legitimate Microsoft login site, once again prompting the victim to input their credentials. Legitimate Microsoft Outlook login pages should be hosted on domains belonging to Microsoft (such as live.com or outlook.com), but as you can see in Figure 3, the domain for this site is officefilecenter[.]com, which was created less than a month before the time of analysis. Credential phishing emails such as these are a perfect example of the various ways that threat actors can exploit the emotions of the recipient. Whether it is the theme of the phish, the content within, or the time of year, threat actors will utilize anything they can to make sure you do not catch on until it’s too late.
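The domain check described here is easy to automate. The following is a minimal sketch, assuming a hypothetical allowlist of legitimate Microsoft sign-in hosts, that flags login pages served from anywhere else (such as officefilecenter[.]com):

```python
from urllib.parse import urlparse

# Hypothetical allowlist: hosts that legitimately serve Microsoft sign-in pages.
MICROSOFT_LOGIN_DOMAINS = {"live.com", "outlook.com", "microsoftonline.com", "office.com"}

def looks_like_spoofed_login(url: str) -> bool:
    """Return True when a page claiming to be a Microsoft login is hosted
    on a domain outside the allowlist."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in MICROSOFT_LOGIN_DOMAINS)

print(looks_like_spoofed_login("https://login.officefilecenter.com/signin"))   # True -> suspicious
print(looks_like_spoofed_login("https://login.microsoftonline.com/common"))    # False -> expected host
```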


Driving Sustainability Forward with IIoT: Smarter Processes for a Greener Future

AI-driven IIoT systems are transforming how industries manage raw materials, inventory, and human resources. In smart factories, AI forecasts demand, streamlines production schedules, and optimizes supply chains to reduce waste and emissions. For instance, AI calculates the exact quantity of materials needed for production, preventing overstocking and minimizing excess. It also enhances SIOP and logistics by consolidating shipments and selecting eco-friendly transportation routes, reducing the carbon footprint of global supply chains. Predictive maintenance, powered by AI, contributes by detecting equipment issues early, preventing breakdowns, extending lifespan and uptime while reducing defective outputs. ... IIoT is a key enabler of the circular economy, which focuses on recycling, reusing, and reducing waste. Automated systems allow manufacturers to recycle heat, water, and materials within their facilities, creating closed-loop processes. For example, excess heat from industrial ovens can be captured and repurposed for heating water or other facility needs. While sensors monitor production processes to optimize material usage and reduce scrap, product take-back programs are another cornerstone of the circular economy. 

Daily Tech Digest - April 13, 2025


Quote for the day:

"I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel." -- Maya Angelou



The True Value Of Open-Source Software Isn’t Cost Savings

Cost savings is an undeniable advantage of open-source software, but I believe that enterprise leaders often overlook other benefits that are even more valuable to the organization. When developers use open-source tools, they join a collaborative global community that is constantly learning from and improving on the technology. They share knowledge, resources and experiences to identify and fix problems and move updates forward more rapidly than they could individually. Adopting open-source software can also be a win-win talent recruitment and retention strategy for your enterprise. Many individual contributors see participating in open-source software communities as a tangible way to build their own profiles as experts in their field—and in the process, they also enhance your company’s reputation as a cool place where tech leaders want to work. However, there’s no such thing as a free meal. Open-source software isn't immune to vendor lock-in, when your company becomes so dependent on a partner’s product that it is prohibitively costly or difficult to switch to an alternative. You may not be paying licensing fees, but you still need to invest in support contracts for open-source tools. The bigger challenge from my perspective is that it’s still rare for enterprises to contribute regularly to open-source software communities. 


The Growing Cost of Non-Compliance and the Need for Security-First Solutions

Regulatory bodies across the globe are increasing their scrutiny and enforcement actions. Failing to comply with well-established regulations like HIPAA or GDPR, or newer ones like the European Union’s Digital Operational Resilience Act (DORA) and NY DFS Cybersecurity requirements, can result in penalties that can reach millions of dollars. But the costs do not stop there. Once a company has been found to be non-compliant, it often faces reputational damage that extends far beyond the immediate legal repercussions. ... A security-first approach goes beyond just checking off boxes to meet regulatory requirements. It involves implementing robust, proactive security measures that safeguard sensitive data and systems from potential breaches. This approach protects the organization from fines and builds a strong foundation of trust and resilience in the face of evolving cyber threats. ... Many businesses still rely on outdated, insecure methods of connecting to critical systems through terminal emulators or “green screen” interfaces. These systems, often running legacy applications, can become prime targets for cybercriminals if they are not properly secured. With credential-based attacks rising, organizations must rethink how they secure access to their most vital resources.


Researchers unveil nearly invisible brain-computer interface

Today's BCI systems consist of bulky electronics and rigid sensors that prevent the interfaces from being useful while the user is in motion during regular activities. Yeo and colleagues constructed a micro-scale sensor for neural signal capture that can be easily worn during daily activities, unlocking new potential for BCI devices. His technology uses conductive polymer microneedles to capture electrical signals and conveys those signals along flexible polyimide/copper wires—all of which are packaged in a space of less than 1 millimeter. A study of six people using the device to control an augmented reality (AR) video call found that high-fidelity neural signal capture persisted for up to 12 hours with very low electrical resistance at the contact between skin and sensor. Participants could stand, walk, and run for most of the daytime hours while the brain-computer interface successfully recorded and classified neural signals indicating which visual stimulus the user focused on with 96.4% accuracy. During the testing, participants could look up phone contacts and initiate and accept AR video calls hands-free as this new micro-sized brain sensor was picking up visual stimuli—all the while giving the user complete freedom of movement.


Creating SBOMs without the F-Bombs: A Simplified Approach to Creating Software Bills of Material

It's important to note that software engineers are not security professionals, but in some important ways, they are now being asked to be. Software engineers pick and choose from various third-party and open source components and libraries. They do so — for the most part — with little analysis of the security of those components. Those components can be — or become — vulnerable in a whole variety of ways: Once-reliable code repositories can become outdated or vulnerable, zero days can emerge in trusted libraries, and malicious actors can — and often do — infect the supply chain. On top of that, risk profiles can change overnight, turning what was a well-considered design choice into a vulnerable one. Software engineers never before had to consider these things, and yet the arrival of the SBOM is making them do so like never before. Customers can now scrutinize their releases, and then potentially reject or send them back for fixing — resulting in even more work on short notice and piling on pressure. Even if the risk profile of a particular component changes between the creation of an SBOM and a customer reviewing it, the release might be rejected. This is understandably the cause of much frustration for software engineers who are often already under great pressure.
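For readers who have not seen one, an SBOM is simply a structured inventory of the components a release depends on. The snippet below is a minimal, hypothetical CycloneDX-style document listing a single third-party library; the component shown is an example, not taken from the article:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.17.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"
    }
  ]
}
```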


Risk & Quality: The Hidden Engines of Business Excellence

In the world of consultancy, firms navigate a minefield of challenges—tight deadlines, budget constraints, and demanding clients. Then, out of nowhere, disruptions such as regulatory shifts or resource shortages strike, threatening project delivery. Without a robust risk management framework, these disruptions can snowball into major financial and reputational losses. ... Some leaders see quality assurance as an added expense, but in reality, it’s a profit multiplier. According to the American Society for Quality (ASQ), organizations that emphasize quality see an average of 4-6% revenue growth compared to those that don’t. Why? Because poor quality leads to rework, client dissatisfaction, and reputational damage. ... The cost of poor quality is substantial. Firms that don’t embed quality into their culture ultimately face consequences like customer churn, regulatory fines, and declining market share. Additionally, fixing mistakes after the fact is far more expensive than ensuring quality from the outset. Organizations that invest in quality from the start avoid unnecessary costs, improve efficiency, and strengthen their bottom line. As Philip Crosby, a pioneer in quality management, stated, “Quality is free. It’s not a gift, but it’s free. What costs money are the unquality things—all the actions that involve not doing jobs right the first time.” 


Enabling a Thriving Middleware Market

A more unified regulatory approach could reduce uncertainty, streamline compliance, and foster an ecosystem that better supports middleware development. However, given the unlikelihood of creating a new agency, a more feasible approach would be to enhance coordination among existing regulators. The FTC could address antitrust concerns, the FCC could promote interoperability, and the Department of Commerce could support innovation through trade policies and the development of technical standards. Even here, slow rulemaking and legal challenges could hinder progress. Ensuring agencies have the necessary authority, resources, and expertise will be critical. A soft-law approach, modeled after the National Institute of Standards and Technology (NIST) AI Risk Management Framework, might be the most feasible option. A Middleware Standards Consortium could help establish best practices and compliance frameworks. Standards development organizations (SDOs), such as the Internet Engineering Task Force or the World Wide Web Consortium (W3C), are well-positioned to lead this effort, given their experience crafting internet protocols that balance innovation with stability. For example, a consortium of SDOs with buy-in from NIST could establish standards for API access, data portability, and interoperability of several key social media functionalities.


How to Supercharge Application Modernization with AI

The refactoring of code – which means restructuring and, often, partly rewriting existing code to make applications fit a new design or architecture – is the most crucial part of the application modernization process. It has also tended in the past to be the most laborious because it required developers to pore over often very large codebases, painstakingly tweaking code function-by-function or even line-by-line. AI, however, can do much of this dirty work for you. Instead of having to find places where code should be rewritten or modified in order to optimize it, developers can leverage AI tools to look for code that requires attention. ... When you move applications to the cloud, the infrastructure that hosts them is effectively a software resource – which means you can configure and manage it using code. By extension, you can use AI tools like Cursor and Copilot to write and test your code-based infrastructure configurations. Specifically, AI is capable of tasks such as writing and maintaining the code that manages CI/CD pipelines or cloud servers. It can also suggest opportunities to optimize existing infrastructure code to improve reliability or security. And it can generate the ancillary configurations, such as Identity and Access Management (IAM) policies, that govern and help to secure cloud infrastructure.
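As an illustration of the ancillary configurations mentioned above, here is the kind of minimal, least-privilege IAM policy an AI assistant might draft for human review; the bucket name and statement ID are placeholder assumptions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyAccessToAppAssets",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-assets",
        "arn:aws:s3:::example-app-assets/*"
      ]
    }
  ]
}
```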


Balancing Generative AI Risk with Reward

As businesses start evolving in their use of this technology and exposing it to a broader base inside and outside their companies, risks can increase. “I’ve always loved to say AI likes to please,” said Danielle Derby, director of enterprise data management at TriNet, who joined Rodarte at the presentation. Risk manifests “because AI doesn’t know when to stop,” said Derby, and you, for example, may not have thought about including a human or technology guardrail to keep it from answering a question it wasn’t prepared to handle accurately. “There are a lot of areas where you’re just not sure how someone who’s not you is going to handle this new technology,” she said. ... Improper data splitting can lead to data leakage, resulting in overly optimistic model performance, which you can mitigate by using techniques like stratified sampling to ensure representative splits and by always splitting the data before performing any feature engineering or preprocessing. Inadequate training data can lead to overfitting, and too little test data can yield unreliable performance metrics; mitigate both by ensuring there is enough data for training and testing given the problem size, and by using a validation set in addition to the training and test sets.
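The two mitigations just described can be captured in a few lines. This sketch assumes scikit-learn and placeholder data: it splits with stratification before any preprocessing, then fits the scaler on the training portion only so test-set statistics cannot leak in:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for a real feature matrix and class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)

# Split BEFORE any feature engineering; stratify so class proportions
# are preserved in both partitions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Fit preprocessing on the training data only, then apply it to the held-out
# data; fitting on the full dataset would leak test-set statistics.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```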


Why Cybersecurity-as-a-Service is the Future for MSPs and SaaS Providers

For MSPs and SaaS providers, adopting a proactive, scalable approach to cybersecurity—one that provides continuous monitoring, threat intelligence, and real-time response—is crucial. By leveraging Cybersecurity-as-a-Service (CSaaS), businesses can access enterprise-grade security without the need for extensive in-house expertise. This model not only enhances threat detection and mitigation but also ensures compliance with evolving cybersecurity regulations. ... The increasing complexity and frequency of cyber threats necessitate a proactive and scalable approach to security. CSaaS offers a flexible solution by outsourcing critical security functions to specialized providers. This ensures continuous monitoring, threat intelligence, and incident response without the need for extensive in-house resources. As cyber threats evolve, CSaaS providers continuously update their tools and techniques, ensuring we stay ahead of emerging vulnerabilities. CSaaS enhances our ability to protect sensitive data and allows us to confidently focus on core business operations. ... Embracing CSaaS is essential for maintaining a robust security posture in an increasingly complex digital landscape.


Meta: WhatsApp Vulnerability Requires Immediate Patch

Meta has voluntarily disclosed the new WhatsApp vulnerability, now published as CVE-2025-30401, after investigating it internally as a submission to its bug bounty program. The company says there is not yet evidence that it has been exploited in the wild. The issue likely impacts all WhatsApp for Windows versions prior to 2.2450.6. The WhatsApp vulnerability hinges on an attacker sending a malicious attachment, and would require the target to attempt to manually view the attachment within the software. A spoofing issue makes it possible for the file opening handler to execute code that has been hidden as a seemingly valid MIME type such as an image or document. That could pave the way for remote code execution, though a CVSS score has yet to be assigned as of this writing. ... The WhatsApp vulnerability exploited by Paragon was a much more devastating zero-click (and one that targeted phones and mobile devices), similar to one exploited by NSO Group on the platform to compromise over a thousand devices. That landed the spyware vendor in trouble in US courts, where it was found to have violated national hacking laws. The court found that NSO Group had obtained WhatsApp’s underlying code and reverse-engineered it to create at least several zero-click vulnerabilities that it put to use in its spyware.
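A simplified way to picture the defensive side of this class of bug is a check that a file's content actually matches what its extension claims. The sketch below compares a few magic-byte signatures against the declared extension; the signature table and helper are illustrative assumptions, not WhatsApp's actual handler logic:

```python
# Map a few common file signatures ("magic bytes") to the extensions they imply.
MAGIC_BYTES = {
    b"\x89PNG\r\n\x1a\n": ".png",
    b"\xff\xd8\xff": ".jpg",
    b"%PDF-": ".pdf",
    b"MZ": ".exe",   # Windows executable
}

def extension_matches_content(path: str) -> bool:
    """Return False when the declared extension disagrees with the content,
    e.g. an executable disguised as an image."""
    with open(path, "rb") as fh:
        header = fh.read(16)
    for signature, implied_ext in MAGIC_BYTES.items():
        if header.startswith(signature):
            return path.lower().endswith(implied_ext)
    return True  # unknown signature: no verdict from this simple check
```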

Daily Tech Digest - April 12, 2025


Quote for the day:

"Good management is the art of making problems so interesting and their solutions so constructive that everyone wants to get to work and deal with them." -- Paul Hawken


Financial Fraud, With a Third-Party Twist, Dominates Cyber Claims

Data on the most significant threats and what technologies and processes can have the greatest preventative impact on those threats are extremely valuable, says Andrew Braunberg, principal analyst at business intelligence firm Omdia. "It's great data for the enterprise, no question about it — that kind of data is going to be more and more useful for folks," he says. "As insurers figure out how to collect more standardized data, and more comprehensive data, at a quicker cadence — that's good news." ... While most companies do not consider their cyber-insurance provider as a security adviser, they do make decisions based on the premiums presented to them, says Omdia's Braunberg. And many companies seem ready to rely on insurers more. "Nobody really thought of these guys as security advisors that they should really be turning to, but if that shift happens, then I think the question gets a lot more interesting," he says. "Companies may have these annual sit-downs with their insurers where you really walk through this data and decide what kind of investments to make — and that's a different world than the way most security investment decisions are done today." The fact that cyber insurers are moving into an advisory role may be good news, considering the US government's pullback from aiding enterprises with cybersecurity, says At-Bay's Tyra. 


How to Handle a Talented, Yet Quirky, IT Team Member

Balance respect for individuality with the needs of the team and organization. By valuing their quirks as part of their creative process, you'll foster a sense of belonging and loyalty, Honnenahalli says. "Clear boundaries and open communication will prevent potential misunderstandings, ensuring harmony within the team." ... Leaders should aim to channel quirkiness constructively rather than working to eliminate it. For instance, if a quirky habit is distracting or counterproductive, the team leader can guide the individual toward alternatives that achieve similar results without causing friction, Honnenahalli says. Avoid suppressing individuality unless it directly conflicts with professional responsibilities or team cohesion. Help the unconventional team member channel their quirks productively rather than trying to reduce them, Xu suggests. "This means offering support and guidance in ways that allow them to thrive within the structure of the team." Remember that quirks can often be a unique asset in problem-solving and innovation. ... In IT, where innovation thrives on diverse perspectives, quirky team members often deliver creative solutions and unconventional thinking, Honnenahalli says. "Leaders who manage such individuals effectively can cultivate a culture of innovation and inclusivity, boosting morale and productivity."


A Guide to Managing Machine Identities

Limited visibility into highly fragmented machine identities makes them difficult to manage and secure. According to CyberArk's 2024 Identity Security Threat Landscape Report - a global survey of 2,400 security decision-makers across 18 countries - 93% of organizations experienced two or more identity-related breaches in 2023. Machine identities are a frequent target, with previous CyberArk research indicating that two-thirds of organizations have access to sensitive data. A ransomware attack on a popular file transfer system last year exposed the sensitive information of approximately 60 million individuals and impacted more than 2,000 public and private sector organizations. ... To address the challenges associated with managing fragmented machine identities, CyberArk Secrets Hub and CyberArk Cloud Visibility can help standardize and automate operational processes. These tools provide better visibility into identities that require access and determine whether the request is legitimate. ... Organizations should identify and secure their machine identities across multiple on-premises and cloud environments, including those from different cloud service providers. The right governance tool can help organizations meet the unique needs of each platform, while also making it easier to maintain a unified approach to machine identity management.


7 strategic insights business and IT leaders need for AI transformation in 2025

AI innovation continues rapidly, but enterprises must distinguish between practical AI that delivers tangible ROI and aspirational solutions that lack immediate business value. Practical AI enhances agent productivity, reduces handle times, and personalizes customer interactions in ways that directly impact revenue and operational efficiency. Business leaders must challenge vendors to demonstrate clear business cases, ensuring AI investments align with specific organizational objectives rather than speculative, unproven technology. Also, every AI initiative must have a roadmap with clearly defined focus areas and milestones. ... Enterprises now generate vast amounts of interaction data, but the true competitive advantage sits with AI-powered analytics. Real-time sentiment analysis, predictive modeling, and conversational intelligence redefine how organizations measure and optimize performance across customer-facing and internal communications. Companies that harness these insights can proactively address customer needs, optimize workforce performance, and drive data-driven decision-making -- at scale. ... Automation is no longer just a convenience but a necessity for streamlining complex business processes and enhancing customer journeys.


Bryson Bort on Cyber Entrepreneurship and the Needed Focus on Critical Infrastructure

Most people only know industrial control systems as “Stuxnet” and, even then, with a limited idea of what exactly that means. These are the computers that run critical infrastructure, manufacturing plants, and dialysis machines in hospitals. A bad day with normal computers means ransomware where a business can’t run, espionage where a company loses valuable data, or a regular person getting scammed out of their bank account. All pretty bad, but at least everyone is still breathing. With ICS, a bad day can mean loss of life or limb and that’s just at the point of use. The downstream effects of water or electricity being disrupted sends us to the Stone Ages immediately and there is a direct correlation to loss of life in those scenarios. ... As an entrepreneur, it’s the same and the Law of N is the variable number of people that you can lead where you personally have a visible impact on their daily requirements. The second you hit N+1, it is another leader below you in the chain who now has that impact. In summary: 1) you can’t do it alone, being an individual contributor (no matter how talented) is never going to be as impactful as a squad/team; 2) the structure you build is going to dictate the success or failure of the execution of your ideas; and 3) you have leadership limits of what you can control.


Rethinking talent strategy: What happens when you merge performance with development

Often, performance and development live on different systems, with no unified view of progress, potential, or skill gaps. Without a continuous data loop, talent teams struggle to design meaningful interventions, and line managers lack the insight to support growth conversations effectively. The result? Employee development efforts become reactive, generic, and in many cases, ineffective. But the problem isn’t just technical. According to Mohit Sharma, CHRO at EKA Mobility, there’s a strategic imbalance in focus. “Performance management often prioritises business metrics—financials, customer outcomes, process efficiency—while people-related goals receive less attention,” he says. “This naturally sidelines employee development.” And when development is treated as an afterthought, Individual Development Plans (IDPs) become little more than checkboxes. “The IDP often runs as a standalone activity, disconnected from performance outcomes,” Sharma adds. “This fragmentation means development doesn’t feed into performance—and vice versa.” Moreover, most organisations struggle with systematic skill-gap identification. In fast-changing industries, capability needs evolve every quarter. 


How cybercriminals are using AI to power up ransomware attacks

Ransomware gangs are increasingly deploying AI across every stage of their operations, from initial research to payload deployment and negotiations. Smaller outfits can punch well above their weight in terms of scale and sophistication, while more established groups are transforming into fully automated extortion machines. As new gangs emerge, evolve and adapt to boost their chances of success, here we explore the AI-driven tactics that are reshaping ransomware as we know it. Cybercriminal groups will typically pursue the path of least resistance to making a profit. As such, most cases of malign AI have been lower hanging fruit focusing on automating existing processes. That said, there is also a significant risk of more tech-savvy groups using AI to enhance the effectiveness of the malware itself. Perhaps the most dangerous example is polymorphic ransomware, which uses AI to mutate its code in real time. Each time the malware infects a new system, it rewrites itself, making detection far more difficult as it evades antivirus and endpoint security looking for specific signatures. Self-learning capabilities and independent adaptability are drastically increasing the chances of ransomware reaching critical systems and propagating before it can be detected and shut down.


IBM Quantum CTO Says Codes And Commitment Are Critical For Hitting Quantum Roadmap Goals

The technique — called the Gross code — shrinks the number of physical qubits required to produce stable output, significantly easing the engineering burden, according to R&D World. “The Gross code bought us two really big things,” Oliver Dial, IBM Quantum’s chief technology officer, said in an interview with R&D World. “One is a 10-fold reduction in the number of physical qubits needed per logical qubit compared to typical surface code estimates.” ... IBM’s optimism is grounded not just in long-term error correction, but in near-term tactics like error mitigation, a strategy to extract meaningful results from today’s imperfect machines. These techniques offer a way to recover accurate answers from computers that commit errors, Dial told R&D World. He sees this as a bridge between today’s noisy intermediate-scale quantum (NISQ) machines and tomorrow’s fully fault-tolerant quantum computers. Competitors are also racing to prove real-world use cases. Google has published recent results in quantum error correction, while Quantinuum and JPMorgan Chase are exploring secure applications like random number generation, R&D World points out. IBM’s bet is that better codes, especially its low-density parity check (LDPC) approach refined through the Gross code, will accelerate real deployments.


Defining leadership through mentorship and a strong network

While it’s a challenge to schedule a time each month that works for everyone, she says, there’s a lot of value in them for building strong team camaraderie. It’s also helped everyone better understand diverse backgrounds, what everyone’s contributing, and how the team can lean into those strengths and overcome challenges. ... While she wasn’t sure how it would land, it grabbed the attention of the CIO, who had never seen this approach before, and opened the dialogue for Schulze to be a candidate. She decided to push past any insecurities or fears, and go for a position she didn’t necessarily feel totally qualified for, but ended up landing the job. Schulze knows not everyone feels comfortable stepping out of their comfort zone, but as a leader, she wants to set that example for her employees. She identifies opportunities for growth and advancement, regardless of background or experience, and helps them tap into their potential. She understands it’s difficult for women to break through the boys club mentality that can exist in tech, and the challenge of fighting stereotypes around women in IT and STEM careers. In her own career, Schulze had to apply herself extra hard to prove her worth and value, even when she had the same answers as her male counterparts.


Cracking the Code on Cybersecurity ROI

Quantifying the total cost of cybersecurity investments — which have long been at the top of most companies' IT spending priorities — is easy enough. It entails adding up the cost of the hardware resources, software tools, and personnel (including both internal employees as well as any outsourced cybersecurity services) that an organization deploys to mitigate security risks. But determining how much value those investments yield is where things get tricky. This is primarily because, again, the goal of cybersecurity investments is to prevent breaches from occurring — and when no breach occurs, there is no quantifiable cost to measure. ... Rather than estimating breach frequency and cost based on historical data specific to your business, you could look at data about current cybersecurity trends for other companies similar to yours, considering factors like their region, the type of industry they operate in, and their size. This data provides insight into how likely your type of business will experience a breach and what that breach will likely cost. ... A third approach is to measure cybersecurity ROI in terms of the value you don't create due to breaches that do occur. This is effectively an inverse form of cybersecurity ROI. ... Using this data, you can predict how much money you'd save through additional cybersecurity spending.
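One widely used way to put numbers on this is a return-on-security-investment calculation: estimate the annualized loss expectancy before and after a control, then compare the avoided loss with the control's cost. The figures in the sketch below are placeholders, not data from the article:

```python
def annualized_loss_expectancy(breach_probability_per_year: float, cost_per_breach: float) -> float:
    """Expected yearly loss = likelihood of a breach x cost if it happens."""
    return breach_probability_per_year * cost_per_breach

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    """Return on security investment: avoided loss relative to what the control costs."""
    avoided_loss = ale_before - ale_after
    return (avoided_loss - control_cost) / control_cost

# Placeholder figures for illustration only.
ale_before = annualized_loss_expectancy(0.30, 4_000_000)   # 30% chance of a $4M breach
ale_after = annualized_loss_expectancy(0.10, 4_000_000)    # control cuts likelihood to 10%
print(f"ROSI: {rosi(ale_before, ale_after, control_cost=250_000):.1%}")
```

The output is only as good as the probability and impact estimates fed into it, which is exactly why the article suggests leaning on industry data for businesses similar to yours.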

Daily Tech Digest - April 11, 2025


Quote for the day:

"Efficiency is doing the thing right. Effectiveness is doing the right thing." -- Peter F. Drucker


Legacy to Cloud: Accelerate Modernization via Containers

What could be better than a solution that lets you run applications across environments without dependency constraints? That’s where containers come in. They accelerate your modernization journey. The containerization of legacy applications liberates them from the rusty old VMs and servers that limit the scalability and agility of applications. Containerization offers benefits including agility, portability, resource efficiency, scalability and security. ... migrating legacy applications to containers is not a piece of cake. It requires careful planning and execution. Unlike cloud native applications, which are built for containers and Kubernetes, legacy applications were not designed with containerization in mind. The process demands significant time and expertise, and organizations often struggle at the very first step. Legacy monoliths, with their tightly coupled components and complex dependencies, require particularly extensive Dockerfiles. Writing Dockerfiles for legacy monoliths is complex and error-prone, often becoming a significant bottleneck in the modernization journey. ... The challenge intensifies when documentation is outdated or missing, turning what should be a modernization effort into a resource-draining archaeological expedition through layers of technical debt.
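To make the Dockerfile challenge concrete, here is a deliberately minimal sketch for a hypothetical legacy Java monolith packaged as a WAR and run on Tomcat; the base image tag, paths, and application name are assumptions, and real legacy applications typically need far more than this:

```dockerfile
# Minimal sketch for containerizing a hypothetical legacy Java monolith.
# Real legacy apps usually need many more layers: OS packages, JVM flags,
# mounted configuration, certificates, and externalized state.
FROM tomcat:9.0-jdk11

# Copy the pre-built WAR produced by the existing build system.
COPY target/legacy-app.war /usr/local/tomcat/webapps/ROOT.war

# Externalize tuning instead of baking environment-specific files into the image.
ENV JAVA_OPTS="-Xms512m -Xmx1g"

EXPOSE 8080
CMD ["catalina.sh", "run"]
```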


Four paradoxes of software development

No one knows how long the job will take, but the customer demands a completion date. This, frankly, is probably the biggest challenge that software development organizations face. We simply can’t be certain how long any project will take. Sure, we can estimate, but we are almost always wildly off. Sometimes we drastically overestimate the time required, but usually we drastically underestimate it. For our customers, this is both a mystery and a huge pain. ... Adding developers to a late project makes it later. Known as Brooks’s Law, this rule may be the strangest of the paradoxes to the casual observer. Normally, if you realize that you aren’t going to make the deadline for filing your monthly quota of filling toothpaste tubes, you can put more toothpaste tube fillers on the job and make the date. If you want to double the number of houses that you build in a given year, you can usually double the inputs—labor and materials—and get twice as many houses, give or take a few. ... The better you get at coding, the less coding you do. It takes many years to gain experience as a software developer. Learning the right way to code, the right way to design, and all of the rules and subtleties of writing clean, maintainable software doesn’t happen overnight. ... Software development platforms and tools keep getting better, but software takes just as long to develop and run.


Drones are the future of cybercrime

The rapid evolution of consumer drone technology is reshaping its potential uses in many ways, including its application in cyberattacks. Modern consumer drones are quieter, faster, and equipped with longer battery life, enabling them to operate further from their operators. They can autonomously navigate obstacles, track moving objects, and capture high-resolution imagery or video. ... And there are so many other uses for drones in cyberattacks: Network sniffing and spoofing: Drones can be equipped with small, modifiable computers such as a Raspberry Pi to sniff out information about Wi-Fi networks, including MAC addresses and SSIDs. The drone can then mimic a known Wi-Fi network, and if unsuspecting individuals or devices connect to it, hackers can intercept sensitive information such as login credentials. Denial-of-service attacks: Drones can carry devices to perform local de-authentication attacks, disrupting communications between a user and a Wi-Fi access point. They can also carry jamming devices to disrupt Wi-Fi or other wireless communications. Physical surveillance: Drones equipped with high-quality cameras can be used for physical surveillance to observe shift changes, gather information on security protocols, and plan both physical and cyberattacks by identifying potential entry points or vulnerabilities. 


From Silos to Strategy: Why Holistic Data Management Drives GenAI Success

While data distribution is essential to mitigate risks, it requires a unified approach to be effective. Many enterprises are recognizing the value of implementing unified data architectures that simplify storage and data management and centralize the management of diverse data platforms. These architectures, combined with intelligent data platforms, enable seamless access and analysis of data, making it easier to support analytics and ingestion by generative AI. IT managers can further enhance a system’s data analysis and network security, and introduce a hybrid cloud experience to simplify data management. Today, the tech industry is focused on streamlining how enterprises manage and optimize storage, data, and workloads, and a platform-based approach to hybrid cloud management is critical for managing IT across on-premises, colocation, and public cloud environments. Innovations like unified control planes and software-defined storage solutions are being utilized to enable seamless data and application mobility. These solutions allow enterprises to move data and applications across hybrid and multi-cloud environments to optimize performance, cost, and resiliency. By simplifying cloud data management, enterprises can efficiently manage and protect globally dispersed storage environments without over-emphasizing resilience at the expense of overall system optimization.


Why remote work is a security minefield (and what you can do about it)

The remote work environment makes employees more vulnerable to phishing and social engineering attacks, as they are isolated and may find it harder to verify suspicious activities. Working from home can create a sense of comfort that leads to relaxation, making employees more prone to risky security behavior. The isolation associated with remote work can also result in impulsive decisions, increasing the likelihood of mistakes. Cybercriminals exploit this by tailoring social engineering attacks to mimic IT staff or colleagues, taking advantage of the lack of direct verification. ... To address these challenges, organizations must prioritize a security-first culture. By prioritizing cybersecurity at every level, from executives to remote workers, organizations can reduce their vulnerability to cyber threats. Additionally, companies can foster peer support networks where employees can share security tips and collaborate on solutions. Another problem that can arise with remote work is privacy. Some companies monitor employee activity to protect their data and ensure compliance with regulations. Monitoring helps detect suspicious behavior and mitigate cyber threats, but it can raise privacy concerns, especially when it involves intrusive methods like tracking keystrokes or taking periodic screenshots. To find a good balance, companies should be upfront about what they’re monitoring and why. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter, the standard practice is to beachhead (dig down) and then move laterally to find the organization’s crown jewels: their most valuable data. Within a financial or banking organization, it is likely there is a database on their server that contains sensitive customer information. A database is essentially a complicated spreadsheet, wherein a hacker can simply click Select and copy everything. In this instance, data security is essential; many organizations, however, confuse data security with cybersecurity. Organizations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. Many organizations also mistakenly believe that encryption protects against all forms of data exposure, but weak key management, improper implementation, or side-channel attacks can still lead to compromise. To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques such as tokenization or format-preserving encryption to minimize the impact of a breach. A database protected by privacy enhancing technologies (PETs), such as tokenization, becomes unreadable to hackers if the decryption key is stored offsite. 
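A toy sketch of the tokenization idea described above: sensitive values are swapped for opaque tokens, and the token-to-value mapping lives in a separate, tightly controlled vault, so the application database is worthless to an attacker on its own. The class, token format, and sample record below are illustrative only, not a production privacy-enhancing technology:

```python
import secrets

class TokenVault:
    """Toy token vault: maps opaque tokens back to the original values.
    In practice this lives in a separate, tightly access-controlled system."""
    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
record = {"name": "Jane Doe", "card_number": "4111111111111111"}

# What lands in the application database: tokens only, useless on their own.
stored_record = {k: vault.tokenize(v) for k, v in record.items()}
print(stored_record)

# Authorized services resolve tokens through the vault only when needed.
print(vault.detokenize(stored_record["card_number"]))
```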


You’re always a target, so it pays to review your cybersecurity insurance

Right now, either someone has identified your firm and your weak spots and begun a campaign of targeted phishing attacks, scam links, or credential harvesting, or they are blindly trying to use any number of known vulnerabilities on the web to crack into remote access and web properties. ... Reviewing my compliance with cyber insurance policies was a great exercise in self-assessing just how thorough my base security is, but it also revealed an important fact: that insurance requirements only scratch the surface of the types of discussions you should be having internally regarding your risks of attack. No matter if you feel you are merely at risk of being accidental roadkill on the information superhighway or are actually in the crosshairs of a malicious attacker, always review the risks not only with your cyber insurance carrier in mind, but also with what the attackers are planning. ... During the annual renewal of cyber insurance, the insurance carrier would not even consider insuring my business if we did not demonstrate that we had some fundamental protections in place. Based on the questions and bullet points, you could tell they saw the remote access, third-party vendor access, and network administrator accounts as weak points that needed additional protection.


9 steps to take to prepare for a quantum future

To get ahead of the quantum cryptography threat, companies should immediately start assessing their environment. “What we’re advising clients to do – and working on with clients today – is first go and inventory your encryption algorithms and know what you’re using,” says Saylors. That can be tricky, he adds. ... Because of the complexity of the tasks, ISG’s Saylors suggests that enterprises prioritize their efforts. The first step, he says, is to look at perimeter security. The second step is to look at the encryption around the most critical assets. And the third step is to look at the encryption around data backups. All of this needs to happen as soon as possible. In fact, according to Gartner, enterprises should have created a cryptography database by the end of 2024. Companies should have created cryptography policies and planned their transition to post-quantum encryption by the end of 2024, the research firm says. ... So everything will have to be carefully tested and some cryptographic processes may need to be rearchitected. But the bigger problem is that the new algorithms might themselves be deprecated as technology continues to evolve. Instead, Horvath and other experts recommend that enterprises pursue quantum agility. If any cryptography is hard-coded into processes, it needs to be separated out. “Make it so that any cryptography can work in there,” he says.
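A minimal sketch of what that crypto agility can look like in code: callers go through a named, swappable provider instead of hard-coding one algorithm, so a deprecated scheme can be replaced in one place. The registry, the provider name, and the deliberately insecure demo cipher are all assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CipherProvider:
    """A pluggable encryption backend; swapping algorithms means registering
    a new provider, not rewriting every call site."""
    name: str
    encrypt: Callable[[bytes, bytes], bytes]
    decrypt: Callable[[bytes, bytes], bytes]

_REGISTRY: Dict[str, CipherProvider] = {}

def register(provider: CipherProvider) -> None:
    _REGISTRY[provider.name] = provider

def encrypt(data: bytes, key: bytes, algorithm: str = "xor-demo") -> bytes:
    return _REGISTRY[algorithm].encrypt(data, key)

# Demo-only provider (NOT secure); a real deployment would register AES today
# and a NIST post-quantum scheme later without touching callers.
def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

register(CipherProvider(name="xor-demo", encrypt=_xor, decrypt=_xor))
print(encrypt(b"inventory me", b"k"))
```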


Why neurodivergent perspectives are essential in AI development

Experts in academia, civil society, industry, media, and government discussed and debated the latest developments in AI safety and ethics, but representation of neurodivergent perspectives in AI development wasn’t examined. This is a huge oversight especially considering 70 million people in the US alone learn and think differently, including many in tech. Technology should be built for and serve all, so how do we make sure future AI models are accessible and unbiased if neurodivergent representation isn’t considered? It all starts at the development stage. ... A neurodivergent team also makes it easier to explore a wider range of use cases and the risks associated with applications. When you engage neurodivergent people at the development stage, you create a team that understands and prioritizes diverse ways of thinking, learning, and working. And that benefits all users. ... New data from EY found that 85% of neurodivergent employees think gen AI creates a more inclusive workplace, so it’s incumbent on more companies to level the playing field by casting a wider net to include a broader range of employees and tools needed to thrive and generate more accurate and robust datasets. Gen AI can also go a long way to help neurodivergent workers with simple tasks like productivity, quality assurance, and time management. 


Your data's probably not ready for AI - here's how to make it trustworthy

"AI and gen AI are raising the bar for quality data," according to a recent analysis published by Ashish Verma, chief data and analytics officer at Deloitte US, and a team of co-authors. "GenAI strategies may struggle without a clear data architecture that cuts across types and modalities, accounting for data diversity and bias and refactoring data for probabilistic systems," the team stated. ... "Creating a data environment with robust data governance, data lineage, and transparent privacy regulations helps ensure the ethical use of AI within the parameters of a brand promise," said Clayton. Building a foundation of trust helps prevent AI from going rogue, which can easily lead to uneven customer experiences." Across the industry, concern is mounting over data readiness for AI. "Data quality is a perennial issue that businesses have faced for decades," said Gordon Robinson, senior director of data management at SAS. There are two essential questions on data environments for businesses to consider before starting an AI program, he added. First, "Do you understand what data you have, the quality of the data, and whether it is trustworthy or not?" Second, "Do you have the right skills and tools available to you to prepare your data for AI?"


Daily Tech Digest - April 10, 2025


Quote for the day:

"Positive thinking will let you do everything better than negative thinking will." -- Zig Ziglar



Strategies for measuring success and unlocking business value in cloud adoption

Transitioning to a cloud-based operation involves a dual-pronged strategy. While cost optimization requires right-sizing resources, leveraging discounted instances, and implementing auto-scaling based on demand, accurately forecasting demand and navigating complex cloud pricing structures can be difficult. Likewise, while scalability is enabled by containerization, serverless computing, and infrastructure automation, managing complex applications, ensuring security during scaling, and avoiding vendor lock-in present additional challenges. Therefore, organizations must continuously monitor and adapt their strategies while addressing these challenges. ... An effective cloud strategy aligns business goals through a strong governance framework that prioritizes security, compliance, and cost optimization, while being flexible to accommodate growth. Piloting non-critical applications can help refine this strategy before larger migrations. ... Companies must first assess their maturity model to identify areas for improvement. This includes optimizing their cloud mix by exploring different cloud providers or cost structures, providing regular policy updates for compliance, cultivating a continuous improvement culture, proactively addressing challenges, and having active leadership involvement in the cloud vision for stakeholder buy-in.


Three Keys to Mastering High Availability in Your On-Prem Data Center

A cornerstone of high availability is redundancy in IT infrastructure. By identifying critical single points of failure and, where possible, ensuring there is an option for failover to a secondary resource, you can reduce the risk of downtime in the event of an incident. Redundancy should extend across both hardware and software layers. Implementing failover clusters, resilient networking paths, storage redundancy using RAID, and offsite data replication for disaster recovery are proven strategies. Adopting a hybrid or multi-cloud approach can also reduce reliance on any single service provider. If you operate an off-site data center, ensure it is not dependent on the same power source as your main campus. Be sure to have a disaster recovery and business continuity plan that includes local and offsite backup storage. ... Whether your infrastructure is on-premises, cloud-based, or hybrid, the other key component of achieving high availability is establishing failover clusters to facilitate – and even automate – the movement of services and workloads to a secondary resource. Whether hardware-based (SAN) or software-based (SANless), clusters support the seamless failover of services to backup resources and ensure continuity in the event of severely degraded performance or an outage.
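
The snippet below is a minimal sketch of the liveness probe and failover decision that a cluster automates; the hostnames, port, and timeout are placeholders, and real clusters also weigh replication lag, quorum, and fencing before switching over.

```python
import socket

PRIMARY = ("db-primary.internal", 5432)    # hypothetical primary node
SECONDARY = ("db-standby.internal", 5432)  # hypothetical standby on independent power/network

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """A TCP connect is a crude liveness probe; it only proves the port answers."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_active_endpoint() -> tuple[str, int]:
    """Route traffic to the primary while it answers, otherwise fail over to the standby."""
    return PRIMARY if is_reachable(*PRIMARY) else SECONDARY

print("active endpoint:", pick_active_endpoint())
```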


Targeted phishing gets a new hook with real-time email validation

The problem facing defenders is that the tactic prevents security teams from doing further analysis and investigation, says the Cofense report. Automated security crawlers and sandbox environments also struggle to analyze these attacks because they cannot bypass the validation filter, the report adds. ... “The only real solution,” he said, “is to move away from traditional credentials to phishing-safe authentication methods like Passkeys. The goal should be to protect from leaked credentials, not block user account verification.” Attackers verifying email addresses as deliverable, or as associated with specific individuals, is nothing fundamentally new, he added. Initially, attackers used the mail server’s “VRFY” command to verify that an address was deliverable. This still works in a few cases. Next, attackers relied on “non-delivery receipts,” the bounce messages you may receive if an email address does not exist, to figure out whether an address existed. Both techniques work reasonably well to determine whether an email address is deliverable, but they do not reveal whether the address belongs to a human or whether its messages are read. The next step, Ullrich said, was sending obvious spam that included an “unsubscribe” link. If a user clicks the “unsubscribe” link, it confirms that the email was opened and read.
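
Defenders who want to confirm their own mail servers no longer answer the VRFY probe described above can test it with Python's standard smtplib, as in the sketch below; the server name and address are placeholders, and this should only be run against infrastructure you operate.

```python
import smtplib

def vrfy_probe(mail_server: str, address: str) -> str:
    """Issue the SMTP VRFY command; most modern servers refuse to confirm addresses
    (e.g. replying '252 Cannot VRFY user'), which is the behaviour you want to see."""
    with smtplib.SMTP(mail_server, 25, timeout=10) as smtp:
        code, message = smtp.verify(address)
        return f"{code} {message.decode(errors='replace')}"

# Hypothetical hostname and mailbox, for illustration only.
print(vrfy_probe("mail.example.com", "jane.doe@example.com"))
```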


Data Hurdles, Expertise Loss Hampering BCBS 239 Compliance

It was abundantly clear soon after BCBS 239 was introduced that there was a gulf between ECB expectations and banks’ delivery. In late 2018, the central bank found that 59 per cent of in-scope institutions turned in regulatory reports with at least one failing validation rule, and almost 7 per cent of data points were missing from them. The ECB began a “supervisory strategy” in 2022 to close the gap, running until 2024. In May of that year it published a guide that clarified what the overseers expected of banks and embarked on targeted reviews of RDARR capabilities. ... The supervisor blamed the “deficiencies” on governance shortcomings, fragmented IT infrastructures and a high level of manual aggregation processing, but admitted that “remediation of RDARR deficiencies is often costly, carries significant risk and takes time”. Carroll said that the breadth of the data management effort needed to comply with BCBS 239 has slowed adoption of the capabilities necessary for compliance. “They’re spending so much time planning for BCBS and thinking about what they need to do and what they need to have in place, and the tools that they need and the frameworks that they might need to put in place,” he said. ... “Hindered by outdated IT systems unsuitable for modern data management functions, they struggle with data silos and inconsistent, inaccurate risk reporting,” Ergin told Data Management Insight.


Can We Learn to Live with AI Hallucinations?

Sometimes, LLMs hallucinate for no good reason. Vectara CEO Amr Awadallah says LLMs are subject to the limits of data compression on text described by Shannon's information theory. Because LLMs compress text beyond a certain point (12.5%), they enter what's called the “lossy compression zone” and lose perfect recall. That leads to the inevitable conclusion that the tendency to fabricate isn’t a bug but a feature of these probabilistic systems. What do we do then? ... Instead of using a general-purpose LLM, fine-tuning open source LLMs on smaller sets of domain- or industry-specific data can also improve accuracy within that domain or industry. Similarly, a new generation of reasoning models, such as DeepSeek-R1 and OpenAI o1, trained on smaller domain-specific data sets, include a feedback mechanism that allows the model to explore different ways of answering a question, the so-called “reasoning” steps. Implementing guardrails is another technique. Some organizations use a second, specially crafted AI model to interpret the results of the primary LLM. When a hallucination is detected, it can tweak the input or the context until the results come back clean. Similarly, keeping a human in the loop to catch an LLM heading off the rails can help avoid some of its worst fabrications.
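
A guardrail of that shape can be sketched in a few lines. The call_llm function below is a placeholder for whichever model client is actually in use, and the prompts, retry limit, and escalation message are illustrative assumptions rather than any vendor's API.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for the real LLM client (hosted API, local model, etc.)."""
    raise NotImplementedError

def checked_answer(question: str, source_passages: list[str], max_retries: int = 2) -> str:
    """Guardrail pattern: a second model call judges whether the first answer is
    supported by the supplied context, tweaking the context and retrying if not."""
    context = "\n".join(source_passages)
    for _ in range(max_retries + 1):
        answer = call_llm(f"Answer using only this context:\n{context}\n\nQ: {question}")
        verdict = call_llm(
            "Does the ANSWER contain claims not supported by the CONTEXT? Reply YES or NO.\n"
            f"CONTEXT:\n{context}\n\nANSWER:\n{answer}"
        )
        if verdict.strip().upper().startswith("NO"):
            return answer                                               # no hallucination detected
        context += "\nBe strictly literal; do not add outside facts."   # adjust the context and retry
    return "Escalated to a human reviewer."                             # keep a human in the loop
```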


How Technical Debt Can Quietly Kill Your Company — And the metrics that can save you

Beyond the direct financial drain, technical debt imposes a crippling operational gridlock. Development velocity plummets — Protiviti suggests significant slowdowns, potentially up to 30%, as teams battle complexity. For Product and Delivery teams, this means longer lead times, missed deadlines, reduced predictability, and a sluggish response to market changes. Each new feature built on a weak foundation takes longer than the last. Maintenance costs escalate at the same time. Developers spend a disproportionate amount of time debugging obscure issues, patching old components, and managing complex workarounds. These activities can consume up to 40% of the total value of a technology estate over its lifetime — an escalating “maintenance tax” that diverts focus from value creation. Crucially, technical debt is a major barrier to innovation: nearly 70% of organizations acknowledge this, according to Protiviti’s polls. When teams are constantly firefighting, constrained by legacy architecture, and navigating brittle code, their capacity for creative problem-solving and experimentation evaporates. The operational drag prevents exploration, limiting the company’s potential for growth and differentiation. Nokia’s decline serves as a stark cautionary tale of operational gridlock leading to strategic failure: its dominance in mobile phones evaporated with the rise of smartphones.
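
The "maintenance tax" and velocity slowdown described above can be tracked with two simple ratios; the hours and cycle times below are invented purely to show the calculation.

```python
def maintenance_tax(maintenance_hours: float, total_hours: float) -> float:
    """Share of engineering time spent on upkeep rather than new value."""
    return maintenance_hours / total_hours

def velocity_drag(current_cycle_days: float, baseline_cycle_days: float) -> float:
    """How much longer features take now compared with an earlier baseline."""
    return current_cycle_days / baseline_cycle_days - 1.0

# Hypothetical quarter: 1,900 of 4,800 engineering hours went to patching and workarounds,
# and average feature cycle time grew from 10 to 13 days.
print(f"maintenance tax: {maintenance_tax(1900, 4800):.0%}")  # ~40%, the level cited above
print(f"velocity drag:   {velocity_drag(13, 10):.0%}")        # ~30% slowdown
```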


How tech giants like Netflix built resilient systems with chaos engineering

Chaos Engineering is a discipline within software engineering that focuses on testing the limits and vulnerabilities of a system by intentionally injecting chaos—such as failures or unexpected events—into it. The goal is to uncover weaknesses before they impact real users, ensuring that systems remain robust, self-healing, and reliable under stress. The idea is based on the understanding that systems will inevitably experience failures, whether due to hardware malfunctions, software bugs, network outages, or human error. ... Netflix is widely regarded as one of the pioneers in applying Chaos Engineering at scale. Given its global reach and the importance of providing uninterrupted service to millions of users, Netflix knew that simply assuming everything would work smoothly all the time was not an option. Its microservices architecture, a collection of loosely coupled services, meant that even the smallest failure could cascade and result in significant downtime for its customers. The company wanted to ensure that it could continue to stream high-quality video content, provide personalized recommendations, and maintain a stable infrastructure—no matter what failure scenarios might arise. To do so, Netflix turned to Chaos Engineering as a cornerstone of its resilience strategy.
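
In that spirit, a chaos experiment can be as small as terminating one instance at random and checking that the user-facing endpoint still answers. The instance names, health-check URL, and the print-only terminate stub below are placeholders for illustration, not Netflix's actual tooling.

```python
import random
import urllib.request

# Hypothetical pool of stateless service instances in a staging environment.
INSTANCES = ["app-01.staging", "app-02.staging", "app-03.staging"]
HEALTH_URL = "https://staging.example.com/healthz"

def terminate(instance: str) -> None:
    """Stand-in for the real termination call (cloud API, orchestrator, etc.)."""
    print(f"[chaos] terminating {instance}")

def service_healthy(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def run_experiment() -> None:
    """Kill one instance at random, then test the steady-state hypothesis:
    the user-facing endpoint should still return 200."""
    victim = random.choice(INSTANCES)
    terminate(victim)
    print("hypothesis held" if service_healthy(HEALTH_URL) else "weakness found: service degraded")

run_experiment()
```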


The AI model race has suddenly gotten a lot closer, say Stanford scholars

Bommasani and team don't make any predictions about what happens next in the crowded field, but they do see a very pressing concern for the benchmark tests used to evaluate large language models. Those tests are becoming saturated -- even some of the most demanding, such as the HumanEval benchmark created in 2021 by OpenAI to test models' coding skills. That affirms a feeling seen throughout the industry these days: it's becoming harder to accurately and rigorously compare new AI models. ... In response, the authors note, the field has developed new ways to construct benchmark tests, such as Humanity's Last Exam, which has human-curated questions formulated by subject-matter experts, and Arena-Hard-Auto, a test created by the non-profit Large Model Systems Organization (LMSYS) using crowd-sourced prompts that are automatically curated for difficulty. ... Bommasani and team conclude that standardizing across benchmarks is essential going forward. "These findings underscore the need for standardized benchmarking to ensure reliable AI evaluation and to prevent misleading conclusions about model performance," they write. "Benchmarks have the potential to shape policy decisions and influence procurement decisions within organizations, highlighting the importance of consistency and rigor in evaluation."


From likes to leaks: How social media presence impacts corporate security

Cybercriminals can use social media to build a relationship with employees and manipulate them into performing actions that jeopardize corporate security. They can impersonate colleagues, business partners, or even executives, using information obtained from social media to sound convincing. ... Many employees use the same passwords for personal social media accounts as for their work accounts, putting corporate data at risk. While convenient, this practice means that if a personal account is compromised, attackers could gain access to work-related systems as well. ... CISOs must now account for employee behavior beyond the firewall. The attack surface no longer ends at corporate endpoints; it stretches into LinkedIn profiles, Instagram vacation posts, and casual tweets. Companies should establish policies regarding what employees are permitted to post on social media, especially about their work and workplace. ... The problem with social media posts is that there is a thin line between personal privacy and company security. CISOs have to walk that line carefully, keeping the company secure without policing what employees do on their own time. This is why privacy awareness training should be integrated with cybersecurity policies.


Tariffs will hit data centers and cloud providers, but hurt customers

The tariffs applied vary from country to country - with a baseline of 10 percent placed on all imported goods coming into the US - and much higher rates applied to those countries described by Trump as “the worst offenders,” up to 99 percent in the case of the French archipelago Saint Pierre and Miquelon. Most pertinent to the cloud computing industry, however, are the tariffs that will hit countries that provide essential computing hardware and the materials necessary for data center construction. ... While cloud service providers (CSPs) will certainly be hit by the inevitable rising costs, it is hard to think of the hyperscalers as the "victims" in this story. Microsoft, Amazon, and Alphabet all sit in the top five companies by market cap, and none has taken a particularly drastic hit to its stock value since the tariffs were announced. ... "The high tariffs on servers and other IT equipment imported from China and Taiwan are highly likely to increase CSPs' costs. If CSPs pass on cost increases, customers may feel trapped (because of lock-in) and disillusioned with cloud and their provider (because they've committed to building on a cloud provider assuming costs would be constant or even decline over time). On the other hand, if CSPs don't increase prices with rising costs, their margins will decline. It's a no-win situation," Rogers explained.