Daily Tech Digest - May 16, 2025


Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye


AI Agents: Protocols Driving Next-Gen Enterprise Intelligence

MCP substantially simplifies agentic AI adoption for developers. The roadmap created by the MCP community clearly defines priorities and direction, providing helpful guidance for implementation. Organizations will also benefit from the key initiatives outlined in the roadmap, like the MCP Registry, which enables developers to build a comprehensive network of agents. The emergence of OAuth as a complementary standard protocol strengthens agent ecosystems further. As with any other framework, MCP has its challenges: it offers a wide array of tools to support LLM reasoning, but it doesn’t prioritize coordinated, high-quality task execution. ... ACP will make it easier to implement AI agents on edge and local devices. In instances where the majority of decision-making happens “on the go” in a disconnected environment, this protocol will be useful. Developers can now build modular systems that coordinate through a standard protocol, making edge AI easier to deliver. A2A will gain momentum and enable cross-platform agents to work together to deliver superior intelligence to customers. A2A will help coordinate agents built on diverse frameworks around a common standard. The main requirement is an Agent Card, which allows an agent to be discovered and consumed by others, as sketched below.
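
As a rough illustration of that last point, an A2A Agent Card is a small machine-readable document (commonly JSON served from a well-known URL) that advertises an agent's identity, endpoint, and skills so other agents can discover and invoke it. The sketch below expresses a minimal, hypothetical card as a Python dict; the field names and the /.well-known/agent.json path follow common A2A examples but are assumptions to be checked against the current spec.

```python
import json

# A minimal, hypothetical A2A Agent Card. Field names follow common
# A2A examples (name, url, capabilities, skills), but the authoritative
# schema lives in the A2A specification -- treat this as a sketch.
agent_card = {
    "name": "invoice-extractor",
    "description": "Extracts line items and totals from invoice PDFs.",
    "url": "https://agents.example.com/invoice-extractor",  # task endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": False, "pushNotifications": False},
    "skills": [
        {
            "id": "extract-invoice",
            "name": "Extract invoice fields",
            "inputModes": ["application/pdf"],
            "outputModes": ["application/json"],
        }
    ],
}

# Publishing the card is typically just serving this JSON at a
# well-known path, e.g. https://agents.example.com/.well-known/agent.json
print(json.dumps(agent_card, indent=2))
```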


Critical Infrastructure Under Siege: OT Security Still Lags

Industrial organizations and other kinds of critical infrastructure are regularly near or at the top of vendor lists highlighting ransomware targets. It's easy to see why: the important assets a threat actor could compromise put immense pressure on affected organizations to pay up. Kurt Gaudette, vice president of intelligence and services at Dragos, tells Dark Reading that the OT side of the house is "where the bottom line is." And indeed, Sophos reported last year that 65% of respondent organizations in the manufacturing sector suffered a ransomware attack in the year preceding the report; of those, 62% paid the ransom. Compounding this, the security postures of organizations that use OT/ICS can vary dramatically from those of traditional IT environments. For one, the importance of staying patched is complicated by the reality that some industrial processes are meant to run uninterrupted for long periods and can't be subjected to the downtime necessary to patch. Second, an organization like a local water treatment plant might not have a significant security budget to invest in tools and personnel. Finally, ICS products tend to be expensive, and aging equipment is everywhere, with many fields like healthcare drowning in legacy, hard-to-patch products or those without built-in security features.


Your Security Training Isn't Wrong. The Content Is Just Outdated

Although AI makes threats harder to detect, many breaches aren't caused by sophisticated hacking. They happen because organizations might not realize employees let their kids play Minecraft on their corporate laptops, or that an old server or forgotten IoT device is still online. If IT doesn't know an asset exists, or who uses it, the team can't secure it, and hackers look for forgotten, unmonitored devices to break in. ... Managing and securing multiple systems can tempt employees to reuse passwords for simplicity. If employees continue to avoid tools like corporate password managers that enforce strong, unique passwords, IT teams need to ask themselves why. How can they make warnings about this more impactful without burdening staff? ... The trouble is that, even with corporate password managers and MFA in place, hackers are still finding ways to steal credentials. These tools are designed to keep intruders out of your home, but if the door is left open, they won't stop anyone from walking in. The average annual growth rate of exposed accounts is 28%. Risk-based session expiration and adaptive access policies can trigger forced sign-outs if a session shows abnormal behavior (e.g., logging in from a new IP while still active on another), helping reduce account session takeovers.
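
To make that last point concrete, here is a minimal sketch of the kind of adaptive check described above: if a credential starts a session from a new IP while another session is still active elsewhere, the policy forces a sign-out and requires re-authentication. The in-memory session store, activity window, and function names are hypothetical placeholders, not any specific product's API.

```python
from dataclasses import dataclass
import time

@dataclass
class Session:
    user: str
    ip: str
    last_seen: float  # Unix timestamp of last activity

# Hypothetical in-memory session store; a real system would use a
# shared store (e.g., Redis) fed by the identity provider.
active_sessions: list[Session] = []

ACTIVE_WINDOW = 15 * 60  # a session counts as "active" if seen in the last 15 min

def on_login(user: str, ip: str) -> bool:
    """Return True if the login is allowed, False if it forced sign-outs."""
    now = time.time()
    concurrent = [
        s for s in active_sessions
        if s.user == user and s.ip != ip and now - s.last_seen < ACTIVE_WINDOW
    ]
    if concurrent:
        # Abnormal: new IP while still active elsewhere -> revoke all
        # sessions and require re-authentication (ideally with step-up MFA).
        active_sessions[:] = [s for s in active_sessions if s.user != user]
        print(f"forced sign-out for {user}: login from {ip} "
              f"while active on {[s.ip for s in concurrent]}")
        return False
    active_sessions.append(Session(user, ip, now))
    return True

# Usage: a second login from a new IP within the window triggers revocation.
on_login("alice", "203.0.113.5")
on_login("alice", "198.51.100.9")
```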


Check Point CISO: Network segregation can prevent blackouts, disruptions

In 2025, industry watchers expect an increase in public budgets allocated to defense; in Spain, one-third of that budget will go to strengthening cybersecurity. But for Fischbein, training teams is much more important than the budget. “The challenge is to distribute the budget in a way that can be managed,” he notes, and to leverage intuitive, easy-to-use platforms so that organizations don’t have to invest all the money in training. “When you have information, management, users, devices, mobiles, data centers, clouds, cameras, printers… the security challenge is very complex,” he says. ... “In a security operations center (SOC), a person using Check Point tools could previously take between two and four hours to investigate the causes of an alert. Today that time has dropped to 20 minutes,” he says. He also explains how they handle vulnerabilities: “Currently, Check Point checks all of them in a few seconds and tells you whether you are protected or not. And if you are not, it tells you which network to protect.” Regarding attackers, he acknowledges that their attacks are now “richer and more logical.” “With AI, they comb through the data and social networks of a target to impersonate a friend, because when someone receives something more personal, they lower their defenses against phishing,” he says.


The Future (and Past) of Child Online Safety Legislation: Who Minds the Implementation Gap?

Acknowledging the limitations of exclusively using ID as a form of verification, many state bills, including those in Montana, Louisiana, Arkansas, Utah, and New York, have left the door open for “commercially reasonable” age verification methods. However, they give very little clarification as to what should be considered “commercially reasonable.” Utah's bill, for example, specifies only that these options can “[rely] on public or private transactional data to verify the age of the person attempting to access the material.” ... Throughout all of these bills, there is no insight into what type of data is permissible, how this data should be sourced, or any consent mechanisms for leveraging the data. By leaving a loophole open for undefined measures of age verification, there is a risk that potentially invasive and privacy-violating data, such as biometric data, will be required of everyone who intends to access social media platforms. Not only could this compromise people’s ability to remain anonymous on the internet, but it could also lead to the consolidation of uniquely identifiable sensitive data within the entities performing these verifications. To combat this, all bills with specifications for commercially reasonable age verification methods prohibit the data used for verification from being stored or retained after verification is complete.


Beyond Code Coverage: A Risk-Driven Revolution in Software Testing With Machine Learning

Risk-based testing (RBT) weights testing effort by importance instead of checking every factor equally. It evaluates potential flaws based on failure impact, likelihood of failure, and business criticality. This approach ensures efficient resource management and improves software reliability by:

- Focusing on Critical Areas: Instead of testing everything equally, RBT ensures that high-risk components receive the most attention.
- Evaluating Failure Impact: Identifies and tests areas where defects could cause significant damage.
- Assessing Likelihood of Failure: Targets unstable parts of the software by analyzing complexity, frequent changes, and past defects.
- Prioritizing Business-Critical Functions: Ensures essential systems like payment processing remain stable and reliable.
- Optimizing Resources and Time: Reduces unnecessary testing effort, allowing teams to focus on what matters most.
- Improving Software Dependability: Detects major issues early, leading to more stable and reliable software.

... Machine learning improves software testing by examining prior data (code changes, bug reports, and test results) to identify high-risk areas. It gives key tests top priority, finds anomalies before failures start, and keeps improving with fresh data. By automating risk assessment, ML speeds up testing, improves accuracy, makes the most of resources, and makes software testing smarter and more effective, as sketched below.
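
As a minimal sketch of the scoring idea (not any particular RBT tool), each component gets a risk score built from failure impact, likelihood (here crudely approximated by recent churn and past defects), and business criticality, and tests are ordered by that score. The weights, feature names, and example data are illustrative assumptions; in an ML setup the weights would be learned from historical code changes, bug reports, and test results.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    impact: float       # damage if it fails, 0..1
    churn: float        # recent change frequency, 0..1 (likelihood proxy)
    past_defects: int   # historical bug count
    criticality: float  # business criticality, 0..1

def risk_score(c: Component) -> float:
    # Likelihood approximated from churn and defect history; these
    # hand-tuned weights stand in for what a trained model would learn.
    likelihood = min(1.0, 0.6 * c.churn + 0.1 * c.past_defects)
    return c.impact * likelihood * c.criticality

components = [
    Component("payment-processing", impact=0.9, churn=0.4, past_defects=3, criticality=1.0),
    Component("report-export",      impact=0.3, churn=0.7, past_defects=1, criticality=0.4),
    Component("user-settings",      impact=0.2, churn=0.1, past_defects=0, criticality=0.3),
]

# Run the riskiest components' tests first.
for c in sorted(components, key=risk_score, reverse=True):
    print(f"{c.name}: risk={risk_score(c):.2f}")
```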


Integrating Cybersecurity Into Change Management for Critical Infrastructure

The cyber MOC (management of change) process specifically targets changes affecting connected and configurable technologies, such as PLCs, IIoT devices, and network switches. The specific implementation of this process will vary with the organization’s structure and operational needs, as will the composition of the teams responsible for its execution. The reality is that many existing MOC frameworks were conceived before cybersecurity became a critical concern. Consequently, they often prioritize physical safety, leaving a significant gap in addressing potential cyber vulnerabilities. Traditional MOC tools, designed to support these processes, lack the mechanisms needed to evaluate changes that could compromise cybersecurity. This oversight is a significant risk, particularly as infrastructure organizations become increasingly reliant on interconnected technologies. To bridge this gap, a fundamental shift is required: MOC tools and workflows must be revamped to incorporate cybersecurity considerations. While preserving core data fields and attributes, new fields must be introduced to capture cyber-related information. Similarly, RACI (responsible, accountable, consulted, and informed) matrices, which define responsibilities, must be expanded to include cyber risk accountability.
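
One way to picture the revamp described above: keep the classic MOC fields and add cyber-specific ones, plus a RACI entry that gives cyber risk an explicit owner. The record below is a hypothetical sketch under those assumptions, not a schema from any actual MOC product; every field and role name is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    # Core MOC fields most existing tools already capture
    change_id: str
    asset: str
    description: str
    safety_impact: str

    # New cyber-specific fields of the kind the article argues for
    network_exposure: str = "unknown"      # e.g., "isolated", "OT VLAN", "internet-facing"
    firmware_change: bool = False
    config_baseline_delta: str = ""        # deviation from the hardened baseline
    cyber_risk_rating: str = "unassessed"  # low / medium / high

    # RACI matrix expanded with explicit cyber risk accountability
    raci: dict[str, str] = field(default_factory=lambda: {
        "responsible": "control engineer",
        "accountable": "plant manager",
        "consulted": "OT security lead",   # the new cyber seat at the table
        "informed": "CISO office",
    })

# Example: a PLC firmware update now carries cyber context through approval.
cr = ChangeRequest(
    change_id="MOC-2025-0417",
    asset="PLC-7 (boiler line)",
    description="Vendor firmware update 3.2 -> 3.4",
    safety_impact="none expected",
    network_exposure="OT VLAN",
    firmware_change=True,
    cyber_risk_rating="medium",
)
print(cr.cyber_risk_rating, cr.raci["consulted"])
```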


Deepfake attacks could cost you more than money

Treat deepfakes like any other cyber threat and apply a zero-trust mindset. That means don't assume anything is real just because it looks or sounds convincing. Update your response plan to include steps for verifying video or audio content, especially if it's being used to request sensitive actions. Build a risk model that considers how deepfakes could be used to target critical business processes, such as executive communications, financial approvals, or customer interactions. Make sure your team knows how to spot red flags, whom to alert, and how to document the incident. Use detection tools that can scan media in real time and save flagged content for review. The faster you can identify and act, the more damage you can prevent. In today's environment, it's safer to question first and trust only after you verify. ... Deepfake awareness should be built into regular training so employees can spot warning signs early. Detection tools can support teams by scanning and flagging suspicious media in real time, helping them make faster, safer decisions. Incident response plans must also cover how to escalate, preserve evidence, and communicate if a deepfake is suspected. At the end of the day, questioning unusual communications must become the norm, not the exception.
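
To ground the "verify before you trust" step, here is a hedged sketch of a gate that blocks sensitive actions requested over audio or video until an out-of-band check succeeds and preserves flagged media for review. The action list, threshold, and both helper functions are hypothetical placeholders; a real deployment would call an actual media-forensics service and a real callback workflow.

```python
SENSITIVE_ACTIONS = {"wire_transfer", "credential_reset", "vendor_change"}

def deepfake_risk(media_path: str) -> float:
    """Placeholder for a real media-forensics detector (returns 0..1)."""
    return 0.8  # assume the detector flagged this clip

def out_of_band_verified(requester: str) -> bool:
    """Placeholder: e.g., call the requester back on a known-good number."""
    return False

def quarantine(media_path: str) -> None:
    # Preserve the flagged media as evidence, per the incident response plan.
    print(f"saved {media_path} for forensic review")

def handle_request(action: str, media_path: str, requester: str) -> str:
    if action not in SENSITIVE_ACTIONS:
        return "proceed"
    if deepfake_risk(media_path) > 0.5 and not out_of_band_verified(requester):
        quarantine(media_path)
        return "blocked: escalate to incident response"
    return "proceed after verification"

# Usage: a flagged video requesting a wire transfer is blocked and escalated.
print(handle_request("wire_transfer", "ceo_call.mp4", "cfo@example.com"))
```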


Secure Code Development News to Celebrate

Another big payoff comes from paying down security debt. Wysopal said organizations with the most mature secure development practices fix 10% of their vulnerabilities on an annual basis and avoid carrying any security debt that is more than a year old. By contrast, "the lagging companies fix less than 1% of open bugs per month," he said. This strategy isn't always feasible. Notably, "we found that 70% of critical debt was in third-party code," and teams that build software with third-party - or sometimes fourth- or fifth-party - dependencies sometimes must wait months for fixes to become available, Wysopal said. "Some software packages that are widely used by other software packages are harder to fix, so you have a lot of what we call transitive dependencies." There's no easy solution for this challenge. "When you're using open source, you're really dependent on the fixing speed of another team that is not getting paid, and they're just doing it because they love to do that project," he said. ... Another wrinkle is that more code is being built by artificial intelligence tools - Google and Microsoft each say roughly a third of their code is AI-generated. Developers report being more productive, shipping on average 50% more code when they use AI tools. Wysopal said such AI tools appear to produce code with vulnerabilities at the same rate as classical development tools, so more code shipped means a greater number of vulnerabilities shipped.
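
To illustrate why transitive dependencies slow remediation, the sketch below walks a dependency graph breadth-first and reports how deep each vulnerable package sits: the deeper the package, the more intermediate maintainers must ship fixes before your build can pick one up. The graph, package names, and vulnerability list are entirely hypothetical stand-ins for what a software composition analysis (SCA) scan would produce.

```python
from collections import deque

# Hypothetical dependency graph: package -> direct dependencies.
deps = {
    "my-app": ["web-framework", "report-lib"],
    "web-framework": ["http-client"],
    "report-lib": ["pdf-render"],
    "http-client": ["tls-lib"],
    "pdf-render": [],
    "tls-lib": [],
}
vulnerable = {"tls-lib", "pdf-render"}  # e.g., flagged by an SCA scan

def vuln_depths(root: str) -> dict[str, int]:
    """BFS from the root; depth 1 = direct dependency, >1 = transitive."""
    depths, queue = {}, deque([(root, 0)])
    while queue:
        pkg, d = queue.popleft()
        for child in deps.get(pkg, []):
            if child not in depths:
                depths[child] = d + 1
                queue.append((child, d + 1))
    return {p: d for p, d in depths.items() if p in vulnerable}

for pkg, depth in vuln_depths("my-app").items():
    hops = "direct" if depth == 1 else f"{depth} hops down"
    print(f"{pkg}: vulnerable, {hops} -- fix must propagate "
          f"through {depth - 1} intermediate maintainer(s)")
```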


Powering the AI revolution: Legal and infrastructure challenges for data center development

Developing and operating AI-ready data centers requires specialized legal expertise across multiple disciplines. Financing attorneys provide guidance in structuring capital arrangements that support data center development, which requires substantial upfront investment before any operational revenue is generated. Capital arrangements must incorporate sufficient flexibility to accommodate the rapid evolution of AI technology and the unique power supply challenges at each individual site. Energy lawyers guide PPA (power purchase agreement) negotiations, facilitate utility discussions, manage interconnection filings with relevant authorities, and resolve rate disputes when they arise. Their specialized work ensures that facilities maintain access to reliable, cost-effective power that meets operational requirements under all anticipated conditions. As regulatory approaches to AI infrastructure continue to evolve, energy counsel must stay current on emerging policies and their potential impact on both existing and future facilities. Technology and intellectual property specialists address essential operational aspects of data centers, including complex licensing arrangements, service level agreements, comprehensive data governance frameworks, and cross-border data flow compliance strategies.
