Daily Tech Digest - May 17, 2025


Quote for the day:

“Only those who dare to fail greatly can ever achieve greatly.” -- Robert F.


Top 10 Best Practices for Effective Data Protection

Your first instinct may be to try to keep up with all your data, but this may be a fool's errand. The key to success is to have classification capabilities everywhere data moves, and rely on your DLP policy to jump in when risk arises. Automation in data classification is becoming a lifesaver thanks to the power of AI. AI-powered classification can be faster and more accurate than traditional ways of classifying data with DLP. Ensure any solution you are evaluating can use AI to instantly uncover and discover data without human input. ... Data loss prevention (DLP) technology is the core of any data protection program. That said, keep in mind that DLP is only a subset of a larger data protection solution. DLP enables the classification of data (along with AI) to ensure you can accurately find sensitive data. Ensure your DLP engine can consistently alert correctly on the same piece of data across devices, networks, and clouds. The best way to ensure this is to embrace a centralized DLP engine that can cover all channels at once. Avoid point products that bring their own DLP engine, as this can lead to multiple alerts on one piece of moving data, slowing down incident management and response. Look to embrace Gartner's security service edge approach, which delivers DLP from a centralized cloud service. 


4 Keys To Successful Change Management From The Bain Playbook

From the start, Bain was crystal clear about its case for change, according to Razdan. The company prioritized change management, which meant IT partnering with finance; it also meant cultivating a mindset conducive to change. “We owned the change; we identified a group of high performers within our finance and our IT teams. This community of super-users could readily identify and deal with any of the problems that typically arise in an implementation of this size and scale,” Mackey said. “This was less just changing their technology; it’s changing employee behaviors and setting us up for how we want to grow and change processes going forward.” ... “We actually set up a program to be always measuring the value,” Razdan said. “You have internal stakeholders, you have external stakeholders, you have partnerships; we kind of built an ecosystem of governance and partnership that enabled us to keep everybody on the same page because transparency and communication is critical to success.” Gauging progress via transparent key performance indicators was all the more impressive, given that most of this happened during the worldwide, pandemic-driven move to remote work. “We could assess the implementation, as we went through it, to keep us on track [and] course correct,” Mackey said. 


Emerging AI security risks exposed in Pangea's global study

A significant finding was the non-deterministic nature of large language model (LLM) security. Prompt injection attacks, a method where attackers manipulate input to provoke undesired responses from AI systems, were found to succeed unpredictably. An attack that fails 99 times could succeed on the 100th attempt with identical input, due to the underlying randomness in LLM processing. The study also revealed substantial risks of data leakage and adversarial reconnaissance. Attackers using prompt injection can manipulate AI models to disclose sensitive information or contextual details about the environment in which the system operates, such as server types and network access configurations. 'This challenge has given us unprecedented visibility into real-world tactics attackers are using against AI applications today,' said Oliver Friedrichs, Co-Founder and Chief Executive Officer of Pangea. 'The scale and sophistication of attacks we observed reveal the vast and rapidly evolving nature of AI security threats. Defending against these threats must be a core consideration for security teams, not a checkbox or afterthought.' Findings indicated that basic defences, such as native LLM guardrails, left organisations particularly exposed. 


Dynamic DNS Emerges as Go-to Cyberattack Facilitator

Dynamic DNS (DDNS) services automatically update a domain name's DNS records in real-time when the Internet service provider changes the IP address. Real-time updating for DNS records wasn't needed in the early days of the Internet when static IP addresses were the norm. ... It sounds simple enough, yet bad actors have abused the services for years. More recently, though, cybersecurity vendors have observed an increase in such activity, especially this year. The notorious cybercriminal collective Scattered Spider, for instance, has turned to DDNS to obfuscate its malicious activity and impersonate well-known brands in social engineering attacks. This trend has some experts concerned about a rise in abuse and a surge in "rentable" subdomains. ... In an example of an observed attack, Scattered Spider actors established a new subdomain, klv1.it[.]com, designed to impersonate a similar domain, klv1.io, for Klaviyo, a Boston-based marketing automation company. Silent Push's report noted that the malicious domain had just five detections on VirusTotal at the time of publication. The company also said the use of publicly rentable subdomains presents challenges for security researchers. "This has been something that a lot of threat actors do — they use these services because they won't have domain registration fingerprints, and it makes it harder to track them," says Zach Edwards, senior threat researcher at Silent Push.


The Growing and Changing Threat of Deepfake Attacks

To ensure their deepfake attacks are convincing, malicious actors are increasingly focusing on more believable delivery, enhanced methods, such as phone number spoofing, SIM swapping, malicious recruitment accounts and information-stealing malware. These methods allow actors to convincingly deliver deepfakes and significantly increase a ploy’s overall credibility. ... High-value deepfake targets, such as C-suite executives, key data custodians, or other significant employees, often have moderate to high volumes of data available publicly. In particular, employees appearing on podcasts, giving interviews, attending conferences, or uploading videos expose significant volumes of moderate- to high-quality data for use in deepfakes. This dictates that understanding individual data exposure becomes a key part of accurately assessing the overall enterprise risk of deepfakes. Furthermore, ACI research indicates industries such as consulting, financial services, technology, insurance and government often have sufficient publicly available data to enable medium-to high-quality deepfakes. Ransomware groups are also continuously leaking a high volume of enterprise data. This information can help fuel deepfake content to “talk” about genuine internal documents, employee relationships and other internal details. 


Binary Size Matters: The Challenges of Fitting Complex Applications in Storage-Constrained Devices

Although we are here focusing on software, it is important to say that software does not run in a vacuum. Having an understanding of the hardware our programs run on and even how hardware is developed can offer important insights into how to tackle programming challenges. In the software world, we have a more iterative process, new features and fixes can usually be incorporated later in the form of over-the-air updates, for example. That is not the case with hardware. Design errors and faults in hardware can at the very best be mitigated with considerable performance penalties. These errors can introduce the meltdown and spectre vulnerabilities, or render the whole device unusable. Therefore the hardware design phase has a much longer and rigorous process before release than the software design phase. This rigorous process also impacts design decisions in terms of optimizations and computational power. Once you define a layout and bill of materials for your device, the expectation is to keep this constant for production as long as possible in order to reduce costs. Embedded hardware platforms are designed to be very cost-effective. Designing a product whose specifications such as memory or I/O count are wasted also means a cost increase in an industry where every cent in the bill of materials matters.


Cyber Insurance Applications: How vCISOs Bridge the Gap for SMBs

Proactive risk evaluation is a game-changer for SMBs seeking to maintain robust insurance coverage. vCISOs conduct regular risk assessments to quantify an organization’s security posture and benchmark it against industry standards. This not only identifies areas for improvement but also helps maintain compliance with evolving insurer expectations. Routine audits—led by vCISOs—keep security controls effective and relevant. Third-party risk evaluations are particularly valuable, given the rise in supply chain attacks. By ensuring vendors meet security standards, SMBs reduce their overall risk profile and strengthen their position during insurance applications and renewals. Employee training programs also play a critical role. By educating staff on phishing, social engineering, and other common threats, vCISOs help prevent incidents before they occur. ... For SMBs, navigating the cyber insurance landscape is no longer just a box-checking exercise. Insurers demand detailed evidence of security measures, continuous improvement, and alignment with industry best practices. vCISOs bring the technical expertise and strategic perspective necessary to meet these demands while empowering SMBs to strengthen their overall security posture.


How to establish an effective AI GRC framework

Because AI introduces risks that traditional GRC frameworks may not fully address, such as algorithmic bias and lack of transparency and accountability for AI-driven decisions, an AI GRC framework helps organizations proactively identify, assess, and mitigate these risks, says Heather Clauson Haughian, co-founding partner at CM Law, who focuses on AI technology, data privacy, and cybersecurity. “Other types of risks that an AI GRC framework can help mitigate include things such as security vulnerabilities where AI systems can be manipulated or exposed to data breaches, as well as operational failures when AI errors lead to costly business disruptions or reputational harm,” Haughian says. ... Model governance and lifecycle management are also key components of an effective AI GRC strategy, Haughian says. “This would cover the entire AI model lifecycle, from data acquisition and model development to deployment, monitoring, and retirement,” she says. This practice will help ensure AI models are reliable, accurate, and consistently perform as expected, mitigating risks associated with model drift or errors, Haughian says. ... Good policies balance out the risks and opportunities that AI and other emerging technologies, including those requiring massive data, can provide, Podnar says. “Most organizations don’t document their deliberate boundaries via policy,” Podnar says. 


How to Keep a Consultant from Stealing Your Idea

The best defense is a good offense, Thirmal says. Before sharing any sensitive information, get the consultant to sign a non-disclosure agreement (NDA) and, if needed, a non-compete agreement. "These legal documents set clear boundaries on what can and can't do with your ideas." He also recommends retaining records -- meeting notes, emails, and timestamps -- to provide documented proof of when and where the idea in question was discussed. ... If a consultant takes an idea and commercializes it, or shares it with a competitor, it's time to consult legal counsel, Paskalev says. The legal case's strength will hinge on the exact wording within contracts and documentation. "Sometimes, a well-crafted cease-and-desist letter is enough; other times, litigation is required." ... The best way to protect ideas isn't through contracts -- it's by being proactive, Thirmal advises. "Train your team to be careful about what they share, work with consultants who have strong reputations, and document everything," he states. "Protecting innovation isn’t just a legal issue -- it's a strategic one." Innovation is an IT leader's greatest asset, but it's also highly vulnerable, Paskalev says. "By proactively structuring consultant agreements, meticulously documenting every stage of idea development, and being ready to enforce protection, organizations can ensure their competitive edge."


Even the Strongest Leaders Burn Out — Here's the Best Way to Shake the Fatigue

One of the most overlooked challenges in leadership is the inability to step back from the work and see the full picture. We become so immersed in the daily fires, the high-stakes meetings, the make-or-break moments, that we lose the ability to assess the battlefield objectively. The ocean, or any intense, immersive activity, provides that critical reset. But stepping away isn't just about swimming in the ocean. It's about breaking patterns. Leaders are often stuck in cycles — endless meetings, fire drills, back-to-back calls. The constant urgency can trick you into believing that everything is critical. That's why you need moments that pull you out of the daily grind, forcing you to reset before stepping back in. This is where intentional recovery becomes a strategic advantage. Top-performing leaders across industries — from venture capitalists to startup founders — intentionally carve out time for activities that challenge them in different ways. ... The most effective leaders understand that managing their energy is just as important as managing their time. When energy levels dip, cognitive function suffers, and decision-making becomes less strategic. That's why companies known for their progressive workplace cultures integrate mindfulness practices, outdoor retreats and wellness programs — not as perks, but as necessary investments in long-term performance.

Daily Tech Digest - May 16, 2025


Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye


AI Agents: Protocols Driving Next-Gen Enterprise Intelligence

MCP substantially simplifies agentic AI adoption for developers. This roadmap created by the MCP community clearly defines priorities and direction, providing helpful guidance for implementation. Organizations will also benefit from the key initiatives outlined in the roadmap, like the MCP Registry, which enables developers to build a comprehensive network of agents. The emergence of OAuth as a complementary standard protocol strengthens agent ecosystems even more. As with any other framework, MCP has its challenges. MCP offers a wide array of tools to support LLM reasoning, but it doesn’t prioritize coordinated, high-quality task execution. ... ACP will make it easier to implement AI agents on edge and local devices. In instances where the majority of decision-making happens “on the go” in a disconnected environment, this protocol will be useful. Now, developers can build modular systems that can coordinate with a standard protocol to make edge AI easier. A2A will gain momentum and enable cross-platform agents to work together to deliver superior intelligence to customers. A2A will help coordinate agents built using diverse frameworks with a common standard. The main requirement for this is to build an Agent Card that allows agents to be used and consumed by others.


Critical Infrastructure Under Siege: OT Security Still Lags

Industrial organizations and other kinds of critical infrastructure are regularly near or at the top of vendor lists highlighting ransomware targets. It's easy to see why; the important assets a threat actor could compromise put immense pressure on affected organizations to pay up. Kurt Gaudette, vice president of intelligence and services at Dragos, tells Dark Reading that the OT side of the house is "where the bottom line is." And indeed, Sophos reported last year that 65% of respondent organizations in the manufacturing sector reported that they suffered a ransomware attack in the year preceding the report; of those, 62% of organizations paid the ransom. Compounding this, the security postures of organizations that use OT/ICS can vary dramatically compared with traditional IT settings. The importance of staying patched is complicated by the reality that some industrial processes are meant to run uninterrupted for long periods of time and can't be subjected to the downtime necessary to patch. Second, an organization like a local water treatment plant might not have a significant security budget to invest in tools and personnel. Also, ICS products tend to be expensive, and aging equipment is everywhere, with many fields like healthcare drowning in legacy, hard-to-patch products or those without built-in security features.


Your Security Training Isn't Wrong. The Content Is Just Outdated

Although AI makes threats harder to detect, many breaches aren't caused by sophisticated hacking. They happen because organizations might not realize employees let their kids play Minecraft on their corporate laptops, or an old server or forgotten IoT device is still online. If IT doesn't know an asset exists, or who uses it, the team can't secure it, and hackers look for forgotten, unmonitored devices to break in. ... Managing and securing multiple systems can tempt employees to repeat passwords for simplicity. If employees continue to avoid using tools like corporate password managers to enforce strong, unique passwords, IT teams need to ask themselves why. How can they make warnings about this more impactful without burdening staff? ... The trouble is that, even with corporate password managers and MFA in place, hackers are still finding ways to steal credentials. These tools are designed to prevent hackers from entering your home, but if the door is left open, they won't stop anyone from walking in. The average annual growth rate of exposed accounts is 28%. Session expiration policies based on risk level and adaptive access policies can trigger forced signouts if a session shows abnormal behavior (e.g., logging in from a new IP while still active on another), which will help reduce account session takeovers.


Check Point CISO: Network segregation can prevent blackouts, disruptions

In 2025, industry watchers expect there will be an increase in the public budget allocated to defense. In Spain, one-third of the budget will be allocated to increasing cybersecurity. But for Fischbein, training teams is much more important than the budget. “The challenge is to distribute the budget in a way that can be managed,” he notes, and to leverage intuitive and easy-to-use platforms, so that organizations don’t have to invest all the money in training. “When you have information, management, users, devices, mobiles, data centers, clouds, cameras, printers… the security challenge is very complex. ” he says. ” ... “In a security operations center (SOC), a person using Check Point tools could previously take between two and four hours to investigate the causes of an alert. Today that time has dropped to 20 minutes,” he says. He also explains how they work with vulnerabilities. “Currently, Check Point checks all of them in a few seconds and tells you whether you are protected or not. And if you are not, it tells you which network to protect.” Regarding attackers, he acknowledges that they now make “richer and more logical” attacks. “With AI, they check the data and social networks of any person to impersonate a friend of the attacked person, because when someone receives something more personal they lower the defenses against phishing,” he says.


The Future (and Past) of Child Online Safety Legislation: Who Minds the Implementation Gap?

Acknowledging the limitations of exclusively using ID as a form of verification, many state bills, including Montana, Louisiana, Arkansas, Utah, and New York, have left the door open for “commercially reasonable” age verification methods. However, they give very little clarification as to what should be considered “commercially reasonable”. For example, in Utah, they only specify that these options can, “[rely] on public or private transactional data to verify the age of the person attempting to access the material.” ... Throughout all of these bills, there is no insight as to what type of data is permissible, how this data should be sourced, or any consent mechanisms for leveraging the data. By leaving a loophole open for undefined measures of age verification, there is a risk of potentially invasive and privacy-violating data, such as biometric data, being required of everyone who intends to access social media platforms. Not only could this potentially compromise people’s ability to remain anonymous on the internet, but it could also lead to the consolidation of uniquely identifiable sensitive data within the entities performing these verifications. To combat this, all bills with specifications for commercially reasonable age verification methods prohibit the data being used for verification from being stored or retained after verification is complete.


Beyond Code Coverage: A Risk-Driven Revolution in Software Testing With Machine Learning

Risk-based testing measures the importance of criteria instead of conducting equal checks for every factor. It evaluates potential flaws based on failure impact, likelihood of failure, and business criticality. This approach ensures efficient resource management and improves software reliability by: Focusing on Critical Areas: Instead of testing everything equally, RBT ensures that high-risk components receive the most attention. Evaluating Failure Impact: Identifies and tests areas where defects could cause significant damage. Assessing Likelihood of Failure: Targets unstable parts of the software by analyzing complexity, frequent changes, and past defects. Prioritizing Business-Critical Functions: Ensures essential systems like payment processing remain stable and reliable. Optimizing Resources and Time: Reduces unnecessary testing efforts, allowing teams to focus on what matters most. Improving Software Dependability: Detects major issues early, leading to more stable and reliable software. ... Machine learning improves software testing by examining prior data (code changes, bug reports, and test results) to identify high-risk locations. It gives key tests top priority; it finds anomalies before failures start; it keeps getting better with fresh data. Automating risk assessment helps ML speed tests, improve accuracy, maximize resources, and make software testing smarter and more effective.


Integrating Cybersecurity Into Change Management for Critical Infrastructure

The cyber MOC specifically targets changes affecting connected and configurable technologies, such as PLCs, IIoT devices, and network switches. The specific implementation of this process will vary depending on the organization’s structure and operational needs, as will the composition of the teams responsible for its execution. The reality is that many existing MOC frameworks were conceived before cybersecurity became a critical concern. Consequently, they often prioritize physical safety, leaving a significant gap in addressing potential cyber vulnerabilities. Traditional MOC tools, designed to support these processes, lack the necessary mechanisms to evaluate changes that could compromise cybersecurity. This oversight is a significant risk, particularly as infrastructure organizations become increasingly reliant on interconnected technologies. To bridge this gap, a fundamental shift is required. MOC tools and workflows must be revamped to incorporate cybersecurity considerations. While preserving core data fields and attributes, new fields must be introduced to capture cyber-related information. Similarly, RACI (responsible, accountable, consulted, and informed) matrices, which define responsibilities, must be expanded to include cyber risk accountability.


Deepfake attacks could cost you more than money

Treat deepfakes like any other cyber threat and apply a zero-trust mindset. That means don’t assume anything is real just because it looks or sounds convincing. Update your response plan to include steps for verifying video or audio content, especially if it’s being used to request sensitive actions. Build a risk model that considers how deepfakes could be used to target critical business processes, such as executive communications, financial approvals, or customer interactions. Make sure your team knows how to spot red flags, who to alert, and how to document the incident. Use detection tools that can scan media in real time and save flagged content for review. The faster you can identify and act, the more damage you can prevent. In today’s environment, it’s safer to question first and trust only after you verify. ... Deepfake awareness should be built into regular training so employees can spot warning signs early. Utilizing the detection tools to support teams by scanning and flagging suspicious media in real time, helping them make faster, safer decisions. Incident response plans must also cover how to escalate, preserve evidence, and communicate if a deepfake is suspected. At the end of the day, questioning unusual communications must become the norm, not the exception


Secure Code Development News to Celebrate

Another big payoff comes from paying down security debt. Wysopal said organizations with the most mature secure development practices fix 10% of their vulnerabilities on an annual basis and avoid having any security debt that is more than a year old. By contrast, "the lagging companies fix less than 1% of open bugs per month," he said. This strategy isn't always feasible. Notably, "we found that 70% of critical debt was in third-party code," and teams that built software with third-party - or sometimes fourth or fifth party - dependencies sometimes must wait months for fixes to become available, Wysopal said. "Some software packages that are widely used by other software packages are harder to fix, so you have a lot what we call transitive dependencies." There's no easy solution for this challenge. "When you're using open source, you're really dependent on the fixing speed of another team that is not getting paid, and they're just doing it because they love to do that project," he said. ... Another wrinkle is that more code is built by artificial intelligence tools - Google and Microsoft each say roughly a third of their code is AI-generated. Developers report being more productive, shipping on average 50% more code when they use AI tools. Wysopal said such AI tools appear to produce code with vulnerabilities at the same rate as classical development tools. More code shipped risks a greater number of vulnerabilities.


Powering the AI revolution: Legal and infrastructure challenges for data center development

Developing and operating AI-ready data centers necessitates specialized legal expertise across multiple disciplines. Financing attorneys provide guidance in structuring capital arrangements that support data center development, which requires substantial upfront investment before generating any operational revenue. Capital arrangements must incorporate sufficient flexibility to accommodate the rapid evolution of AI technology availability and unique power supply challenges at an individual site. Energy lawyers guide PPA negotiations, facilitate utility discussions, manage interconnection filings with relevant authorities, and resolve rate disputes when they arise. Their specialized work ensures that facilities maintain access to reliable, cost-effective power resources that meet operational requirements under all anticipated conditions. As regulatory approaches to AI infrastructure continue to evolve, energy counsel must remain current on emerging policies and their potential impact on both existing and future facilities. Technology and intellectual property specialists address essential operational aspects of data centers, including complex licensing arrangements, service level agreements, comprehensive data governance frameworks, and cross-border data flow compliance strategies.

Daily Tech Digest - May 15, 2025


Quote for the day:

“Challenges are what make life interesting and overcoming them is what makes life meaningful.” -- Joshua J. Marine


How to use genAI for requirements gathering and agile user stories

The key to success is engaging end-users and stakeholders in developing the goals and requirements around features and user stories. ... GenAI should help agile teams incorporate more design thinking practices and increase feedback cycles. “GenAI tools are fundamentally shifting the role of product owners and business analysts by enabling them to prototype and iterate on requirements directly within their IDEs rapidly,” says Simon Margolis, Associate CTO at SADA. “This allows for more dynamic collaboration with stakeholders, as they can visualize and refine user stories and acceptance criteria in real time. Instead of being bogged down in documentation, they can focus on strategic alignment and faster delivery, with AI handling the technical translation.” ... “GenAI excels at aligning user stories and acceptance criteria with predefined specs and design guidelines, but the original spark of creativity still comes from humans,” says Ramprakash Ramamoorthy, director of AI research at ManageEngine. “Analysts and product owners should use genAI as a foundational tool rather than relying on it entirely, freeing themselves to explore new ideas and broaden their thinking. The real value lies in experts leveraging AI’s consistency to ground their work, freeing them to innovate and refine the subtleties that machines cannot grasp.”


5 Subtle Indicators Your Development Environment Is Under Siege

As security measures around production environments strengthen, which they have, attackers are shifting left—straight into the software development lifecycle (SDLC). These less-protected and complex environments have become prime targets, where gaps in security can expose sensitive data and derail operations if exploited. That’s why recognizing the warning signs of nefarious behavior is critical. But identification alone isn’t enough—security and development teams must work together to address these risks before attackers exploit them. ... Abnormal spikes in repository cloning activity may indicate potential data exfiltration from Software Configuration Management (SCM) tools. When an identity clones repositories at unexpected volumes or times outside normal usage patterns, it could signal an attempt to collect source code or sensitive project data for unauthorized use. ... While cloning is a normal part of development, a repository that is copied but shows no further activity may indicate an attempt to exfiltrate data rather than legitimate development work. Pull Request approvals from identities lacking repository activity history may indicate compromised accounts or an attempt to bypass code quality safeguards. When changes are approved by users without prior engagement in the repository, it could be a sign of malicious attempts to introduce harmful code or represent reviewers who may overlook critical security vulnerabilities.


Data, agents and governance: Why enterprise architecture needs a new playbook

The rapid evolution of AI and data-centric technologies is forcing organizations to rethink how they structure and govern their information assets. Enterprises are increasingly moving from domain-driven data architectures — where data is owned and managed by business domains — to AI/ML-centric data models that require large-scale, cross-domain integration. Questions arise about whether this transition is compatible with traditional EA practices. The answer: While there are tensions, the shift is not fundamentally at odds with EA but rather demands a significant transformation in how EA operates. ... Governance in an agentic architecture flips the script for EA by shifting focus to defining the domain authority of the agent to participate in an ecosystem. That encompasses the system they can interact with, the commands they can execute, the other agents they can interact with, the cognitive models they rely on and the goals that are set for them. Ensuring agents are good corporate citizens means enterprise architects must engage with business units to set the parameters for what an agent can and cannot do on behalf of the business. Further, the relationship and those parameters must be “tokenized” to authenticate the capacity to execute those actions. 

California’s location data privacy bill aims to reshape digital consent

“We’re really trying to help regulate the use of your geolocation data,” says the bill’s author, Democratic Assemblymember Chris Ward, who represents California’s 78th district, which covers parts of San Diego and surrounding areas. “You should not be able to sell, rent, trade, or lease anybody’s location information to third parties, because nobody signed up for that.” Among types of personal information, location data is especially sensitive. It reveals where people live, work, worship, protest, and seek medical care. It can expose routines, relationships, and vulnerabilities. As stories continue to surface about apps selling location data to brokers, government workers, and even bounty hunters, the conversation has expanded. What was once a debate about privacy has increasingly become a concern over how the exposure of this data infringes upon fundamental civil liberties. “Geolocation is very revealing,” says Justin Brookman, the director of technology policy at Consumer Reports, which supported the legislation. “It tells a lot about you, and it also can be a public safety issue if it gets into the wrong person’s hands.” ... Equally troubling, Ward argues, is who benefits. The companies collecting and selling this data are driven by profit, not transparency. As scholar Shoshana Zuboff has argued, surveillance capitalism doesn’t thrive because users want personalized ads. 


Digital Transformation Expert Discusses Trends

From day one, I emphasise that digital transformation isn’t just about adopting new tools—it’s about aligning those tools with business objectives, improving internal processes, and responding to changing customer expectations. To bring this to life, I use a blended approach that combines theory with real-world practice. Students explore frameworks and models that explain how businesses adapt to technological change, and then apply these to real case studies from global companies, SMEs, and my own entrepreneurial experiences. These examples give them insight into how digital transformation plays out in areas like operations, marketing, and customer relationship management (CRM). Active learning is central to my teaching. I use group work, live problem-solving, digital tool demonstrations, and hands-on simulations to help students experience digital transformation in action. I also introduce them to established business platforms and emerging technologies, encouraging them to assess their value and strategic impact. Ultimately, I aim to create an environment where students don’t just learn about digital transformation—they think like digital leaders, able to question, analyse, and apply what they’ve learned in real organisational contexts.


Building cybersecurity culture in science-driven organizations

The perception of security as a barrier is a challenge faced by many organizations, especially in environments where innovation is prioritized. The solution lies in shifting the narrative: Security are care givers for the value created in this organization. Most scientists and executives already understand the consequences of a cyberattack—lost research, stolen intellectual property, and disrupted operations. We involve them in the process. When lab leaders feel that their input has shaped security protocols, they’re more likely to support and champion those initiatives. Co-creating solutions ensures that security controls are not only effective but also practical for the scientific workflow. In short, building trust, demonstrating empathy for their challenges, and proving the value of security through action are what ultimately win buy-in. ... Shadow IT is a reality in any organization, but it’s particularly prevalent in environments like ours, where creativity and experimentation often outpace formal approval processes. While it’s important to communicate the risks of shadow IT clearly, we also recognize that outright bans are rarely effective. Instead, we focus on enabling secure alternatives. In the broader organization, we use tools to detect and prevent shadow IT, combined with strict communication around approved solutions. 


LastPass can now monitor employees' rogue reliance on shadow SaaS - including AI tools

With LastPass's browser extension for password management already well-positioned to observe -- and even restrict -- employee web usage, the security company has announced that it's diversifying into SaaS monitoring for small to midsize enterprises (SMEs). SaaS monitoring is part of a larger technology category known as SaaS Identity and Access Management, or SaaS IAM. As more employees are drawn to AI to improve productivity, the company is pitching an affordable solution to help SMEs contain the risks and costs associated with shadow SaaS; an umbrella of rogue SaaS procurement that's inclusive of shadow IT and its latest variant -- shadow AI. ... LastPass sees the new capabilities aligning with an organization's business objectives in a variety of ways. "One could be compliance," MacLennan told ZDNET. "Another could be the organization's internal sense of risk and risk management. Another could be cost because we're surfacing apps by category, in which case you'll see the whole universe of duplicative apps in use." MacLennan also noted that the new offering makes it easy to reduce costs due to the over-provisioning of SaaS licenses. For example, an organization is paying for 100 seats of some SaaS solution while the SaaS monitoring tool reveals that only 30 of those licenses are in active use.


Why ISO 42001 sets the standard for responsible AI governance

ISO 42001 is particularly relevant for organisations operating within layered supply chains, especially those building on cloud platforms. For these environments, where infrastructure, platform and software providers each play a role in delivering AI-powered services to end users, organisations must maintain a clear chain of responsibility and vendor due diligence. By defining roles across the shared responsibility model, ISO 42001 helps ensure that governance, compliance and risk management are consistent and transparent from the ground up. Doing so not only builds internal confidence but also enables partners and providers to demonstrate trustworthiness to customers across the value chain. As a result, trust management becomes a vital part of the picture by delivering an ongoing process of demonstrating transparency and control around the way organisations handle data, deploy technology, and meet regulatory expectations. Rather than treating compliance as a static goal, trust management introduces a more dynamic, ongoing approach to demonstrating how AI is governed across an organisation. By operationalising transparency, it becomes much easier to communicate security practices and explain decision-making processes to provide evidence of responsible development and deployment.


Beyond the office: Preparing for disasters in a remote work world

When disaster strikes, employees may be without electricity, internet, or cell service for days or weeks. They may have to evacuate their homes. They may be struggling with the loss of family members, friends, or neighbors. Just as organizations have disaster mitigation and recovery plans for main offices and data centers, they should be prepared to support remote employees in disaster situations they likely have never encountered before. Employers must counsel workers on what to do, provide additional resources, and above all, ensure that their mental health is attended to. ... Beyond cybersecurity risks, being forced to leave their home environment presents employees with another significant challenge: the potential loss of personal artifacts, from tax documents and family heirlooms to cherished photos. Lahiri refers to the process of safeguarding such items as “personal disaster recovery planning” and notes that this aspect of worker support is often overlooked. While companies have experience migrating servers from local offices to distributed teams, few have considered how to support employees on a personal level, he says. Lahiri urges IT teams to take a more empathetic approach and broaden their scope to include disaster recovery planning for employees’ home offices.


Beyond the Gang of Four: Practical Design Patterns for Modern AI Systems

Prompting might seem trivial at first. After all, you send free-form text to a model, so what could go wrong? However, how you phrase a prompt and what context you provide can drastically change your model's behavior, and there's no compiler to catch errors or a standard library of techniques. ... Few-Shot Prompting is one of the most straightforward yet powerful prompting approaches. Without examples, your model might generate inconsistent outputs, struggle with task ambiguity, or fail to meet your specific requirements. You can solve this problem by providing the model with a handful of examples (input-output pairs) in the prompt and then providing the actual input. You are essentially providing training data on the fly. This allows the model to generalize without re-training or fine-tuning. ... If you are a software developer trying to solve a complex algorithmic problem or a software architect trying to analyze complex system bottlenecks and vulnerabilities, you will probably brainstorm various ideas with your colleagues to understand their pros and cons, break down the problem into smaller tasks, and then solve it iteratively, rather than jumping to the solution right away. In Chain-of-Thought (CoT) prompting, you encourage the model to follow a very similar process and think aloud by breaking the problem down into a step-by-step process.

Daily Tech Digest - May 14, 2025


Quote for the day:

"Success is what happens after you have survived all of your mistakes." -- Anonymous


3 Stages of Building Self-Healing IT Systems With Multiagent AI

Multiagent AI systems can allow significant improvements to existing processes across the operations management lifecycle. From intelligent ticketing and triage to autonomous debugging and proactive infrastructure maintenance, these systems can pave the way for IT environments that are largely self-healing. ... When an incident is detected, AI agents can attempt to debug issues with known fixes using past incident information. When multiple agents are combined within a network, they can work out alternative solutions if the initial remediation effort doesn’t work, while communicating the ongoing process with engineers. Keeping a human in the loop (HITL) is vital to verifying the outputs of an AI model, but agents must be trusted to work autonomously within a system to identify fixes and then report these back to engineers. ... The most important step in creating a self-healing system is training AI agents to be able to learn from each incident, as well as from each other, to become truly autonomous. For this to happen, AI agents cannot be siloed into incident response. Instead, they must be incorporated into an organization’s wider system, communicate with third-party agents and allow them to draw correlations from each action taken to resolve each incident. In this way, each organization’s incident history becomes the training data for its AI agents, ensuring that the actions they take are organization-specific and relevant.


The three refactorings every developer needs most

If I had to rely on only one refactoring, it would be Extract Method, because it is the best weapon against creating a big ball of mud. The single best thing you can do for your code is to never let methods get bigger than 10 or 15 lines. The mess created when you have nested if statements with big chunks of code in between the curly braces is almost always ripe for extracting methods. One could even make the case that an if statement should have only a single method call within it. ... It’s a common motif that naming things is hard. It’s common because it is true. We all know it. We all struggle to name things well, and we all read legacy code with badly named variables, methods, and classes. Often, you name something and you know what the subtleties are, but the next person that comes along does not. Sometimes you name something, and it changes meaning as things develop. But let’s be honest, we are going too fast most of the time and as a result we name things badly. ... In other words, we pass a function result directly into another function as part of a boolean expression. This is… problematic. First, it’s hard to read. You have to stop and think about all the steps. Second, and more importantly, it is hard to debug. If you set a breakpoint on that line, it is hard to know where the code is going to go next.


ENISA launches EU Vulnerability Database to strengthen cybersecurity under NIS2 Directive, boost cyber resilience

The EU Vulnerability Database is publicly accessible and serves various stakeholders, including the general public seeking information on vulnerabilities affecting IT products and services, suppliers of network and information systems, and organizations that rely on those systems and services. ... To meet the requirements of the NIS2 Directive, ENISA initiated a cooperation with different EU and international organisations, including MITRE’s CVE Programme. ENISA is in contact with MITRE to understand the impact and next steps following the announcement of the funding to the Common Vulnerabilities and Exposures Program. CVE data, data provided by Information and Communication Technology (ICT) vendors disclosing vulnerability information through advisories, and relevant information, such as CISA’s Known Exploited Vulnerability Catalogue, are automatically transferred into the EU Vulnerability Database. This will also be achieved with the support of member states, who established national Coordinated Vulnerability Disclosure (CVD) policies and designated one of their CSIRTs as the coordinator, ultimately making the EUVD a trusted source for enhanced situational awareness in the EU. 


Welcome to the age of paranoia as deepfakes and scams abound

Welcome to the Age of Paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a time stamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off. ... Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their résumé, such as their favorite coffee shops and places to hang out. If the applicant is actually based in that geographic region, Schumacher says, they should be able to respond quickly with accurate details. Another verification tactic some people use, Schumacher says, is what he calls the “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to show their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings.


CEOs Sound Alarm: C-Suite Behind in AI Savviness

According to the survey, CEOs now see upskilling internal teams as the cornerstone of AI strategy. The top two limiting factors impacting AI's deployment and use, they said, are the inability to hire adequate numbers of skilled people and to calculate value or outcomes. "CEOs have shifted their view of AI from just a tool to a transformative way of working," said Jennifer Carter, senior principal analyst at Gartner. Contrary to the CEOs' assessments by Gartner, most CIOs view themselves as the key drivers and leaders of their organizations' AI strategies. According to a recent report by CIO.com, 80% of CIOs said they are responsible for researching and evaluating AI products, positioning them as "central figures in their organizations' AI strategies." As CEOs increasingly prioritize AI, customer experience and digital transformation, these agenda items are directly shaping the evolving role and responsibilities of the CIO. But 66% of CEOs say their business models are not fit for AI purposes. Billions continue to be spent on enterprisewide AI use cases but little has come in way of returns. Gartner's forecast predicts a 76.4% surge in worldwide spending on gen AI by 2024, fueled by better foundational models and a global quest for AI-powered everything. But organizations are yet to see consistent results despite the surge in investment. 


Dropping the SBOM, why software supply chains are too flaky

“Mounting software supply chain risk is driving organisations to take action. [There is a] 200% increase in organistions making software supply chain security a top priority and growing use of SBOMs,” said Josh Bressers, vice president of security at Anchore. ... “There’s a clear disconnect between security goals and real-world implementation. Since open source code is the backbone of today’s software supply chains, any weakness in dependencies or artifacts can create widespread risk. To effectively reduce these risks, security measures need to be built into the core of artifact management processes, ensuring constant and proactive protection,” said Douglas. If we take anything from these market analysis pieces, it may be true that organisations struggle to balance the demands of delivering software at speed while addressing security vulnerabilities to a level which is commensurate with the composable interconnectedness of modern cloud-native applications in the Kubernetes universe. ... Alan Carson, Cloudsmith’s CSO and co-founder, remarked, “Without visibility, you can’t control your software supply chain… and without control, there’s no security. When we speak to enterprises, security is high up on their list of most urgent priorities. But security doesn’t have to come at the cost of speed. ...”


Does agentic AI spell doom for SaaS?

The reason agentic AI is perceived as a threat to SaaS and not traditional apps is that traditional apps have all but disappeared, replaced in favor of on-demand versions of former client software. But it goes beyond that. AI is considered a potential threat to SaaS for several reasons, mostly because of how it changes who is in control and how software is used. Agentic AI changes how work gets done because agents act on behalf of users, performing tasks across software platforms. If users no longer need to open and use SaaS apps directly because the agents are doing it for them, those apps lose their engagement and perceived usefulness. That ultimately translates into lost revenue, since SaaS apps typically charge either per user or by usage. An advanced AI agent can automate the workflows of an entire department, which may be covered by multiple SaaS products. So instead of all those subscriptions, you just use an agent to do it all. That can lead to significant savings in software costs. On top of the cost savings are time savings. Jeremiah Stone, CTO with enterprise integration platform vendor SnapLogic, said agents have resulted in a 90% reduction in time for data entry and reporting into the company’s Salesforce system. 


Ask a CIO Recruiter: Where Is the ‘I’ in the Modern CIO Role?

First, there are obviously huge opportunities AI can provide the business, whether it’s cost optimization or efficiencies, so there is a lot of pressure from boards and sometimes CEOs themselves saying ‘what are we doing in AI?’ The second side is that there are significant opportunities AI can enable the business in decision-making. The third leg is that AI is not fully leveraged today; it’s not in a very easy-to-use space. That is coming, and CIOs need to be able to prepare the organization for that change. CIOs need to prepare their teams, as well as business users, and say ‘hey, this is coming, we’ve already experimented with a few things. There are a lot of use cases applied in certain industries; how are we prepared for that?’ ... Just having that vision to see where technology is going and trying to stay ahead of it is important. Not necessarily chasing the shiny new toy,, new technology, but just being ahead of it is the most important skill set. Look around the corner and prepare the organization for the change that will come. Also, if you retrained some of the people, you have to be more analytical, more business minded. Those are good skills. That’s not easy to find. A lot of people [who] move into the CIO role are very technical, whether it is coding or heavily on the infrastructure side. That is a commodity today; you need to be beyond that.


Insider risk management needs a human strategy

A technical-only response to insider risk can miss the mark, we need to understand the human side. That means paying attention to patterns, motivations, and culture. Over-monitoring without context can drive good people away and increase risk instead of reducing it. When it comes to workplace monitoring, clarity and openness matter. “Transparency starts with intentional communication,” said Itai Schwartz, CTO of MIND. That means being upfront with employees, not just that monitoring is happening, but what’s being monitored, why it matters, and how it helps protect both the company and its people. According to Schwartz, organizations often gain employee support when they clearly connect monitoring to security, rather than surveillance. “Employees deserve to know that monitoring is about securing data – not surveilling individuals,” he said. If people can see how it benefits them and the business, they’re more likely to support it. Being specific is key. Schwartz advises clearly outlining what kinds of activities, data, or systems are being watched, and explaining how alerts are triggered. ... Ethical monitoring also means drawing boundaries. Schwartz emphasized the importance of proportionality: collecting only what’s relevant and necessary. “Allow employees to understand how their behavior impacts risk, and use that information to guide, not punish,” he said.


Sharing Intelligence Beyond CTI Teams, Across Wider Functions and Departments

As companies’ digital footprints expand exponentially, so too do their attack surfaces. And since most phishing attacks can be carried out by even the least sophisticated hackers due to the prevalence of phishing kits sold in cybercrime forums, it has never been harder for security teams to plug all the holes, let alone other departments who might be undertaking online initiatives which leave them vulnerable. CTI, digital brand protection and other cyber risk initiatives shouldn’t only be utilized by security and cyber teams. Think about legal teams, looking to protect IP and brand identities, marketing teams looking to drive website traffic or demand generation campaigns. They might need to implement digital brand protection to safeguard their organization’s online presence against threats like phishing websites, spoofed domains, malicious mobile apps, social engineering, and malware. In fact, deepfakes targeting customers and employees now rank as the most frequently observed threat by banks, according to Accenture’s Cyber Threat Intelligence Research. For example, there have even been instances where hackers are tricking large language models into creating malware that can be used to hack customers’ passwords.

Daily Tech Digest - May 13, 2025


Quote for the day:

"If you genuinely want something, don't wait for it -- teach yourself to be impatient." -- Gurbaksh Chahal



How to Move from Manual to Automated to Autonomous Testing

As great as test automation is, it would be a mistake to put little emphasis on or completely remove manual testing. Automated testing's strength is its ability to catch issues while scanning code. Conversely, a significant weakness is that it is not as reliable as manual testing in noticing unexpected issues that manifest themselves during usability tests. While developing and implementing automated tests, organizations should integrate manual testing into their overall quality assurance program. Even though manual testing may not initially benefit the bottom line, it definitely adds a level of protection against issues that could wreak havoc down the road, with potential damage in the areas of cost, quality, and reputation. ... The end goal is to have an autonomous testing program that has a clear focus on helping the organization achieve its desired business outcomes. There is a consistent theme in successfully developing and implementing automated testing programs: planning and patience. With the right strategy and a deliberate rollout, test automation opens the door to smoother operations and the ability to remain competitive and profitable in the ever-changing world of software development. To guarantee a successful implementation of automation practices, it is necessary to invest in training and creating best practices. 


The Hidden Dangers of Artifactory Tokens: What Every Organization Should Know

If tokens with read access are dangerous, those with write permissions are cybersecurity nightmares made flesh. They enable the most feared attack vector in modern software: supply chain poisoning. The playbook is elegant in its simplicity and devastating in its impact. Attackers identify frequently downloaded packages within your Artifactory instance, insert malicious code into these dependencies, then repackage and upload them as new versions. From there, they simply wait as unsuspecting users throughout your organization automatically upgrade to the compromised versions during routine updates. The cascading damage expands exponentially depending on which components get poisoned: compromising build environments leads to persistent backdoors in all future software releases, while targeting developer tools gives attackers access to engineer workstations and credentials. ... The first line of defense must be preventing leaks before they happen: implement secret detection tools that catch credentials before they're published to repositories, and establish monitoring systems that identify exposed tokens on public forums, even ones leaked from personal developer accounts. And following JFrog's evolving security guidance — such as moving away from deprecated API keys — ensures you're not using authentication methods with known weaknesses.
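As a sketch of that first line of defense, the pre-commit-style scanner below flags suspicious strings in staged files. The regex patterns are illustrative assumptions rather than JFrog's documented token formats, and production teams should prefer a maintained scanner such as gitleaks with vetted rules.

```python
# Sketch of a pre-commit secret scan for Artifactory-style credentials.
# The regexes are illustrative assumptions, not JFrog's documented
# token formats.

import re
import sys
from pathlib import Path

SUSPECT_PATTERNS = [
    re.compile(r"AKC[0-9A-Za-z]{10,}"),  # assumed legacy API-key-like prefix
    re.compile(  # JWT-shaped access token (three base64url segments)
        r"eyJ[0-9A-Za-z_-]{20,}\.[0-9A-Za-z_-]{20,}\.[0-9A-Za-z_-]{20,}"
    ),
    re.compile(r"(?i)artifactory[_-]?(token|password)\s*[:=]\s*\S+"),
]

def scan_file(path: Path) -> list[str]:
    """Return a finding for every pattern match in the file."""
    hits: list[str] = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for pattern in SUSPECT_PATTERNS:
        for match in pattern.finditer(text):
            hits.append(f"{path}: possible secret '{match.group(0)[:12]}...'")
    return hits

if __name__ == "__main__":
    findings = []
    for name in sys.argv[1:]:  # pass staged file names from a pre-commit hook
        findings.extend(scan_file(Path(name)))
    for finding in findings:
        print(finding)
    sys.exit(1 if findings else 0)  # non-zero exit blocks the commit
```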


Is Model Context Protocol the New API?

With APIs, we learned that API design matters. Great APIs, like those from Stripe or Twilio, were designed for the developer. With MCP, design matters too. But who are we authoring for? You’re not authoring for a human; you’re authoring for a model that will pay close attention to every word you write. And it’s not just design: the operationalization of MCP matters too, another point of parallelism with the world of APIs. As we used to say at Apigee, there are good APIs and bad APIs. If your backend descriptions are domain-centric — as opposed to business- or end-user-centric — integration, adoption, and developers’ overall ability to use your APIs will be impaired. A similar issue can arise with MCP: an AI might not recognize or use an MCP server’s tools if its description isn’t clear, action-oriented, or AI-friendly. A final thing to note, which in many ways is new to the AI world, is that every action is “on the meter.” In the LLM world, everything turns into tokens, and tokens are dollars, as Nvidia CEO Jensen Huang reminded us in his GTC keynote this year. So AI-native apps — and by extension the MCP servers those apps connect to — need to pay attention to token optimization techniques to keep costs in check. There’s also a question of resource optimization outside of the token/GPU space.
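To make the "authoring for a model" point concrete, below is a minimal sketch using the official Python MCP SDK's FastMCP helper (API as in the SDK's quickstart); the billing tool, its name, and its description are invented for illustration.

```python
# Sketch of an MCP tool whose description is written for the model,
# not for a human browsing docs. The tool itself is hypothetical.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing-demo")

# A vague, domain-centric description the model may never select:
#   "Invoice subsystem interface."
# Below, an action-oriented description the model can match to intent:
@mcp.tool()
def get_invoice_status(invoice_id: str) -> str:
    """Look up the current payment status of a customer invoice.

    Use this when the user asks whether an invoice has been paid,
    is overdue, or is still pending. `invoice_id` is the identifier
    printed on the invoice, e.g. 'INV-1042'.
    """
    # Hypothetical lookup; a real server would query a billing system.
    return f"Invoice {invoice_id}: paid"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

Note that tool descriptions are themselves sent to the model as tokens on every relevant turn, so the "on the meter" concern applies directly: descriptions should be clear enough for the model to act on, but no longer than they need to be.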


CISOs must speak business to earn executive trust

The key to building broader influence is translating security into business impact language. I’ve found success by guiding conversations around what executives and customers truly care about: business outcomes, not technical implementations. When I speak with the CEO or board members, I discuss how our security program protects revenue, ensures business continuity and enables growth. With many past breaches, organizations detected the threat but failed to take timely action, resulting in significant business impact. By emphasizing how our approach prevents these outcomes, I’m speaking their language. ... Successfully shifting a security organization from being perceived as the “department of no” to a strategic enabler requires a fundamental change in mindset, engagement model and communication style. It begins with aligning security goals to the broader business strategy, understanding what drives growth, customer trust and operational efficiency. Security leaders must engage cross-functionally early and often, embedding their teams within product development, IT and go-to-market functions to co-create secure solutions rather than imposing controls after the fact. This proactive, partnership-driven approach reduces friction and builds credibility.


Enterprise IAM could provide needed critical mass for reusable digital identity

Acquisitions, different business goals, and even rogue teams can prevent a single, unified platform from serving the whole organization. And then there are partnerships, employees contracted to customers, customer onboarding, and a host of other situations that force identity information to move from an internal system to another one. “The result is we end up building difficult, complicated integrations that are hard to maintain,” Esplin says. Further, people want services that providers can only deliver by receiving trusted information, yet people are hesitant to share their information. And then there are the attendant regulatory concerns, particularly where biometrics are involved. Intermediaries clearly have a big role to play. Some of those intermediaries may be AI agents, which can ease data sharing but do not address the central concern of how to limit information sharing while delivering trust. Esplin argues that verifiable credentials are the answer, with the signature of the issuer providing the trust and the consent-based sharing model of VCs satisfying users’ desire to limit data sharing. Because VCs are standardized, the need for complicated integrations is removed. Biometric templates are stored by the user, enabling strong binding without the data privacy concerns that come with legacy architectures.
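Here is a minimal sketch of the trust model Esplin describes, using the Python cryptography package's Ed25519 primitives. The DIDs and credential fields are hypothetical, and a production system would use a standard W3C VC proof suite and DID resolution rather than raw in-memory keys.

```python
# Minimal sketch of the verifiable-credential trust model: the issuer
# signs a claim, and any verifier can check the signature without a
# point-to-point integration with the issuer's internal IAM system.

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: sign the credential payload.
issuer_key = Ed25519PrivateKey.generate()
credential = {
    "issuer": "did:example:employer-123",  # hypothetical identifiers
    "credentialSubject": {"id": "did:example:alice", "role": "contractor"},
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verifier side: trust flows from the issuer's signature, so the same
# credential can be presented to any relying party the holder consents to.
issuer_public = issuer_key.public_key()
issuer_public.verify(signature, payload)  # raises InvalidSignature if tampered
print("credential verified")
```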


Beyond speed: Measuring engineering success by impact, not velocity

From a planning and accountability perspective, velocity gives teams a clean way to measure output versus effort. It can help them plan sprints and prioritize long-term productivity targets. It can even help with accountability, allowing teams to rightsize their work and communicate it cross-departmentally. The issues begin when it is used as the sole metric of success, as it fails to reveal the nuances necessary for high-level strategic thinking and positioning by leadership. It sets up a standard that over-emphasizes pure workload rather than productive effort towards organizational objectives. ... When leadership works with their engineering teams to find solutions to business challenges, they create a highly visible value stream between each individual developer and the customer at the end of the line. For engineering-forward organizations, developer experience and satisfaction is a top priority, so factors like transparency and recognition of work go a long way towards developer well-being. Perhaps most vital is for business and tech leaders to create roadmaps of success for engineers that clearly align with the goals of the overall business. LinearB cofounder and COO Dan Lines acknowledges that these business goals can differ wildly between businesses: “For some of the leaders that I work with, real business impact might be as simple as, we got to get to production faster…”


Sakana introduces new AI architecture, ‘Continuous Thought Machines’ to make models reason with less guidance — like human brains

Sakana AI’s Continuous Thought Machine is not designed to chase leaderboard-topping benchmark scores, but its early results indicate that its biologically inspired design does not come at the cost of practical capability. On the widely used ImageNet-1K benchmark, the CTM achieved 72.47% top-1 and 89.89% top-5 accuracy. While this falls short of state-of-the-art transformer models like ViT or ConvNeXt, it remains competitive—especially considering that the CTM architecture is fundamentally different and was not optimized solely for performance. What stands out more are CTM’s behaviors in sequential and adaptive tasks. In maze-solving scenarios, the model produces step-by-step directional outputs from raw images—without using positional embeddings, which are typically essential in transformer models. Visual attention traces reveal that CTMs often attend to image regions in a human-like sequence, such as identifying facial features from eyes to nose to mouth. The model also exhibits strong calibration: its confidence estimates closely align with actual prediction accuracy. Unlike most models that require temperature scaling or post-hoc adjustments, CTMs improve calibration naturally by averaging predictions over time as their internal reasoning unfolds. 
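The averaging mechanism described above can be illustrated with a toy sketch. The code below is a generic demonstration of combining per-tick softmax outputs, not Sakana's actual CTM implementation; the tick count and random logits are stand-ins.

```python
# Illustration of calibration-by-averaging: combine a model's softmax
# outputs across internal "ticks" instead of trusting the final tick
# alone. Generic demonstration only; not Sakana's CTM code.

import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    exps = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exps / exps.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
num_ticks, num_classes = 8, 5
per_tick_logits = rng.normal(size=(num_ticks, num_classes))  # stand-in outputs

per_tick_probs = softmax(per_tick_logits)  # one distribution per tick
averaged = per_tick_probs.mean(axis=0)     # average over internal reasoning

print("final-tick confidence:", per_tick_probs[-1].max().round(3))
print("averaged confidence:  ", averaged.max().round(3))  # typically less overconfident
```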


How to build (real) cloud-native applications

Cloud-native applications are designed and built specifically to operate in cloud environments. It’s not about just “lifting and shifting” an existing application that runs on-premises and letting it run in the cloud. Unlike traditional monolithic applications, which are often tightly coupled, cloud-native applications are modular. A cloud-native application is not an application stack but a decoupled application architecture. Perhaps the most atomic unit of a cloud-native application is the container. A container could be a Docker container, though any container that matches the Open Container Initiative (OCI) specifications works just as well. Often you’ll see the term microservices used to define cloud-native applications. Microservices are small, independent services that communicate over APIs—and they are typically deployed in containers. A microservices architecture allows for independent, elastic scaling that supports the way the cloud is supposed to work. While a container can run on many different types of host environments, the most common way containers and microservices are deployed is inside an orchestration platform. The most widely deployed container orchestration platform today is the open source Kubernetes platform, which is supported on every major public cloud.
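To make the architecture concrete, here is a minimal sketch of the kind of small, stateless service that gets packaged as an OCI image and scaled by an orchestrator. It uses only Python's standard library; the endpoint names are illustrative, though /healthz follows a common Kubernetes probe convention.

```python
# Minimal stand-alone microservice: one small, stateless unit of the
# kind that gets built into an OCI image and scaled independently by
# an orchestrator such as Kubernetes.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Liveness/readiness probes hit this path so the orchestrator
            # can restart, or stop routing to, unhealthy replicas.
            body, status = b"ok", 200
        elif self.path == "/greet":
            body, status = b"hello from one microservice replica", 200
        else:
            body, status = b"not found", 404
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Containers conventionally listen on a fixed port and scale by
    # running more replicas, not bigger processes.
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```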


Responsible AI as a Business Necessity: Three Forces Driving Market Adoption

AI systems introduce operational, reputational, and regulatory risks that must be actively managed and mitigated. Organizations implementing automated risk management tools to monitor and mitigate these risks operate more efficiently and with greater resilience. The April 2024 RAND report, “The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed,” highlights that underinvestment in infrastructure and immature risk management are key contributors to AI project failures. ... Market adoption is the primary driver for AI companies, while organizations implementing AI solutions seek internal adoption to optimize operations. In both scenarios, trust is the critical factor. Companies that embed responsible AI principles into their business strategies differentiate themselves as trustworthy providers, gaining advantages in procurement processes where ethical considerations are increasingly influencing purchasing decisions. ... Stakeholders extend beyond regulatory bodies to include customers, employees, investors, and affected communities. Engaging these diverse perspectives throughout the AI lifecycle, from design and development to deployment and decommissioning, yields valuable insights that improve product-market fit while mitigating potential risks.


Leading high-performance engineering teams: Lessons from mission-critical systems

Conducting blameless post-mortems was imperative to focus on improving the systems without slipping into blame avoidance or blame games. Building trust required consistency from me: admitting mistakes, seeking feedback, working through exercises that suggested improvements, and responding constructively. At the heart of this was creating the conditions for the team to feel safe taking interpersonal risks, so it was my role to steer conversation towards the systemic factors that contributed to failures (“What process or procedure change could prevent this?”), and I regularly looked for opportunities to discuss, or later analyze, patterns across incidents so I could work towards higher-order improvements. ... For teams just starting out, my advice is to take a staged approach: pick one or two practices to begin with, develop a plan for how each practice will mature, and define some metrics so the team can see early value. Questions to ask yourself: How comfortable are team members sharing reliability concerns? Does your team look for ways to prevent incidents through its reviews, or for ways to blame others? How often does your team practice responding to failure? ... In my experience, leading top engineering teams requires a set of skills: building a strong technical culture, focusing on people, guiding teams through difficult times, and establishing durable practices.