Showing posts with label security architecture. Show all posts
Showing posts with label security architecture. Show all posts

Daily Tech Digest - October 14, 2025


Quote for the day:

"What you get by achieving your goals is not as important as what you become by achieving your goals." -- Zig Ziglar


Know your ops: Why all ops lead back to devops

When you see more terms that include the “ops” suffix, you should understand them as ideas that, as Graham Krizek, CEO of Voltage, puts it, “represent different layers of the same overarching goal. These concepts are not isolated silos but overlapping practices that support automation, collaboration, and scalability.” ... While site reliability engineering (SRE) and infrastructure as code (IaC) don’t have “ops” attached to their names, they can be seen in many ways as offshoots of the devops movement. SRE applies software engineering techniques to operations problems, with an emphasis on service-level objectives and error budgets. IaC shops manage and provision infrastructure using machine-readable definition files and scripts that can be version-controlled, automated, and tested just like application code. IaC underpins devops, gitops, and many specialized ops practices. ... “While it is not necessary for every IT professional to master each one individually, understanding the principles behind them is essential for navigating modern infrastructure,” he says. “The focus should remain on creating reliable systems and delivering value, not simply keeping up with new terminology.” In other words: you don’t need to collect ops like trading cards. You need to understand the fundamentals, specialize where it makes sense, and ignore the rest. Start with devops, add security if your compliance requirements demand it, and adopt cloudops practices if you’re heavily in the cloud. 


Digital Trust as a Strategic Asset: Why CISOs Must Think Like CFOs

CFOs are great at framing problems in terms of money. CISOs must also figure out how much risks cost, what not taking action costs, how much revenue loss comes from median dwell time, and how much it will cost to recover. Boards want the truth, not spin. Translate technical metrics into business impact (e.g., how detection/response times and dwell time drive incident scope and recovery costs). Recent threat reports show global median dwell time has fallen to ~10 days, but impact still depends on speed of containment. ... Stop talking about technology. Start describing cybersecurity as keeping your business running, protecting your reputation and building consumer trust – not simply operational disruption, but also how risk scenarios affect P&Ls. ... CISOs need to know how to read trust balance sheets, not simply logs. This entails being able to understand risk economics, insurance models and how to allocate resources strategically. ... We are entering a new era in which CFOs and CISOs are both responsible for keeping the business running: Earnings calls that include integrated trust measures;  Cyber insurance coverage that is in line with active threat modeling; Cyber posture reports that meet regulatory standards, like financial audits; and Shared leadership on risk and value initiatives at the board level. CISOs who understand trust economics will impact the futures of businesses by making security a part of strategy as well as operations.


Five actions for CISOs to manage cloud concentration risks

To effectively mitigate concentration risks, CISOs should start by identifying and documenting both third-party and fourth-party risks, with a focus on the most critical cloud providers. It is important to recognize that some non-cloud products may also have cloud dependencies, such as management consoles or reporting engines. Collaborating closely with strategic procurement and vendor management (SPVM) leaders ensures that each cloud provider has a clearly documented owner who understands their responsibilities. ... CISOs should not rely solely on service level agreements (SLAs) to mitigate financial losses from outages, as SLA payouts are often insufficient. Instead, focus on designing applications to gracefully manage limited failures and use cloud-native resilience patterns. In IaaS and PaaS, focus on short-term failure of some cloud services first, rather than catastrophic failure of a large provider and use cloud-native resilience patterns in your architecture. In addition, special attention should be given to cloud identity providers due to their position as a large single point of failure. ... To reduce the risk associated with single-vendor dependency, organizations should intentionally distribute applications and workloads across at least two cloud providers. While single-vendor solutions can simplify integration and sourcing, a multi-cloud approach limits the potential impact of an issue affecting any one provider.


Your cyber risk problem isn’t tech — it’s architecture

The development of a risk culture — including appetite, tolerance and profile — within the scope of the management program is essential to provide real visibility into ongoing risks, how they are being perceived and mitigated, and to leverage the organization’s ability to improve its security posture. Consequently, the company begins to deliver reliable products to customers, secure its reputation and build a secure image to achieve a competitive advantage and brand recognition. ... Another important factor to be developed in parallel with raising risk culture is the continuous Information security awareness process. This action should include all employees, especially those involved in Incident Management and cyber Resilience. ... From a technical standpoint, it is important to select and implement appropriate controls from the NIST CSF stages: Identify, Protect, Detect, Respond and Recover. However, the selection of each control for building guardrails will depend on the overall cybersecurity big picture and market best practices. For each identified issue, the corresponding control must be determined, each monitored by the three lines of defense ... Finally, the cyber management program must also consider legal, regulatory and regional requirements, including privacy and cybersecurity laws. This covers LGPD, CCPA, GDPR, FFEIC, Central Bank regulations, etc., to understand the consequences of non-compliance, which can pose serious issues for the organization.


Even the best AI agents are thwarted by this protocol - what can be done

An emerging category of artificial intelligence middleware known as Model Context Protocol is meant to make generative AI programs such as chatbots bots more powerful by letting them connect with various resources, including packaged software such as databases. Multiple studies, however, reveal that even the best AI models struggle to use Model Context Protocol. ... Having a standard does not mean that an AI model, whose functionality includes a heavy dose of chance ("probability" in technical terms), will faithfully implement MCP. An AI model plugged into MCP has to generate output that achieves several things, such as formulating a plan to answer a query by choosing which external resources to access, in what order to contact the MCP servers that lead to those external applications, and then structuring several requests for information to produce a final output to answer the query. ... The immediate takeaway from the various benchmarks is that AI models need to adapt to a new epoch in which using MCP is a challenge. AI models may have to evolve in new directions to fulfill the challenge. All three studies identify a problem: Performance degrades as the AI models have to access more MCP servers. The complexity of multiple resources starts to overwhelm even the models that can best plan what steps to take at the outset. As Wu and team put it in their MCPMark paper, the complexity of all those MCP servers strains any AI model's ability to keep track of it all.


Chaos engineering on Google Cloud: Principles, practices, and getting started

A common misconception is that cloud environments automatically provide application resiliency, eliminating the need for testing. Although cloud providers do offer various levels of resiliency and SLAs for their cloud products, these alone do not guarantee that your business applications are protected. If applications are not designed to be fault-tolerant or if they assume constant availability of cloud services, they will fail when a particular cloud service they depend on is not available. ... As a proactive discipline, chaos engineering enables organizations to identify weaknesses in their systems before they lead to significant outages or failures, where a system includes not only the technology components but also the people and processes of an organization. By introducing controlled, real-world disruptions, chaos engineering helps test a system's robustness, recoverability, and fault tolerance. This approach allows teams to uncover potential vulnerabilities, so that systems are better equipped to handle unexpected events and continue functioning smoothly under stress. ... Chaos Toolkit is an open-source framework written in Python that provides a modular architecture where you can plug in other libraries (also known as ‘drivers’) to extend your chaos engineering experiments. ... to enable Google Cloud customers and engineers to introduce chaos testing in their applications, we’ve created a series of Google Cloud-specific chaos engineering recipes. Each recipe covers a specific scenario to introduce chaos in a particular Google Cloud service.


The attack surface you can’t see: Securing your autonomous AI and agentic systems

The deep, non-deterministic nature of the underlying Large Language Models (LLMs) and the complex, multi-step reasoning they perform create systems where key decisions are often unexplainable. When an AI agent performs an unauthorized or destructive action, auditing it becomes nearly impossible. ... When you give an AI agent autonomy and tool access, you create a new class of trusted digital insider. If that agent is compromised, the attacker inherits all its permissions. An autonomous agent, which often has persistent access to critical systems, can be compromised and used to move laterally across the network and escalate privileges. The consequences of this over-permissioning are already being felt. ... The sheer speed and scale of agent autonomy demand a shift from traditional perimeter defense to a Zero Trust model specifically engineered for AI. This is no longer an optional security project; it is an organizational mandate for any leader deploying AI agents at scale. ... Securing Agentic AI is not just about extending your traditional security tools. It requires a new governance framework built for autonomy, not just execution. The complexity of these systems demands a new security playbook focused on control and transparency ... The future of enterprise efficiency is agentic, but the future of enterprise security must be built around controlling that agency. 


Systems that Sustain: Lessons that Nature Never Forgot but We Did

In practice, a major flaw in many technology projects is that existing multi-level approval systems are simply digitalised, leading to only marginal improvements. The process becomes a digital twin of the old: while processing speeds increase, the workflow itself remains long, redundant, and often cumbersome. The introduction of a new digital interface adds to the woes rather than simplifies them. Had processes been genuinely reengineered, digitisation could have saved time by simplifying steps, reducing the training load, improving efficiency, cutting costs, and enabling quicker adaptation in response to change. Another persistent pitfall in public sector digital transformation is misunderstanding the promise of analytics, and more crucially, confusing outputs with outcomes. ... Humans, as players in nature’s game, are unique. Evolution gifted us consciousness, language, memory, and complex social bonds—traits that allowed the creation of technology, law, storytelling, and culture. Yet these very blessings seeded traits antithetical to nature’s raw logic ... Artificial intelligence presents a tantalising prospect. Unlike its human creators, a well-designed AI can, under ideal circumstances, create technologies based on the same bias-free principles that drive nature: redesign for purpose, learn and adapt from data, and commit to real, measurable outcomes. 


California introduces new child safety law aimed at AI chatbots

The law is set to come into effect on Jan. 1, 2026, and requires chatbot operators to implement age verification and warn users of the risks of companion chatbots. The bill implements harsher penalties for anyone profiting from illegal deepfakes, with fines of up to $250,000 per offense. In addition, technology companies must establish protocols that seek to prevent self-harm and suicide. These protocols will have to be shared with the California Department of Health to ensure they’re suitable. Companies will also be required to share statistics on how often their services issue crisis center prevention alerts to their users. Some AI companies have already taken steps to protect children, with OpenAI recently introducing parental controls and content safeguards in ChatGPT, along with a self-harm detection feature. Meanwhile, Character AI has added a disclaimer to its chatbot that reminds users that all chats are generated by AI and fictional. Newsom is no stranger to AI legislation. In September, he signed into law another bill called SB 53, which mandates greater transparency from AI companies. More specifically, it requires AI firms to be fully transparent about the safety protocols they implement, while providing protections for whistleblower employees. The bill means that California is the first U.S. state to require AI chatbots to implement safety protocols, but other states have previously introduced more limited legislation. 


Embedding Security into Enterprise Architecture: A TOGAF-Based Approach to Risk-Aligned Design

Treating security as a separate discipline leads to inefficiencies, redundancies, and vulnerabilities. Bolting on security after systems are designed often results in costly retrofits, fragmented controls, and misaligned priorities. It also creates friction between teams — where security is seen as a blocker rather than a partner. Integrating ESA into EA from the outset changes the dynamic. It ensures that security is considered in every architectural decision — from business processes to data flows, from application design to infrastructure deployment. It aligns security with business goals, reduces risk exposure, and accelerates delivery. ... ISM brings operational rigor to ESA. It defines how security is implemented, monitored, and improved. ISM includes identity and access management, continuity planning, compliance management, and security awareness. When ISM is integrated into EA, security becomes part of the enterprise fabric. It’s not just a set of policies — it’s a way of working. ... This integration is not a technical adjustment — it’s a strategic evolution. It requires collaboration, shared language, and a commitment to embedding security into every architectural decision. When done right, it reduces risk, accelerates delivery, and builds confidence across the enterprise. Security by design is not a luxury — it’s a necessity. And EA Capability is how we make it real.

Daily Tech Digest - October 08, 2025


Quote for the day:

"Life is what happens to you while you’re busy making other plans." -- John Lennon



Network digital twin technology faces headwinds

Just like Google Maps is able to overlay information, such as driving directions, traffic alerts or locations of gas stations or restaurants, digital twin technology enables network teams to overlay information, such as a software upgrade, a change to firewalls rules, new versions of network operating systems, vendor or tool consolidation, or network changes triggered by mergers and acquisitions. Network teams can then run the model, evaluate different approaches, make adjustments, and conduct validation and assurance to make sure any rollout accomplishes its goals and doesn’t cause any problems, explains Maccioni ... “Configuration errors are a major cause of network incidents resulting in downtime,” says Zimmerman. “Enterprise networks, as part of a modern change management process, should use digital twin tools to model and test network functionality business rules and policies. This approach will ensure that network capabilities won’t fall short in the age of vendor-driven agile development and updates to operating systems, firmware or functionality.” ... Another valuable use case is testing failover scenarios, says Wheeler. Network engineers can design a topology that has alternative traffic paths in case a network component fails, but there’s really no way to stress test the architecture under real world conditions. He says that in one digital twin customer engagement “they found failure scenarios that they never knew existed.”


Autonomous AI hacking and the future of cybersecurity

The cyberattack/cyberdefense balance has long skewed towards the attackers; these developments threaten to tip the scales completely. We’re potentially looking at a singularity event for cyber attackers. Key parts of the attack chain are becoming automated and integrated: persistence, obfuscation, command-and-control, and endpoint evasion. Vulnerability research could potentially be carried out during operations instead of months in advance. The most skilled will likely retain an edge for now. But AI agents don’t have to be better at a human task in order to be useful. They just have to excel in one of four dimensions: speed, scale, scope, or sophistication. But there is every indication that they will eventually excel at all four. By reducing the skill, cost, and time required to find and exploit flaws, AI can turn rare expertise into commodity capabilities and gives average criminals an outsized advantage. ... If enterprises adopt AI-powered security the way they adopted continuous integration/continuous delivery (CI/CD), several paths open up. AI vulnerability discovery could become a built-in stage in delivery pipelines. We can envision a world where AI vulnerability discovery becomes an integral part of the software development process, where vulnerabilities are automatically patched even before reaching production — a shift we might call continuous discovery/continuous repair (CD/CR).


AI inference: reshaping the enterprise IT landscape across industries

AI inference is a complex operation that transforms intricate models into actionable agents. This process is essential for making real-time decisions, which can significantly improve user experiences. ... As AI systems handle more sensitive information, data security and private AI become a key part of effective inference processes. In cloud and Edge computing environments, where data often moves between multiple networks and devices, ensuring the confidentiality of user information is paramount. Private AI limits queries and requests to a company's internal database, SharePoint, API, or other private sources. It prevents unauthorized access and ensures that sensitive information remains confidential even when processed in the cloud or at the Edge. ... For AI to be truly transformative, low latency is a necessity, ensuring that real-time responses are both swift and seamless. In the realm of AI chatbots, for instance, the difference between a seamless conversation and a frustrating user experience often comes down to the speed of the AI’s response. Users expect immediate and accurate replies, and any delay can lead to a loss of engagement and trust. By minimising latency, AI chatbots can provide a more natural and fluid interaction, enhancing user satisfaction, and driving better outcomes. ... By reducing the distance data must travel, Edge computing significantly reduces latency, enabling faster and more reliable AI inference.


Smarter Systems, Safer Data: How to Outsmart Threat Actors

One of the clearest signs that a cybersecurity strategy is outdated is a lack of control and visibility over who can access what data, and on which systems. Many organizations still rely on fragmented identity management systems or grant broad access to database administrators. Others have yet to implement basic protections such as multi-factor authentication. ... Security concerns are commonly quoted as a top barrier to innovation. This is why many organizations struggle to adopt artificial intelligence, migrate to the cloud, share data externally or even internally. The only way to unblock this impasse is to start treating security as an enabler. Think about it this way: when done right, security is that key element that allows data to be moved, analyzed and shared. To exemplify this approach, if data is de-identified to maintain data privacy through the means of encryption or tokenization, in a situation of a breach, it will remain useless to attackers. ... What’s been key for the organizations that succeed in managing data risk while simultaneously unlocking value is a mindset shift. They stop seeing security as a roadblock and start seeing it as a foundation for growth. As an example, a large financial institution client has built an AI-powered solution for anti-money laundering. By protecting incoming data before it enters their system, they ensure that no sensitive data is fed to their algorithms, and thus the risk of a privacy breach, even incidental, is essentially null.


AI could prove CIOs’ worst tech debt yet

AI tools can be used to clean up old code and trim down bloated software, thus reducing one major form of tech debt. In September, for example, Microsoft announced a new suite of autonomous AI agents designed to automatically modernize legacy Java and .NET applications. At the same time, IT leaders see the potential for AI to add to their tech debt, with too many AI projects relying on models or agents that can be expensive to deploy and maintain and AI coding assistants generating more lines of software than may be necessary. ... Endless AI pilot projects create their own form of tech debt as well, says Ryan Achterberg, CTO at IT consulting firm Resultant. This “pilot paralysis,” in which organizations launch dozens of proofs of concepts that never scale, can drain IT resources, he says. “Every experiment carries an ongoing cost,” Achterberg says. “Even if a model is never scaled, it leaves behind artifacts that require upkeep and security oversight.” Part of the problem is that AI data foundations are still shaky, even as AI ambition remains high, he adds. ... In addition to tech debt from too many AI pilot projects, coding assistants can create their own problems without proper oversight, adds Jaideep Vijay Dhok, COO for technology at digital engineering provider Persistent Systems. In some cases, AI coding assistants will generate more lines of software than a developer asked for, he says. 


Hackers Exploit RMM Tools to Deploy Malware

RMM platforms typically operate with elevated permissions across endpoints. Once compromised, they offer adversaries a ready-made channel for privilege escalation, lateral movement and payload delivery, including ransomware ... Threat actors frequently repurpose legitimate RMM tools or hijack valid credentials, allowing malicious activity to blend seamlessly with routine administrative tasks. This tactic complicates detection and response, especially in environments lacking behavioral baselining. ... "This is a typical living-off-the-land attack used by many adversaries considering the success and ease of execution. Typically, such software are whitelisted in most of the controls to avoid blocking and noise, due to which its activities are not monitored much," Varkey said. "Like in most adversarial acts, getting access to the software is their initial step, so if access is limited to specific people with multifactor authorization and audited periodically, unauthorized access can be limited. .." ... "Treat RMM seriously. Assume compromise is possible and build defenses around prevention, detection and rapid response. Start with a full audit of your RMM deployment - map every agent, session and integration to identify shadow access points: asset management is key and a good RMM solution should be able to assist here. Layered controls are key - think defense-in-depth tailored to RMM's remote nature," Beuchelt said.


From Data to Doing: Agentic AI Will Revolutionize the Enterprise

Where do organizations see the greatest opportunities for agentic AI? The answer is: everywhere. Survey results show that business leaders view agentic AI as equally relevant to productivity gains, better decision-making, and enhanced customer experiences. When asked to rank potential benefits, improving customer experience and personalization emerge as the top priority, followed closely by sharper decision-making and increased efficiency. What's telling is what landed at the bottom of the list. Few organizations currently view market and business expansion as critical. This suggests that, at least in the near term, agentic AI will be applied less as a driver of bold new growth and more as a catalyst for improving and extending existing operations. ... Agentic AI is not simply the next technology wave -- it is the next great inflection point for enterprise software. Just as client–server, the Internet, and the cloud radically redefined industry leaders, agentic AI will determine which vendors and enterprises can adapt quickly enough to thrive. The lesson is clear: organizations that treat data as a strategic asset, modernize their platforms, and embed intelligence into their workflows will not only move faster but also serve customers better. The rest risk being left behind -- just as the mainframe giants once were.


Is That Your Boss or a Deepfake on the Other Side of That Video Call?

Sophisticated deepfake technology had perfectly replicated not just the appearance but the mannerisms and decision-making patterns of the company’s executives. The real managers were elsewhere, unaware their digital twins were orchestrating one of the largest deepfake heists in corporate history. This reflects a terrifying trend of AI fraud that is shaking the financial services industry. Deepfake-enabled attacks have grown by an alarming 1,740% in just one year, representing one of the fastest-growing AI-powered threats. More than half of businesses in the U.S. and U.K. have been targeted by deepfake-powered financial scams, with 43% falling victim. ... The deepfake threat extends far beyond immediate financial losses. Each successful attack erodes the foundation of digital communication itself. When employees can no longer trust that their CEO is real during a video call, the entire remote work infrastructure becomes suspect in particular for financial institutions, which deal in the currency of trust. ... Financial services companies must implement comprehensive AI governance frameworks, continuous monitoring systems, and robust incident response plans to address these evolving threats while maintaining operational efficiency and customer trust. These systems and protocols must extend not only within their front office but to their back office, including vendor management and third-party suppliers who manage their data.


Rethinking AI security architectures beyond Earth

The researchers outline three architectures: centralized, distributed, and federated. In a centralized model, the heavy lifting happens on Earth. Satellites send telemetry data to a large AI system, which analyzes it and sends back security updates. Training is fast because powerful ground-based resources are available, but the response to threats is slower due to long transmission times. In a distributed model, satellites still rely on the ground for training but perform inference locally. This setup reduces delay when responding to a threat, though smaller onboard systems can limit model accuracy. Federated learning goes a step further. Satellites train and infer on their own data without sending it to Earth. They share only model updates with other satellites and ground stations. This keeps latency low and improves privacy, but synchronizing models across a large constellation can be difficult. ... Byrne pointed out that while space-based architectures vary in resilience, recovery often depends on shared fundamentals. “Most systems across all segments will need to be restored from secure backups,” he said. “One architectural enhancement to help reduce recovery time is the implementation of distributed Inter-Satellite Links. These links enable faster propagation of recovery updates between satellites, minimizing latency and accelerating system-wide restoration.”


Who Governs Your NHIs? The Challenge of Defining Ownership in Modern Enterprise IT

What we should actually mean by ownership is the person who can answer the basic questions about why this NHI exists, what access it has, how often credentials should be rotated, whether it's being used in a way that could introduce new risks, and whether the credentials have been properly stored or have been leaked. ... Instead of focusing solely on assigning human ownership, we should be working to ensure that the questions we would ask the owner are easily answerable by our tools. This approach makes answers persistent and usable by multiple teams over time and provides consistency across the organization. It does not rely on specific individuals being eternally available or up to speed on how the NHI they created is being used. Ultimately, it scales better than human-dependent processes. Just as governing an application and all of the NHIs involved is almost never going to be the responsibility of one person, the ideal scenario where a single person can outright own an NHI and be responsible for every aspect is going to be a rare situation. ... The conversation about ownership often gets stuck on blame. Let's reframe it around assurance. Let's ensure that if a secret exists, no matter where or how it is stored, governance questions can be answered quickly and consistently.

Daily Tech Digest - June 25, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein



Why data observability is the missing layer of modern networking

You might hear people use these terms interchangeably, but they’re not the same thing. Visibility is about what you can see – dashboard statistics, logs, uptime numbers, bandwidth figures, the raw data that tells you what’s happening across your network. Observability, on the other hand, is about what that data actually means. It’s the ability to interpret, analyse, and act on those insights. It’s not just about seeing a traffic spike but instead understanding why it happened. It’s not just spotting a latency issue, but knowing which apps are affected and where the bottleneck sits. ... Today, connectivity needs to be smart, agile, and scalable. It’s about building infrastructure that supports cloud, remote work, and everything in between. Whether you’re adding a new site, onboarding a remote team, or launching a cloud-hosted app, your network should be able to scale and respond at speed. Then there’s security, a non-negotiable layer that protects your entire ecosystem. Great security isn’t about throwing up walls, it’s about creating confidence. That means deploying zero trust principles, segmenting access, detecting threats in real time, and encrypting data, without making users lives harder. ... Finally, we come to observability. Arguably the most unappreciated of the three but quickly becoming essential. 


6 Key Security Risks in LLMs: A Platform Engineer’s Guide

Prompt injection is the AI-era equivalent of SQL injection. Attackers craft malicious inputs to manipulate an LLM, bypass safeguards or extract sensitive data. These attacks range from simple jailbreak prompts that override safety rules to more advanced exploits that influence backend systems. ... Model extraction attacks allow adversaries to systematically query an LLM to reconstruct its knowledge base or training data, essentially cloning its capabilities. These attacks often rely on automated scripts submitting millions of queries to map the model’s responses. One common technique, model inversion, involves strategically structured inputs that extract sensitive or proprietary information embedded in the model. Attackers may also use repeated, incremental queries with slight variations to amass a dataset that mimics the original training data. ... On the output side, an LLM might inadvertently reveal private information embedded in its dataset or previously entered user data. A common risk scenario involves users unknowingly submitting financial records or passwords into an AI-powered chatbot, which could then store, retrieve or expose this data unpredictably. With cloud-based LLMs, the risk extends further. Data from one organization could surface in another’s responses.


Adopting Agentic AI: Ethical Governance, Business Impact, Talent Demand, and Data Security

Agentic AI introduces a spectrum of ethical challenges that demand proactive governance. Given its capacity for independent decision-making, there is a heightened need for transparent, accountable, and ethically driven AI models. Ethical governance in Agentic AI revolves around establishing robust policies that govern decision logic, bias mitigation, and accountability. Organizations leveraging Agentic AI must prioritize fairness, inclusivity, and regulatory compliance to avoid unintended consequences. ... The integration of Agentic AI into business ecosystems promises not just automation but strategic enhancement of decision-making. These AI agents are designed to process real-time data, predict market shifts, and autonomously execute decisions that would traditionally require human intervention. In sectors such as finance, healthcare, and manufacturing, Agentic AI is optimizing supply chains, enhancing predictive analytics, and streamlining operations with unparalleled accuracy. ... One of the major concerns surrounding Agentic AI is data security. Autonomous decision-making systems require vast amounts of real-time data to function effectively, raising questions about data privacy, ownership, and cybersecurity. Cyber threats aimed at exploiting autonomous decision-making could have severe consequences, especially in sectors like finance and healthcare.


Unveiling Supply Chain Transformation: IIoT and Digital Twins

Digital twins and IIoTs are evolving technologies that are transforming the digital landscape of supply chain transformation. The IIoT aims to connect to actual physical sensors and actuators. On the other hand, DTs are replica copies that virtually represent the physical components. The DTs are invaluable for testing and simulating design parameters instead of disrupting production elements. ... Contrary to generic IoT, which is more oriented towards consumers, the IIoT enables the communication and interconnection between different machines, industrial devices, and sensors within a supply chain management ecosystem with the aim of business optimization and efficiency. The incubation of IIoT in supply chain management systems aims to enable real-time monitoring and analysis of industrial environments, including manufacturing, logistics management, and supply chain. It boosts efforts to increase productivity, cut downtime, and facilitate information and accurate decision-making. ... A supply chain equipped with IIoT will be a main ingredient in boosting real-time monitoring and enabling informed decision-making. Every stage of the supply chain ecosystem will have the impact of IIoT, like automated inventory management, health monitoring of goods and their tracking, analytics, and real-time response to meet the current marketplace. 


The state of cloud security

An important complicating factor in all this is that customers don’t always know what’s happening in cloud data centers. At the same time, De Jong acknowledges that on-premises environments have the same problem. “There’s a spectrum of issues, and a lot of overlap,” he says, something Wesley Swartelé agrees with: “You have to align many things between on-prem and cloud.” Andre Honders points to a specific aspect of the cloud: “You can be in a shared environment with ten other customers. This means you have to deal with different visions and techniques that do not exist on-premises.” This is certainly the case. There are plenty of worst case scenarios to consider in the public cloud. ... However, a major bottleneck remains the lack of qualified personnel. We hear this all the time when it comes to security. And in other IT fields too, as it happens, meaning one could draw a society-wide conclusion. Nevertheless, staff shortages are perhaps more acute in this sector. Erik de Jong sees society as a whole having similar problems, at any rate. “This is not an IT problem. Just ask painters. In every company, a small proportion of the workforce does most of the work.” Wesley Swartelé agrees it is a challenge for organizations in this industry to find the right people. “Finding a good IT professional with the right mindset is difficult.


As AI reshapes the enterprise, security architecture can’t afford to lag behind

Technology works both ways – it enables the attacker and the smart defender. Cybercriminals are already capitalising on its potential, using open source AI models like DeepSeek and Grok to automate reconnaissance, craft sophisticated phishing campaigns, and produce deepfakes that can convincingly impersonate executives or business partners. What makes this especially dangerous is that these tools don’t just improve the quality of attacks; they multiply their volume. That’s why enterprises need to go beyond reactive defenses and start embedding AI-aware policies into their core security fabric. It starts with applying Zero Trust to AI interactions, limiting access based on user roles, input/output restrictions, and verified behaviour. ... As attackers deploy AI to craft polymorphic malware and mimic legitimate user behaviour, traditional defenses struggle to keep up. AI is now a critical part of the enterprise security toolkit, helping CISOs and security teams move from reactive to proactive threat defense. It enables rapid anomaly detection, surfaces hidden risks earlier in the kill chain, and supports real-time incident response by isolating threats before they can spread. But AI alone isn’t enough. Security leaders must strengthen data privacy and security by implementing full-spectrum DLP, encryption, and input monitoring to protect sensitive data from exposure, especially as AI interacts with live systems. 


Identity Is the New Perimeter: Why Proofing and Verification Are Business Imperatives

Digital innovation, growing cyber threats, regulatory pressure, and rising consumer expectations all drive the need for strong identity proofing and verification. Here is why it is more important than ever:Combatting Fraud and Identity Theft: Criminals use stolen identities to open accounts, secure loans, or gain unauthorized access. Identity proofing is the first defense against impersonation and financial loss. Enabling Secure Digital Access: As more services – from banking to healthcare – go digital, strong remote verification ensures secure access and builds trust in online transactions. Regulatory Compliance: Laws such as KYC, AML, GDPR, HIPAA, and CIPA require identity verification to protect consumers and prevent misuse. Compliance is especially critical in finance, healthcare, and government sectors. Preventing Account Takeover (ATO): Even legitimate accounts are at risk. Continuous verification at key moments (e.g., password resets, high-risk actions) helps prevent unauthorized access via stolen credentials or SIM swapping. Enabling Zero Trust Security: Zero Trust assumes no inherent trust in users or devices. Continuous identity verification is central to enforcing this model, especially in remote or hybrid work environments. 


Why should companies or organizations convert to FIDO security keys?

FIDO security keys significantly reduce the risk of phishing, credential theft, and brute-force attacks. Because they don’t rely on shared secrets like passwords, they can’t be reused or intercepted. Their phishing-resistant protocol ensures authentication is only completed with the correct web origin. FIDO security keys also address insider threats and endpoint vulnerabilities by requiring physical presence, further enhancing protection, especially in high-security environments such as healthcare or public administration. ... In principle, any organization that prioritizes a secure IT infrastructure stands to benefit from adopting FIDO-based multi-factor authentication. Whether it’s a small business protecting customer data or a global enterprise managing complex access structures, FIDO security keys provide a robust, phishing-resistant alternative to passwords. That said, sectors with heightened regulatory requirements, such as healthcare, finance, public administration, and critical infrastructure, have particularly strong incentives to adopt strong authentication. In these fields, the risk of breaches is not only costly but can also have legal and operational consequences. FIDO security keys are also ideal for restricted environments, such as manufacturing floors or emergency rooms, where smartphones may not be permitted. 


Data Warehouse vs. Data Lakehouse

Data warehouses and data lakehouses have emerged as two prominent adversaries in the data storage and analytics markets, each with advantages and disadvantages. The primary difference between these two data storage platforms is that while the data warehouse is capable of handling only structured and semi-structured data, the data lakehouse can store unlimited amounts of both structured and unstructured data – and without any limitations. ... Traditional data warehouses have long supported all types of business professionals in their data storage and analytics endeavors. This approach involves ingesting structured data into a centralized repository, with a focus on warehouse integration and business intelligence reporting. Enter the data lakehouse approach, which is vastly superior for deep-dive data analysis. The lakehouse has successfully blended characteristics of the data warehouse and the data lake to create a scalable and unrestricted solution. The key benefit of this approach is that it enables data scientists to quickly extract insights from raw data with advanced AI tools. ... Although a data warehouse supports BI use cases and provides a “single source of truth” for analytics and reporting purposes, it can also become difficult to manage as new data sources emerge. The data lakehouse has redefined how global businesses store and process data. 


AI or Data Governance? Gartner Says You Need Both

Data and analytics leaders, such as chief data officers, or CDOs, and chief data and analytics officers, or CDAOs, play a significant role in driving their organizations' data and analytics, D&A, successes, which are necessary to show business value from AI projects. Gartner predicts that by 2028, 80% of gen AI business apps will be developed on existing data management platforms. Their analysts say, "This is the best time to be in data and analytics," and CDAOs need to embrace the AI opportunity eyed by others in the C-suite, or they will be absorbed into other technical functions. With high D&A ambitions and AI pilots becoming increasingly ubiquitous, focus is shifting toward consistent execution and scaling. But D&A leaders are overwhelmed with their routine data management tasks and need a new AI strategy. ... "We've never been good at governance, and now AI demands that we be even faster, which means you have to take more risks and be prepared to fail. We have to accept two things: Data will never be fully governed. Secondly, attempting to fully govern data before delivering AI is just not realistic. We need a more practical solution like trust models," Zaidi said. He said trust models provide a trust rating for data assets by examining their value, lineage and risk. They offer up-to-date information on data trustworthiness and are crucial for fostering confidence. 

Daily Tech Digest - May 11, 2025


Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche



The Human-Centric Approach To Digital Transformation

Involving employees from the beginning of the transformation process is vital for fostering buy-in and reducing resistance. When employees feel they have a say in how new tools and processes will be implemented, they’re more likely to support them. In practice, early involvement can take many forms, including workshops, pilot programs, and regular feedback sessions. For instance, if a company is considering adopting a new project management tool, it can start by inviting employees to test various options, provide feedback, and voice their preferences. ... As companies increasingly adopt digital tools, the need for digital literacy grows. Employees who lack confidence or skills in using new technology are more likely to feel overwhelmed or resistant. Providing comprehensive training and support is essential to ensuring that all employees feel capable and empowered to leverage digital tools. Digital literacy training should cover the technical aspects of new tools and focus on their strategic benefits, helping employees see how these technologies align with broader company goals. ... The third pillar, adaptability, is crucial for sustaining digital transformation. In a human-centered approach, adaptability is encouraged and rewarded, creating a growth-oriented culture where employees feel safe to experiment, take risks, and share ideas. 


Forging OT Security Maturity: Building Cyber Resilience in EMEA Manufacturing

When it comes to OT security maturity, pragmatic measures that are easily implementable by resource-constrained SME manufacturers are the name of the game. Setting up an asset visibility program, network segmentation, and simple threat detection can attain significant value without requiring massive overhauls. Meanwhile, cultural alignment across IT and OT teams is essential. ... “To address evolving OT threats, organizations must build resilience from the ground up,” Mashirova told Industrial Cyber. “They should enhance incident response, invest in OT continuous monitoring, and promote cross-functional collaboration to improve operational resilience while ensuring business continuity and compliance in an increasingly hostile cyber environment.” ... “Manufacturers throughout the region are increasingly recognizing that cyber threats are rapidly shifting toward OT environments,” Claudio Sangaletti, OT leader at medmix, told Industrial Cyber. “In response, many companies are proactively developing and implementing comprehensive OT security programs. These initiatives aim not only to safeguard critical assets but also to establish robust business recovery plans to swiftly address and mitigate the impacts of potential attacks.”


Quantum Leap? Opinion Split Over Quantum Computing’s Medium-Term Impact

“While the actual computations are more efficient, the environment needed to keep quantum machines running, especially the cooling to near absolute zero, is extremely energy-intensive,” he says. When companies move their infrastructure to cloud platforms and transition key platforms like CRM, HCM, and Unified Comms Platform (UCP) to cloud-native versions, they can reduce the energy use associated with running large-scale physical servers 24/7. “If and when quantum computing becomes commercially viable at scale, cloud partners will likely absorb the cooling and energy overhead,” Johnson says. “That’s a win for sustainability and focus.” Alexander Hallowell, principal analyst at Omdia’s advanced computing division, says that unless one of the currently more “out there” technology options proves itself (e.g., photonics or something semiconductor-based), quantum computing is likely to remain infrastructure-intensive and environmentally fragile. “Data centers will need to provide careful isolation from environmental interference and new support services such as cryogenic cooling,” he says. He predicts the adoption of quantum computing within mainstream data center operations is at least five years out, possibly “quite a bit more.” 


Introduction to Observability

Observability has become a concept, in the field of information technology in areas like DevOps and system administration. Essentially, observability involves measuring a system’s states by observing its outputs. This method offers an understanding of how systems behave, enabling teams to troubleshoot problems, enhance performance and ensure system reliability. In today’s IT landscape, the complexity and size of applications have grown significantly. Traditional monitoring techniques have struggled to keep up with the rise of technologies like microservices, containers and serverless architectures. ... Transitioning from monitoring to observability signifies a progression, in the management and upkeep of systems. Although monitoring is crucial for keeping tabs on metrics and reacting to notifications, observability offers a comprehensive perspective and the in-depth analysis necessary for comprehending and enhancing system efficiency. By combining both methods, companies can attain a more effective IT infrastructure. ... Observability depends on three elements to offer a perspective of system performance and behavior: logs, metrics and traces. These components, commonly known as the “three pillars of observability,” collaborate to provide teams, with the information to analyze and enhance their systems.


Cloud Strategy 2025: Repatriation Rises, Sustainability Matures, and Cost Management Tops Priorities

After more than twenty years of trial-and-error, the cloud has arrived at its steady state. Many organizations have seemingly settled on the cloud mix best suited to their business needs, embracing a hybrid strategy that utilizes at least one public and one private cloud. ... Sustainability is quickly moving from aspiration to expectation for businesses. ... Cost savings still takes the top spot for a majority of organizations, but notably, 31% now report equal prioritization between cost optimization and sustainability. The increased attention on sustainability comes as the internal and external regulatory pressures mount for technology firms to meet environmental requirements. There is also the reputational cost at play – scrutiny over sustainability efforts is on the rise from customers and employees alike. ... As organizations maintain a laser focus on cost management, FinOps has emerged as a viable solution for combating cost management challenges. A comprehensive FinOps infrastructure is a game-changer when it comes to an organization’s ability to wrangle overspending and maximize business value. Additionally, FinOps helps businesses activate on timely, data-driven insights, improving forecasting and encouraging cross-functional financial accountability.


Building Adaptive and Future-Ready Enterprise Security Architecture: A Conversation with Yusfarizal Yusoff

Securing Operational Technology (OT) environments in critical industries presents a unique set of challenges. Traditional IT security solutions are often not directly applicable to OT due to the distinctive nature of these environments, which involve legacy systems, proprietary protocols, and long lifecycle assets that may not have been designed with cybersecurity in mind. As these industries move toward greater digitisation and connectivity, OT systems become more vulnerable to cyberattacks. One major challenge is ensuring interoperability between IT and OT environments, especially when OT systems are often isolated and have been built to withstand physical and environmental stresses, rather than being hardened against cyber threats. Another issue is the lack of comprehensive security monitoring in many OT environments, which can leave blind spots for attackers to exploit. To address these challenges, security architects must focus on network segmentation to separate IT and OT environments, implement robust access controls, and introduce advanced anomaly detection systems tailored for OT networks. Furthermore, organisations must adopt specialised OT security tools capable of addressing the unique operational needs of industrial environments. 


CDO and CAIO roles might have a built-in expiration date

“The CDO role is likely to be durable, much due to the long-term strategic value of data; however, it is likely to evolve to encompass more strategic business responsibility,” he says. “The CAIO, on the other hand, is likely to be subsumed into CTO or CDO roles as AI technology folds into core technologies and architectures standardize.” For now, both CIAOs and CDOs have responsibilities beyond championing the use of AI and good data governance, Stone adds. They will build the foundation for enterprise-wide benefits of AI and good data management. “As AI and data literacy take hold across the enterprise, CDOs and CAIOs will shift from internal change enablers and project champions to strategic leaders and organization-wide enablers,” he says. “They are, and will continue to grow more, responsible for setting standards, aligning AI with business goals, and ensuring secure, scalable operations.” Craig Martell, CAIO at data security and management vendor Cohesity, agrees that the CDO position may have a better long-term prognosis than the CAIO position. Good data governance and management will remain critical for many organizations well into the future, he says, and that job may not be easy to fold into the CIO’s responsibilities. “What the chief data officer does is different than what the CIO does,” says Martell, 


Chaos Engineering with Gremlin and Chaos-as-a-Service: An Empirical Evaluation

As organizations increasingly adopt microservices and distributed architectures, the potential for unpredictable failures grows. Traditional testing methodologies often fail to capture the complexity and dynamism of live systems. Chaos engineering addresses this gap by introducing carefully planned disturbances to test system responses under duress. This paper explores how Gremlin can be used to perform such experiments on AWS EC2 instances, providing actionable insights into system vulnerabilities and recovery mechanisms. ... Chaos engineering originated at Netflix with the development of the Chaos Monkey tool, which randomly terminated instances in production to test system reliability. Since then, the practice has evolved with tools like Gremlin, LitmusChaos, and Chaos Toolkit offering more controlled and systematic approaches. Gremlin offers a SaaS-based chaos engineering platform with a focus on safety, control, and observability. ... Chaos engineering using Gremlin on EC2 has proven effective in validating the resilience of distributed systems. The experiments helped identify areas for improvement, including better configuration of health checks and fine-tuning auto-scaling thresholds. The blast radius concept ensured safe testing without risking the entire system.


How digital twins are reshaping clinical trials

While the term "digital twin" is often associated with synthetic control arms, Walsh stressed that the most powerful and regulatory-friendly application lies in randomized controlled trials (RCTs). In this context, digital twins do not replace human subjects but act as prognostic covariates, enhancing trial efficiency while preserving randomization and statistical rigor. "Digital twins make every patient more valuable," Walsh explained. "Applied correctly, this means that trials may be run with fewer participants to achieve the same quality of evidence." ... "Digital twins are one approach to enable highly efficient replication studies that can lower the resource burden compared to the original trial," Walsh clarified. "This can include supporting novel designs that replicate key results while also assessing additional clinical or biological questions of interest." In effect, this strategy allows for scientific reproducibility without repeating entire protocols, making it especially relevant in therapeutic areas with limited eligible patient populations or high participant burden. In early development -- particularly phase 1b and phase 2 -- digital twins can be used as synthetic controls in open-label or single-arm studies. This design is gaining traction among sponsors seeking to make faster go/no-go decisions while minimizing patient exposure to placebos or standard-of-care comparators.


The Great European Data Repatriation: Why Sovereignty Starts with Infrastructure

Data repatriation is not merely a reactive move driven by fear. It’s a conscious and strategic pivot. As one industry leader recently noted in Der Spiegel, “We’re receiving three times as many inquiries as usual.” The message is clear: European companies are actively evaluating alternatives to international cloud infrastructures—not out of nationalism, but out of necessity. The scale of this shift is hard to ignore. Recent reports have cited a 250% user growth on platforms offering sovereign hosting, and inquiries into EU-based alternatives have surged over a matter of months. ... Challenges remain: Migration is rarely a plug-and-play affair. As one European CEO emphasized to The Register, “Migration timelines tend to be measured in months or years.” Moreover, many European providers still lack the breadth of features offered by global cloud platforms, as a KPMG report for the Dutch government pointed out. Yet the direction is clear.  ... Europe’s data future is not about isolation, but balance. A hybrid approach—repatriating sensitive workloads while maintaining flexibility where needed—can offer both resilience and innovation. But this journey starts with one critical step: ensuring infrastructure aligns with European values, governance, and control.

Daily Tech Digest - March 14, 2025


Quote for the day:

“Success does not consist in never making mistakes but in never making the same one a second time.” --George Bernard Shaw


The Maturing State of Infrastructure as Code in 2025

The progression from cloud-specific frameworks to declarative, multicloud solutions like Terraform represented the increasing sophistication of IaC capabilities. This shift enabled organizations to manage complex environments with never-before-seen efficiency. The emergence of programming language-based IaC tools like Pulumi then further blurred the lines between application development and infrastructure management, empowering developers to take a more active role in ops. ... For DevOps and platform engineering leaders, this evolution means preparing for a future where cloud infrastructure management becomes increasingly automated, intelligent and integrated with other aspects of the software development life cycle. It also highlights the importance of fostering a culture of continuous learning and adaptation, as the IaC landscape continues to evolve at a rapid pace. ... Firefly’s “State of Infrastructure as Code (IaC)” report is an annual pulse check on the rapidly evolving state of IaC adoption, maturity and impact. Over the course of the past few editions, this report has become an increasingly crucial resource for DevOps professionals, platform engineers and site reliability engineers (SREs) navigating the complexities of multicloud environments and a changing IaC tooling landscape.


Consent Managers under the Digital Personal Data Protection Act: A Game Changer or Compliance Burden?

The use of Consent Managers provides advantages for both Data Fiduciaries and Data Principals. For Data Fiduciaries, Consent Managers simplify compliance with consent-related legal requirements, making it easier to manage and document user consent in line with regulatory obligations. For Data Principals, Consent Managers offer a streamlined and efficient way to grant, modify, and revoke consent, empowering them with greater control over how their personal data is shared. This enhanced efficiency in managing consent also leads to faster, more secure, and smoother data flows, reducing the complexities and risks associated with data exchanges. Additionally, Consent Managers play a crucial role in helping Data Principals exercise their right to grievance redressal. ... Currently, Data Fiduciaries can manage user consent independently, making the role of Consent Managers optional. If this remains voluntary, many companies may avoid them, reducing their effectiveness. For Consent Managers to succeed, they need regulatory support, flexible compliance measures, and a business model that balances privacy protection with industry participation. ... Rooted in the fundamental right to privacy under Article 21 of the Constitution of India, the DPDPA aims to establish a structured approach to data processing while preserving individual control over personal information.


The future of AI isn’t the model—it’s the system

Enterprise leaders are thinking differently about AI in 2025. Several founders here told me that unlike in 2023 and 2024, buyers are now focused squarely on ROI. They want systems that move beyond pilot projects and start delivering real efficiencies. Mensch says enterprises have developed “high expectations” for AI, and many now understand that the hard part of deploying it isn’t always the model itself—it’s everything around it: governance, observability, security. Mistral, he says, has gotten good at connecting these layers, along with systems that orchestrate data flows between different models and subsystems. Once enterprises grapple with the complexity of building full AI systems—not just using AI models—they start to see those promised efficiencies, Mensch says. But more importantly, C-suite leaders are beginning to recognize the transformative potential. Done right, AI systems can radically change how information moves through a company. “You’re making information sharing easier,” he says. Mistral encourages its customers to break down silos so data can flow across departments. One connected AI system might interface with HR, R&D, CRM, and financial tools. “The AI can quickly query other departments for information,” Mensch explains. “You no longer need to query the team.”


Generative AI is finally finding its sweet spot, says Databricks chief AI scientist

Beyond the techniques, knowing what apps to build is itself a journey and something of a fishing expedition. "I think the hardest part in AI is having confidence that this will work," said Frankle. "If you came to me and said, 'Here's a problem in the healthcare space, here are the documents I have, do you think AI can do this?' my answer would be, 'Let's find out.'" ... "Suppose that AI could automate some of the most boring legal tasks that exist?" offered Frankle, whose parents are lawyers. "If you wanted an AI to help you do legal research, and help you ideate about how to solve a problem, or help you find relevant materials -- phenomenal!" "We're still in very early days" of generative AI, "and so, kind of, we're benefiting from the strengths, but we're still learning how to mitigate the weaknesses." ... In the midst of uncertainty, Frankle is impressed with how customers have quickly traversed the learning curve. "Two or three years ago, there was a lot of explaining to customers what generative AI was," he noted. "Now, when I talk to customers, they're using vector databases." "These folks have a great intuition for where these things are succeeding and where they aren't," he said of Databricks customers. Given that no company has an unlimited budget, Frankle advised starting with an initial prototype, so that investment only proceeds to the extent that it's clear an AI app will provide value.


Australia’s privacy watchdog publishes regulatory strategy prioritizing biometrics

The strategy plan includes a table of activities and estimated timelines, a detailed breakdown of actions in specific categories, and a list of projected long- and short-term outcomes. The goals are ambitious in scope: a desired short-term outcome is to “mature existing awareness about privacy across multiple domains of life” so that “individuals will develop a more nuanced understanding of privacy issues recognising their significance across various aspects of their lives, including personal, professional, and social domains.” Laws, skills training and better security tools are one thing, but changing how people understand their privacy is a major social undertaking. The OAIC’s long-term outcomes seem more rooted in practicality; they include the widespread implementation of enhanced privacy compliance practices for organizations, better public understanding of the OAIC’s role as regulator, and enhanced data handling industry standards. ... AI is a matter of going concern, and compliance for model training and development will be a major focus for the regulator. In late February, Kind delivered a speech on privacy and security in retail that references her decision on the Bunnings case, which led to the publication of guidance on the use of facial recognition technology, focused on four key privacy concepts: necessity/proportionality, consent/transparency, accuracy/bias, and governance.


Hiring privacy experts is tough — here’s why

“Some organizations think, ‘Well, we’re funding security, and privacy is basically the same thing, right?’ And I think that’s really one of my big concerns,” she says. This blending of responsibilities is reflected in training practices, according to Kazi, who notes how many organizations combine security and privacy training, which isn’t inherently problematic, but it carries risks. “One of the questions we ask in our survey is, ‘Do you combine security training and privacy training?’ Some organizations say they do not necessarily see it as a bad thing, but you can … be doing security, but you’re not doing privacy. And so that’s what’s highly concerning is that you can’t have privacy without security, but you could potentially do security well without considering privacy.” As Trovato emphasizes, “cybersecurity people tend to be from Mars and privacy people from Venus”, yet he also observes how privacy and cybersecurity professionals are often grouped together, adding to the confusion about what skills are truly needed. ... “Privacy includes how are we using data, how are you collecting it, who are you sharing it with, how are you storing it — all of these are more subtle component pieces, and are you meeting the requirements of the customer, of the regulator, so it’s a much more outward business focus activity day-to-day versus we’ve got to secure everything and make sure it’s all protected.”


Security Maturity Models: Leveraging Executive Risk Appetite for Your Secure Development Evolution

With developers under pressure to produce more code than ever before, development teams need to have a high level of security maturity to avoid rework. That necessitates having highly skilled personnel working within a strategic, prevention-focused framework. Developer and AppSec teams must work closely together, as opposed to the old model of operating as separate entities. Today, developers need to assume a significant role in ensuring security best practices. The most recent BSIMM report from Black Duck Software, for instance, found that there are only 3.87 AppSec professionals for every 100 developers, which doesn’t bode well for AppSec teams trying to secure an organization’s software all on their own. A critical part of learning initiatives is the ability to gauge the progress of developers in the program, both to ensure that developers are qualified to work on the organization’s most sensitive projects and to assess the effectiveness of the program. This upskilling should be ongoing, and you should always look for areas that can be improved. Making use of a tool like SCW’s Trust Score, which uses benchmarks to gauge progress both internally and against industry standards, can help ensure that progress is being made.


Why thinking like a tech company is essential for your business’s survival

The phrase “every company is a tech company” gets thrown around a lot, but what does that actually mean? To us, it’s not just about using technology — it’s about thinking like a tech company. The most successful tech companies don’t just refine what they already do; they reinvent themselves in anticipation of what’s next. They place bets. They ask: Where do we need to be in five or 10 years? And then, they start moving in that direction while staying flexible enough to adapt as the market evolves. ... Risk management is part of our DNA, but AI presents new types of risks that businesses haven’t dealt with before. ... No matter how good our technology is, our success ultimately comes down to people. And we’ve learned that mindset matters more than skill set. When we launched an AI proof-of-concept project for our interns, we didn’t recruit based on technical acumen. Instead, we looked for curious, self-starting individuals willing to experiment and learn. What we found was eye-opening—these interns thrived despite having little prior experience with AI. Why? Because they asked great questions, adapted quickly, and weren’t afraid to explore. ... Aligning your culture, processes and technology strategy ensures you can adapt to a rapidly changing landscape while staying true to your core purpose.


Realizing the Internet of Everything

The obvious answer to this problem is governance, a set of rules that constrain use and technology to enforce them. The problem, as it is so often with the “obvious,” is that setting the rules would be difficult and constraining use through technology would be difficult to do, and probably harder to get people to believe in. Think about Asimov’s Three Laws of Robotics and how many of his stories focused on how people worked to get around them. Two decades ago, a research lab did a video collaboration experiment that involved a small camera in offices so people could communicate remotely. Half the workforce covered their camera when they got in. I know people who routinely cover their webcams when they’re not on a scheduled video chat or meeting, and you probably do too. So what if the light isn’t on? Somebody has probably hacked in. Social concerns inevitably collide with attempts to integrate technology tightly with how we live. Have we reached a point where dealing with those concerns convincingly is essential in letting technology improve our work, our lives, further? We do have widespread, if not universal, video surveillance. On a walk this week, I found doorbell cameras or other cameras on about a quarter of the homes I passed, and I’d bet there are even more in commercial areas. 


Cloud Security Architecture: Your Guide to a Secure Infrastructure

Threat modeling can be a good starting point, but it shouldn't end with a stack-based security approach. Rather than focusing solely on the technologies, approach security by mapping parts of your infrastructure to equivalent security concepts. Here are some practical suggestions and areas to zoom in on for implementation. ... When protecting workloads in the cloud, consider using some variant of runtime security. Kubernetes users have no shortage of choice here with tools such as Falco, an open-source runtime security tool that monitors your applications and detects anomalous behaviors. However, chances are your cloud provider has some form of dynamic threat detection for your workloads. For example, AWS offers Amazon GuardDuty, which continuously monitors your workloads for malicious activity and unauthorized behavior. ... Implementing two-factor authentication adds an extra layer of protection by requiring a second form of verification, such as an authenticator app or a passkey, in addition to your password. While reaching for your authenticator app every time you log in might seem slightly inconvenient, it's a far better outcome than dealing with the aftermath of a breached account. The minor inconvenience is a small price to pay for the added security it provides.