
Daily Tech Digest - August 25, 2025


Quote for the day:

"The pain you feel today will be the strength you feel tomorrow." -- Anonymous


Proactive threat intelligence boosts security & resilience

Threat intelligence is categorised into four key areas, each serving a unique purpose within an organisation. Strategic intelligence provides executives with a high-level overview, covering broad trends and potential impacts on the business, including financial or reputational ramifications. This level of intelligence guides investment and policy decisions. Tactical intelligence is aimed at IT managers and security architects. It details the tactics, techniques, and procedures (TTPs) of threat actors, assisting in strengthening defences and optimising security tools. Operational intelligence is important for security operations centre analysts, offering insights into imminent or ongoing threats by focusing on indicators of compromise (IoCs), such as suspicious IP addresses or file hashes. Finally, technical intelligence concerns the most detailed level of threat data, offering timely information on IoCs. While valuable, its relevance can be short-lived as attackers frequently change tactics and infrastructure. ... Despite these benefits, many organisations face significant hurdles. Building an in-house threat intelligence capability requires a considerable investment in specialised personnel, tools, and continual data analysis. For small and mid-sized organisations, this can be a prohibitive challenge, despite the increasing frequency of targeted attacks by sophisticated adversaries.


Data Is a Dish Best Served Fresh: “In the Wild” Versus Active Exploitation

Combating internet-wide opportunistic exploitation is a complex problem, with new vulnerabilities being weaponized at an alarming rate. In addition to the staggering increase in volume, attackers, from APTs to criminal groups and botnets, are getting better at exploiting zero-day vulnerabilities at much higher frequency and on a massive scale. The amount of time between disclosure of a new vulnerability and the start of active exploitation has been drastically reduced, leaving defenders with little time to react and respond. On the internet, the difference between one person observing something and everyone else seeing it is often quantified in just minutes. ... Generally speaking, a lot of work goes into weaponizing a software vulnerability. It’s deeply challenging and requires advanced technical skill. We sometimes forget that attackers are deeply motivated by profit, just like businesses are. If attackers think something is a dead end, they won’t want to invest their time. So, investigating what attackers are up to via proxy is a good way to understand how much you need to care about a specific vulnerability. ... These targeted attacks threaten to circumvent existing defense capabilities and expose organizations to a new wave of disruptive breaches. In order to adequately protect their networks, defenders must evolve in response. Ultimately, there is no such thing as a set-and-forget single source of truth for cybersecurity data.


Quietly Fearless Leadership for 4 Golden Signals

Most leadership mistakes start with a good intention and a calendar invite. We’ve learned to lead by subtraction. It’s disarmingly simple: before we introduce a new ritual, tool, or acronym, we delete something that’s already eating cycles. If we can’t name what gets removed, we hold the idea until we can. The reason’s pragmatic: teams don’t fail because they lack initiatives; they fail because they’re full. ... As leaders, we also protect deep work. We move approvals to asynchronous channels and time-box them. Our job is to reduce decision queue time, not to write longer memos. Subtraction leadership signals trust. It says, “We believe you can do the job without us narrating it.” We still set clear constraints—budgets, reliability targets, security boundaries—but within those, we make space. ... Incident leadership isn’t a special hat; it’s a practiced ritual. We use the same six steps every time so people can stay calm and useful: declare, assign, annotate, stabilize, learn, thank. One sentence each: we declare loudly with a unique ID; we assign an incident commander who doesn’t touch keyboards; we annotate a live timeline; we stabilize by reducing blast radius; we learn with a blameless writeup; we thank the humans who did the work. Yes, every time. We script away friction. A tiny helper creates the channel, pins the template, and tags the right folks, so no one rifles through docs when cortisol’s high.
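The "tiny helper" mentioned above can be sketched in a few lines. This is a minimal illustration, not the authors' actual tooling: the function and field names are hypothetical, and a real version would call a chat platform's API to create the channel and tag responders.

```python
from datetime import datetime, timezone

# The six steps of the ritual, in order.
STEPS = ["declare", "assign", "annotate", "stabilize", "learn", "thank"]

def open_incident(summary, commander, responders):
    """Build everything a new incident needs up front, so nobody
    rifles through docs when cortisol's high."""
    now = datetime.now(timezone.utc)
    stamp = now.strftime("%Y%m%d-%H%M%S")
    return {
        "id": f"INC-{stamp}",                      # unique, sortable ID
        "channel": f"#inc-{stamp}",                # channel to create and pin
        "commander": commander,                    # IC who doesn't touch keyboards
        "tagged": sorted(set(responders) | {commander}),
        "timeline": [f"{now.isoformat()} declared: {summary}"],
        "checklist": {step: (step == "declare") for step in STEPS},
    }

incident = open_incident("checkout latency spike", "ana", ["bo", "cy"])
print(incident["channel"], incident["checklist"])
```

The point of the sketch is the shape, not the plumbing: declaring loudly with a unique ID and pre-filling the six-step checklist removes decisions from the moment of crisis.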


Private AI is the Future of BFSI Sector: Here’s Why

The public cloud, while offering initial scalability, presents significant hurdles for the Indian BFSI sector. Financial institutions manage vast troves of sensitive data. Storing and processing this data in a shared, external environment introduces unacceptable cyber risks. This is particularly critical in India, where regulators like the Reserve Bank of India (RBI) have stringent data localisation policies, making data sovereignty non-negotiable. ... Private AI offers a powerful solution to these challenges by creating a zero-trust, air-gapped environment. It keeps data and AI models on-premise, allowing institutions to maintain absolute control over their most valuable assets. It complies with regulatory mandates and global standards, mitigating the top barriers to AI adoption. The ability to guarantee that sensitive data never leaves the organisation’s infrastructure is a competitive advantage that public cloud offerings simply cannot replicate. ... For a heavily-regulated industry like BFSI, reaching such a level of automation and complying with regulations is quite the challenge. Private AI knocks it out of the park, paving the way for a truly secure and autonomous future. For the Indian BFSI sector, this means a significant portion of clerical and repetitive tasks will be handled by these AI-FTEs, allowing for a strategic redeployment of human capital into supervisory roles, which will, in turn, flatten organisational structures and boost retention.


Cyber moves from back office to boardroom – and investors are paying attention

Greater awareness has emerged as businesses shift from short-term solutions adopted during the pandemic to long-term, strategic partnerships with specialist cyber security providers. Increasingly, organizations recognize that cyber security requires an integrated approach involving continuous monitoring and proactive risk management. ... At the same time, government regulation is putting company directors firmly on the hook. The UK’s proposed Cyber Security and Resilience Bill will make senior executives directly accountable for managing cyber risks and ensuring operational resilience, bringing the UK closer to European frameworks like the NIS2 Directive and DORA. This is changing how cyber security is viewed at the top. It’s not just about ticking boxes or passing audits. It is now a central part of good governance. For investors, strong cyber capabilities are becoming a mark of well-run companies. For acquirers, it’s becoming a critical filter for M&A, particularly when dealing with businesses that hold sensitive data or operate critical systems. This regulatory push is part of a broader global shift towards greater accountability. In response, businesses are increasingly adopting governance models that embed cyber risk management into their strategic decision-making processes. 


Why satellite cybersecurity threats matter to everyone

There are several practices to keep in mind for developing a secure satellite architecture. First, establish situational awareness across the five segments of space by monitoring activity. You cannot protect what you cannot see, and there is limited real-time visibility into the cyber domain, which is critical to space operations. Second, be threat-driven when mitigating cyber risks. Vulnerability does not necessarily equal mission risk. It is important to prioritize mitigating those vulnerabilities that impact the particular mission of that small satellite. Third, make every space professional a cyber safety officer. Unlike any other domain, there are no operations in space without the cyber domain. Emotionally connecting the safety of the cyber domain to space mission outcomes is imperative. When designing a secure satellite architecture, it is critical to design with the probability of cyber security compromises front of mind. It is not realistic to design a completely “non-hackable” architecture. However, it is realistic to design an architecture that balances protection and resilience, designing protections that make the cost of compromise high for the adversary, and resilience that makes the cost of compromise low for the mission. Security should be built in at the lowest abstraction layer of the satellite, including containerization, segmentation, redundancy and compartmentalization.


Tiny quantum dots unlock the future of unbreakable encryption

For four decades, the holy grail of quantum key distribution (QKD) -- the science of creating unbreakable encryption using quantum mechanics -- has hinged on one elusive requirement: perfectly engineered single-photon sources. These are tiny light sources that can emit one particle of light (photon) at a time. But in practice, building such devices with absolute precision has proven extremely difficult and expensive. To work around that, the field has relied heavily on lasers, which are easier to produce but not ideal. These lasers send faint pulses of light that contain a small, but unpredictable, number of photons -- a compromise that limits both security and the distance over which data can be safely transmitted, as a smart eavesdropper can "steal" the information bits that are encoded simultaneously on more than one photon. ... To prove it wasn't just theory, the team built a real-world quantum communication setup using a room-temperature quantum dot source. They ran their new reinforced version of the well-known BB84 encryption protocol -- the backbone of many quantum key distribution systems -- and showed that their approach was not only feasible but superior to existing technologies. What's more, their approach is compatible with a wide range of quantum light sources, potentially lowering the cost and technical barriers to deploying quantum-secure communication on a large scale.
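The sifting step of BB84 that the excerpt references can be illustrated with a toy simulation. This is a deliberately idealized sketch (no eavesdropper, no channel noise, a perfect single-photon source); the function name and basis encoding are illustrative only.

```python
import random

def bb84_sift(n, seed=7):
    """Toy BB84 sifting: Alice encodes random bits in randomly chosen
    bases; Bob measures in his own random bases; both keep only the
    positions where their basis choices happened to match."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]  # rectilinear / diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    # With an ideal single-photon source, a matching-basis measurement
    # returns Alice's bit exactly; mismatched bases are discarded.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

key = bb84_sift(16)
print(len(key), key)  # on average about half the pulses survive sifting
```

The security argument in the article hinges on the single-photon assumption baked into the comment above: if a weak laser pulse carries two photons encoding the same bit, an eavesdropper can keep one and forward the other, which is exactly the leak that true single-photon quantum-dot sources close.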


Are regulatory frameworks fueling innovation or stalling expansion in the data center market?

On a basic level, demonstrating the broader value of a data center to its host market, whether through job creation or tax revenues, helps ensure alignment with evolving regulatory frameworks and reinforces confidence among financial institutions. From banks to institutional investors, visible community and policy alignment help de-risk these capital-intensive projects and strengthen the case for long-term investment. ... With regulatory considerations differing significantly from region to region, data center market growth isn’t linear. In the Middle East, for example, where policy is supportive and there is significant capital investment, it's somewhat easier to build and operate a data center than in places like the EU, where regulation is far more complex. Taking the UAE as an example, regulatory frameworks in the GCC around data sovereignty require data of national importance to be stored in the country of origin. ... In this way, the regulatory and data sovereignty policies are driving the need for localized data centers. However, due to the borderless nature of the digital economy, there is also a growing need for data centers to become location-agnostic, so that data can move in and out of regions with different regulatory frameworks and customers can establish global, not just local, hubs. 


Cross-border seamless travel is closer than you think

At the heart of this transformation is the Digital Travel Credential (DTC), developed by the International Civil Aviation Organization (ICAO). The DTC is a digital replica of your passport, securely stored and ready to be shared at the tap of a screen. But here’s the catch: the current version of the DTC packages all your passport information – name, number, nationality, date of birth – into one file. That works well for border agencies, who need the full picture. But airlines? They typically only require a few basic details to complete check-in and security screening. Sharing the entire passport file just to access your name and date of birth isn’t just inefficient; it’s a legal problem in many jurisdictions. Under data protection laws like the EU’s GDPR, collecting more personal information than necessary is a breach. ... While global standards take time to update, the aviation industry is already moving forward. Airlines, airports, and governments are piloting digital identity programs (using different forms of digital ID) and biometric journeys built around the principles of consent and minimal data use. IATA’s One ID framework is central to this momentum. One ID defines how a digital identity like the DTC can be used in practice: verifying passengers, securing consent, and enabling a paperless journey from curb to gate.


Tackling cybersecurity today: Your top challenge and strategy

The rise of cloud-based tools and hybrid work has made it easier than ever for employees to adopt new apps or services without formal review. While the intent is often to move faster or collaborate better, these unapproved tools open doors to data exposure, regulatory gaps, and untracked vendor risk. Our approach is to bring Shadow IT into the light. Using TrustCloud’s platform, organizations can automatically discover unmanaged applications, flag unauthorized connections, and map them to the relevant compliance controls. ... Shadow IT’s impact goes beyond convenience. Unvetted tools can expose sensitive data, introduce compliance gaps, and create hidden third-party dependencies. The stakes are even higher in regulated industries, where a single misstep can result in financial penalties or reputational damage. Analyst firms like Gartner predict that by 2027, nearly three-quarters of employees will adopt technology outside the IT team’s visibility, a staggering shift that leaves cybersecurity and compliance teams racing to maintain control. ... Without visibility and controls, every unsanctioned tool becomes a potential weak spot, complicating threat detection, increasing exposure to regulatory penalties, and making incident response far more challenging. For security and compliance teams, managing Shadow IT isn’t just about locking things down; it’s about regaining oversight and trust in an environment where technology adoption is decentralized and constant.
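At its core, shadow-IT discovery is a diff between what is actually in use and what is sanctioned. This minimal sketch (not TrustCloud's implementation; all names are illustrative) assumes you already have a list of observed app names, e.g. from SSO logs or network telemetry:

```python
def find_shadow_it(observed_apps, sanctioned):
    """Flag any observed application that is not in the sanctioned
    catalogue. Comparison is case-insensitive; duplicates collapse."""
    sanctioned_lc = {a.lower() for a in sanctioned}
    return sorted({a for a in observed_apps if a.lower() not in sanctioned_lc})

observed = ["Slack", "Notion", "RandomPDFTool", "Slack", "AcmeFileShare"]
flagged = find_shadow_it(observed, sanctioned=["slack", "notion"])
print(flagged)  # unmanaged tools surface for review
```

A production version would go further, mapping each flagged app to the compliance controls it touches, but the discovery step itself is this simple set difference run continuously.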

Daily Tech Digest - July 25, 2025


Quote for the day:

"Technology changes, but leadership is about clarity, courage, and creating momentum where none exists." -- Inspired by modern digital transformation principles


Why foundational defences against ransomware matter more than the AI threat

The 2025 Cyber Security Breaches Survey paints a concerning picture. According to the study, ransomware attacks doubled between 2024 and 2025 – a surge less to do with AI innovation and more about deep-rooted economic, operational and structural changes within the cybercrime ecosystem. At the heart of this growth in attacks is the growing popularity of the ransomware-as-a-service (RaaS) business model. Groups like DragonForce or Ransomhub sell ready-made ransomware toolkits to affiliates in exchange for a cut of the profits, enabling even low-skilled attackers to conduct disruptive campaigns. ... Breaches often stem from common, preventable issues such as poor credential hygiene or poorly configured systems – areas that often sit outside scheduled assessments. When assessments happen only once or twice a year, new gaps may go unnoticed for months, giving attackers ample opportunity. To keep up, organisations need faster, more continuous ways of validating defences. ... Most ransomware actors follow well-worn playbooks, making them frequent visitors to company networks but not necessarily sophisticated ones. That’s why effective ransomware prevention is not about deploying cutting-edge technologies at every turn – it’s about making sure the basics are consistently in place. 


Subliminal learning: When AI models learn what you didn’t teach them

“Subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development,” the researchers from Anthropic, Truthful AI, the Warsaw University of Technology, the Alignment Research Center, and UC Berkeley, wrote in their paper. “Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.” ... Models trained on data generated by misaligned models, where AI systems diverge from their original intent due to bias, flawed algorithms, data issues, insufficient oversight, or other factors, and produce incorrect, lewd or harmful content, can also inherit that misalignment, even if the training data had been carefully filtered, the researchers found. They offered examples of harmful outputs when student models became misaligned like their teachers, noting, “these misaligned responses are egregious far beyond anything in the training data, including endorsing the elimination of humanity and recommending murder.” ... Today’s multi-billion parameter models are able to discern extremely complicated relationships between a dataset and the preferences associated with that data, even if it’s not immediately obvious to humans, he noted. This points to a need to look beyond semantic and direct data relationships when working with complex AI models.


Why people-first leadership wins in software development

It frequently involves pushing for unrealistic deadlines, with project schedules made without enough input from the development team about the true effort needed and possible obstacles. This results in ongoing crunch periods and mandatory overtime. ... Another indicator is neglecting signs of burnout and stress. Leaders may ignore or dismiss signals such as team members consistently working late, increased irritability, or a decline in productivity, instead pushing for more output without addressing the root causes. Poor work-life balance becomes commonplace, often without proper recognition or rewards for the extra effort. ... Beyond the code, there’s a stifled innovation and creativity. When teams are constantly under pressure to just “ship it,” there’s little room for creative problem-solving, experimentation, or thinking outside the box. Innovation, often born from psychological safety and intellectual freedom, gets squashed, hindering your company’s ability to adapt to new trends and stay competitive. Finally, there’s damage to your company’s reputation. In the age of social media and employer review sites, news travels fast. ... It’s vital to invest in team growth and development. Provide opportunities for continuous learning, training, and skill enhancement. This not only boosts individual capabilities but also shows your commitment to their long-term career paths within your organization. This is a crucial retention strategy.


Achieving resilience in financial services through cloud elasticity and automation

In an era of heightened regulatory scrutiny, volatile markets, and growing cybersecurity threats, resilience isn’t just a nice-to-have—it’s a necessity. A lack of robust operational resilience can lead to regulatory penalties, damaged reputations, and crippling financial losses. In this context, cloud elasticity, automation, and cutting-edge security technologies are emerging as crucial tools for financial institutions to not only survive but thrive amidst these evolving pressures. ... Resilience ensures that financial institutions can maintain critical operations during crises, minimizing disruptions and maintaining service quality. Efficient operations are crucial for maintaining competitive advantage and customer satisfaction. ... Effective resilience strategies help institutions manage diverse risks, including cyber threats, system failures, and third-party vulnerabilities. The complexity of interconnected systems and the rapid pace of technological advancement add layers of risk that are difficult to manage. ... Financial institutions are particularly susceptible to risks such as system failures, cyberattacks, and third-party vulnerabilities. ... As financial institutions navigate a landscape marked by heightened risk, evolving regulations, and increasing customer expectations, operational resilience has become a defining imperative.


Digital attack surfaces expand as key exposures & risks double

Among OT systems, the average number of exposed ports per organisation rose by 35%, with Modbus (port 502) identified as the most commonly exposed, posing risks of unauthorised commands and potential shutdowns of key devices. The exposure of Unitronics port 20256 surged by 160%. The report cites cases where attackers, such as the group "CyberAv3ngers," targeted industrial control systems during conflicts, exploiting weak or default passwords. ... The number of vulnerabilities identified on public-facing assets more than doubled, rising from three per organisation in late 2024 to seven in early 2025. Critical vulnerabilities dating as far back as 2006 and 2008 still persist on unpatched systems, with proof-of-concept code readily available online, making exploitation accessible even to attackers with limited expertise. The report also references the continued threat posed by ransomware groups who exploit such weaknesses in internet-facing devices. ... Incidents involving exposed access keys, including cloud and API keys, doubled from late 2024 to early 2025. Exposed credentials can enable threat actors to enter environments as legitimate users, bypassing perimeter defenses. The report highlights that most exposures result from accidental code pushes to public repositories or leaks on criminal forums.
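Because not every open port carries equal risk, triage helps: the OT protocol ports named in the report deserve attention before ordinary web ports. A minimal sketch (port descriptions paraphrase the report; the function and data shapes are illustrative):

```python
# OT/ICS ports called out in the report, with why exposure matters.
RISKY_PORTS = {
    502:   "Modbus: unauthorised commands can shut down key devices",
    20256: "Unitronics: targeted via weak or default passwords",
}

def triage_exposures(findings):
    """Split scanner findings (host, port) so OT protocol exposures
    surface ahead of ordinary open ports."""
    critical, other = [], []
    for host, port in findings:
        (critical if port in RISKY_PORTS else other).append((host, port))
    return critical, other

critical, other = triage_exposures([("10.0.0.5", 502), ("10.0.0.9", 443),
                                    ("10.0.1.2", 20256)])
print(critical)
```

The same pattern extends naturally to the report's other exposure classes, for example treating any finding tied to a leaked access key as critical regardless of port.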


How Elicitation in MCP Brings Human-in-the-Loop to AI Tools

Elicitation represents more than an incremental protocol update. It marks a shift toward collaborative AI workflows, where the system and human co-discover missing context rather than expecting all details upfront. Python developers building MCP tools can now focus on core logic and delegate parameter gathering to the protocol itself, allowing for a more streamlined approach. Clients declare an elicitation capability during initialization, so servers know they may elicit input at any time. That standardized interchange liberates developers from generating custom UIs or creating ad hoc prompts, ensuring coherent behaviour across diverse MCP clients. ... Elicitation transforms human-in-the-loop (HITL) workflows from an afterthought to a core capability. Traditional AI systems often struggle with scenarios that require human judgment, approval, or additional context. Developers had to build custom solutions for each case, leading to inconsistent experiences and significant development overhead. With elicitation, HITL patterns become natural extensions of tool functionality. A database migration tool can request confirmation before making irreversible changes. A document generation system can gather style preferences and content requirements through guided interactions. An incident response tool can collect severity assessments and stakeholder information as part of its workflow.
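The database-migration confirmation case above can be made concrete at the protocol level. The sketch below builds the elicitation request as a plain JSON-RPC message; the `elicitation/create` method and `requestedSchema` field follow my reading of the MCP elicitation design, so treat the exact field names as illustrative rather than normative, and prefer an official MCP SDK in real code.

```python
import json

def make_elicitation_request(request_id, message, schema):
    """Server-to-client elicitation request: ask the human for input
    mid-workflow, described by a flat JSON schema the client can
    render as a form or prompt."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "elicitation/create",
        "params": {
            "message": message,           # shown to the user
            "requestedSchema": schema,    # what the tool still needs
        },
    }

req = make_elicitation_request(
    7,
    "This migration is irreversible. Proceed?",
    {"type": "object",
     "properties": {"confirm": {"type": "boolean"}},
     "required": ["confirm"]},
)
print(json.dumps(req, indent=2))
```

Because the schema travels with the request, every MCP client can render the same confirmation step in its own UI, which is exactly the "coherent behaviour across diverse MCP clients" the excerpt describes.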


Cognizant Agents Gave Hackers Passwords, Clorox Says in Lawsuit

“Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques,” the company says in its partially redacted 19-page complaint. “The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox’s network, and Cognizant handed the credentials right over. Cognizant is on tape handing over the keys to Clorox’s corporate network to the cybercriminal – no authentication questions asked.” ... The threat actors made multiple calls to the Cognizant help desk, essentially asking for new passwords and getting them without any effort to verify them, Clorox wrote. They then used those new credentials to gain access to the corporate network, launching a “debilitating” attack that “paralyzed Clorox’s corporate network and crippled business operations. And to make matters worse, when Clorox called on Cognizant to provide incident response and disaster recovery support services, Cognizant botched its response and compounded the damage it had already caused.” In a statement to media outlets, a Cognizant spokesperson said it was “shocking that a corporation the size of Clorox had such an inept internal cybersecurity system to mitigate this attack.” While Clorox is placing the blame on Cognizant, “the reality is that Clorox hired Cognizant for a narrow scope of help desk services which Cognizant reasonably performed. Cognizant did not manage cybersecurity for Clorox,” the spokesperson said.


Digital sovereignty becomes a matter of resilience for Europe

Open-source and decentralized technologies are essential to advancing Europe’s strategic autonomy. Across cybersecurity, communications, and foundational AI, we’re seeing growing support for open-source infrastructure, now treated with the same strategic importance once reserved for energy, water and transportation. The long-term goal is becoming clear: not to sever global ties, but to reduce dependencies by building credible, European-owned alternatives to foreign-dominated systems. Open-source is a cornerstone of this effort. It empowers European developers and companies to innovate quickly and transparently, with full visibility and control, essential for trust and sovereignty. Decentralized systems complement this by increasing resilience against cyber threats, monopolistic practices and commercial overreach by “big tech”. While public investment is important, what Europe needs most is a more “risk-on” tech environment, one that rewards ambition, accelerated growth and enables European players to scale and compete globally. Strategic autonomy won’t be achieved by funding alone, but by creating the right innovation and investment climate for open technologies to thrive. Many sovereign platforms emphasize end-to-end encryption, data residency, and open standards. Are these enough to ensure trust, or is more needed to truly protect digital independence?



Building better platforms with continuous discovery

Platform teams are often judged by stability, not creativity. Balancing discovery with uptime and reliability takes effort. So does breaking out of the “tickets and delivery” cycle to explore problems upstream. But the teams that manage it? They build platforms that people want to use, not just have to use. Start by blocking time for discovery in your sprint planning, measuring both adoption and friction metrics, and most importantly, talking to your users periodically rather than waiting for them to come to you with problems. Cultural shifts like this take time because you're not just changing the process; you're changing what people believe is acceptable or expected. That kind of change doesn't happen just because leadership says it should, or because a manager adds a new agenda to planning meetings. It sticks when ICs feel inspired and safe enough to work differently and when managers back that up with support and consistency. Sometimes a C-suite champion helps set the tone, but day-to-day, it's middle managers and senior ICs who do the slow, steady work of normalizing new behavior. You need repeated proof that it's okay to pause and ask why, to explore, to admit uncertainty. Without that psychological safety, people just go back to what they know: deliverables and deadlines. 


AI-enabled software development: Risk of skill erosion or catalyst for growth?

We need to reframe AI not as a rival, but as a tool—one that has its own pros and cons and can extend human capability, not devalue it. This shift in perspective opens the door to a broader understanding of what it means to be a skilled engineer today. Using AI doesn’t eliminate the need for expertise—it changes the nature of that expertise. Classical programming, once central to the developer’s identity, becomes one part of a larger repertoire. In its place emerge new competencies: critical evaluation, architectural reasoning, prompt literacy, source skepticism, interpretative judgment. These are not hard skills, but meta-cognitive abilities—skills that require us to think about how we think. We’re not losing cognitive effort—we’re relocating it. This transformation mirrors earlier technological shifts. ... Some of the early adopters of AI enablement are already looking ahead—not just at the savings from replacing employees with AI, but at the additional gains those savings might unlock. With strategic investment and redesigned expectations, AI can become a growth driver—not just a cost-cutting tool. But upskilling alone isn’t enough. As organizations embed AI deeper into the development workflow, they must also confront the technical risks that come with automation. The promise of increased productivity can be undermined if these tools are applied without adequate context, oversight, or infrastructure.

Daily Tech Digest - June 15, 2025


Quote for the day:

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” -- Mark Twain



Gazing into the future of eye contact

Eye contact is a human need. But it also offers big business benefits. Brain scans show that eye contact activates parts of the brain linked to reading others’ feelings and intentions, including the fusiform gyrus, medial prefrontal cortex, and amygdala. These brain regions help people figure out what others are thinking or feeling, which we all need for trusting business and work relationships. ... If you look into the camera to simulate eye contact, you can’t see the other person’s face or reactions. This means both people always appear to be looking away, even if they are trying to pay attention. It is not just awkward — it changes how people feel and behave. ... The iContact Camera Pro is a 4K webcam that uses a retractable arm that places the camera right in your line of sight so that you can look at the person and the camera at the same time. It lets you adjust video and audio settings in real time. It’s compact and folds away when not in use. It’s also easy to set up with a USB-C connection and works with Zoom, Microsoft Teams, Google Meet, and other major platforms. ... Finally, there’s Casablanca AI, software that fixes your gaze in real time during video calls, so it looks like you’re making eye contact even when you’re not. It works by using AI and GAN technology to adjust both your eyes and head angle, keeping your facial expressions and gestures natural, according to the company.


New York passes a bill to prevent AI-fueled disasters

“The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” said Senator Gounardes. “The people that know [AI] the best say that these risks are incredibly likely […] That’s alarming.” The RAISE Act is now headed for New York Governor Kathy Hochul’s desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, New York’s AI safety bill would require the world’s largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York’s attorney general to bring civil penalties of up to $30 million. The RAISE Act aims to narrowly regulate the world’s largest companies — whether they’re based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill’s transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly, more than any AI model available today), and are being made available to New York residents.


The ZTNA Blind Spot: Why Unmanaged Devices Threaten Your Hybrid Workforce

The risks are well-documented and growing. But many of the traditional approaches to securing these endpoints fall short—adding complexity without truly mitigating the threat. It’s time to rethink how we extend Zero Trust to every user, regardless of who owns the device they use. ... The challenge of unmanaged endpoints is no longer theoretical. In the modern enterprise, consultants, contractors, and partners are integral to getting work done—and they often need immediate access to internal systems and sensitive data. BYOD scenarios are equally common. Executives check dashboards from personal tablets, marketers access cloud apps from home desktops, and employees work on personal laptops while traveling. In each case, IT has little to no visibility or control over the device’s security posture. ... To truly solve the BYOD and contractor problem, enterprises need a comprehensive ZTNA solution that applies to all users and all devices under a single policy framework. The foundation of this approach is simple: trust no one, verify everything, and enforce policies consistently. ... The shift to hybrid work is permanent. That means BYOD and third-party access are not exceptions—they’re standard operating procedures. It’s time for enterprises to stop treating unmanaged devices as an edge case and start securing them as part of a unified Zero Trust strategy.


3 reasons I'll never trust an SSD for long-term data storage

SSDs rely on NAND flash memory, which inevitably wears out after a finite number of write cycles. Every time you write data to an SSD and erase it, you use up one write cycle. Most manufacturers specify the write endurance for their SSDs, usually expressed in terabytes written (TBW). ... When I first started using SSDs, I was under the impression that I could just leave them on the shelf for a few years and access all my data whenever I wanted. Unfortunately, that's not how NAND flash memory works. The data stored in each cell leaks over time; the electric charge used to represent a bit can degrade, and if you don't power on the drive periodically to refresh the NAND cells, those bits can become unreadable. This is called charge leakage, and it gets worse with SSDs using lower-end NAND flash memory. Most consumer SSDs these days use TLC and QLC NAND flash memory, which don't retain data as well as SLC and MLC flash. ... A sudden power loss during critical write operations can corrupt data blocks and make recovery impossible. That's because SSDs often utilize complex caching mechanisms and intricate wear-leveling algorithms to optimize performance. During an abrupt shutdown, these processes might fail to complete correctly, leaving your data corrupted.
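The TBW endurance arithmetic in the excerpt can be made concrete. The sketch below is illustrative only — the 600 TBW rating and 50 GB/day write volume are assumed figures, not any particular drive's specification:

```python
# Rough SSD lifespan estimate from a manufacturer's TBW (terabytes written) rating.
def years_until_tbw_exhausted(tbw_rating_tb: float, daily_writes_gb: float) -> float:
    """Years until cumulative writes reach the rated terabytes written."""
    daily_writes_tb = daily_writes_gb / 1000  # GB -> TB (decimal units)
    return tbw_rating_tb / daily_writes_tb / 365

# Assumed example: a 1 TB consumer drive rated for 600 TBW, written 50 GB/day.
print(f"{years_until_tbw_exhausted(600, 50):.1f} years")  # ≈ 32.9 years
```

For typical desktop workloads, write endurance is rarely the limiting factor — which is consistent with the excerpt's point that charge leakage on an unpowered shelf drive, not write exhaustion, is the bigger archival risk.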


Beyond the Paycheck: Where IT Operations Careers Outshine Software Development

On the whole, working in IT tends to be more dynamic than working as a software developer. As a developer, you're likely to spend the bulk of your time writing code using a specific set of programming languages and frameworks. Your day-to-day, month-to-month, and year-to-year work will center on churning out never-ending streams of application updates. The tasks that fall to IT engineers, in contrast, tend to be more varied. You might troubleshoot a server failure one day and set up a RAID array the next. You might spend part of your day interfacing with end users, then go into strategic planning meetings with executives. ... IT engineers tend to be less abstracted from end users, with whom they often interact on a daily basis. In contrast, software engineers are more likely to spend their time writing code while rarely, if ever, watching someone use the software they produce. As a result, it can be easier in a certain respect for someone working in IT as compared to software development to feel a sense of satisfaction.  ... While software engineers can move into adjacent types of roles, like site reliability engineering, IT operations engineers arguably have a more diverse set of easily pursuable options if they want to move up and out of IT operations work.


Europe is caught in a cloud dilemma

The European Union is worried about its reliance on the leading US-based cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These large-scale players hold an unrivaled influence over the cloud sector and manage vital infrastructure essential for driving economies and fostering innovation. European policymakers have raised concerns that their heavy dependence exposes the continent to vulnerabilities, constraints, and geopolitical uncertainties. ... Europe currently lacks cloud service providers that can challenge those global Goliaths. Despite efforts like Gaia-X that aim to change this, it’s not clear if Europe can catch up anytime soon. It will be a prohibitively expensive undertaking to build large-scale cloud infrastructure in Europe that is both cost-efficient and competitive. In a nutshell, Europe’s hope to adopt top-notch cloud technology without the countries that currently dominate the industry is impractical, considering current market conditions. ... Often companies view cloud integration as merely a checklist or set of choices to finalize their cloud migration. This frequently results in tangled networks and isolated silos. Instead, businesses should overhaul their existing cloud environment with a comprehensive strategy that considers both immediate needs and future goals as well as the broader geopolitical landscape.


Applying Observability to Leadership to Understand and Explain your Way of Working

Leadership observability means observing yourself as you lead. Alex Schladebeck shared at OOP conference how narrating thoughts, using mind maps, asking questions, and identifying patterns helped her as a leader to explain decisions, check bias, support others, and understand her actions and challenges. Employees and other leaders around you want to understand what leads to your decisions, Schladebeck said. ... Heuristics give us our "gut feeling". And that’s useful, but it’s better if we’re able to take a step back and get explicit about how we got to that gut feeling, Schladebeck mentioned. If we categorise and label things and explain what experiences lead us to our gut feeling, then we have the option of checking our bias and assumptions, and can help others to develop the thinking structures to make their own decisions, she explained ... Schladebeck recommends that leaders narrate their thoughts to reflect on, and describe their own work to the ones they are leading. They can do this by asking themselves questions like, "Why do I think that?", "What assumptions am I basing this on?", "What context factors am I taking into account?" Look for patterns, categories, and specific activities, she advised, and then you can try to explain these things to others around you. To visualize her thinking as a leader, Schladebeck uses mind maps.


Data Mesh: The Solution to Financial Services' Data Management Nightmare

Data mesh is not a technology or architecture, but an organizational and operational paradigm designed to scale data in complex enterprises. It promotes domain-oriented data ownership, where teams manage their data as a product, using a self-service infrastructure and following federated governance principles. In a data mesh, any team or department within an organization becomes accountable for the quality, discoverability, and accessibility of the data products they own. The concept emerged around five years ago as a response to the bottlenecks and limitations created by centralized data engineering teams acting as data gatekeepers. ... In a data mesh model, data ownership and stewardship are assigned to the business domains that generate and use the data. This means that teams such as credit risk, compliance, underwriting, or investment analysis can take responsibility for designing and maintaining the data products that meet their specific needs. ... Data mesh encourages clear definitions of data products and ownership, which helps reduce the bottlenecks often caused by fragmented data ownership or overloaded central teams. When combined with modern data technologies — such as cloud-native platforms, data virtualization layers, and orchestration tools — data mesh can help organizations connect data across legacy mainframes, on-premises databases, and cloud systems.
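As a hedged illustration of "data as a product", a domain team's data product might carry ownership, discoverability, and quality metadata along these lines. The descriptor format, field names, and SLA values are assumptions for the sketch, not a standard schema:

```python
# Illustrative data-product descriptor owned by a business domain in a data mesh.
credit_risk_scores = {
    "name": "credit_risk.scores.daily",
    "owner": "credit-risk-team",  # the accountable business domain, not a central team
    "description": "Daily default-probability scores for the retail lending portfolio",
    "schema": {"customer_id": "string", "pd_score": "float", "as_of": "date"},
    "sla": {"freshness_hours": 24, "completeness_pct": 99.5},
    "access": "self-service via the platform catalog, subject to federated policy",
}
```

Publishing descriptors like this to a shared catalog is one way the discoverability and accountability the paragraph describes become checkable rather than aspirational.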


Accelerating Developer Velocity With Effective Platform Teams

Many platform engineering initiatives fail, not because of poor technology choices, but because they miss the most critical component: genuine collaboration. The most powerful internal developer platforms aren’t just technology stacks; they’re relationship accelerators that fundamentally transform the way teams work together. Effective platform teams have a deep understanding of what a day in the life of a developer, security engineer or operations specialist looks like. They know the pressures these teams face, their performance metrics and the challenges that frustrate them most. ... The core mission of platform teams is to enable faster software delivery by eliminating complexity and cognitive load. Put simply: Make the right way the easiest way. Developer experience extends beyond function; it’s about creating delight and demonstrating that the platform team cares about the human experience, not just technical capabilities. The best platforms craft natural, intuitive interfaces that anticipate questions and incorporate error messages that guide, rather than confuse. Platform engineering excellence comes from making complex things appear simple. It’s not about building the most sophisticated system; it’s about reducing complexity so developers can focus on creating business value.


AI agents will be ambient, but not autonomous - what that means for us

Currently, the AI assistance that users receive is deterministic; that is, humans are expected to enter a command in order to receive an intended outcome. With ambient agents, there is a shift in how humans fundamentally interact with AI to get the desired outcomes they need; the AI assistants rely instead on environmental cues. "Ambient agents we define as agents that are triggered by events, run in the background, but they are not completely autonomous," said Chase. He explains that ambient agents benefit employees by allowing them to expand their magnitude and scale themselves in ways they could not previously do. ... When talking about these types of ambient agents with advanced capabilities, it's easy to become concerned about trusting AI with your data and with executing actions of high importance. To tackle that concern, it is worth reiterating Chase's definition of ambient agents -- they're "not completely autonomous." ... "It's not deterministic," added Jokel. "It doesn't always give you the same outcome, and we can build scaffolding, but ultimately you still want a human being sitting at the keyboard checking to make sure that this decision is the right thing to do before it gets executed, and I think we'll be in that state for a relatively long period of time."





Daily Tech Digest - June 03, 2025


Quote for the day:

"Keep your fears to yourself, but share your courage with others." -- Robert Louis Stevenson


Is it Time to Accept that the Current Role of the CISO Has Failed?

First of all, it was never conceived as a true C-level role. It probably originated in the minds of some organisation consultants, but it never developed any true C-level weight. Even if it may hurt some, it is my opinion that it was very rarely given to people with true C-level potential. Second, it was almost always given to technologists by trade or background, although the underlying matter is unequivocally cross-functional and has always been: You cannot be successful around identity and access management, for example, without the involvement of HR and business units, and the ability to reach credibly towards them. ... It has aggregated a mixed set of responsibilities and accountabilities without building up the right organisational and managerial momentum, and many CISOs are simply being set up to fail: The role has become too complex to carry for the profile of the people it attracts. To break this spiral, the logic is now to split the role, stripping off the managerial layers it has accumulated over the years and refocusing the role of the CISO on its native technical content so that it can lead effectively and efficiently at that level, while at the same time bringing up a CSO role able to reach across business, IT and support functions to take charge of the corporate complexity cybersecurity is now amalgamating in large firms.


How to Fortify Your Business’s Online Infrastructure Against Downtime

The first step to protecting your online infrastructure against downtime is to assess just how much downtime risk is viable for your business. Understanding how much downtime you can realistically afford is important for developing a sound IT strategy. Your viable downtime limit will define your tolerance to risk and allow you to direct your resources toward the systems that keep your operations running optimally as far as possible. The average accepted downtime rate for a website is just 0.05%. That means your systems should experience uptime at least 99.95% of the time. If you have a low risk tolerance – say, for instance, if you rely on an ecommerce platform to generate revenue – investing in IT continuity technology is essential for keeping downtime minimal. ... The first step to safeguarding your organization against cyberattacks is to regularly audit your network security measures. This helps to spot vulnerabilities and address them, ensuring your IT systems are always protected against continuously advancing threats. Begin by creating a map of your existing network infrastructure, including all of its user access points, hardware, and software. This map will allow you to keep track of changes and quickly identify unauthorized changes and additions.
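The 0.05% figure above translates into a concrete annual downtime budget; a quick sketch of the arithmetic:

```python
# Convert an uptime target (e.g. 99.95%) into an annual downtime budget.
def downtime_minutes_per_year(uptime_percent: float) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - uptime_percent / 100)

budget = downtime_minutes_per_year(99.95)
print(f"{budget:.1f} min/yr (~{budget / 60:.1f} hours)")  # 262.8 min/yr (~4.4 hours)
```

So a 99.95% target allows roughly four and a half hours of outage per year — a useful yardstick when deciding how much to invest in continuity technology.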


Private cloud still matters—but it doesn’t matter most

Large enterprises will maintain significant on-premises footprints for the foreseeable future, for all the reasons we’ve discussed. The enterprise IT landscape in 2025 is undeniably hybrid and likely always will be. But it’s equally undeniable that the center of gravity for innovation has shifted. When a new opportunity emerges—say, deploying a breakthrough AI model or scaling a customer-facing app to millions of users overnight—companies aren’t spinning up a new on-premises cluster to meet the moment. They’re tapping the virtually unlimited resources of AWS, Azure, Google, or edge networks like Cloudflare. They’re doing so because cloud offers experimentation without hardware procurement, and success isn’t gated by how many servers you happen to own. Private clouds excel at running the known and steady. Public clouds excel at unleashing the unknown and extraordinary. As we reach a cloud/on-prem equilibrium, this division of labor is becoming clearer. The day-to-day workloads that keep the business running may happily live in a familiar private cloud enclave. But the industry-defining projects, the ones leaders hope will define the business’s future, gravitate to infrastructure that can stretch to any size, in any region, at a moment’s notice. 


Why Generative AI Needs Architecture, Not Just APIs

The root of the problem often lies in treating gen AI as an add-on to legacy systems rather than embedding it into core operations. This leads to inconsistent implementation, unclear ownership and limited returns. To deliver meaningful outcomes, organizations must start by identifying areas where gen AI can enhance decisions, such as customer engagement, service workflows and regulatory compliance. ... When the focus is only on launching siloed applications, organizations may move fast initially, but they end up with systems that are difficult to scale, integrate or adapt. That's where architecture-centric thinking becomes critical. A strong architectural foundation built on modularity, interoperability and scalability ensures that future applications don't just add features but add value as one needs to build to last. This means building platforms that support change, not just one-off projects. It's also about fostering collaboration between business and IT, so decisions can be made with both speed and stability in mind. ... The "situational layer cake" architecture enables enterprises to build applications in distinct layers, such as enterprisewide, division-specific and implementation layers, facilitating a balance between reusability and customization. This structure allows the creation of reusable components that can be tailored to specific business contexts without redundant coding, streamlining operations and reducing complexity.


Scattered Spider: Understanding Help Desk Scams and How to Defend Your Organization

The goal of a help desk scam is to get the help desk operator to reset the credentials and/or MFA used to access an account so the attacker can take control of it. They'll use a variety of backstories and tactics to get that done, but most of the time it's as simple as saying "I've got a new phone, can you remove my existing MFA and allow me to enroll a new one?" From there, the attacker is then sent an MFA reset link via email or SMS. Usually, this would be sent to, for example, a number on file — but at this point, the attacker has already established trust and bypassed the help desk process to a degree. So asking "Can you send it to this email address" or "I've actually got a new number too, can you send it to…" gets this sent directly to the attacker. ... But, help desks are a target for a reason. They're "helpful" by nature. This is usually reflected in how they're operated and performance measured — delays won't help you to hit those SLAs! Ultimately, a process only works if employees are willing to adhere to it — and can't be socially engineered to break it. Help desks that are removed from day-to-day operations are also inherently susceptible to attacks where employees are impersonated. But, the attacks we're experiencing at the moment should give security stakeholders plenty of ammunition as to why help desk reforms are vital to securing the business.


Banking on intelligence: How AI is powering the next evolution of financial services

With constantly evolving regulations, financial institutions need stringent compliance measures to avoid penalties and disruptions. AI steps in as a powerful ally, automating compliance tasks to slash manual workloads and boost reporting accuracy. AI agents digest regulatory data, churn out compliance reports, and handle KYC/AML validations—cutting errors while speeding up the process. While implementing the changes, financial institutions must comply with data localisation mandates and ensure AI solutions are hosted within India. To mitigate data privacy risks, personally identifiable information (PII) is anonymised, and AI is deployed within Virtual Private Cloud environments. AI systems automate document verification, ensuring consistent validation and improving audit readiness. ... AI-enabled Underwriting Workbench is an immensely helpful tool for streamlining documentation and offering a single-window interface. GenAI further enhances credit assessments by analysing alternative data—like transaction history, social media, and employment records—offering a comprehensive view of an applicant’s financial health. This enables banks to make inclusive, risk-aware lending decisions. Agentic AI further calibrates the process by automating tasks like application assessments and borrower information verifications, enabling near-instant loan decisions with minimal human intervention.
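One way the PII anonymisation step described above might look in practice is salted hashing of identifying fields before records enter an AI pipeline. This is a sketch under assumed field names and a hashing scheme chosen for illustration, not the institutions' actual method:

```python
# Pseudonymise PII fields with a salted hash so downstream AI systems see
# stable, non-reversible tokens instead of raw identifiers.
import hashlib

SALT = b"rotate-me-and-store-in-a-secrets-manager"  # illustrative placeholder
PII_FIELDS = {"name", "phone", "pan"}               # assumed field list

def pseudonymise(record: dict) -> dict:
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            out[key] = digest[:12]  # stable token; not reversible without the salt
        else:
            out[key] = value
    return out

print(pseudonymise({"name": "A. Kumar", "phone": "9876543210", "txn_amount": 1200}))
```

Because the same input always maps to the same token, joins across records still work — which is what lets anonymised data remain useful for the credit assessments the passage describes.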


Why the end of Google as we know it could be your biggest opportunity yet

Now, before you think I'm writing Google's obituary, let me be clear. Like I've said before, I'm confident they'll figure it out, even if that means changing their business model. That said, if your business depends on Google in any way, whether it's your business profile, reviews, SEO, or products like Ad Manager to drive traffic, you need to pay attention to what's happening. ... The Department of Justice and several states are suing Google's parent company, Alphabet, arguing that its exclusive deals with companies like Apple are anticompetitive and potentially monopolistic. Basically, Google is paying billions to be the default search engine on Apple devices, effectively shutting out any real competition. The ruling in this case could break up their reported $20 billion-a-year agreement. ... Long story short, the way people discover, research, and choose businesses is changing one AI update at a time, but it's essential to note that people are still searching, just not in the same places they used to. That nuance is critical to understanding your next move. As more users turn to AI tools like ChatGPT and Perplexity for answers, traditional search engines are no longer the only gateway to your business. This shift in behavior over time will result in less traffic to your product or service. 


How global collaboration is hitting cybercriminals where it hurts

Collaboration and intelligence sharing are at the heart of our approach to tackling the threat within the NCA, and we enjoy relationships with partners across the public and private sector both nationally and internationally. We’re united and motivated, in many ways, by a common mission. Some of these are formalised law enforcement relationships that we have had for a long time – for example, I was the NCA’s embed to the FBI in Washington DC for a number of years. But, it is not just limited to the US – the NCA is lucky to enjoy brilliant relationships with the ‘five eyes’ countries and partners across Europe and beyond in the fight against cybercrime. ... In the NCA, we are predominantly focused on financially motivated cybercrime, with ransomware as a main area of focus given how significant the threat it poses to the UK. We recognise that some cybercrime groups have connections to the Russian State, but assess that these type of deep-rooted relationships are likely to be the exception as opposed to the norm. When targeting the cybercrime threat, we have been focused on associating cost and risk to the threat actors who seek to cause harm to us and our allies, and we achieve this in a number of different ways. The NCA-led disruption of LockBit in 2024 was successful in undermining trust between members of the group, as well as any trust that victims might have had in LockBit keeping their word.


Future-Proofing AI: Repeating Mistakes or Learning From the Past?

Are the enterprises rushing to deploy new open source AI projects taking the necessary security measures to isolate them from the rest of their infrastructure? Or are they disregarding recent open source security history and trusting them by default? Alarmingly, there are also reports that China-, North Korea- and Russia-based cybercriminal groups are actively targeting both physical and AI infrastructure while leveraging AI-generated malware to exploit vulnerabilities more efficiently. ... Next-generation AI infrastructure cannot be beholden to performance penalties that arise from using today’s solutions to create true, secure, multitenant environments. By combining the best aspects of bare-metal performance with container-like deployment models, organizations can build systems that deliver both speed and convenience. ... We cannot build a solid future if we ignore the wisdom of the past. The foundations of computing security, resource management and operational efficiency were laid decades ago by pioneers who had to make every CPU cycle and memory byte count. Their lessons are more relevant now than ever as we build systems that consume unprecedented computational resources. The organizations that will endure in the AI era won’t necessarily be those with the largest infrastructure investments or the trendiest technology stacks.


Eight ways storage IT pros can evolve in the age of analytics and AI

Large organizations are spending millions of dollars annually on data storage, backups, and disaster recovery. On balance, there’s nothing wrong with that since data is the center of everything today – but not all data should be treated the same. Using cost modeling tools, the storage manager can enter actual storage costs to determine upfront new projected storage costs and actual usable capacity, based on data growth rates. These costs must factor in backups and disaster recovery, which can triple storage spending, and should compare on-premises versus cloud models. An unstructured data management system that indexes all data across all storage can supply metrics on data volumes, costs, and predicted costs, and then model plans for moving less-active data to lower-cost archival storage, such as in the cloud. ... Storage teams must mitigate ransomware risks associated with file data. One way to do this is by implementing hybrid tiering strategies that offload infrequently accessed (cold) files to immutable cloud storage, which reduces the active attack surface by as much as 70 or 80 percent. Immutable storage ensures that once data is written, it cannot be altered or deleted, providing a robust defense against ransomware attempts to encrypt or corrupt files.
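The cost-modeling exercise described above can be sketched as a toy calculation. All prices, the cold-data fraction, and the assumption that the hot-tier rate already includes backup and DR overhead are illustrative, not vendor figures:

```python
# Toy cost model: annual spend before and after tiering cold files to archive.
def annual_storage_cost(tb_total: float, cold_fraction: float,
                        hot_price_tb: float = 300.0,   # assumed $/TB/yr incl. backup/DR
                        cold_price_tb: float = 50.0) -> float:  # assumed archive rate
    hot_tb = tb_total * (1 - cold_fraction)
    cold_tb = tb_total * cold_fraction
    return hot_tb * hot_price_tb + cold_tb * cold_price_tb

before = annual_storage_cost(500, cold_fraction=0.0)   # everything on primary storage
after = annual_storage_cost(500, cold_fraction=0.7)    # 70% of files identified as cold
print(f"savings: ${before - after:,.0f}/yr")  # savings: $87,500/yr
```

Even this crude model shows why indexing data by activity matters: the savings scale with the cold fraction, which the passage notes can reach 70–80 percent of file data.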

Daily Tech Digest - May 03, 2025


Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis



Why agentic AI is the next wave of innovation

AI agents have become integral to modern enterprises, not just enhancing productivity and efficiency, but unlocking new levels of value through intelligent decision-making and personalized experiences. The latest trends indicate a significant shift towards proactive AI agents that anticipate user needs and act autonomously. These agents are increasingly equipped with hyper-personalization capabilities, tailoring interactions based on individual preferences and behaviors. ... According to NVIDIA, when Azure AI Agent Service is paired with NVIDIA AgentIQ, an open-source toolkit, developers can now profile and optimize teams of AI agents in real time to reduce latency, improve accuracy, and drive down compute costs. ... “The launch of NVIDIA NIM microservices in Azure AI Foundry offers a secure and efficient way for Epic to deploy open-source generative AI models that improve patient care, boost clinician and operational efficiency, and uncover new insights to drive medical innovation,” says Drew McCombs, vice president, cloud and analytics at Epic. “In collaboration with UW Health and UC San Diego Health, we’re also researching methods to evaluate clinical summaries with these advanced models. Together, we’re using the latest AI technology in ways that truly improve the lives of clinicians and patients.”


Businesses intensify efforts to secure data in cloud computing

Building a robust security strategy begins with understanding the delineation between the customer's and the provider's responsibilities. Customers are typically charged with securing network controls, identity and access management, data, and applications within the cloud, while the CSP maintains the core infrastructure. The specifics of these responsibilities depend on the service model and provider in question. The importance of effective cloud security has grown as more organisations shift away from traditional on-premises infrastructure. This shift brings new regulatory expectations relating to data governance and compliance. Hybrid and multicloud environments offer businesses unprecedented flexibility, but also introduce complexity, increasing the challenge of preventing unauthorised access. ... Attackers are adjusting their tactics accordingly, viewing cloud environments as potentially vulnerable targets. According to the statement, "A well-thought-out cloud security plan can significantly reduce the likelihood of breaches or damage, enhance compliance, and increase customer trust—even though it can never completely prevent attacks and vulnerabilities."


Safeguarding the Foundations of Enterprise GenAI

Implementing strong identity security measures is essential to mitigate risks and protect the integrity of GenAI applications. Many identities have high levels of access to critical infrastructure and, if compromised, could provide attackers with multiple entry points. It is important to emphasise that privileged users include not just IT and cloud teams but also business users, data scientists, developers and DevOps engineers. A compromised developer identity, for instance, could grant access to sensitive code, cloud functions, and enterprise data. Additionally, the GenAI backbone relies heavily on machine identities to manage resources and enforce security. As machine identities often outnumber human ones, securing them is crucial. Adopting a Zero Trust approach is vital, extending security controls beyond basic authentication and role-based access to minimise potential attack surfaces. To enhance identity security across all types of identities, several key controls should be implemented. Enforcing strong adaptive multi-factor authentication (MFA) for all user access is essential to prevent unauthorised entry. Securing access to credentials, keys, certificates, and secrets—whether used by humans, backend applications, or scripts—requires auditing their use, rotating them regularly, and ensuring that API keys or tokens that cannot be automatically rotated are not permanently assigned.
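One of the controls listed above — auditing credential use and rotating secrets regularly — can be sketched as a simple age check over a secrets inventory. The record format and the 90-day policy are assumptions for illustration:

```python
# Flag secrets (keys, tokens, certificates) overdue for rotation under an
# assumed 90-day policy.
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=90)

def secrets_due_for_rotation(secrets: list[dict], now: datetime) -> list[str]:
    """Return names of secrets whose last rotation exceeds the policy age."""
    return [s["name"] for s in secrets if now - s["last_rotated"] > MAX_AGE]

inventory = [
    {"name": "db-password", "last_rotated": datetime(2025, 1, 10)},
    {"name": "ci-api-token", "last_rotated": datetime(2025, 5, 1)},
]
print(secrets_due_for_rotation(inventory, datetime(2025, 6, 3)))  # ['db-password']
```

In practice this kind of check runs continuously against a secrets manager; the point of the sketch is that rotation policy becomes enforceable once secret metadata is inventoried, for machine identities as much as human ones.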


The new frontier of API governance: Ensuring alignment, security, and efficiency through decentralization

To effectively govern APIs in a decentralized landscape, organizations must embrace new principles that foster collaboration, flexibility and shared responsibility. Optimized API governance is not about abandoning control, rather about distributing it strategically while still maintaining overarching standards and ensuring critical aspects such as security, compliance and quality. This includes granting development teams with autonomy to design, develop and manage their APIs within clearly defined boundaries and guidelines. This encourages innovation while fostering ownership and allows each team to optimize their APIs to their specific needs. This can be further established by a shared responsibility model amongst teams where they are accountable for adhering to governance policies while a central governing body provides the overarching framework, guidelines and support. This operating model can be further supported by cultivating a culture of collaboration and communication between central governance teams and development teams. The central governance team can include a representative from each development team, with clear channels for feedback, shared documentation and joint problem-solving scenarios. Implementing governance policies as code, leveraging tools and automation make it easier to enforce standards consistently and efficiently across the decentralized environment.
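"Implementing governance policies as code" might look like the following minimal sketch, where the central team publishes a rule set and each team's API spec is validated against it in CI. The rule set and field names are illustrative assumptions, not a specific standard:

```python
# Minimal policy-as-code sketch: centrally defined governance checks applied
# to a team-owned API spec. Real deployments would use a policy engine, but
# the shape of the idea is the same.
GOVERNANCE_RULES = [
    ("owner", lambda spec: bool(spec.get("owner")), "every API must name an owning team"),
    ("auth", lambda spec: spec.get("auth") in {"oauth2", "mtls"}, "approved auth schemes only"),
    ("versioned", lambda spec: spec.get("base_path", "").startswith("/v"), "base path must be versioned"),
]

def check_api_spec(spec: dict) -> list[str]:
    """Return the list of violated governance rules for one API spec."""
    return [msg for name, rule, msg in GOVERNANCE_RULES if not rule(spec)]

violations = check_api_spec({"owner": "credit-risk", "auth": "apikey", "base_path": "/v1/scores"})
print(violations)  # ['approved auth schemes only']
```

Because the rules live in code, teams get fast, consistent feedback at build time instead of waiting on a central review board — which is exactly the shared-responsibility balance the excerpt argues for.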


Banking on innovation: Engineering excellence in regulated financial services

While financial services regulations aren’t likely to get simpler, banks are finding ways to innovate without compromising security. "We’re seeing a culture change with our security office and regulators," explains Lanham. "As cloud tech, AI, and LLMs arrive, our engineers and security colleagues have to upskill." Gartner's 2025 predictions say GenAI is shifting data security to protect unstructured data. Rather than cybersecurity taking a gatekeeper role, security by design is built into development processes. "Instead of saying “no”, the culture is, how can we be more confident in saying “yes”?" notes Lanham. "We're seeing a big change in our security posture, while keeping our customers' safety at the forefront." As financial organizations carefully tread a path through digital and AI transformation, the most successful will balance innovation with compliance, speed with security, and standardization with flexibility. Engineering excellence in financial services needs leaders who can set a clear vision while balancing tech potential with regulations. The path won’t be simple, but by investing in simplification, standardization and a shared knowledge and security culture, financial services engineering teams can drive positive change for millions of banking customers.


‘Data security has become a trust issue, not just a tech issue’

Data is very messy and data ecosystems are very complex. Every organisation we speak to has data across multiple different types of databases and data stores for different use cases. As an industry, we need to acknowledge the fact that no organisation has an entirely homogeneous data stack, so we need to support and plug into a wide variety of data ecosystems, like Databricks, Google and Amazon, regardless of the tooling used for data analytics, for integration, for quality, for observability, for lineage and the like. ... Cloud adoption is causing organisations to rethink their traditional approach to data. Most use cloud data services to provide a shortcut to seamless data integration, efficient orchestration, accelerated data quality and effective governance. In reality, most organisations will need to adopt a hybrid approach to address their entire data landscape, which typically spans a wide variety of sources across both cloud and on-premises environments. ... Data security has become a trust issue, not just a tech issue. With AI, hybrid cloud and complex supply chains, the attack surface is massive. We need to design with security in mind from day one – think secure coding, data-level controls and zero-trust principles. For AI, governance is critical, and it too needs to be designed in, not added as an afterthought. That means tracking where data comes from, how models are trained, and ensuring transparency and fairness.


Secure by Design vs. DevSecOps: Same Security Goal, Different Paths

Although the "secure by design" initiative offers limited guidance on how to make an application secure by default, it comes closer to being a distinct set of practices than DevSecOps. The latter is more of a high-level philosophy that organizations must interpret on their own; in contrast, secure by design advocates specific practices, such as selecting software architectures that mitigate the risk of data leakage and avoiding memory management practices that make it easier for attackers to execute malicious code. ... Whereas DevSecOps focuses on all stages of the software development life cycle, the secure by design concept is geared mainly toward software design. It deals less with securing applications during and after deployment. Perhaps this makes sense, because so long as you start with a secure design, you need to worry less about risks once your application is fully developed. Still, given that there's no way to guarantee an app can't be hacked, DevSecOps' holistic approach to security is arguably the more responsible one. ... Even if you conclude that secure by design and DevSecOps mean basically the same thing, one notable difference is that the government sector has largely driven the secure by design initiative, while DevSecOps is more popular within private industry.


Immutable by Design: Reinventing Business Continuity and Disaster Recovery

Immutable backups create tamper-proof copies of data, protecting it from cyber threats, accidental deletion, and corruption, and guaranteeing that critical data can be quickly restored so businesses recover swiftly from disruptions. Immutable storage provides data copies that cannot be manipulated or altered, ensuring data remains secure and recoverable after an attack. In addition to immutable backup storage, response plans must be continually tested and updated to combat the evolving threat landscape and adapt to growing business needs. The ultimate test of a response plan ensures data can be quickly and easily restored or failed over, depending on the event: activating a second site in the case of a natural disaster, or recovering systems without making any ransomware payments in the case of an attack. This testing involves validating the reliability of backup systems, recovery procedures, and the overall disaster recovery plan to minimize downtime and ensure business continuity. ... It can be challenging for IT teams trying to determine the perfect fit for their ecosystem, as many storage vendors claim to provide immutable storage but are missing key features. As a rule of thumb, if "immutable" data can be overwritten by a backup or storage admin, a vendor, or an attacker, then it is not a truly immutable storage solution.
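The "rule of thumb" above is essentially write-once-read-many (WORM) semantics: once an object is written, no caller, admin or attacker alike, can overwrite it. A minimal sketch of that property, with illustrative names rather than any vendor's API:

```python
# Minimal WORM sketch: an object, once written, can never be overwritten.
# This models the property an immutable backup store must enforce.
class ImmutableStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        if key in self._objects:
            # refused for everyone, including admins, vendors, and attackers
            raise PermissionError(f"{key} is immutable and cannot be overwritten")
        self._objects[key] = bytes(data)

    def get(self, key):
        return self._objects[key]

store = ImmutableStore()
store.put("backup-2025-08-25", b"full backup contents")
try:
    store.put("backup-2025-08-25", b"tampered contents")
except PermissionError as e:
    print(e)
print(store.get("backup-2025-08-25"))  # the original copy survives intact
```

Real immutable storage adds retention windows and enforces this below the filesystem or object layer, but the test is the same: the overwrite path must not exist.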


Neurohacks to outsmart stress and make better cybersecurity decisions

In cybersecurity, where clarity and composure are essential, particularly during a data breach or threat response, these changes can have high-stakes consequences. “The longer your brain is stuck in this high-stress state, the more of those changes you will start to see, and burnout is just an extreme case of chronic stress on the brain,” Landowski says. According to her, the tipping point between healthy stress and damaging chronic stress usually comes after about eight to 12 weeks, though it varies between individuals. “If you know about some of the things you can do to reduce the impact of stress on your body, you can potentially last a lot longer before you see any effects, whereas if you’re less resilient, or if your genes are more susceptible to stress, then it could be less.” ... working in cybersecurity, particularly as a hacker, is often about understanding how people think and then spotting the gaps. That same shift in understanding — tuning into how the brain works under different conditions — can help cybersecurity leaders make better decisions and build more resilient teams. As Cerf highlights, he works with organizations to identify these optimal operating states, testing how individuals and entire teams respond to stress and when their brains are most effective. “The brain is not just a solid thing,” Cerf says.


Beyond Safe Models: Why AI Governance Must Tackle Unsafe Ecosystems

Despite the evident risks of unsafe deployment ecosystems, the prevailing approach to AI governance still heavily emphasizes pre-deployment interventions—such as alignment research, interpretability tools, and red teaming—aimed at ensuring that the model itself is technically sound. Governance initiatives like the EU AI Act, while vital, primarily place obligations on providers and developers to ensure compliance through documentation, transparency, and risk management plans. However, the governance of what happens after deployment, when these models enter institutions with their own incentives, infrastructures, and oversight, receives comparatively less attention. For example, while the EU AI Act introduces post-market monitoring and deployer obligations for high-risk AI systems, these provisions remain limited in scope. Monitoring primarily focuses on technical compliance and performance, with little attention to broader institutional, social, or systemic impacts. Deployer responsibilities are only weakly integrated into ongoing risk governance and focus primarily on procedural requirements—such as record-keeping and ensuring human oversight—rather than assessing whether the deploying institution has the capacity, incentives, or safeguards to use the system responsibly.

Daily Tech Digest - April 28, 2025


Quote for the day:

"If a window of opportunity appears, don't pull down the shade." -- Tom Peters



Researchers Revolutionize Fraud Detection with Machine Learning

Machine learning plays a critical role in fraud detection by identifying patterns and anomalies in real-time. It analyzes large datasets to spot normal behavior and flag significant deviations, such as unusual transactions or account access. However, fraud detection is challenging because fraud cases are much rarer than normal ones, and the data is often messy or unlabeled. ... “The use of machine learning in fraud detection brings many advantages,” said Taghi Khoshgoftaar, Ph.D., senior author and Motorola Professor in the FAU Department of Electrical Engineering and Computer Science. “Machine learning algorithms can label data much faster than human annotation, significantly improving efficiency. Our method represents a major advancement in fraud detection, especially in highly imbalanced datasets. It reduces the workload by minimizing cases that require further inspection, which is crucial in sectors like Medicare and credit card fraud, where fast data processing is vital to prevent financial losses and enhance operational efficiency.” ... The method combines two strategies: an ensemble of three unsupervised learning techniques using the SciKit-learn library and a percentile-gradient approach. The goal is to minimize false positives by focusing on the most confidently identified fraud cases. 
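The two-part strategy described above can be illustrated with a simplified stand-in: ensemble several unsupervised anomaly scorers, then flag only transactions above a high percentile of the combined score, trading some recall for far fewer false positives. The scorers below are basic statistical detectors for illustration, not the scikit-learn models the FAU study actually used:

```python
# Simplified sketch of the two-part idea: ensemble unsupervised scorers,
# then keep only the top percentile of combined scores (percentile-gradient
# flavor). Detectors here are toy stand-ins for the study's sklearn models.
import statistics

def zscore(xs):
    mu, sd = statistics.mean(xs), statistics.pstdev(xs) or 1.0
    return [abs(x - mu) / sd for x in xs]

def median_distance(xs):
    med = statistics.median(xs)
    return [abs(x - med) for x in xs]

def minmax_rank(xs):
    lo, hi = min(xs), max(xs)
    span = (hi - lo) or 1.0
    return [(x - lo) / span for x in xs]

def flag_top_percentile(amounts, pct=99.0):
    scorers = (zscore, median_distance, minmax_rank)
    # normalize each scorer to [0, 1] before averaging so none dominates
    normalized = [minmax_rank(s(amounts)) for s in scorers]
    ensemble = [sum(col) / len(col) for col in zip(*normalized)]
    cutoff = sorted(ensemble)[int(len(ensemble) * pct / 100) - 1]
    return [i for i, s in enumerate(ensemble) if s > cutoff]

# 199 ordinary transaction amounts plus one extreme outlier at index 199
amounts = [20.0 + (i % 10) for i in range(199)] + [5000.0]
print(flag_top_percentile(amounts))  # only the outlier's index survives the cutoff
```

Raising `pct` is exactly the workload-reduction lever the researchers describe: fewer, more confident cases are forwarded for human inspection.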


Cybersecurity is Not Working: Time to Try Something Else

Many CISOs, changing jobs every 2 years or so, have not learnt to get things done in large firms; they have not developed the political acumen and the management experience they would need. Many have simply remained technologists and firefighters, trapped in an increasingly obsolete mindset, pushing a bottom-up, tools-based, risk-based, tech-driven narrative, disconnected from what the board wants to hear, which has now shifted towards resilience and execution. This is why we may have to come to the point where we have to accept that the construction around the role of the CISO, as it was initiated in the late 90s, has served its purpose and needs to evolve. The first step in this evolution, in my opinion, is for the board to own cybersecurity as a business problem, not as a technology problem. It needs to be owned at board level in business terms, in line with the way other topics are owned at board level. This is about thinking the protection of the business in business terms, not in technology terms. Cybersecurity is not a purely technological matter; it has never been and cannot be. ... There may be a need to amalgamate it with other matters such as corporate resilience, business continuity or data privacy to build up a suitable board-level portfolio, but for me this is the way forward in reversing the long-term dynamics, away from the failed historical bottom-up constructions, towards a progressive top-down approach.


How the financial services C-suite are going beyond ‘keeping the lights on’ in 2025

C-suites need to tackle three core areas: ensuring they are getting high-value support for their mission-critical systems; knowing how to optimise their investments; and transforming their organisation without disrupting day-to-day operations. It is certainly possible if they have the time, capabilities and skill sets in-house. Yet even the most well-resourced enterprises can struggle to acquire the knowledge base and market expertise required to negotiate with multiple vendors, unlock investments or run complex change programmes single-handedly. The reality is that managing the balance needed to save costs while accelerating innovation is challenging. ... The survey demonstrates a growing necessity for CIOs and CFOs to speak each other’s language, marking a shift in organisational strategy, moving IT beyond the traditional ‘keeping the lights on’ approach, and driving a pivotal transformation in the relationship between CIOs and CFOs. As they find better ways to collaborate and innovate, businesses in the financial services space will reap the rewards of emerging technology, while falling in line with budgetary needs. Emerging technologies are being introduced thick and fast, and as a result, hard metrics aren’t always available. Instead of feeling frustrated with a lack of data, CFOs should lean in as active participants, understanding how emerging technologies like AI and cybersecurity can drive strategic value, optimise operations and create new revenue streams.


Threat actors are scanning your environment, even if you’re not

Martin Jartelius, CISO and Product Owner at Outpost24, says that the most common blind spots that the solution uncovers are exposed management interfaces of devices, exposed databases in the cloud, misconfigured S3 storage, or just a range of older infrastructure no longer in use but still connected to the organization’s domain. All of these can provide an entry point into internal networks, and some can be used to impersonate organizations in targeted phishing attacks. But these blind spots are not indicative of poor leadership or IT security performance: “Most who see a comprehensive report of their attack surface for the first time are surprised that it is often substantially larger than they understood. Some react with discomfort and perceive their prior lack of insight as a failure, but that is not the case. ... Attack surface management is still a maturing technology field, but having a solution bringing the information together in a platform gives a more refined and in-depth insight over time. External attack surface management starts with a continuous detection of exposed assets – in Sweepatic’s case, that also includes advanced port scanning to detect all (and not just the most common) ports at risk of exploitation – then moves on to automated security analysis and then risk-based reporting.
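The first step of attack surface management described above, continuously detecting which services answer on which ports, reduces at its core to a TCP connect check. The sketch below is an illustration of that single step (real EASM platforms like Sweepatic run it continuously, across full port ranges and whole domains); the demo listener and port choice are ours:

```python
# Illustrative single step of external attack surface management:
# does a given host/port accept a TCP connection, i.e. is it exposed?
import socket

def port_is_open(host, port, timeout=0.5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# demo against a throwaway listener we control on localhost
server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
open_port = server.getsockname()[1]

print(port_is_open("127.0.0.1", open_port))  # the listener is reachable
server.close()
print(port_is_open("127.0.0.1", open_port))  # gone once the service stops
```

An organization's surprise at its first full report is easy to reproduce: run this across every forgotten subdomain and the list of `True` results is usually longer than anyone expected.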


How to Run a Generative AI Developer Tooling Experiment

The last metric, rework, is a significant consideration with generative AI, as 67% of developers find that they are spending more time debugging AI-generated code. Devexperts experienced a 200% increase in rework and a 30% increase in maintenance. On the other hand, while the majority of organizations are seeing an increase in complexity and lines of code with code generators, these five engineers saw a surprising 15% decrease in lines of code created. “We can conclude that, for the live experiment, GitHub Copilot didn’t deliver the results one could expect after reading their articles,” summarized German Tebiev, the software engineering process architect who ran the experiment. He did think the results were persuasive enough to believe speed will be enabled if the right processes are put in place: “The fact that the PR throughput shows significant growth tells us that the desired speed increase can be achieved if the tasks’ flow is handled effectively.” ... Just 17% of developers responded that they think Copilot helped them save at least an hour a week, well below the industry average, versus a whopping 40% who saw no time savings by using the code generator. Developers were also able to share their own anecdotal experience, which is very situation-dependent. Copilot seemed to be a better choice for completing more basic lines of code for new features, and less so when working with the complexity of an existing codebase.


Dark Data: Surprising Places for Business Insights

Previously, the biggest problem when dealing with dark data was its messy nature. Even though AI has been able to analyze structured data for years, unstructured or semi-structured data proved to be a hard nut to crack. Unfortunately, unstructured data constitutes the majority of dark data. However, recent advances in natural language processing (NLP), natural language understanding (NLU), speech recognition, and ML have enabled AI to deal with unstructured dark data more effectively. Today, AI can easily analyze raw inputs like customer reviews and social media comments to identify trends and sentiment. Advanced sentiment analysis algorithms can come to accurate conclusions about tone, context, emotional nuances, sarcasm, and urgency, providing businesses with deeper audience insights. For instance, Amazon uses this approach to flag fake reviews. In finance and banking, AI-powered data analysis tools are used to process transaction logs and unstructured customer communications to identify fraud risks and enhance service and customer satisfaction. Another industry where dark data mining might have potentially huge social benefits is healthcare. Currently, this industry generates around 30% of all the data in the world.
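To make the sentiment-mining idea concrete, here is a deliberately tiny lexicon-based scorer. It is a toy stand-in: the word lists are invented for the example, and production systems use trained NLP models precisely because they can handle the context, sarcasm and urgency this approach cannot:

```python
# Toy lexicon-based sentiment scorer: the simplest possible way to turn
# unstructured review text (dark data) into a structured signal.
POSITIVE = {"great", "love", "fast", "excellent", "helpful"}
NEGATIVE = {"broken", "slow", "refund", "terrible", "scam"}

def sentiment(text):
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = [
    "Great product, fast shipping!",
    "Terrible quality, I want a refund.",
    "It arrived on Tuesday.",
]
print([sentiment(r) for r in reviews])  # → ['positive', 'negative', 'neutral']
```

Even this crude signal, aggregated over thousands of previously unread reviews, is the kind of trend data that was sitting dark before.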


Is your AI product actually working? How to develop the right metric system

Not tracking whether your product is working well is like landing a plane without any instructions from air traffic control. There is absolutely no way you can make informed decisions for your customer without knowing what is going right or wrong. Additionally, if you do not actively define the metrics, your team will invent their own backup metrics. The risk of having multiple flavors of an ‘accuracy’ or ‘quality’ metric is that everyone will develop their own version, leading to a scenario where you might not all be working toward the same outcome. ... the complexity of operating an ML product with multiple customers translates to defining metrics for the model, too. What do I use to measure whether a model is working well? Measuring the outcome of internal teams to prioritize launches based on our models would not be quick enough; measuring whether the customer adopted solutions recommended by our model could risk us drawing conclusions from a very broad adoption metric ... Most metrics are gathered at scale by new instrumentation via data engineering. However, in some instances (like question 3 above), especially for ML-based products, you have the option of manual or automated evaluations that assess the model outputs.
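One way to avoid the "multiple flavors of accuracy" problem is to define the automated evaluation once, in code, and have every team report the same number. The sketch below is a hypothetical minimal eval, scoring model outputs against golden answers by normalized exact match; the function name and examples are ours:

```python
# Hedged sketch of one shared 'quality' metric: an automated eval scoring
# model outputs against golden answers, so there is exactly one definition
# of accuracy rather than a flavor per team.
def exact_match_accuracy(outputs, golden):
    assert len(outputs) == len(golden), "evals need paired examples"
    hits = sum(o.strip().lower() == g.strip().lower()
               for o, g in zip(outputs, golden))
    return hits / len(golden)

outputs = ["Paris", "42 ", "blue"]
golden  = ["paris", "42", "green"]
print(exact_match_accuracy(outputs, golden))  # 2 of 3 match after normalization
```

Exact match is only the simplest choice; the point is that whichever scoring rule you pick, it lives in one shared function rather than in each team's head.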


Security needs to be planned and discussed early, right at the product ideation stage

On the open-source side, where a lot of supply chain risks emerge, we leverage a state-of-the-art development pipeline. Developers follow strict security guidelines and use frameworks embedded with security tools that detect risks associated with third-party libraries—both during development and at runtime. We also have robust monitoring systems in place to detect vulnerabilities and active exploits. ... Monitoring technological events is one part, but from a core product perspective, we also need to monitor risk-based activities, like transactions that could potentially lead to fraud. For that, we have strong AI/ML developments already deployed, with a dedicated AI and data science team constantly building new algorithms to detect fraudulent actions. From a product standpoint, the system is quite mature. On the technology side, monitoring has some automation powered by AI, and we’ve also integrated tools like GitHub Copilot. Our analysts, developers, and security engineers use these technologies to quickly identify potential issues, reducing manual effort significantly. ... Security needs to be planned and discussed early—right at the product ideation stage with product managers—so that it doesn’t become a blocker at the end. Early involvement makes it much easier and avoids last-minute disruptions.


14 tiny tricks for big cloud savings

Good algorithms can boost the size of your machine when demand peaks, but clouds don’t always make it easy to shrink resources back down; disks in particular can be hard to shrink once they have grown. By monitoring these machines closely, you can ensure that your cloud instances consume only as much as they need and no more. ... Cloud providers can offer significant discounts for organizations that make a long-term commitment to using hardware. These are sometimes called reserved instances, or usage-based discounts. They can be ideal when you know just how much you’ll need for the next few years. The downside is that the commitment locks in both sides of the deal. You can’t just shut down machines in slack times or when a project is canceled. ... Programmers like to keep data around in case they might ever need it again. That’s a good habit until your app starts scaling and it’s repeated a bazillion times. If you don’t call the user, do you really need to store their telephone number? Tossing personal data aside not only saves storage fees but limits the danger of releasing personally identifiable information. Stop keeping extra log files or backups of data that you’ll never use again. ... Cutting back on some services will save money, but the best way to save cash is to go cold turkey. There’s nothing stopping you from dumping your data into a hard disk on your desk or down the hall in a local data center.
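The reserved-instance trade-off is easy to check with back-of-the-envelope math: the discount only pays off if the instance stays busy enough, because you pay for the reservation whether you use it or not. The hourly rate and discount below are illustrative, not any provider's real pricing:

```python
# Back-of-the-envelope reserved-instance math with hypothetical pricing.
ON_DEMAND_HOURLY = 0.40      # $/hour, illustrative
RESERVED_DISCOUNT = 0.40     # 40% off for a 1-year commitment, illustrative

def yearly_cost(utilization):
    """Compare on-demand vs reserved cost for a fraction of hours actually used."""
    hours = 8760 * utilization
    on_demand = hours * ON_DEMAND_HOURLY
    # the reservation is paid for all 8760 hours regardless of actual use
    reserved = 8760 * ON_DEMAND_HOURLY * (1 - RESERVED_DISCOUNT)
    return on_demand, reserved

for util in (0.4, 0.8):
    od, res = yearly_cost(util)
    better = "reserved" if res < od else "on-demand"
    print(f"{util:.0%} utilization: on-demand ${od:,.0f} vs reserved ${res:,.0f} -> {better}")
```

With these numbers the break-even sits at 60% utilization, which is exactly the lock-in risk the article warns about: cancel the project and the reserved bill keeps coming.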


Two-thirds of jobs will be impacted by AI

“Most jobs will change dramatically in the next three to four years, at least as much as the internet has changed jobs over the last 30,” Calhoon said. “Every job posted on Indeed today, from truck driver to physician to software engineer, will face some level of exposure to genAI-driven change.” ... What will emerge is a “symbiotic” relationship with an increasingly “proactive” technology that will require employees to constantly learn new skills and adapt. “AI can manage repetitive tasks, or even difficult tasks that are specific in nature, while humans can focus on innovative and strategic initiatives that drive revenue growth and improve overall business performance,” Hoffman said in an interview earlier this year. “AI is also much quicker than humans could possibly be, is available 24/7, and can be scaled to handle increasing workloads.” As AI takes over repetitive tasks, workers will shift toward roles that involve overseeing AI, solving unique problems, and applying creativity and strategy. Teams will increasingly collaborate with AI—like marketers personalizing content or developers using AI copilots. Rather than replacing humans, AI will enhance human strengths such as decision-making and emotional intelligence. Adapting to this change will require ongoing learning and a fresh approach to how work is done.