Daily Tech Digest - July 11, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you them feel." -- Mary Kay Ash


Throwing AI at Developers Won’t Fix Their Problems

Organizations are spending too much time, money and energy focusing on the tools themselves. “Should we use OpenAI or Anthropic? Copilot or Cursor?” We see two broad patterns for how organizations approach AI tool adoption. The first is that leadership has a relationship with a certain vendor or just a personal preference, so they pick a tool and mandate it. This can work, but you’ll often get poor results — not because the tool is bad, but because the market is moving too fast for centralized teams to keep up. ... The second model, which generally works much better, is to allow early adopters to try new tools and find what works. This gives developers autonomy to improve their own workflows and reduces the need for a central team to test every new tool exhaustively. Comparing the tools by features or technology is less important every day. You’ll waste a lot of energy debating minor differences that won’t matter next year. Instead, focus on what problem you want to solve. Are you trying to improve testing? Code review? Documentation? Incident response? Figure out the goal first. Then see if an AI tool (or any tool) actually helps. If you don’t, you’ll just make DevEx worse: You’ll have a landscape of 100 tools nobody knows how to use, and you’ll deliver no real value.


Anatomy of a Scattered Spider attack: A growing ransomware threat evolves

Scattered Spider began its attack against the unnamed organization’s public-facing Oracle Cloud authentication portal, targeting its chief financial officer. Using personal details, such as the CFO’s date of birth and the last four digits of their Social Security number obtained from public sources and previous breaches, Scattered Spider impersonated the CFO in a call to the company’s help desk, tricking help desk staff into resetting the CFO’s registered device and credentials. ... The cybercriminals extracted more than 1,400 secrets by taking advantage of compromised admin accounts tied to the target’s CyberArk password vault and likely an automated script. Scattered Spider granted administrator roles to compromised user accounts before using tools, including ngrok, to maintain access on compromised virtual machines. ... Scattered Spider’s operations have become more aggressive and compressed. “Within hours of initial compromise — often via social engineering — they escalate privileges, move laterally, establish persistence, and begin reconnaissance across both cloud and on-prem environments,” Beek explained. “This speed and fluidity represent a significant escalation in operational maturity.” ... Defending effectively against Scattered Spider involves tackling both human and technical vulnerabilities, ReliaQuest researchers noted.


Data governance: The contract layer that makes agentic systems possible

Today, AI has changed everything. Lineage, access enforcement and cataloging must operate in real time and cover vastly more data types and sources. Models consume data continuously and make decisions instantly, raising the stakes for mistakes or gaps in oversight. What used to be a once-a-week check is now an always-on discipline. This transformation has turned data governance from a checklist into a living system that protects quality and trust at scale. ... One of the biggest misconceptions is that governance slows down innovation. In reality, good governance speeds it up. By clarifying ownership, policies and data quality from the start, teams avoid spending precious time reconciling mismatches and can focus on delivering AI that works as intended. A clear governance framework reduces unnecessary data copies, lowers regulatory risk and prevents AI from producing unpredictable results. Getting this right also requires a culture shift. Producers and consumers alike need to see themselves as co-stewards of shared data products. ... Enterprises deploying agentic AI cannot leave governance behind. These systems run continuously, make autonomous decisions and rely on accurate context to stay relevant. Governance must move from passive checks to an active, embedded foundation within both architecture and culture.


How CIOs Are Navigating Today’s Hyper Volatility

“When it comes to changing dynamics, [such as] AI and driving innovation, there are several things that people like me are dealing with right now. There is an impact on how you hire people, staffing, how to structure your organization,” says Johar. “There is an impact on risk. I’m also responsible within my organization for managing the risk of data, privacy and security, and AI is bringing a new dimension to that risk. It’s an opportunity, but it's also a risk. How you structure your organization, how you manage risk, how you drive transformation -- these things are all connected.” ... “[CIOs] are emerging as transformation leaders, so they need to understand how to navigate the culture change of an organization, the change in people in an organization. They must know how to tell stories so they can get the organization on board,” says Danielle Phaneuf, a partner, PwC cloud and digital strategy operating model leader. “Their mindset is different, so they're embracing the transformation with a product model that allows them to move faster [and] allows them to think long term. They’re building these new muscles around change leadership and engaging the business early, co-creating solutions, not thinking they must solve everything on their own, and doing that in an agile way.”


What Is AI Agent Washing And Why Is It A Risk To Businesses?

You’ve heard of greenwashing and AI-washing? Well, now it seems that the hype-merchants and bandwagon-jumpers with technology to sell have come up with a new (and perhaps predictably inevitable) scam. Analysts at Gartner say unscrupulous vendors are increasingly engaging in "agent washing" and say that out of “thousands” of supposedly agentic AI products tested, only 130 truly lived up to the claim. ... So, what’s the scam? Well, according to the report, agent washing involves passing off existing automation technology, including LLM-powered chatbots and robotic process automation, as agentic, when in reality it lacks those capabilities. ... Tools that claim to be agentic because they orchestrate and pull together multiple AI systems, such as marketing automation platforms and workflow automation tools, are stretching the term, too, unless they are also capable of autonomously coordinating the usage of those tools for long-term planning and decision-making. A few more hypothetical examples: While an AI chatbot-based system can write emails on command, an agentic system might write emails, identify the best recipients for marketing purposes, send the emails out, monitor responses, and then generate follow-up emails, tailored to individual responders.


Agentic AI Architecture Framework for Enterprises

The critical decision point lies in understanding when predictability and control take precedence versus when flexibility and autonomous decision-making deliver greater value. This understanding leads to a fundamental principle: start with the simplest effective solution, adding complexity only when clear business value justifies the additional operational overhead and risk. ... Enterprise deployment of agentic AI creates an inherent tension between AI autonomy and organizational governance requirements. Our analysis of successful MVPs and ongoing production implementations across multiple industries reveals three distinct architectural tiers, each representing different trade-offs between capability and control while anticipating emerging regulatory frameworks such as the EU AI Act and others to come. These tiers form a systematic maturity progression, so organizations can build competency and stakeholder trust incrementally before advancing to more sophisticated implementations. ... Our three-tier progression manifests differently across industries, reflecting unique regulatory environments, risk tolerances, customer expectations and operational requirements. Understanding these industry-specific approaches enables organizations to tailor their implementation strategies while maintaining systematic capability development.


Rewriting the rules of enterprise architecture with AI agents

In enterprise architecture, agentic AI systems can be deployed as digital “co-architects”, process optimizers, compliance monitors and scenario planners — each acting with a degree of independence and intelligence previously impossible. So why agentic AI and simulations for governance…and why now? Governance in enterprise architecture is about ensuring that IT systems, processes and data align with business goals, comply with regulations and adapt to change. ... These methods are increasingly inadequate in the face of real-time business dynamics. Agentic AI introduces a new, achievable composability model: governance that is continuous, adaptive and proactive. Agentic systems can monitor the enterprise landscape, simulate the impact of changes, enforce policies autonomously and even resolve conflicts or escalate issues when necessary. This results in governance that is both more robust and more responsive to business needs. Gartner’s research reinforces the impact of agency and simulations on enterprise architecture’s future. According to its Enterprise Architecture Services Predictions for 2025, 55% of EA teams will act as coordinators of autonomous governance automation by 2028 and shift from a direct oversight role to that of model curation and certification, agent simulations and oversight, and business outcome alignment with machine-led governance.


With tools like Alpha and Coherence, we’re turning risk management from reactive to real-time

The days when risk management was a very reactive and process-heavy system, where you had to follow a set of processes all the time, react to risks as they were observed, and then work through a standard operating procedure step by step, are behind us. That scenario held for a number of decades. But with AI and intelligence-led solution capabilities transforming the landscape, risk management has become proactive and extremely real-time. We have always lived by our Digital Knowledge Operations framework. The three words in it: digital, knowledge, and operations. Digital makes you proactive because you’re building solutions not for today but for the future. You rely on knowledge, and you transform your operations. That philosophy unlocks the proactive ability to capture possible risks in real time, and it drove us to build Alpha. It’s essentially a strong and effective transaction monitoring framework and tool that can detect false alerts with 75% to 80% accuracy. In risk management, a lot of operational bandwidth, effort, and talent is lost in assessing the false positives generated by risk management procedures. Most of them can be taken care of by a combination of machine learning, artificial intelligence, and some form of robotics.
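
To make the false-positive point concrete, here is a minimal, generic sketch of ML-assisted alert triage in Python. It illustrates the technique described, not the Alpha tool itself; the feature names and numbers are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical alerts, each described by a few illustrative features:
# [amount_zscore, txns_last_24h, new_beneficiary, country_risk]
X = np.array([
    [0.2, 3, 0, 0.1], [2.9, 15, 1, 0.8], [0.5, 4, 0, 0.2],
    [3.4, 22, 1, 0.9], [0.1, 2, 0, 0.1], [2.2, 11, 1, 0.7],
])
y = np.array([0, 1, 0, 1, 0, 1])  # analyst verdict: 1 = true positive, 0 = false positive

model = LogisticRegression().fit(X, y)

# Score a new alert; low-probability alerts can be routed to auto-close review,
# freeing analyst bandwidth for the rest.
new_alert = np.array([[0.3, 5, 0, 0.2]])
print(model.predict_proba(new_alert)[0, 1])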


Banking on Better Data: Why Financial Institutions Need an Agile Cloud Strategy

The urgency to migrate to the cloud is particularly pronounced in the banking sector, where legacy institutions are under mounting pressure to keep pace with digital-native competitors. These agile challengers can roll out new features in a matter of weeks, while traditional banks remain constrained by older mainframes. It is clear that the risk of standing still is no longer theoretical. Earlier this year, over 1.2 million UK customers experienced banking outages on pay day, a critical moment for both individuals and businesses. Several major retail banks reported widespread issues, including login failures and prolonged delays in customer service. Far from being one-off glitches, these disruptions point to a broader pattern of structural fragility rooted in outdated technology. Unlike legacy systems, cloud-native platforms are engineered for adaptability, resilience, and real-time performance, which are traits that traditional banking environments have been struggling to deliver. These failures weren’t just accidents; they were foreseeable outcomes of prolonged underinvestment in modernization. This reinforced a critical truth for traditional banks, which is that cloud transformation is no longer a future aspiration, but an immediate requirement to safeguard customer trust and remain viable in a rapidly evolving market.


Why knowledge is the ultimate weapon in the Information Age

To turn AI into an asset rather than a liability, organisations must rethink their approach to knowledge management. At its core, knowledge management is a learning cycle centred on people, with technology acting as a force multiplier, not a substitute for judgment. The objective is to establish a virtuous loop in which data is collected, validated, and transformed into actionable insight. The tighter and more disciplined this cycle, the higher the quality of the resulting knowledge. In practice, this means treating AI as just another tool in the toolkit. ... In an age of information warfare, perception is the battleground. To stay ahead, decision-makers must be trained not just in AI tools but in understanding their strengths, limitations, and potential biases, including their own. The ability to critically assess AI-generated content is essential, not optional. More than static planning, modern organisations need situational awareness and strategic agility, embedding AI within a human-centric knowledge strategy. We can shift the balance in the information war by curating trusted sources, rigorously verifying content, and sustaining a culture of learning. This new knowledge ecosystem embraces uncertainty, leverages AI wisely, and keeps cognitive bias in control, wielding knowledge as a disciplined and secure strategic asset.

Daily Tech Digest - July 10, 2025


Quote for the day:

"Strive not to be a success, but rather to be of value." -- Albert Einstein


Domain-specific AI beats general models in business applications

Like many AI teams in the mid-2010s, Visma’s group initially relied on traditional deep learning methods such as recurrent neural networks (RNNs), similar to the systems that powered Google Translate back in 2015. But around 2020, the Visma team made a change. “We scrapped all of our development plans and have been transformer-only since then,” says Claus Dahl, Director ML Assets at Visma. “We realized transformers were the future of language and document processing, and decided to rebuild our stack from the ground up.” ... The team’s flagship product is a robust document extraction engine that processes documents in the countries where Visma companies are active. It supports a variety of languages. The AI could be used for documents such as invoices and receipts. The engine identifies key fields, such as dates, totals, and customer references, and feeds them directly into accounting workflows. ... “High-quality data is more valuable than high volumes. We’ve invested in a dedicated team that curates these datasets to ensure accuracy, which means our models can be fine-tuned very efficiently,” Dahl explains. This strategy mirrors the scaling laws used by large language models but tailors them for targeted enterprise applications. It allows the team to iterate quickly and deliver high performance in niche use cases without excessive compute costs.
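
As a rough sketch of this kind of pipeline (not Visma's engine), a transformer token-classification model can tag fields in invoice text before they are pushed into accounting workflows. The model id below is a placeholder for any fine-tuned extraction model, and the label names are assumptions.

from transformers import pipeline

# Placeholder model id: assumes a token-classification model fine-tuned on
# curated invoice/receipt data with labels such as DATE, TOTAL and CUSTOMER_REF.
extractor = pipeline("token-classification",
                     model="your-org/invoice-field-extractor",
                     aggregation_strategy="simple")

text = "Invoice 2025-118, due 2025-08-01, total EUR 1,240.00, customer ref PO-5521"

fields = {}
for entity in extractor(text):
    # Keep the first span found for each field type.
    fields.setdefault(entity["entity_group"], entity["word"])

print(fields)  # e.g. {"DATE": "2025-08-01", "TOTAL": "1,240.00", ...}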


The case for physical isolation in data centre security

Hardware-enforced physical isolation is fast becoming a cornerstone of modern cybersecurity strategy. These physical-layer security solutions allow your critical infrastructure – servers, storage and network segments – to be instantly disconnected on demand, using secure, out-of-band commands. This creates a last line of defence that holds even when everything else fails. After all, if malware can’t reach your system, it can’t compromise it. If a breach does occur, physical segmentation contains it in milliseconds, stopping lateral movement and keeping operations running without disruption. In stark contrast to software-only isolation, which relies on the very systems it seeks to protect, hardware isolation remains immune to tampering. ... When ransomware strikes, every second counts. In a colocation facility, traditional defences might flag the breach, but not before it worms its way across tenants. By the time alerts go out, the damage is done. With hardware isolation, there’s no waiting: the compromised tenant can be physically disconnected in milliseconds, before the threat spreads, before systems lock up, before wallets and reputations take a hit. What makes this model so effective is its simplicity. In an industry where complexity is the norm, physical isolation offers a simple, fundamental truth: you’re either connected or you’re not. No grey areas. No software dependency. Just total certainty.


Scaling without outside funding: Intuitive's unique approach to technology consulting

We think for any complex problem, a good 60–70% of it can be solved through innovation. That's always our first principle. Then, where we see inefficiencies, be it in workflows or processes, automation works for the other 20% of the friction. The remaining 10–20% is where engineering plays its important role, allowing us to address the scale, security and governance aspects. In data specifically, we are referencing the last 5–6 years of massive investments. We partner with platforms like Databricks and DataMiner and we've invested in companies like TESL and Strike AI for securing their AI models. ... In the cloud space, we see a shift from migration to modernisation (and platform engineering). Enterprises are focussing on modernisation of both applications and databases because those are critical levers of agility, security, and business value. In AI it is about data readiness; the majority of enterprise data is very fragmented or very poor quality, which makes any AI effort difficult. Next is understanding existing processes—the way work is done at scale—which is critical for enabling GenAI. But the true ROI is Agentic AI—autonomous systems which don’t just tell you what to do, but just do it. We’ve been investing heavily in this space since 2018.


The Future of Professional Ethics in Computing

Recent work on ethics in computing has focused on artificial intelligence (AI), spurred by its success in solving problems, its ability to process large amounts of data, and the award of Nobel Prizes to AI researchers. Large language models and chatbots such as ChatGPT suggest that AI will continue to develop rapidly, acquire new capabilities, and affect many aspects of human existence. Many of the issues raised in the ethics of AI overlap with previous discussions. The discussion of ethical questions surrounding AI is reaching a much broader audience, has more societal impact, and is rapidly transitioning to action through guidelines and the development of organizational structure, regulation, and legislation. ... Ethics of digital technologies in modern societies raises questions that traditional ethical theories find difficult to answer. Current socio-technical arrangements are complex ecosystems with a multitude of human and non-human stakeholders, influences, and relationships. The questions of ethics in ecosystems include: Who are the members? On what grounds are decisions made, and how are they implemented and enforced? Which normative foundations are acceptable? These questions are not easily answered. Computing professionals have important contributions to make to these discussions and should use their privileges and insights to help societies navigate them.


AI Agents Vs RPA: What Every Business Leader Needs To Know

Technically speaking, RPA isn’t intelligent in the same way that we might consider an AI system like ChatGPT to mimic some functions of human intelligence. It simply follows the same rules over and over again in order to spare us the effort of doing it. RPA works best with structured data because, unlike AI, it doesn't have the ability to analyze and understand unstructured data, like pictures, videos, or human language. ... AI agents, on the other hand, use language models and other AI technologies like computer vision to understand and interpret the world around them. As well as simply analyzing and answering questions about data, they are capable of taking action by planning how to achieve the results they want and interacting with third-party services to get it done. ... Using RPA, it would be possible to extract details about who sent the mail, the subject line, and the time and date it was sent. This can be used to build email databases and broadly categorize emails according to keywords. An agent, on the other hand, could analyze the sentiment of the email using language processing, prioritize it according to urgency, and even draft and send a tailored response. Over time, it learns how to improve its actions in order to achieve better resolutions.
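
The contrast can be sketched in a few lines of Python. The RPA-style function below applies fixed rules to structured headers, while the agent-style function hands the unstructured body to a language model; llm is a placeholder callable standing in for any chat-model API, so this illustrates the split rather than any particular product.

import re
from email.message import EmailMessage

def rpa_extract(msg: EmailMessage) -> dict:
    # RPA-style: fixed rules over structured fields, no interpretation.
    subject = msg["Subject"] or ""
    return {
        "sender": msg["From"],
        "subject": subject,
        "sent": msg["Date"],
        "category": "billing" if re.search(r"invoice|payment", subject, re.I) else "other",
    }

def agent_handle(msg: EmailMessage, llm) -> dict:
    # Agent-style: the model reads the unstructured body, judges sentiment and
    # urgency, and drafts a tailored reply that a later step could send.
    body = msg.get_content()
    draft = llm(
        "Classify the sentiment and urgency of this email, then draft a short, "
        "tailored reply:\n" + body
    )
    return {"sender": msg["From"], "analysis_and_reply": draft}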


How To Keep AI From Making Your Employees Stupid

Treat AI-generated content like a highly caffeinated first draft – full of energy, but possibly a little messy and prone to making things up. Your job isn’t to just hit “generate” and walk away unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss (or worse, your audience). Always, always edit aggressively, proofread and, most critically, fact-check every single output. This process isn’t just about catching AI’s mistakes; it actively engages your critical thinking skills, forcing you to verify information and refine expression. Think of it as intellectual calisthenics. ... Don’t settle for the first answer AI gives you. Engage in a dialogue. Refine your prompts, ask follow-up questions, request different perspectives and challenge its assumptions. This iterative process of refinement forces you to think more clearly about your own needs, to be precise in your instructions, and to critically evaluate the nuances of the AI’s response. ... The MIT study serves as a crucial wake-up call: over-reliance on AI can indeed make us “stupid” by atrophying our critical thinking skills. However, the solution isn’t to shun AI, but to engage with it intelligently and responsibly. By aggressively editing, proofreading and fact-checking AI outputs, by iteratively refining prompts and by strategically choosing the right AI tool for each task, we can ensure AI serves as a powerful enhancer, not a detrimental crutch.


What EU’s PQC roadmap means on the ground

The EU’s PQC roadmap is broadly aligned with that from NIST; both advise a phased migration to PQC with hybrid-PQC ciphers and hybrid digital certificates. These hybrid solutions provide the security promises of brand new PQC algorithms, whilst allowing legacy devices that do not support them to continue using what’s now being called ‘classical cryptography’. In the first instance, both the EU and NIST are recommending that non-PQC encryption is removed by 2030 for critical systems, with all others following suit by 2035. While both acknowledge the ‘harvest now, decrypt later’ threat, neither emphasises the importance of understanding the cover time of data, nor references the very recent advancements in quantum computing. With many now predicting the arrival of cryptographically relevant quantum computers (CRQC) by 2030, if organizations or governments have information with a cover time of five years or more, it is already too late for many to move to PQC in time. Perhaps the most significant difference that EU organizations will face compared to their American counterparts is that the European roadmap is more than just advice; in time it will be enforced through various directives and regulations. PQC is not explicitly stated in EU regulations, although that is not surprising.
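
A minimal sketch of the hybrid idea the roadmap recommends, assuming Python and the cryptography package: the session key is derived from both a classical X25519 exchange and a post-quantum KEM secret, so it stays safe as long as either scheme holds. pqc_encapsulate() is a placeholder for a real PQC KEM such as ML-KEM, not an existing library call.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def pqc_encapsulate() -> bytes:
    # Placeholder standing in for a post-quantum KEM encapsulation (e.g. ML-KEM).
    return os.urandom(32)

# Classical part: an X25519 exchange between two parties.
ours, theirs = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = ours.exchange(theirs.public_key())

# Post-quantum part: the KEM shared secret.
pqc_secret = pqc_encapsulate()

# Hybrid key: both secrets feed a single KDF, as hybrid ciphersuites do.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"hybrid-key-demo").derive(classical_secret + pqc_secret)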


The trillion-dollar question: Who pays when the industry’s AI bill comes due?

“The CIO is going to be very, very busy for the next three, four years, and that’s going to be the biggest impact,” he says. “All of a sudden, businesspeople are starting to figure out that they can save a ton of money with AI, or they can enable their best performers to do the actual job.” Davidov doesn’t see workforce cuts matching AI productivity increases, even though some job cuts may be coming. ... “The costs of building out AI infrastructure will ultimately fall to enterprise users, and for CIOs, it’s only a question of when,” he says. “While hyperscalers and AI vendors are currently shouldering much of the expense to drive adoption, we expect to see pricing models evolve.” Bhathena advises CIOs to look beyond headline pricing because hidden costs, particularly around integrating AI with existing legacy systems, can quickly escalate. Organizations using AI will also need to invest in upskilling employees and be ready to navigate increasingly complex vendor ecosystems. “Now is the time for organizations to audit their vendor agreements, ensure contract flexibility, and prepare for potential cost increases as the full financial impact of AI adoption becomes clearer,” he says. ... Baker advises CIOs to be careful about their purchases of AI products and services and tie new deployments to business needs.


Multi-Cloud Adoption Rises to Boost Control, Cut Cost

Instead of building everything on one platform, IT leaders are spreading out their workloads, said Joe Warnimont, senior analyst at HostingAdvice. "It's no longer about chasing the latest innovation from a single provider. It's about building a resilient architecture that gives you control and flexibility for each workload." Cost is another major factor. Even though hyperscalers promote their pay-as-you-go pricing, many enterprises find it difficult to predict and manage costs at scale. This is true for companies running hundreds or thousands of workloads across different regions and teams. "You'd think that pay-as-you-go would fit any business model, but that's far from the case. Cost predictability is huge, especially for businesses managing complex budgets," Warnimont said. To gain more control over pricing and features, companies are turning to alternative cloud providers, such as DigitalOcean, Vultr and Backblaze. These platforms may not have the same global footprint as AWS or Azure but they offer specialized services, better pricing and flexibility for certain use cases. An organization needing specific development environments may go to DigitalOcean. Another may choose Vultr for edge computing. Sometimes the big players just don't offer what a specific workload requires.


How CISOs are training the next generation of cyber leaders

While Abousselham champions a personalized, hands-on approach to developing talent, other CISOs are building more formal pathways to support emerging leaders at scale. For PayPal CISO Shaun Khalfan, structured development was always part of his career. He participated in formal leadership training programs offered by the Department of Defense and those run by the American Council for Technology. ... Structured development is also happening inside companies like the insurance brokerage firm Brown & Brown. CISO Barry Hensley supports an internal cohort program designed to identify and grow emerging leaders early in their careers. “We look at our – I’m going to call it newer or younger – employees,” he explains. “And if you become recognized in your first, second, or third year as having the potential to [become a leader], you get put in a program,” he explains. ... Khalfan believes good CISOs should be able to dive deep with engineers while also leading boardroom conversations. “It’s been a long time since I’ve written code,” he says, “but I at least understand how to have a deep conversation and also be able to have a board discussion with someone.” Abousselham agrees that technical experience is only one part of the puzzle.

Daily Tech Digest - July 09, 2025


Quote for the day:

"Whenever you see a successful person you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah


Why CIOs see APIs as vital for agentic AI success

API access also goes beyond RAG. It allows agents and their underlying language models not just to retrieve information, but perform database mutations and trigger external actions. This shift allows agents to carry out complex, multi-step workflows that once required multiple human touchpoints. “AI-ready APIs paired with multi-agentic capabilities can unlock a broad range of use cases, which have enterprise workflows at their heart,” says Milind Naphade, SVP of technology and head of AI foundations at Capital One. In addition, APIs are an important bridge out of previously isolated AI systems. ... AI agents can make unprecedented optimizations on the fly using APIs. Gartner reports that PC manufacturer Lenovo uses a suite of autonomous agents to optimize marketing and boost conversions. With the oversight of a planning agent, these agents call APIs to access purchase history, product data, and customer profiles, and trigger downstream applications in the server configuration process. ... But the bigger wins will likely be increased operational efficiency and cost reduction. As Fox describes, this stems from a newfound best-of-breed business agility. “When agentic AI can dynamically reconfigure business processes, using just what’s needed from the best-value providers, you’ll see streamlined operations, reduced complexity, and better overall resource allocation,” she says.
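
A compact sketch of the pattern described above, with invented endpoints: APIs are registered as tools, the model chooses one per step, and the runtime executes the call and feeds the result back. call_model is a placeholder for any LLM API; nothing here is Lenovo's or Capital One's implementation.

import requests

# Invented example endpoints exposed to the agent as callable tools.
TOOLS = {
    "get_purchase_history": lambda args: requests.get(
        "https://api.example.com/customers/{customer_id}/purchases".format(**args),
        timeout=10).json(),
    "configure_server": lambda args: requests.post(
        "https://api.example.com/configurations", json=args, timeout=10).json(),
}

def run_agent(goal: str, call_model):
    context = [goal]
    for _ in range(5):  # bound the number of autonomous steps
        step = call_model(context)  # expected: {"tool": ..., "args": ...} or {"answer": ...}
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](step["args"])
        context.append(result)  # the API response becomes context for the next step
    return None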


What we can learn about AI from the ‘dead internet theory’

The ‘dead internet theory,’ or the idea that much of the web is now dominated by bots and AI-generated content, is largely speculative. However, the concern behind it is worth taking seriously. The internet is changing, and the content that once made it a valuable source of knowledge is increasingly diluted by duplication, misinformation, and synthetic material. For the development of artificial intelligence, especially large language models (LLMs), this shift presents an existential problem. ... One emerging model for collecting and maintaining this kind of data is Knowledge as a Service (KaaS). Rather than scraping static sources, KaaS creates a living, structured ecosystem of contributions from real users (often experts in their fields) who continuously validate and update content. This approach takes inspiration from open-source communities but remains focused on knowledge creation and maintenance rather than code. KaaS supports AI development with a sustainable, high-quality stream of data that reflects current thinking. It’s designed to scale with human input, rather than in spite of it. ... KaaS helps AI stay relevant by providing fresh, domain-specific input from real users. Unlike static datasets, KaaS adapts as conditions change. It also brings greater transparency, illustrating directly how contributors’ inputs are utilised. This level of attribution represents a step toward more ethical and accountable AI.


The Value of Threat Intelligence in Ensuring DORA Compliance

One of the biggest challenges for security teams today is securing visibility into third-party providers within their ecosystem due to their volume, diversity, and the constant monitoring required. Utilising a Threat Intelligence Platform (TIP) with advanced capabilities can enable a security team to address this gap by monitoring and triaging threats within third-party systems through automation. It can flag potential signs of compromise, vulnerabilities, and risky behaviour, enabling organisations to take pre-emptive action before risks escalate and impact their systems. ... A major aspect of DORA is implementing a robust risk management framework. However, to keep pace with global expansion and new threats and technologies, this framework must be responsive, flexible, and up-to-date. Sourcing, aggregating, and collating threat intelligence data to facilitate this is a time-consuming task, and unfeasible for many resource-stretched and siloed security teams. ... From tabletop scenarios to full-scale simulations, these exercises evaluate how well systems, processes, and people can withstand and respond to real-world cyber threats. With an advanced TIP, security teams can leverage customisable workflows to recreate specific operational stress scenarios. These scenarios can be further enhanced by feeding real-world data on attacker behaviours, tactics, and trends, ensuring that simulations reflect actual threats rather than outdated risks.


Why your security team feels stuck

The problem starts with complexity. Security stacks have grown dense, and tools like EDR, SIEM, SOAR, CASB, and DSPM don’t always integrate well. Analysts often need to jump between multiple dashboards just to confirm whether an alert matters. Tuning systems properly takes time and resources, which many teams don’t have. So alerts pile up, and analysts waste energy chasing ghosts. Then there’s process friction. In many organizations, security actions, especially the ones that affect production systems, require multiple levels of approval. On paper, that’s to reduce risk. But these delays can mean missing the window to contain an incident. When attackers move in minutes, security teams shouldn’t be stuck waiting for a sign-off. ... “Security culture is having a bit of a renaissance. Each member of the security team may be in a different place as we undertake this transformation, which can cause internal friction. In the past, security was often tasked with setting and enforcing rules in order to secure the perimeter and ensure folks weren’t doing risky things on their machines. While that’s still part of the job, security and privacy teams today also need to support business growth while protecting customer data and company assets. If business growth is the top priority, then security professionals need new tools and processes to secure those assets.”


Your data privacy is slipping away. Here's why, and what you can do about it

In 2024, the Identity Theft Resource Center reported that companies sent out 1.3 billion notifications to the victims of data breaches. That's more than triple the notices sent out the year before. It's clear that despite growing efforts, personal data breaches are not only continuing, but accelerating. What can you do about this situation? Many people think of the cybersecurity issue as a technical problem. They're right: Technical controls are an important part of protecting personal information, but they are not enough. ... Even the best technology falls short when people make mistakes. Human error played a role in 68% of 2024 data breaches, according to a Verizon report. Organizations can mitigate this risk through employee training, data minimization—meaning collecting only the information necessary for a task, then deleting it when it's no longer needed—and strict access controls. Policies, audits and incident response plans can help organizations prepare for a possible data breach so they can stem the damage, see who is responsible and learn from the experience. It's also important to guard against insider threats and physical intrusion using physical safeguards such as locking down server rooms. ... Despite years of discussion, the U.S. still has no comprehensive federal privacy law. Several proposals have been introduced in Congress, but none have made it across the finish line. 


How To Build Smarter Factories With Edge Computing

According to edge computing experts, these are essentially rugged versions of computers, of any size, purpose-built for their harsh environments. Forget standard form factors; industrial edge devices come in varied configurations specific to the application. This means a device shaped to fit precisely where it’s needed, whether tucked inside a machine or mounted on a factory wall. ... What makes these tough machines intelligent? It’s the software revolution happening on factory floors right now. Historically, industrial computing relied on software specially built to run on bare metal; custom code directly installed on specific machines. While this approach offered reliability and consistent, deterministic performance, it came with significant limitations: slow development cycles, difficult updates and vendor lock-in. ... Communication between smart devices presents unique challenges in industrial environments. Traditional networking approaches often fall short when dealing with thousands of sensors, robots and automated systems. Standard Wi-Fi faces significant constraints in factories where heavy machinery creates electromagnetic interference, and critical operations can’t tolerate wireless dropouts.


Fighting in a cloudy arena

“There are a few primary problems. Number one is that the hyperscalers leverage free credits to get digital startups to build their entire stack on their cloud services,” Cochrane says, adding that as the startups grow, the technical requirements from hyperscalers leave them tied to that provider. “The second thing is also in the relationship they have with enterprises. They say, ‘Hey, we project you will have a $250 million cloud bill, we are going to give you a discount.’ Then, because the enterprise has a contractual vehicle, there’s a mad rush to use as much of the hyperscalers’ compute as possible because you either lose it or use it. “At the end of the day, it’s like the roach motel. You can check in, but you can’t check out,” he sums up. ... "We are exploring our options to continue to fight against Microsoft’s anti-competitive licensing in order to promote choice, innovation, and the growth of the digital economy in Europe." Mark Boost, CEO of UK cloud company Civo, said: ”However they position it, we cannot shy away from what this deal appears to be: a global powerful company paying for the silence of a trade body, and avoiding having to make fundamental changes to their software licensing practices on a global basis.” In the months that followed this decision, things got interesting.


How passkeys work: The complete guide to your inevitable passwordless future

Passkeys are often described as a passwordless technology. In order for passwords to work as a part of the authentication process, the website, app, or other service -- collectively referred to as the "relying party" -- must keep a record of that password in its end-user identity management system. This way, when you submit your password at login time, the relying party can check to see if the password you provided matches the one it has on record for you. The process is the same, whether or not the password on record is encrypted. In other words, with passwords, before you can establish a login, you must first share your secret with the relying party. From that point forward, every time you go to login, you must send your secret to the relying party again. In the world of cybersecurity, passwords are considered shared secrets, and no matter who you share your secret with, shared secrets are considered risky. ... Many of the largest and most damaging data breaches in history might not have happened had a malicious actor not discovered a shared password. In contrast, passkeys also involve a secret, but that secret is never shared with a relying party. Passkeys are a form of Zero Knowledge Authentication (ZKA). The relying party has zero knowledge of your secret, and in order to sign in to a relying party, all you have to do is prove to the relying party that you have the secret in your possession.
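
A compact sketch of that property in Python using the cryptography package: the relying party stores only the public key and verifies a signature over a fresh challenge, so no secret is ever shared. Real passkeys wrap this in the WebAuthn/FIDO2 protocol; this only illustrates the underlying public-key step.

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device keeps the private key; the relying party records only
# the public key, so there is no shared secret to breach.
device_key = Ed25519PrivateKey.generate()
relying_party_record = device_key.public_key()

# Login: the relying party issues a one-time challenge and the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

try:
    relying_party_record.verify(signature, challenge)  # raises if the proof is invalid
    print("authenticated without ever revealing the private key")
except InvalidSignature:
    print("rejected")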


Crafting a compelling and realistic product roadmap

The most challenging aspect of roadmap creation is often prioritization. Given finite resources, not everything can be built at once. Effective prioritization requires a clear framework. Common methods include scoring features based on business value versus effort, using frameworks like RICE, or focusing on initiatives that directly address key strategic objectives. Be prepared to say “no” to good ideas that don’t align with current priorities. Transparency in this process is vital. Communicate why certain items are prioritized over others to stakeholders, fostering understanding and buy-in, even when their preferred feature isn’t immediately on the roadmap. ... A product roadmap is a living document, not a static contract. The B2B software landscape is constantly evolving, with new technologies emerging, customer needs shifting, and competitive pressures mounting. A realistic roadmap acknowledges this dynamism. While it provides a clear direction, it should also be adaptable. Plan for regular reviews and updates – quarterly or even monthly – to adjust based on new insights, validated learnings, and changes in the market or business environment. Embrace iterative development and be prepared to pivot or adjust priorities as new information comes to light. 
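
As a small illustration of one of the frameworks mentioned (RICE), a score can be computed as reach * impact * confidence / effort; the feature names and numbers below are invented purely to show the mechanics.

# RICE: score = reach * impact * confidence / effort (higher is better).
features = {
    "SSO integration":  {"reach": 4000, "impact": 2.0, "confidence": 0.8, "effort": 8},
    "Dark mode":        {"reach": 9000, "impact": 0.5, "confidence": 0.9, "effort": 3},
    "Audit log export": {"reach": 1500, "impact": 3.0, "confidence": 0.7, "effort": 5},
}

scores = {name: f["reach"] * f["impact"] * f["confidence"] / f["effort"]
          for name, f in features.items()}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0f}")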


Are software professionals ready for the AI tsunami?

Modern AI assistants can translate plain-English prompts into runnable project skeletons or even multi-file apps aligned with existing style guides (e.g., Replit). This capability accelerates experimentation and learning, especially when teams are exploring unfamiliar technology stacks. A notable example is MagicSchool.com, a real-world educational platform created using AI-assisted coding workflows, showcasing how AI can powerfully convert conceptual prompts into usable products. These tools enable rapid MVP development that can be tested directly with customers. Once validated, the MVP can then be scaled into a full-fledged product. Rapid code generation can lead to fragile or opaque implementations if teams skip proper reviews, testing, and documentation. Without guardrails, it risks technical debt and poor maintainability. To stay reliable, agile teams must pair AI-generated code with sprint reviews, CI pipelines, automated testing, and strategies to handle evolving features and business needs. Recognising the importance of this shift, tech giants like Amazon (CodeWhisperer) and Google (AlphaCode) are making significant investments in AI development tools, signaling just how central this approach is becoming to the future of software engineering.

Daily Tech Digest - July 08, 2025


Quote for the day:

“If you really want the key to success, start by doing the opposite of what everyone else is doing.” -- Brad Szollose


MCP Vulnerability Exposes the AI Untrusted Code Crisis

Most organizations have rigorous approval processes before allowing arbitrary code to run in their environments whether from open source projects or vendor solutions. Yet with this new wave of tools, we’re simultaneously allowing thousands of employees to constantly update codebases with arbitrary, untrusted AI-generated code or wiring said codebases and applications to mechanisms that can alter or modify their behavior. This isn’t about stopping the use of AI coding agents or sacrificing the massive productivity gains they provide. Instead, we should standardize better ways that allow us to run untrusted code across our software development pipelines. ... As AI development tools gain adoption across enterprises, there is a new class of systems to support them that can execute code on behalf of developers. This includes AI code assistants generating and running code snippets, MCP servers providing AI systems access to local tools and data, automated testing tools executing AI-generated test cases and development agents performing complex multistep operations. Each of these represents a potential code execution pathway that often bypasses traditional security controls. The risk isn’t just that AI-generated code can be inadvertently malicious; it’s that these new systems also create pathways for untrusted code execution.
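
One narrow guardrail for this, sketched under the assumption of a Unix host running Python: execute untrusted, AI-generated code in a separate interpreter process with CPU, memory and time limits. This is a single illustrative layer, not a substitute for containers, syscall filtering or network isolation.

import resource
import subprocess
import sys

def limit_resources():
    # Applied in the child process before the untrusted code runs (Unix only).
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                      # 2 CPU-seconds
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))   # 256 MiB address space

def run_untrusted(code: str) -> subprocess.CompletedProcess:
    # -I runs an isolated interpreter: no user site-packages, no env-based imports.
    return subprocess.run([sys.executable, "-I", "-c", code],
                          preexec_fn=limit_resources,
                          capture_output=True, text=True, timeout=5)

result = run_untrusted("print(sum(range(10)))")
print(result.stdout)  # "45" if the generated snippet behaved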


Is English the next programming language? JetBrains’ CEO says no

JetBrains does need to contend with the fact that many of its users are being threatened by AI replacing them, even if he notes that job displacement isn’t happening at anywhere near the rate some have suggested. Products, languages and IT infrastructure can indeed be made redundant too. We may also add that many layoff rounds use AI as an excuse to make cuts that are simply financially motivated. Still, we need to appreciate that AI is indeed changing the overall landscape. Tasks can be automated, and AI is eagerly shoveling up the developer code that’s freely available online. What about Kotlin specifically?  ... “Here’s my vision. I think programming languages will evolve a lot. I admit that you may not need high level programming languages in the classical sense anymore, but the solution still wouldn’t be English.” Skrygan envisions a middle ground between Kotlin and natural language. Currently, the closest approximation is Kotlin DSL. It’s a design doc that can be compiled as code. Ultimately, like anything digital, it converts into binary at the lowest level. The JetBrains CEO highlights how this is merely a repeat of what we’ve already seen: “People were writing in bytecode and assembler 40 years ago. Now, nobody cares about it anymore. It’s secondary.”

Privacy is blockchain’s missing link—and America’s opportunity to lead

We are at an inflection point. On one hand, blockchain has evolved from an experimental idea into a foundational layer for decentralized finance (DeFi), gaming, cross-border payments, and digital identity. On the other, the absence of privacy threatens to stall its momentum. Without privacy guarantees, Web3 won’t scale into a secure, inclusive internet economy—it will remain a risky, self-surveilling shadow of its potential. It’s not just user safety at stake. Institutional adoption, long seen as the tipping point for crypto’s maturation, is lagging in part because privacy solutions are underdeveloped. Financial institutions and enterprises cannot embrace systems that force them to reveal business-sensitive transactions to competitors and regulators alike. Privacy is not the enemy of compliance; it’s a prerequisite for serious engagement. ... First, policymakers must move past the false binary of privacy versus compliance. These are not mutually exclusive goals. Clear guidelines that embrace advanced cryptography, establish safe harbors for privacy-preserving innovation, and differentiate between consumer protection and surveillance will enable the next generation of secure digital finance. Second, industry leaders need to elevate privacy to the level of consensus mechanisms, scalability, and user experience. 


How scientists are trying to use AI to unlock the human mind

In one of the studies, researchers transformed a large language model into what they refer to as a “foundation model of human cognition.” Out of the box, large language models aren’t great at mimicking human behavior—they behave logically in settings where humans abandon reason, such as casinos. So the researchers fine-tuned Llama 3.1, one of Meta’s open-source LLMs, on data from a range of 160 psychology experiments, which involved tasks like choosing from a set of “slot machines” to get the maximum payout or remembering sequences of letters. ... Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. ... The second of the two Nature studies focuses on minuscule neural networks—some containing only a single neuron—that nevertheless can predict behavior in mice, rats, monkeys, and even humans. Because the networks are so small, it’s possible to track the activity of each individual neuron and use that data to figure out how the network is producing its behavioral predictions. 


New Study Reveals True AI Capabilities And Job Replacement Risk

For business leaders, this framework offers something really valuable: a reality check that cuts through vendor marketing speak. When a sales representative promises their AI solution will "revolutionize your operations," you can now ask pointed questions about which capability levels their system actually achieves and in which specific domains. The gap analysis between current AI capabilities and the requirements of specific business tasks becomes clearer when standardized benchmarks are in place. Consider customer service, where companies are deploying AI chatbots with the enthusiasm of gold rush prospectors. The OECD framework suggests that while AI can handle structured interactions reasonably well, anything requiring genuine social intelligence, nuanced problem-solving, or creative thinking quickly exposes current limitations. This doesn't mean AI isn't useful in customer service, but it helps set realistic expectations about what human oversight will still be necessary. It's the difference between using AI as a sophisticated tool versus expecting it to be a replacement employee. One approach leads to productivity gains; the other leads to customer complaints and public relations disasters.


Why EU Policy Must Catch Up to the Neurotechnology Boom

After conducting a comprehensive analysis of nearly 300 neurotechnology companies worldwide, the Center for Future Generations discovered a surprising trend: among firms fully dedicated to neurotech, consumer firms now outnumber medical ones, making up 60% of the global neurotechnology landscape. And they're proliferating at an unprecedented rate—more than quadrupling in the past decade compared to the previous 25 years. ... EEG, the technology at the heart of this revolution, has been around since the 1920s. It's crude and can't read individual thoughts, but it can detect patterns of brain activity related to focus, fatigue, and even emotional states. And when coupled with artificial intelligence and other personal data—like location, buying behaviors, and biometrics—these patterns can reveal far more about us than we might imagine. ... As this technology moves into the mainstream, the potential for misuse becomes profound. Imagine pre-election advertising that adapts its messaging based on your emotional reaction. Imagine disinformation campaigns tailored to your subconscious fears, measured directly from your brain. Imagine authoritarian governments monitoring emotional responses to propaganda, searching for dissent in citizens' brainwaves. This marks a critical moment for European policymakers.


Enterprises Are Prioritizing Generative AI Spending in 2025

The report, "Generative AI Adoption Index," highlights how organizations are moving gen AI from experimentation to full-scale implementation and offers practical strategies to create business value. CEOs, CTOs and CIOs currently lead most gen AI innovation, but leadership structures are evolving to include specialized AI roles, such as CAIOs, at the highest levels of organizations. ... Along with CAIOs, a thoughtful change management strategy will be critical. The ideal strategy should address operating model changes, data management practices and talent pipelines. Today, just 14% of organizations have a change management strategy, but this will increase to 76% by end of 2026, highlighting a growing recognition of the need for structured adaptation. But a sizable proportion of organizations may still struggle to keep pace with AI-driven transformation, with one in four organizations still lacking a strategy in 2026. ... Third-party vendors are becoming key enablers of gen AI transformation across organizations globally. From supplying outsourced talent to offering services such as cloud computing and storage, these vendors help bridge critical technology and talent gaps. Effective gen AI deployment will depend on strong collaboration between external experts and internal teams. 


AI’s rise demands more from the UK data center market

The growing demand for digital infrastructure, fueled by the surge in AI, has intensified competition for suitable land to build data centers. This scarcity (particularly in London), coupled with the rise in construction and operational costs, makes it difficult to establish data centers in the most efficient and cost-effective manner. Similarly, an over-reliance on well-established technology clusters (such as West London) can increase resource constraints and vulnerability to power outages and downtime. With UK policy frameworks around data centers still evolving, discussions are ongoing around security, energy consumption, and specific regulatory needs. ... Similarly, traditional methods demand a high level of energy consumption to keep AI chips operating at optimal temperatures. Given the energy-intensive nature of air cooling, and the fact that it is unlikely to keep up with cooling demands, the data center industry is reaching a critical juncture: stifle the capabilities of AI technologies by not integrating effective thermal management, or invest in a more effective, future-thinking approach to cooling? ... The UK’s data center expansion is not just a scaling project, it is a rethinking of what data centers and associated cooling infrastructures must become.


Why CISOs are making the SASE switch: Fewer vendors, smarter security, better AI guardrails

“SASE is an existential threat to all appliance-based network security companies,” Shlomo Kramer, Cato’s CEO, told VentureBeat. “The vast majority of the market is going to be refactored from appliances to cloud service, which means SASE [is going to be] 80% of the market.” A fundamental architectural transformation is driving that shift. SASE converges traditionally siloed networking and security functions into a single, cloud-native service edge. It combines SD-WAN with critical security capabilities, including secure web gateway (SWG), cloud access security broker (CASB) and ZTNA to enforce policy and protect data regardless of where users or workloads reside. ... The SASE consolidation wave reveals how enterprises are fundamentally rethinking security architecture. With AI attacks exploiting integration gaps instantly, single-vendor SASE has become essential for both protection and operational efficiency. The reasoning is straightforward. Every vendor handoff creates vulnerability. Each integration adds latency. Security leaders know that unified platforms can help eliminate these risks while enabling business velocity. CISOs are increasingly demanding a single console, a single agent and unified policies. 


CISOs urged to fix API risk before regulation forces their hand

The widespread use of APIs to support mobile apps, cloud services, and partner integrations means that the attack surface has changed. But the security practices often haven’t. APIs today handle everything from identity claims and cardholder data to health and account information. Yet in many organizations, they remain outside the scope of standard security programs. ... Oppenheim added that meaningful oversight at the board level doesn’t require technical fluency. “Board-level metrics in such a technically complex space can be difficult to surface meaningfully, but there are still effective ways to guide oversight and investment. Directors should ask which recognised standards (e.g. FAPI) have been adopted or are in the roadmap, and whether the organization has applied a maturity model or framework to benchmark its current posture and track improvements over time.” ... So far, the biggest improvements in API security have come either through direct regulation or industry-led mandates. But pressure is building elsewhere. “Again, organizational size plays a key role,” said Oppenheim. “Larger firms and infrastructure providers are already moving ahead voluntarily – not just in banking, but in payments and identity platforms – because they see strong API security as a necessary foundation for scale and trust.”

Daily Tech Digest - July 07, 2025


Quote for the day:

"To live a creative life, we must lose our fear of being wrong." -- Anonymous


Forget the hype — real AI agents solve bounded problems, not open-world fantasies

When people imagine AI agents today, they tend to picture a chat window. A user types a prompt, and the agent responds with a helpful answer (maybe even triggers a tool or two). That’s fine for demos and consumer apps, but it’s not how enterprise AI will actually work in practice. In the enterprise, most useful agents aren’t user-initiated, they’re autonomous. They don’t sit idly waiting for a human to prompt them. They’re long-running processes that react to data as it flows through the business. They make decisions, call services and produce outputs, continuously and asynchronously, without needing to be told when to start. ... The problems worth solving in most businesses are closed-world: Problems with known inputs, clear rules and measurable outcomes. But the models we’re using, especially LLMs, are inherently non-deterministic. They’re probabilistic by design. The same input can yield different outputs depending on context, sampling or temperature. That’s fine when you’re answering a prompt. But when you’re running a business process? That unpredictability is a liability. ... Closed-world problems don’t require magic. They need solid engineering. And that means combining the flexibility of LLMs with the structure of good software engineering. 
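
A minimal sketch of that shape in Python, with placeholders: read_events stands in for any stream or queue consumer and call_model for any LLM API. The point is the loop (long-running, data-triggered, no user prompt) and the closed-world guard that keeps a probabilistic model inside a deterministic contract.

import json

ALLOWED_DECISIONS = {"approve", "reject", "escalate"}  # closed-world outcomes

def handle(event: dict, call_model) -> str:
    decision = call_model(
        "Decide approve, reject or escalate for this refund request. "
        "Answer with one word only.\n" + json.dumps(event)
    ).strip().lower()
    # Guard against non-determinism: anything outside the contract goes to a human.
    return decision if decision in ALLOWED_DECISIONS else "escalate"

def run(read_events, call_model, dispatch):
    # Long-running worker: reacts to business data as it arrives, no prompt needed.
    for event in read_events():
        dispatch(event, handle(event, call_model))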


Has CISO become the least desirable role in business?

Being a CISO today is not for the faint of heart. To paraphrase Rodney Dangerfield, CISOs (some, anyway) get no respect. You’d think in a job where perpetual stress over the threat of a cyberattack is the norm, there would be empathy for security leaders. Instead, they face the growing challenge of trying to elicit support across departments and managing security threats, according to a recent report from WatchGuard. ... It’s no secret CISOs are under tremendous pressure. “They’ve got the regulatory scrutiny, they’ve got public visibility,” along with the increasing complexity of threats, and “AI is just adding to that fire, and the mismatch between the accountability and the authority,” says Myers, who wrote “The CISO Dilemma,” which explores CISO turnover rates and how companies can change that moving forward. Often, CISOs don’t have the mandate to influence the business systems or processes that are creating that risk, she says. “I think that’s a real disconnect and that’s what’s really driving the burnout and turnover.” ... Some CISOs are stepping back from operational roles into more advisory ones. Patricia Titus, who recently took a position as a field CISO at startup Abnormal AI after 25 years as a CISO, does not think the CISO role has become less desirable. “The regulatory scrutiny has been there all along,” she says. “It’s gotten a light shined on it.”


Enforcement Gaps in India’s DPDP Act and the case for decentralized data protection boards

The DPDP Act’s centralized enforcement model suffers from structural weaknesses that hinder effective data protection. A primary concern is the lack of independence of the Data Protection Board. Because the DPB is both appointed and funded by the Union government, with its officials classified as civil servants under central rules, it does not enjoy the institutional autonomy typically expected of a watchdog agency. ... By design, the executive branch holds decisive power over who sits on the Board and can even influence its operations through service rules. This raises a conflict of interest, given that the government itself is a major collector and processor of citizens’ data. In the words of Justice B.N. Srikrishna, having a regulator under government control is problematic “since the State will be the biggest data processor” – a regulator must be “free from the clutches of the Government” to fairly oversee both private and government actors. ... Another structural limitation is the potential for executive interference in enforcement actions, which dilutes accountability. The DPDP Act contains provisions such as Section 27(3), under which the DPB “may modify or suspend” its own orders on a reference from the Central Government.


The Good AI: Cultivating Excellence Through Data

In today’s enterprise landscape, the quality of AI systems depends fundamentally on the data that flows through them. While most organizational focus remains on AI models and algorithms, it’s the often-under-appreciated current of data flowing through these systems that truly determines whether an AI application becomes “good AI” or problematic technology. Just as ancient Egyptians developed specialized irrigation techniques to cultivate flourishing agriculture, modern organizations must develop specialized data practices to cultivate AI that is effective, ethical, and beneficial. My new column, “The Good AI,” will examine how proper data practices form the foundation for responsible and high-performing AI systems. We’ll explore how organizations can channel their data resources to create AI applications that are not just powerful, but trustworthy, inclusive, and aligned with human values. ... As organizations increasingly integrate artificial intelligence into their operations, the need for robust AI governance has never been more critical. However, establishing effective AI governance doesn’t happen in a vacuum—it must be built upon the foundation of solid data governance practices. The path to responsible AI governance varies significantly depending on your organization’s current data governance maturity level.


AI Infrastructure Inflection Point: 60% Cloud Costs Signal Time to Go Private

Perhaps the most immediate challenge facing IT teams identified in the research is the dramatic cost scaling of public cloud AI workloads. Unlike traditional applications where cloud costs scale somewhat linearly, AI workloads create exponential cost curves due to their intensive compute and storage requirements. The research identifies a specific economic threshold where cloud costs become unsustainable. When monthly cloud spending for a given AI workload reaches 60-70% of what it would cost to purchase and operate dedicated GPU-powered infrastructure, organizations hit their inflection point. At this threshold, the total cost of ownership calculation shifts decisively toward private infrastructure. IT teams can track this inflection point by monitoring data and model-hosting requirements relative to GPU transaction throughput. ... Identifying when to move from a public cloud to private cloud or some form of on-premises deployment is critical. Thomas noted that there are many flavors of hybrid FinOps tooling available in the marketplace that, when configured appropriately for an environment, will spot trend anomalies. Anomalies may be triggered by swings in GPU utilization, cost per token or inference, idle percentages, and data-egress fees. On-premises factors include material variations in hardware, power, cooling, operations, and more over a set period of time.
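A back-of-the-envelope Python sketch of the 60-70% threshold described above; the dollar figures, amortization period and function names are illustrative assumptions, not numbers from the research.

```python
# Compare monthly cloud spend for an AI workload against the approximate
# monthly cost of owning equivalent GPU infrastructure. All figures are
# hypothetical, for illustration only.

def private_monthly_tco(capex: float, amortization_months: int,
                        monthly_opex: float) -> float:
    """Amortized purchase price plus power, cooling, and operations."""
    return capex / amortization_months + monthly_opex

# Hypothetical workload: $1.2M of GPU hardware amortized over 36 months,
# plus $15k/month in power, cooling, and ops.
private = private_monthly_tco(capex=1_200_000, amortization_months=36,
                              monthly_opex=15_000)

monthly_cloud = 32_000  # current cloud bill for the same workload
ratio = monthly_cloud / private

print(f"private TCO/month: ${private:,.0f}")
print(f"cloud/private ratio: {ratio:.0%}")
if ratio >= 0.6:
    print("At or past the 60-70% inflection point: evaluate private infrastructure.")
```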


AI built it, but can you trust it?

AI is neither inherently good nor inherently bad from a security perspective. It’s another tool that can accelerate and magnify both good and bad behaviors. On the good side, if models can learn to assess the vulnerability state and general trustworthiness of app components, and factor that learning into code they suggest, AI can have a positive impact on the security of the resultant output. Open source projects can already leverage AI to help find potential vulnerabilities and even submit PRs to address them, but there still needs to be significant human oversight to ensure that the results actually improve the project’s security. ... If you simply trust an AI to generate all the artifacts needed to build, deploy, and run anything sophisticated, it will be very difficult to know if it’s done so well and what risks it’s mitigated. In many ways, this looks a lot like the classic “curl and pipe to bash” kinds of risks that have long existed, where users put blind trust in what they’re getting from external sources. Many times that can work out fine, but sometimes it doesn’t. ... AI can create impressive results quickly, but it doesn’t necessarily prioritize security and may in fact make many choices that degrade it. Have good architectures and controls, and human experts who really understand the recommendations it’s making and who can adapt and re-prompt as necessary to strike the right balance.


How to shift left on finops, and why you need to

Building cost awareness in devops requires asking an upfront question when spinning up new cloud environments. Developers and data scientists should ask if the forecasted cloud and other costs align with the targeted business value. When cloud costs do increase because of growing utilization, it’s important to relate the cost escalation to whether there’s been a corresponding increase in business value. The FinOps Foundation recommends that SaaS and cloud-driven commercial organizations measure cloud unit economics. The basic measure calculates the difference between marginal cost and marginal revenue and determines where cloud operations break even and begin to generate a profit. Other companies can use these concepts to correlate business value and cost and make smarter cloud architecture and automation decisions. ... “Engineers especially can get tunnel vision on delivering features and the art of code, and cost modeling should happen as a part of design, at the start of a project, not at the end,” says Mason of RecordPoint. “Companies generally limit the staff with access to and knowledge of cloud cost data, which is a mistake. Companies should strive to spread awareness of costs, educating users of services with the highest cost impacts, so that more people recognize opportunities to optimize or eliminate spend.”
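As a minimal sketch of the cloud unit economics measure described above (marginal revenue versus marginal cloud cost per unit of business value), the Python below uses purely hypothetical per-customer figures.

```python
# Cloud unit economics, in miniature: the margin per additional unit of
# business value (customer, transaction, etc.). Break-even is where the
# margin reaches zero. Figures are illustrative assumptions.

def unit_margin(marginal_revenue_per_unit: float,
                marginal_cloud_cost_per_unit: float) -> float:
    """Positive values mean each additional unit generates profit."""
    return marginal_revenue_per_unit - marginal_cloud_cost_per_unit

# Hypothetical SaaS example: each extra active customer adds $18 of monthly
# revenue and $11 of monthly cloud cost.
margin = unit_margin(18.0, 11.0)
print(f"margin per additional customer: ${margin:.2f}")  # $7.00 -> above break-even
```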


How Cred Built Its Observability-Led Tech Stack

Third-party integrations are critical to any fintech ecosystem, and at Cred, we manage them through a rigorous, life cycle-based third-party risk management framework. This approach is designed to minimize risk and maximize reliability, with security and resilience built in from the start. Before onboarding any external partner, whether for KYC, APIs or payment rails, we conduct thorough due diligence to evaluate their security posture. Each partner is categorized as high, medium or low risk, which then informs the depth and frequency of ongoing assessments. These reviews go well beyond standard compliance checks. ... With user goals validated, our teams then move into secure architecture design. Every integration point, data exchange and system interaction is examined to preempt vulnerabilities and ensure that sensitive information is protected by default. We use ThreatShield, an internal AI-powered threat-modeling tool, to analyze documentation and architecture against the STRIDE framework, a threat model designed by Microsoft that is used in cybersecurity to identify potential security threats to applications and systems. This architecture-first thinking enables us to deliver powerful features, such as surfacing hidden charges in smart statements or giving credit insights, without ever compromising the user's data or experience.
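ThreatShield is internal to Cred, so its workings aren't public; the Python sketch below only illustrates the kind of STRIDE-style checklist such a tool might automate for each integration. The STRIDE categories are the standard Microsoft taxonomy, while the questions, function and integration name are hypothetical.

```python
# STRIDE checklist sketch: record which categories have a confirmed mitigation
# for a given integration and report the gaps. Purely illustrative.

STRIDE = {
    "Spoofing": "Is every party to the integration strongly authenticated?",
    "Tampering": "Are payloads signed or integrity-checked in transit and at rest?",
    "Repudiation": "Are requests and decisions logged with tamper-evident audit trails?",
    "Information disclosure": "Is sensitive data encrypted and access-scoped by default?",
    "Denial of service": "Are rate limits, timeouts, and fallbacks defined?",
    "Elevation of privilege": "Are partner credentials least-privilege and rotated?",
}

def review(integration: str, answers: dict) -> list:
    """Return the STRIDE categories that still need mitigation."""
    return [category for category in STRIDE if not answers.get(category, False)]

gaps = review("kyc-partner-api", {
    "Spoofing": True,
    "Tampering": True,
    "Information disclosure": False,
})
print(gaps)  # categories without a confirmed mitigation
```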


How To Tackle Tech Debt Without Slowing Innovation

Implement a “boy scout rule” under which developers are encouraged to make small improvements to existing code during feature work. This maintains development momentum while gradually improving code quality, and developers are more motivated to clean up code they’re already actively working with. ... Proactively analyze user engagement metrics to pinpoint friction points where users spend excessive time. Prioritize these areas for targeted debt reduction, aligning technical improvements closely with meaningful user experience enhancements. ... Pre-vacation handovers are an excellent opportunity to reduce tech debt. Planning and carrying out handovers before we take a holiday are crucial to maintaining smooth IT operations. Giving your employees the choice to hand tasks over to automation or a human colleague can help reduce tech debt and automate tasks. Critically, it utilizes time already allocated for addressing this work. ... Resolving technical debt is development. The Shangri-la of “no tech debt” does not survive contact with reality. It’s a balance of doing what’s right for the business. Making sure the product and engineering teams are on the same page is critical. You should have sprints where tech debt is the focus.


Why cybersecurity should be seen as a business enabler, not a blocker

Among the top challenges facing the IT sector today, says Jackson, is the rapid development of the tech world. “The pace of change is outpacing many organisations’ ability to adapt securely – whether due to AI, rapid cloud adoption, evolving regulatory frameworks like DORA, or the ongoing shortage of skilled cybersecurity professionals,” he says. “These challenges, combined with cost pressures and the perception that security is not always an enabler, make adaptation even harder.” AI in particular, to no surprise, is having a significant effect on the cybersecurity world – reshaping both sides of the “cybersecurity battlefield”, according to Jackson. “We’re seeing attackers utilise large language models (LLMs) like ChatGPT to scale social engineering and refine malicious code, while defenders are using the same tools (or leveraging them in some way) to enhance threat detection, streamline triage and gain broader context at much greater speed,” he says. While he doesn’t believe AI will have as great an impact as some suggest, he says it still represents an “exciting evolution”, particularly in how it can benefit organisations. “AI won’t replace individuals such as SOC analysts anytime soon, but it can augment and support their roles, freeing up time to focus on higher-priority tasks,” he says.