Daily Tech Digest - April 30, 2025


Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown


Common Pitfalls and New Challenges in IT Automation

“You don’t know what you don’t know and can’t improve what you can’t see. Without process visibility, automation efforts may lead to automating flawed processes. In effect, accelerating problems while wasting both time and resources and leading to diminished goodwill by skeptics,” says Kerry Brown, transformation evangelist at Celonis, a process mining and process intelligence provider. The aim of automating processes is to improve how the business performs. That means drawing a direct line from the automation effort to a well-defined ROI. ... Data is arguably the most boring issue on IT’s plate. That’s because it requires a ton of effort to update, label, manage and store massive amounts of data, and the job is never quite done. It may be boring work, but it is essential and can be fatal if left for later. “One of the most significant mistakes CIOs make when approaching automation is underestimating the importance of data quality. Automation tools are designed to process and analyze data at scale, but they rely entirely on the quality of the input data,” says Shuai Guan, co-founder and CEO at Thunderbit, an AI web scraper tool. ... “CIOs often fall into the trap of thinking automation is just about suppressing noise and reducing ticket volumes. While that’s one fairly common use case, automation can offer much more value when done strategically,” says Erik Gaston.


Outmaneuvering Tariffs: Navigating Disruption with Data-Driven Resilience

The fact that tariffs were coming was expected – President Donald Trump campaigned promising tariffs – but few could have expected their severity (145% on Chinese imports, as of this writing) or their pace of change (prohibitively high “reciprocal” tariffs on 100+ countries, only to be temporarily rescinded days later). Also unpredictable were second-order effects such as stock and bond market reactions, which affect the cost of capital, and the impact on consumer demand, driven by changing inflation expectations and concerns about job loss. ... Most organizations will have fragmented views of data, including views of all of the components that come from a given supplier or are delivered through a specific transportation provider. They may have a product-centric view that includes all suppliers that contribute components to a given product. But this data often resides in a variety of supplier-management apps, procurement apps, demand forecasting apps, and other types of apps. Some may be consolidated into a data lakehouse or a cloud data warehouse to enable advanced analytics, but the time a data engineering team needs to build the necessary data pipelines from these systems is often measured in days or weeks, and such pipelines will usually only be implemented for scenarios that the business expects to be stable over time.


The state of intrusions: Stolen credentials and perimeter exploits on the rise, as phishing wanes

What’s worrying is that in over half of intrusions (57%), the victim organizations learned about the compromise of their networks and systems from a third party rather than discovering it through internal means. In 14% of cases, organizations were notified directly by attackers, usually in the form of ransom notes, but 43% of cases involved external entities such as cybersecurity companies or law enforcement agencies. The average time attackers spent inside a network before being discovered last year was 11 days, a one-day increase over 2023, though still a major improvement on a decade ago, when the average discovery time was 205 days. Attacker dwell time, as Mandiant calls it, has steadily decreased over the years, which is a good sign ... In terms of ransomware, the most common infection vectors observed by Mandiant last year were brute-force attacks (26%), such as password spraying and the use of common default credentials, followed by stolen credentials and exploits (21% each), prior compromises resulting in sold access (15%), and third-party compromises (10%). Cloud accounts and assets were compromised through phishing (39%), stolen credentials (35%), SIM swapping (6%), and voice phishing (6%). Over two-thirds of cloud compromises resulted in data theft, and 38% were financially motivated, with data extortion, business email compromise, ransomware, and cryptocurrency fraud the leading goals.


Three Ways AI Can Weaken Your Cybersecurity

“Slopsquatting” is a fresh AI take on “typosquatting,” where ne’er-do-wells spread malware to unsuspecting Web travelers who happen to mistype a URL. With slopsquatting, the bad guys are spreading malware through software development libraries that have been hallucinated by GenAI. ... While it is still unclear whether the bad guys have weaponized slopsquatting yet, GenAI’s tendency to hallucinate software libraries is perfectly clear. Last month, researchers published a paper concluding that GenAI recommends Python and JavaScript libraries that don’t exist about one-fifth of the time. ... Like the SQL injection attacks that plagued early Web 2.0 warriors who didn’t adequately validate database input fields, prompt injections involve the surreptitious injection of a malicious prompt into a GenAI-enabled application to achieve some goal, ranging from information disclosure to gaining code execution rights. Mitigating these sorts of attacks is difficult because of the nature of GenAI applications. Instead of inspecting code for malicious entities, organizations must investigate the entirety of a model, including all of its weights. ... A form of adversarial AI attack, data poisoning or data manipulation poses a serious risk to organizations that rely on AI. According to the security firm CrowdStrike, data poisoning is a risk to healthcare, finance, automotive, and HR use cases, and can even potentially be used to create backdoors.
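
One practical guardrail against slopsquatting is to verify that an AI-suggested dependency actually exists before anyone runs pip install. The sketch below is a minimal illustration, not a complete defense: it assumes the public PyPI JSON API and the requests library, and a real pipeline would also check download counts, maintainers, and typo-distance to popular packages.

```python
import sys
import requests

PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_exists(name: str) -> bool:
    """Return True if the package is actually published on PyPI."""
    resp = requests.get(PYPI_URL.format(name=name), timeout=10)
    return resp.status_code == 200

def vet_suggestions(names):
    """Flag AI-suggested dependencies that do not exist on PyPI."""
    missing = [n for n in names if not package_exists(n)]
    for name in missing:
        print(f"WARNING: '{name}' not found on PyPI - possible hallucinated package")
    return missing

if __name__ == "__main__":
    # Example: vet the package names an assistant suggested before installing them.
    suggested = sys.argv[1:] or ["requests", "definitely-not-a-real-pkg-42"]
    vet_suggestions(suggested)
```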


AI Has Moved From Experimentation to Execution in Enterprise IT

According to the SOAS report, 94% of organisations are deploying applications across multiple environments—including public clouds, private clouds, on-premises data centers, edge computing, and colocation facilities—to meet varied scalability, cost, and compliance requirements. Consequently, most decision-makers see hybrid environments as critical to their operational flexibility: 91% cited adaptability to fluctuating business needs as the top benefit of adopting multiple clouds, followed by improved app resiliency (68%) and cost efficiencies (59%). A hybrid approach is also reflected in deployment strategies for AI workloads, with 51% planning to use models across both cloud and on-premises environments for the foreseeable future. Significantly, 79% of organisations recently repatriated at least one application from the public cloud back to an on-premises or colocation environment, citing cost control, security concerns, and predictability. ... “While spreading applications across different environments and cloud providers can bring challenges, the benefits of being cloud-agnostic are too great to ignore. It has never been clearer that the hybrid approach to app deployment is here to stay,” said Cindy Borovick, Director of Market and Competitive Intelligence.


Trying to Scale With a Small Team? Here's How to Drive Growth Without Draining Your Resources

For an effective entrepreneur or leader, communication is key, and prioritizing initiatives that directly align with the overall strategic vision ensures that your lean team is working on the projects with the greatest impact. Integrate key frameworks such as Responsible, Accountable, Consulted, and Informed (RACI) and Objectives and Key Results (OKRs) to maintain transparency and focus and to measure progress. By focusing efforts on high-impact activities, your lean team can achieve significant results without the unnecessary strain so often seen in early-stage organizations. ... Many think that agile methodologies are only for the fast-moving software development industry — but in reality, the frameworks are powerful tools for lean teams in any industry. Encouraging the right culture is key: quick pivots, regular and genuine feedback loops, and leadership that promotes continuous improvement should be part of everyday workflows. This agile mindset, when adopted early, helps teams respond rapidly to market changes and client issues. ... Trusting others builds rapport. Assigning clear ownership of tasks, giving team members the autonomy to execute strategies creatively and efficiently, and allowing them room to fail is how trust is created.


Effecting Culture Changes in Product Teams

Depending on the organization, the responsibility of successfully leading a culture shift among the product team could fall to various individuals – the CPO, VP of product development, product manager, etc. But regardless of the specific title, to be an effective leader, you can’t assume you know all the answers. Start by having one-to-one conversations with numerous members on the product/engineering team. Ask for their input and understand, from their perspective, what is working, what’s not working, and what ideas they have for how to accelerate product release timelines. After conducting one-to-one discussions, sit down and correlate the information. Where are the common denominators? Did multiple team members make the same suggestions? Identify the roadblocks that are slowing down the product team or standing in the way of delivering incremental value on a more regular basis. In many cases, tech leaders will find that their team already knows how to fix the issue – they just need permission to do things a bit differently and adjust company policies/procedures to better support a more accelerated timeline. Talking one-on-one with team members also helps resolve any misunderstandings around why the pace of work must change as the company scales and accumulates more customers. Product engineers often have a clear vision of what the end product should entail, and they want to be able to deliver on that vision.


Microsoft Confirms Password Spraying Attack — What You Need To Know

The password spraying attack exploited a command line interface tool called AzureChecker to “download AES-encrypted data that when decrypted reveals the list of password spray targets,” the report said. To add salt to the now open wound, the tool then accepted as input an accounts.txt file containing the username and password combinations used for the attack. “The threat actor then used the information from both files and posted the credentials to the target tenants for validation,” Microsoft explained. The successful attack enabled the Storm-1977 hackers to leverage a guest account to create a resource group within a compromised subscription and, ultimately, more than 200 containers that were used for cryptomining. ... Passwords are no longer enough to keep us safe online. That’s the view of Chris Burton, head of professional services at Pentest People, who told me that “where possible, we should be using passkeys, they’re far more secure, even if adoption is still patchy.” Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant, is no less adamant when it comes to going passwordless. ... And Brian Pontarelli, CEO of FusionAuth, said that the teams who are building the future of passwords are the same ones that are building and managing the login pages of their apps. “Some of them are getting rid of passwords entirely,” Pontarelli said.
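
Defensively, the spraying behaviour described above has a recognizable shape in sign-in telemetry: one source touching many distinct accounts in a short burst. The sketch below is a simplified, hypothetical detector over generic log records (the field names and thresholds are assumptions, not Microsoft's schema); real-world detection would lean on the identity provider's sign-in logs, conditional access policies, and MFA signals.

```python
from collections import defaultdict
from datetime import timedelta

def detect_spraying(events, window=timedelta(minutes=30), min_accounts=20):
    """Flag source IPs whose failed logins touch many distinct accounts in a short window.

    `events` is an iterable of dicts with assumed keys:
    timestamp (datetime), source_ip (str), username (str), success (bool).
    """
    failures = defaultdict(list)
    for e in events:
        if not e["success"]:
            failures[e["source_ip"]].append(e)

    suspicious = set()
    for ip, evts in failures.items():
        evts.sort(key=lambda e: e["timestamp"])
        start = 0
        for end in range(len(evts)):
            # Slide the window forward so it spans at most `window` of time.
            while evts[end]["timestamp"] - evts[start]["timestamp"] > window:
                start += 1
            accounts = {e["username"] for e in evts[start:end + 1]}
            if len(accounts) >= min_accounts:
                suspicious.add(ip)
                break
    return sorted(suspicious)
```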


The secret weapon for transformation? Treating it like a merger

Like an IMO, a transformation office serves as the conductor — setting the tempo, aligning initiatives and resolving portfolio-level tensions before they turn into performance issues. It defines the “music” everyone should be playing: a unified vision for experience, business architecture, technology design and most importantly, change management. It also builds connective tissue. It doesn’t just write the blueprint — it stays close to initiative or project leads to ensure adherence, adapts when necessary and surfaces interdependencies that might otherwise go unnoticed. ... What makes the transformation office truly effective isn’t just the caliber of its domain leaders — it’s the steering committee of cross-functional VPs from core business units and corporate functions that provides strategic direction and enterprise-wide accountability. This group sets the course, breaks ties and ensures that transformation efforts reflect shared priorities rather than siloed agendas. Together, they co-develop and maintain a multi-year roadmap that articulates what capabilities the enterprise needs, when and in what sequence. Crucially, they’re empowered to make decisions that span the legacy seams of the organization — the gray areas where most transformations falter. In this way, the transformation office becomes more than connective tissue; it becomes an engine for enterprise decision-making.


Legacy Modernization: Architecting Real-Time Systems Around a Mainframe

When traffic spikes hit our web portal, those requests would flow through to the mainframe. Unlike cloud systems, mainframes can't elastically scale to handle sudden load increases. This created a bottleneck that could overload the mainframe, causing connection timeouts. As timeouts increased, the mainframe would crash, leading to complete service outages with a large blast radius: hundreds of other applications that depend on the mainframe would also be impacted. This is a perfect example of the problems with synchronous connections to the mainframe. When the mainframe could be overwhelmed by a highly elastic resource like the web, the result could be failures in datastores, and sometimes that failure could end with every consuming application failing. ... Change Data Capture became the foundation of our new architecture. Instead of batch ETLs running a few times daily, CDC streamed data changes from the mainframes in near real-time. This created what we called a "system-of-reference" - not the authoritative source of truth (the mainframe remains the "system-of-record"), but a continuously updated reflection of it. The system of reference is not a proxy for the system of record, which is why our website was still live when the mainframe went down.
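
As a rough sketch of the pattern described here (not the team's actual implementation), a CDC pipeline might publish mainframe change events to a Kafka topic, with a consumer applying them to the read-optimized "system-of-reference" store. The example below assumes the kafka-python client, a hypothetical topic name, JSON-encoded change events, and an in-memory dict standing in for the real store.

```python
import json
from kafka import KafkaConsumer  # assumes the kafka-python package

# Hypothetical topic carrying CDC events captured from the mainframe's change logs.
consumer = KafkaConsumer(
    "mainframe.accounts.cdc",
    bootstrap_servers=["broker:9092"],
    group_id="system-of-reference-writer",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    enable_auto_commit=False,
)

# In-memory stand-in for the read-optimized system-of-reference store.
system_of_reference = {}

for message in consumer:
    event = message.value  # e.g. {"op": "UPSERT", "key": "acct-123", "after": {...}}
    if event["op"] in ("INSERT", "UPSERT", "UPDATE"):
        system_of_reference[event["key"]] = event["after"]
    elif event["op"] == "DELETE":
        system_of_reference.pop(event["key"], None)
    # Commit the offset only after the change is applied, so events aren't lost on failure.
    consumer.commit()
```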

Daily Tech Digest - April 29, 2025


Quote for the day:

"Don't let yesterday take up too much of today." -- Will Rogers



AI and Analytics in 2025 — 6 Trends Driving the Future

As AI becomes deeply embedded in enterprise operations and agentic capabilities are unlocked, concerns around data privacy, security and governance will take center stage. With emerging technologies evolving at speed, a mindset of continuous adaptation will be required to ensure requisite data privacy, combat cyber risks and successfully achieve digital resilience. As organizations expand their global footprint, understanding the implications of evolving AI regulations across regions will be crucial. While unifying data is essential for maximizing value, ensuring compliance with diverse regulatory frameworks is mandatory. A nuanced approach to regional regulations will be key for organizations navigating this dynamic landscape. ... As the technology landscape evolves, continuous learning becomes essential. Professionals must stay updated on the latest technologies while letting go of outdated practices. Tech talent responsible for building AI systems must be upskilled in evolving AI technologies. At the same time, employees across the organization need training to collaborate effectively with AI, ensuring seamless integration and success. Whether through internal upskilling or embarking on skills-focused partnerships, investment in talent management will prove crucial to winning the tech-talent gold rush and thriving in 2025 and beyond.


Generative AI is not replacing jobs or hurting wages at all, say economists

The researchers looked at the extent to which company investment in AI has contributed to worker adoption of AI tools, and also how chatbot adoption affected workplace processes. While firm-led investment in AI boosted the adoption of AI tools — saving time for 64 to 90 percent of users across the studied occupations — chatbots had a mixed impact on work quality and satisfaction. The economists found for example that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves." In other words, AI is creating new work that cancels out some potential time savings from using AI in the first place. "One very stark example that it's close to home for me is there are a lot of teachers who now say they spend time trying to detect whether their students are using ChatGPT to cheat on their homework," explained Humlum. He also observed that a lot of workers now say they're spending time reviewing the quality of AI output or writing prompts. Humlum argues that can be spun negatively, as a subtraction from potential productivity gains, or more positively, in the sense that automation tools historically have tended to generate more demand for workers in other tasks. "These new job tasks create new demand for workers, which may boost their wages, if these are more high value added tasks," he said.


Advancing Digital Systems for Inclusive Public Services

Uganda adopted the modular open-source identity platform, MOSIP, two years ago. A small team of 12, with limited technical expertise, began adapting the MOSIP platform to align with Uganda's Registration of Persons Act, gradually building internal capacity. By the time the system integrator was brought in, Uganda had incorporated the digital public good (DPG) into its legal framework, providing the integrator with a foundation to build upon. This early customization helped shape the legal and technical framework needed to scale the platform. But improvements are needed, particularly in the documentation of the DPG. "Standardization, information security and inclusion were central to our work with MOSIP," Kisembo said. "Consent became a critical focus and is now embedded across the platform, raising awareness about privacy and data protection." ... Nigeria, with a population of approximately 250 million, is taking steps to coordinate its previously fragmented digital systems through a national DPI framework. The country deployed multiple digital solutions over the last 10 to 15 years, which were often developed in silos by different ministries and private sector agencies. In 2023 and 2024, Nigeria developed a strategic framework to unify these systems and guide its DPI adoption.


Eyes, ears, and now arms: IoT is alive

In just a few years, devices at home and work started including cameras to see and microphones to hear. Now, with new lines of vacuums and emerging humanoid robots, devices have appendages to manipulate the world around them. They’re not only able to collect information about their environment but can touch, “feel”, and move it. ... But, knowing the history of smart devices getting hacked, there’s cause for concern. From compromised baby monitors to open video doorbell feeds, bad actors have exploited default passwords and unencrypted communications for years. And now, beyond seeing and hearing, we’re on the verge of letting devices roam around our homes and offices with literal arms. What’s stopping a hacked robot vacuum from tampering with security systems? Or your humanoid helper from opening the front door? ... If developers want robots to become a reality, they need to create confidence in these systems immediately. This means following best practice cybersecurity by enabling peer-to-peer connectivity, outlawing generic credentials, and supporting software throughout the device lifecycle. Likewise, users can more safely participate in the robot revolution by segmenting their home networks, implementing multi-factor authentication, and regularly reviewing device permissions.


How to Launch a Freelance Software Development Career

Finding freelance work can be challenging in many fields, but it tends to be especially difficult for software developers. One reason is that many software development projects do not lend themselves well to a freelancing model because they require a lot of ongoing communication and maintenance. This means that, to freelance successfully as a developer, you'll need to seek out gigs that are sufficiently well-defined and finite in scope that you can complete them within a predictable period of time. ... Specifically, you need to envision yourself also as a project manager, a finance director, and an accountant. When you can do these things, it becomes easier not just to freelance profitably, but also to convince prospective clients that you know what you're doing and that they can trust you to complete projects with quality and on time. ... While creating a portfolio may seem obvious enough, one pitfall that new freelancers sometimes run into is being unable to share work due to nondisclosure agreements they sign with clients. When negotiating contracts, avoid this risk by ensuring that you'll retain the right to share key aspects of a project for the purpose of promoting your own services. Even if clients won't agree to letting you share source code, they'll often at least allow you to show off the end product and discuss at a high level how you approached and completed a project.


Digital twins critical for digital transformation to fly in aerospace

Among the key conclusions were that there was a critical need to examine the standards that currently support the development of digital twins, identify gaps in the governance landscape, and establish expectations for the future. ... The net result will be that stakeholder needs and objectives become more achievable, resulting in affordable solutions that shorten test, demonstration, certification and verification, thereby decreasing lifecycle cost while increasing product performance and availability. Yet the DTC cautioned that cyber security considerations within a digital twin and across its external interfaces must be customisable to suit the environment and risk tolerance of digital twin owners. ... First, the DTC said that evidence suggests a necessity to examine the standards that currently support digital twins, identify gaps in the governance landscape, and set expectations for future standard development. In addition, the research team identified that standardisation challenges exist when developing, integrating and maintaining digital twins during design, production and sustainment. There was also a critical need to identify and manage requirements that support interoperability between digital twins throughout the lifecycle. This recommendation also applied to the more complex SoS Digital Twins development initiatives. Digital twin model calibration needs to be an automated process and should be applicable to dynamically varying model parameters.


Quality begins with planning: Building software with the right mindset

Too often, quality is seen as the responsibility of QA engineers. Developers write the code, QA tests it, and ops teams deploy it. But in high-performing teams, that model no longer works. Quality isn’t one team’s job; it’s everyone’s job. Architects defining system components, developers writing code, product managers defining features, and release managers planning deployments all contribute to delivering a reliable product. When quality is owned by the entire team, testing becomes a collaborative effort. Developers write testable code and contribute to test plans. Product managers clarify edge cases during requirements gathering. Ops engineers prepare for rollback scenarios. This collective approach ensures that no aspect of quality is left to chance. ... One of the biggest causes of software failure isn’t building the wrong way, it’s building the wrong thing. You can write perfectly clean, well-tested code that works exactly as intended and still fail your users if the feature doesn’t solve the right problem. That’s why testing must start with validating the requirements themselves. Do they align with business goals? Are they technically feasible? Have we considered the downstream impact on other systems or components? Have we defined what success looks like?


What Makes You a Unicorn in Your Industry? Start by Mastering These 4 Pillars

First, you have to have the capacity, the skill, to excel in that area. Additionally, you have to learn how to leverage that standout aspect to make it work for you in the marketplace - incorporating it into your branding, spotlighting it in your messaging, maybe even including it in your name. Concise as the notion is, there's actually a lot of breadth and flexibility in it, for when it comes to selecting what you want to do better than anyone else is doing it, your choices are boundless. ... Consumers have gotten quite savvy at sniffing out false sincerity, so when they come across the real thing, they're much more prone to give you their business. Basically, when your client base believes you prioritize your vision, your team and creating an incredible product or service over financial gain, they want to work with you. ... Building and maintaining a remarkable "company culture" can just be a buzzword to you, or you can bring it to life. I can't think of any single factor that makes my company more valuable to my clients than the value I place on my people and the experience I endeavor to provide them by working for me. When my staff feels openly recognized, wholly supported and vitally important to achieving our shared outcomes, we're truly unstoppable. So keep in mind that your unicorn focus can be internal, not necessarily client-facing.



Conquering the costs and complexity of cloud, Kubernetes, and AI

While IT leaders clearly see the value in platform teams—nine in 10 organizations have a defined platform engineering team—there’s a clear disconnect between recognizing their importance and enabling their success. This gap signals major stumbling blocks ahead that risk derailing platform team initiatives if not addressed early and strategically. For example, platform teams find themselves burdened by constant manual monitoring, limited visibility into expenses, and a lack of standardization across environments. These challenges are only amplified by the introduction of new and complex AI projects. ... Platform teams that manually juggle cost monitoring across cloud, Kubernetes, and AI initiatives find themselves stretched thin and trapped in a tactical loop of managing complex multi-cluster Kubernetes environments. This prevents them from driving strategic initiatives that could actually transform their organizations’ capabilities. These challenges reflect the overall complexity of modern cloud, Kubernetes, and AI environments. While platform teams are chartered with providing infrastructure and tools necessary to empower efficient development, many resort to short-term patchwork solutions without a cohesive strategy. 


Reporting lines: Could separating from IT help CISOs?

CFOs may be primarily concerned with the financial performance of the business, but they also play a key role in managing organizational risk. This is where CISOs can learn the tradecraft in translating technical measures into business risk management. ... “A CFO comes through the finance ranks without a lot of exposure to IT and I can see how they’re incentivized to hit targets and forecasts, rather than thinking: if I spend another two million on cyber risk mitigation, I may save 20 million in three years’ time because an incident was prevented,” says Schat. Budgeting and forecasting cycles can be a mystery to CISOs, who may engage with the CFO infrequently, and interactions are mostly transactional around budget sign-off on cybersecurity initiatives, according to Gartner. ... It’s not uncommon for CISOs to find security seen as a barrier, where the benefits aren’t always obvious, and are actually at odds with the metrics that drive the CIO. “Security might slow down a project, introduce a layer of complexity that we need from a security perspective, but it doesn’t obviously help the customer,” says Bennett. Reporting to CFOs can relieve potential conflicts of interest. It can allow CISOs to broaden their involvement across all areas of the organization, beyond input in technology, because security and managing risk is a whole-of-business mission.

Daily Tech Digest - April 28, 2025


Quote for the day:

"If a window of opportunity appears, don't pull down the shade." -- Tom Peters



Researchers Revolutionize Fraud Detection with Machine Learning

Machine learning plays a critical role in fraud detection by identifying patterns and anomalies in real-time. It analyzes large datasets to spot normal behavior and flag significant deviations, such as unusual transactions or account access. However, fraud detection is challenging because fraud cases are much rarer than normal ones, and the data is often messy or unlabeled. ... “The use of machine learning in fraud detection brings many advantages,” said Taghi Khoshgoftaar, Ph.D., senior author and Motorola Professor in the FAU Department of Electrical Engineering and Computer Science. “Machine learning algorithms can label data much faster than human annotation, significantly improving efficiency. Our method represents a major advancement in fraud detection, especially in highly imbalanced datasets. It reduces the workload by minimizing cases that require further inspection, which is crucial in sectors like Medicare and credit card fraud, where fast data processing is vital to prevent financial losses and enhance operational efficiency.” ... The method combines two strategies: an ensemble of three unsupervised learning techniques using the SciKit-learn library and a percentile-gradient approach. The goal is to minimize false positives by focusing on the most confidently identified fraud cases. 
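
The researchers' exact pipeline isn't detailed in this excerpt, so the sketch below is only an illustration of the general idea: combine several unsupervised detectors from scikit-learn (IsolationForest, Local Outlier Factor, and a One-Class SVM are assumptions, not necessarily the paper's three techniques) and keep only the top percentile of averaged anomaly rankings, so analysts review just the most confidently flagged cases. The simple percentile cut-off here stands in for the paper's percentile-gradient approach.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

def top_percentile_fraud_candidates(X, percentile=99.5):
    """Combine three unsupervised detectors and keep only the most confident outliers."""
    X = np.asarray(X, dtype=float)

    # Each detector returns a score where LOWER means MORE anomalous.
    iso_scores = IsolationForest(random_state=0).fit(X).score_samples(X)

    lof = LocalOutlierFactor(n_neighbors=20)
    lof.fit(X)
    lof_scores = lof.negative_outlier_factor_

    svm_scores = OneClassSVM(nu=0.01).fit(X).score_samples(X)

    def to_anomaly_rank(scores):
        # Convert to a 0..1 rank where HIGHER means MORE anomalous.
        order = scores.argsort().argsort()
        return 1.0 - order / (len(scores) - 1)

    combined = np.mean(
        [to_anomaly_rank(s) for s in (iso_scores, lof_scores, svm_scores)], axis=0
    )
    threshold = np.percentile(combined, percentile)
    # Indices of the records most confidently flagged for human review.
    return np.where(combined >= threshold)[0]
```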


Cybersecurity is Not Working: Time to Try Something Else

Many CISOs, changing jobs every 2 years or so, have not learnt to get things done in large firms; they have not developed the political acumen and the management experience they would need. Many have simply remained technologists and firefighters, trapped in an increasingly obsolete mindset, pushing bottom-up a tools-based, risk-based, tech-driven narrative, disconnected from what the board wants to hear which has now shifted towards resilience and execution. This is why we may have to come to the point where we have to accept that the construction around the role of the CISO, as it was initiated in the late 90s, has served its purpose and needs to evolve. The first step in this evolution, in my opinion, is for the board to own cybersecurity as a business problem, not as a technology problem. It needs to be owned at board level in business terms, in line with the way other topics are owned at board level. This is about thinking the protection of the business in business terms, not in technology terms. Cybersecurity is not a purely technological matter; it has never been and cannot be. ... There may be a need to amalgamate it with other matters such as corporate resilience, business continuity or data privacy to build up a suitable board-level portfolio, but for me this is the way forward in reversing the long-term dynamics, away from the failed historical bottom-up constructions, towards a progressive top-down approach.


How the financial services C-suite are going beyond ‘keeping the lights on’ in 2025

C-suites need to tackle three core areas: Ensure they are getting high value support for their mission-critical systems; Know how to optimise their investments; Transform their organisation without disrupting day-to-day operations. It is certainly possible if they have the time, capabilities and skill sets in-house. Yet even the most well-resourced enterprises can struggle to acquire the knowledge base and market expertise required to negotiate with multiple vendors, unlock investments or run complex change programmes single-handedly. The reality is that managing the balance needed to save costs while accelerating innovation is challenging. ... The survey demonstrates a growing necessity for CIOs and CFOs to speak each other’s language, marking a shift in organisational strategy, moving IT beyond the traditional ‘keeping the lights on’ approach, and driving a pivotal transformation in the relationship between CIOs and CFOs. As they find better ways to collaborate and innovate, businesses in the financial services space will reap the rewards of emerging technology, while falling in line with budgetary needs. Emerging technologies are being introduced thick and fast, and as a result, hard metrics aren’t always available. Instead of feeling frustrated with a lack of data, CFOs should lean in as active participants, understanding how emerging technologies like AI and cybersecurity can drive strategic value, optimise operations and create new revenue streams. 


Threat actors are scanning your environment, even if you’re not

Martin Jartelius, CISO and Product Owner at Outpost24, says that the most common blind spots that the solution uncovers are exposed management interfaces of devices, exposed databases in the cloud, misconfigured S3 storage, or just a range of older infrastructure no longer in use but still connected to the organization’s domain. All of these can provide an entry point into internal networks, and some can be used to impersonate organizations in targeted phishing attacks. But these blind spots are not indicative of poor leadership or IT security performance: “Most who see a comprehensive report of their attack surface for the first time are surprised that it is often substantially larger than they understood. Some react with discomfort and perceive their prior lack of insight as a failure, but that is not the case. ... Attack surface management is still a maturing technology field, but having a solution bringing the information together in a platform gives a more refined and in-depth insight over time. External attack surface management starts with a continuous detection of exposed assets – in Sweepatic’s case, that also includes advanced port scanning to detect all (and not just the most common) ports at risk of exploitation – then moves on to automated security analysis and then risk-based reporting.
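
At its simplest, the "continuous detection of exposed assets" described here starts with checking which services a host exposes to the internet. The sketch below is a toy, standard-library-only illustration using a handful of commonly exposed management and database ports; real EASM products scan far more broadly and layer on security analysis and risk-based reporting. Only scan assets you own or are explicitly authorized to test.

```python
import socket

# A few ports that commonly betray exposed management interfaces and databases.
PORTS_OF_INTEREST = {
    22: "SSH", 3389: "RDP", 3306: "MySQL", 5432: "PostgreSQL",
    6379: "Redis", 9200: "Elasticsearch", 27017: "MongoDB",
}

def exposed_services(host: str, timeout: float = 1.0):
    """Return the interesting ports on `host` that accept a TCP connection."""
    findings = {}
    for port, service in PORTS_OF_INTEREST.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                findings[port] = service
    return findings

if __name__ == "__main__":
    # Example against a documentation (TEST-NET) address; substitute a host you are authorized to scan.
    print(exposed_services("203.0.113.10"))
```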


How to Run a Generative AI Developer Tooling Experiment

The last metric, rework, is a significant consideration with generative AI, as 67% of developers find that they are spending more time debugging AI-generated code. Devexperts experienced a 200% increase in rework and a 30% increase in maintenance. On the other hand, while the majority of organizations are seeing an increase in complexity and lines of code with code generators, these five engineers saw a surprising 15% decrease in lines of code created. “We can conclude that, for the live experiment, GitHub Copilot didn’t deliver the results one could expect after reading their articles,” summarized German Tebiev, the software engineering process architect who ran the experiment. He did think the results were persuasive enough to believe speed will be enabled if the right processes are put in place: “The fact that the PR throughput shows significant growth tells us that the desired speed increase can be achieved if the tasks’ flow is handled effectively.” ... Just 17% of developers responded that they think Copilot helped them save at least an hour a week, versus a whopping 40% who saw no time savings from using the code generator, which is well below the industry average. Developers were also able to share their own anecdotal experience, which is very situation-dependent. Copilot seemed to be a better choice for completing more basic lines of code for new features, and less so when there was the complexity of working with an existing codebase.


Dark Data: Surprising Places for Business Insights

Previously, the biggest problem when dealing with dark data was its messy nature. Even though AI has been able to analyze structured data for years, unstructured or semi-structured data proved to be a hard nut to crack. Unfortunately, unstructured data constitutes the majority of dark data. However, recent advances in natural language processing (NLP), natural language understanding (NLU), speech recognition, and ML have enabled AI to deal with unstructured dark data more effectively. Today, AI can easily analyze raw inputs like customer reviews and social media comments to identify trends and sentiment. Advanced sentiment analysis algorithms can come to accurate conclusions about tone, context, emotional nuances, sarcasm, and urgency, providing businesses with deeper audience insights. For instance, Amazon uses this approach to flag fake reviews. In finance and banking, AI-powered data analysis tools are used to process transaction logs and unstructured customer communications to identify fraud risks and enhance service and customer satisfaction. Another industry where dark data mining might have potentially huge social benefits is healthcare. Currently, this industry generates around 30% of all the data in the world.
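
As a small illustration of turning unstructured "dark data" like reviews into structured signals, the sketch below scores raw customer comments with NLTK's VADER sentiment analyzer. It is a deliberately simple stand-in for the more advanced, context-aware models the article refers to, and sarcasm will still trip it up.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

reviews = [
    "The checkout flow was painless and delivery arrived a day early.",
    "Support kept me on hold for an hour and never solved the issue.",
    "Great product, I guess, if you enjoy reading 40 pages of setup docs.",  # sarcasm is hard
]

for text in reviews:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos plus a compound score in [-1, 1]
    if scores["compound"] > 0.05:
        label = "positive"
    elif scores["compound"] < -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:8s} {scores['compound']:+.2f}  {text}")
```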


Is your AI product actually working? How to develop the right metric system

Not tracking whether your product is working well is like landing a plane without any instructions from air traffic control. There is absolutely no way that you can make informed decisions for your customer without knowing what is going right or wrong. Additionally, if you do not actively define the metrics, your team will identify their own back-up metrics. The risk of having multiple flavors of an ‘accuracy’ or ‘quality’ metric is that everyone will develop their own version, leading to a scenario where you might not all be working toward the same outcome. ... the complexity of operating an ML product with multiple customers translates to defining metrics for the model, too. What do I use to measure whether a model is working well? Measuring the outcome of internal teams to prioritize launches based on our models would not be quick enough; measuring whether the customer adopted solutions recommended by our model could risk us drawing conclusions from a very broad adoption metric ... Most metrics are gathered at-scale by new instrumentation via data engineering. However, in some instances (like question 3 above) especially for ML based products, you have the option of manual or automated evaluations that assess the model outputs. 


Security needs to be planned and discussed early, right at the product ideation stage

On the open-source side, where a lot of supply chain risks emerge, we leverage a state-of-the-art development pipeline. Developers follow strict security guidelines and use frameworks embedded with security tools that detect risks associated with third-party libraries—both during development and at runtime. We also have robust monitoring systems in place to detect vulnerabilities and active exploits. ... Monitoring technological events is one part, but from a core product perspective, we also need to monitor risk-based activities, like transactions that could potentially lead to fraud. For that, we have strong AI/ML developments already deployed, with a dedicated AI and data science team constantly building new algorithms to detect fraudulent actions. From a product standpoint, the system is quite mature. On the technology side, monitoring has some automation powered by AI, and we’ve also integrated tools like GitHub Copilot. Our analysts, developers, and security engineers use these technologies to quickly identify potential issues, reducing manual effort significantly. ... Security needs to be planned and discussed early—right at the product ideation stage with product managers—so that it doesn’t become a blocker at the end. Early involvement makes it much easier and avoids last-minute disruptions.


14 tiny tricks for big cloud savings

Good algorithms can boost the size of your machine when demand peaks. But clouds don’t always make it easy to shrink all the resources on disk. If your disks grow, they can be hard to shrink. By monitoring these machines closely, you can ensure that your cloud instances consume only as much as they need and no more. ... Cloud providers can offer significant discounts for organizations that make a long-term commitment to using hardware. These are sometimes called reserved instances, or usage-based discounts. They can be ideal when you know just how much you’ll need for the next few years. The downside is that the commitment locks in both sides of the deal. You can’t just shut down machines in slack times or when a project is canceled. ... Programmers like to keep data around in case they might ever need it again. That’s a good habit until your app starts scaling and it’s repeated a bazillion times. If you don’t call the user, do you really need to store their telephone number? Tossing personal data aside not only saves storage fees but limits the danger of releasing personally identifiable information. Stop keeping extra log files or backups of data that you’ll never use again. ... Cutting back on some services will save money, but the best way to save cash is to go cold turkey. There’s nothing stopping you from dumping your data into a hard disk on your desk or down the hall in a local data center. 


Two-thirds of jobs will be impacted by AI

“Most jobs will change dramatically in the next three to four years, at least as much as the internet has changed jobs over the last 30,” Calhoon said. “Every job posted on Indeed today, from truck driver to physician to software engineer, will face some level of exposure to genAI-driven change.” ... What will emerge is a “symbiotic” relationship with an increasingly “proactive” technology that will require employees to constantly learn new skills and adapt. “AI can manage repetitive tasks, or even difficult tasks that are specific in nature, while humans can focus on innovative and strategic initiatives that drive revenue growth and improve overall business performance,” Hoffman said in an interview earlier this year. “AI is also much quicker than humans could possibly be, is available 24/7, and can be scaled to handle increasing workloads.” As AI takes over repetitive tasks, workers will shift toward roles that involve overseeing AI, solving unique problems, and applying creativity and strategy. Teams will increasingly collaborate with AI—like marketers personalizing content or developers using AI copilots. Rather than replacing humans, AI will enhance human strengths such as decision-making and emotional intelligence. Adapting to this change will require ongoing learning and a fresh approach to how work is done.

Daily Tech Digest - April 27, 2025


Quote for the day:

“Most new jobs won’t come from our biggest employers. They will come from our smallest. We’ve got to do everything we can to make entrepreneurial dreams a reality.” -- Ross Perot



7 key strategies for MLops success

Like many things in life, successfully integrating AI and ML into business operations and managing them there starts with a clear understanding of the foundations. The first fundamental of MLops today is understanding the differences between generative AI models and traditional ML models. Cost is another major differentiator. The calculations of generative AI models are more complex, resulting in higher latency, demand for more computing power, and higher operational expenses. Traditional models, on the other hand, often utilise pre-trained architectures or lightweight training processes, making them more affordable for many organisations. ... Creating scalable and efficient MLops architectures requires careful attention to components like embeddings, prompts, and vector stores. Fine-tuning models for specific languages, geographies, or use cases ensures tailored performance. An MLops architecture that supports fine-tuning is more complicated, and organisations should prioritise A/B testing across various building blocks to optimise outcomes and refine their solutions. Aligning model outcomes with business objectives is essential. Metrics like customer satisfaction and click-through rates can measure real-world impact, helping organisations understand whether their models are delivering meaningful results.
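
To make the A/B testing point concrete, here is a minimal sketch (an illustration of mine, not from the article) that compares the click-through rates of two model or prompt variants with a two-proportion z-test using SciPy; the counts are hypothetical.

```python
from math import sqrt
from scipy.stats import norm

def ab_test_ctr(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test comparing click-through rates of variants A and B."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical example: a fine-tuned variant B against baseline A.
p_a, p_b, z, p = ab_test_ctr(clicks_a=480, views_a=12_000, clicks_b=560, views_b=12_000)
print(f"CTR A={p_a:.2%}  CTR B={p_b:.2%}  z={z:.2f}  p={p:.4f}")
```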


If we want a passwordless future, let's get our passkey story straight

When passkeys work, which is not always the case, they can offer a nearly automagical experience compared to the typical user ID and password workflow. Some passkey proponents like to say that passkeys will be the death of passwords. More realistically, however, at least for the next decade, they’ll mean the death of some passwords – perhaps many passwords. We’ll see. Even so, the idea of killing passwords is a very worthy objective. ... With passkeys, the device that the end user is using – for example, their desktop computer or smartphone – is the one that's responsible for generating the public/private key pair as a part of an initial passkey registration process. After doing so, it shares the public key – the one that isn't a secret – with the website or app that the user wants to log in to. The private key – the secret – is never shared with that relying party. This is where the tech article above has it backward. It's not "the site" that "spits out two pieces of code," saving one on the server and the other on your device. ... Passkeys have a long way to go before they realize their potential. Some of the current implementations are so alarmingly bad that they could delay adoption. But adoption of passkeys is exactly what's needed to finally curtail a decades-long crime spree that has plagued the internet.
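
To make the key handling concrete: during passkey registration the user's device generates the key pair and only the public key ever reaches the relying party; at login the device signs a server challenge and the server verifies it with the stored public key. The sketch below illustrates that division of secrets with the cryptography library's Ed25519 primitives. It is a conceptual toy: real passkeys go through the WebAuthn/FIDO2 protocol and platform authenticators, not hand-rolled signing like this.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- On the user's device: generate the pair; the private key never leaves the device.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()          # only this is shared with the site

# --- On the relying party (website): issue a random challenge at login time.
challenge = os.urandom(32)

# --- On the device: prove possession of the private key by signing the challenge.
signature = private_key.sign(challenge)

# --- Back on the relying party: verify with the stored public key.
try:
    public_key.verify(signature, challenge)
    print("login accepted: signature matches the registered public key")
except InvalidSignature:
    print("login rejected")
```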



AI: More Buzzword Than Breakthrough

While Artificial Intelligence focuses on creating systems that simulate human intelligence, Intelligent Automation leverages these AI capabilities to automate end-to-end business processes. In essence, AI is the brain that provides cognitive functions, while Intelligent Automation is the body that executes tasks using AI’s intelligence. This distinction is critical; although Artificial Intelligence is a component of Intelligent Automation, not all AI applications result in automation, and not all automation requires advanced Artificial Intelligence. ... Intelligent Automation automates and optimizes business processes by combining AI with automation tools. This integration results in increased efficiency and reduced operating costs. For instance, Intelligent Automation can streamline supply chain operations by automating inventory management, order fulfillment, and logistics, resulting in faster turnaround times and fewer errors. ... In recent years, the term “AI” has been widely used as a marketing buzzword, often applied to technologies that do not have true AI capabilities. This phenomenon, sometimes referred to as “AI washing,” involves branding traditional automation or data processing systems as AI in order to capitalize on the term’s popularity. Such practices can mislead consumers and businesses, leading to inflated expectations and potential disillusionment with the technology.


Introduction to API Management

API gateways are pivotal in managing both traffic and security for APIs. They act as the frontline interface between APIs and the users, handling incoming requests and directing them to the appropriate services. API gateways enforce policies such as rate limiting and authentication, ensuring secure and controlled access to API functions. Furthermore, they can transform and route requests, collect analytics data and provide caching capabilities. ... With API governance, businesses get the most out of their investment. The purpose of API governance is to make sure that APIs are standardized so that they are complete, compliant and consistent. Effective API governance enables organizations to identify and mitigate API-related risks, including performance concerns, compliance issues and security vulnerabilities. API governance is complex and involves security, technology, compliance, utilization, monitoring, performance and education. Organizations can make their APIs secure, efficient, compliant and valuable to users by following best practices in these areas. ... Security is paramount in API management. Advanced security features include authentication mechanisms like OAuth, API keys and JWT (JSON Web Tokens) to control access. Encryption, both in transit and at rest, ensures data integrity and confidentiality.
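
As a small, generic illustration of one gateway policy mentioned above (not any particular product's implementation), the sketch below keeps a token bucket per API key: the bucket refills at a fixed rate, and requests are rejected with HTTP 429 once it is empty.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    rate: float                                      # tokens added per second
    capacity: float                                  # maximum burst size
    tokens: float = 0.0
    updated: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets = {}  # one bucket per API key, as a gateway would keep

def handle_request(api_key: str) -> int:
    # 5 requests/second sustained, bursts of up to 10.
    bucket = buckets.setdefault(api_key, TokenBucket(rate=5.0, capacity=10.0, tokens=10.0))
    return 200 if bucket.allow() else 429  # 429 Too Many Requests

print([handle_request("client-a") for _ in range(12)])  # the last couple of calls hit the limit
```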


Sustainability starts within: Flipkart & Furlenco on building a climate-conscious culture

Based on the insights from Flipkart and Furlenco, here are six actionable steps for leaders seeking to embed climate goals into their company culture: Lead with intent: Make climate goals a strategic priority, not just a CSR initiative. Signal top-level commitment and allocate leadership roles accordingly. Operationalise sustainability: Move beyond policies into process design — from green supply chains to net-zero buildings and water reuse systems. Make it measurable: Integrate climate-related KPIs into team goals, performance reviews, and business dashboards. Empower employees: Create space for staff to lead climate initiatives, volunteer, learn, and innovate. Build purpose into daily roles. Foster dialogue and storytelling: Share wins, losses, and journeys. Use Earth Day campaigns, internal newsletters, and learning modules to bring sustainability to life. Measure culture, not just carbon: Assess how employees feel about their role in climate action — through surveys, pulse checks, and feedback loops. ... Beyond the company walls, this cultural approach to climate leadership has ripple effects. Customers are increasingly drawn to brands with strong environmental values, investors are rewarding companies with robust ESG cultures, and regulators are moving from voluntary frameworks to mandatory disclosures.


Proof-of-concept bypass shows weakness in Linux security tools

An Israeli vendor was able to evade several leading Linux runtime security tools using a new proof-of-concept (PoC) rootkit that it claims reveals the limitations of many products in this space. The work of cloud and Kubernetes security company Armo, the PoC is called ‘Curing’, a portmanteau word that combines the idea of a ‘cure’ with the io_uring Linux kernel interface that the company used in its bypass PoC. Using Curing, Armo found it was possible to evade three Linux security tools to varying degrees: Falco (created by Sysdig but now a Cloud Native Computing Foundation graduated project), Tetragon from Isovalent (now part of Cisco), and Microsoft Defender. ... Armo said it was motivated to create the rootkit to draw attention to two issues. The first was that, despite the io_uring technique being well documented for at least two years, vendors in the Linux security space had yet to react to the danger. The second purpose was to draw attention to deeper architectural challenges in the design of the Linux security tools that large numbers of customers rely on to protect themselves: “We wanted to highlight the lack of proper attention in designing monitoring solutions that are forward-compatible. Specifically, these solutions should be compatible with new features in the Linux kernel and address new techniques,” said Schendel.


Insider threats could increase amid a chaotic cybersecurity environment

Most organisations have security plans and policies in place to decrease the potential for insider threats. No policy will guarantee immunity to data breaches and IT asset theft but CISOs can make sure their policies are being executed through routine oversight and audits. Best practices include access control and least privilege, which ensures employees, contractors and all internal users only have access to the data and systems necessary for their specific roles. Regular employee training and awareness programmes are also critical. Training sessions are an effective means to educate employees on security best practices such as how to recognise phishing attempts, social engineering attacks and the risks associated with sharing sensitive information. Employees should be trained in how to report suspicious activities – and there should be a defined process for managing these reports. Beyond the security controls noted above, those that govern the IT asset chain of custody are crucial to mitigating the fallout of a breach should assets be stolen by employees, former employees or third parties. The IT asset chain of custody refers to the process that tracks and documents the physical possession, handling and movement of IT assets throughout their lifecycle. A sound programme ensures that there is a clear, auditable trail of who has access to and controls the asset at any given time. 


Distributed Cloud Computing: Enhancing Privacy with AI-Driven Solutions

AI has the potential to play a game-changing role in distributed cloud computing and PETs. By enabling intelligent decision-making and automation, AI algorithms can help us optimize data processing workflows, detect anomalies, and predict potential security threats. AI has been instrumental in helping us identify patterns and trends in complex data sets. We're excited to see how it will continue to evolve in the context of distributed cloud computing. For instance, homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. This means that AI models can process and analyze encrypted data without accessing the underlying sensitive information. Similarly, AI can be used to implement differential privacy, a technique that adds noise to the data to protect individual records while still allowing for aggregate analysis. In anomaly detection, AI can identify unusual patterns or outliers in data without requiring direct access to individual records, ensuring that sensitive information remains protected. While AI offers powerful capabilities within distributed cloud environments, the core value proposition of integrating PETs remains in the direct advantages they provide for data collaboration, security, and compliance. Let's delve deeper into these key benefits, challenges and limitations of PETs in distributed cloud computing.
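
Differential privacy is easier to picture with a tiny example: to release an aggregate such as a count while protecting individuals, calibrated Laplace noise is added so that any single record shifts the result only within a quantified privacy budget. The sketch below is a textbook illustration using NumPy, not a production privacy-enhancing technology; the dataset and epsilon are made up.

```python
import numpy as np

def dp_count(values, epsilon: float, predicate) -> float:
    """Release a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (one person changes the count by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many transactions exceed $10,000, released under a modest privacy budget.
transactions = [120.0, 15000.0, 980.0, 23000.0, 410.0, 10250.0]
print(dp_count(transactions, epsilon=0.5, predicate=lambda amount: amount > 10_000))
```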


Mobile Applications: A Cesspool of Security Issues

"What people don't realize is you ship your entire mobile app and all your code to this public store where any attacker can download it and reverse it," Hoog says. "That's vastly different than how you develop a Web app or an API, which sit behind a WAF and a firewall and servers." Mobile platforms are difficult for security researchers to analyze, Hoog says. One problem is that developers rely too much on the scanning conducted by Apple and Google on their app stores. When a developer loads an application, either company will conduct specific scans to detect policy violations and to make malicious code more difficult to upload to the repositories. However, developers often believe the scanning is looking for security issues, but it should not be considered a security control, Hoog says. "Everybody thinks Apple and Google have tested the apps — they have not," he says. "They're testing apps for compliance with their rules. They're looking for malicious malware and just egregious things. They are not testing your application or the apps that you use in the way that people think." ... In addition, security issues on mobile devices tend to have a much shorter lifetime, because of the closed ecosystems and the relative rarity of jailbreaking. When NowSecure finds a problem, there is no guarantee that it will last beyond the next iOS or Android update, he says.


The future of testing in compliance-heavy industries

In today’s fast-evolving technology landscape, being an engineering leader in compliance-heavy industries can be a struggle. Managing risks and ensuring data integrity are paramount, but the dangers are constant when working with large data sources and systems. Traditional integration testing within the context of stringent regulatory requirements is more challenging to manage at scale. This leads to gaps, such as insufficient test coverage across interconnected systems, a lack of visibility into data flows, inadequate logging, and missed edge case conditions, particularly in third-party interactions. Due to these weaknesses, security vulnerabilities can pop up and incident response can be delayed, ultimately exposing organizations to violations and operational risk. ... API contract testing is a modern approach used to validate the expectations between different systems, making sure that any changes in APIs don’t break expectations or contracts. Changes might include removing or renaming a field and altering data types or response structures. These seemingly small updates can cause downstream systems to crash or behave incorrectly if they are not properly communicated or validated ahead of time. ... The shifting left practice has a lesser-known cousin: shifting right. Shifting right focuses on post-deployment validation using concepts such as observability and real-time monitoring techniques.
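
A minimal form of the contract testing described here is validating a provider's response against the schema the consumer depends on, so that a removed or renamed field fails in CI rather than in production. The sketch below uses the jsonschema package with a hypothetical /users/{id} response; dedicated tools such as Pact formalize the same idea across teams.

```python
import jsonschema

# The consumer's expectations of GET /users/{id} (a hypothetical contract).
USER_CONTRACT = {
    "type": "object",
    "required": ["id", "email", "created_at"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
        "created_at": {"type": "string"},
    },
}

def test_user_response_honours_contract():
    # In a real suite this payload would come from the provider's test instance.
    response = {"id": 42, "email": "ada@example.com", "created_at": "2025-04-28T09:00:00Z"}
    # Raises jsonschema.ValidationError if a required field was renamed, removed,
    # or its type changed, i.e. if the provider broke the contract.
    jsonschema.validate(instance=response, schema=USER_CONTRACT)
```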

Daily Tech Digest - April 25, 2025


Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." -- Johann Wolfgang von Goethe


Revolutionizing Application Security: The Plea for Unified Platforms

“Shift left” is a practice that focuses on addressing security risks earlier in the development cycle, before deployment. While effective in theory, this approach has proven problematic in practice as developers and security teams have conflicting priorities. ... Cloud native applications are dynamic: they are constantly deployed, updated, and scaled, so robust real-time protection measures are absolutely necessary. Every time an application is updated or deployed, new code, configurations or dependencies appear, all of which can introduce new vulnerabilities. The problem is that it is difficult to implement real-time cloud security with a traditional, compartmentalized approach. Organizations need real-time security measures that provide continuous monitoring across the entire infrastructure, detect threats as they emerge and automatically respond to them. As Tager explained, implementing real-time prevention is necessary “to stay ahead of the pace of attackers.” ... Cloud native applications tend to rely heavily on open source libraries and third-party components. In 2021, Log4j’s Log4Shell vulnerability demonstrated how a single compromised component could affect millions of devices worldwide, exposing countless enterprises to risk. Effective application security now extends far beyond the traditional scope of code scanning and must reflect the modern engineering environment.


AI-Powered Polymorphic Phishing Is Changing the Threat Landscape

Polymorphic phishing is an advanced form of phishing campaign that randomizes the components of emails, such as their content, subject lines, and senders’ display names, to create several almost identical emails that only differ by a minor detail. In combination with AI, polymorphic phishing emails have become highly sophisticated, creating more personalized and evasive messages that result in higher attack success rates. ... Traditional detection systems group phishing emails together to enhance detection efficacy, based on commonalities such as payloads or senders’ domain names. The use of AI by cybercriminals has allowed them to conduct polymorphic phishing campaigns with subtle but deceptive variations that can evade security measures like blocklists, static signatures, secure email gateways (SEGs), and native security tools. For example, cybercriminals modify the subject line by adding extra characters and symbols, or they alter the length and pattern of the text. ... The standard way of grouping individual attacks into campaigns to improve detection efficacy will become irrelevant by 2027. Organizations need to find alternative measures to detect polymorphic phishing campaigns that don’t rely on blocklists and that can identify the most advanced attacks.
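To see why exact-match signatures and naive campaign clustering struggle here, consider a small illustrative Python sketch; the subject line and the truncated hash are hypothetical stand-ins for real detection logic.

```python
import hashlib
import random
import string

BASE_SUBJECT = "Action required: verify your payroll details"

def polymorphic_variant(subject: str) -> str:
    """Append a few random characters, the kind of minor edit that defeats
    exact-match signatures and simple campaign grouping."""
    suffix = "".join(random.choices(string.ascii_letters + string.digits, k=4))
    return f"{subject} [{suffix}]"

def signature(text: str) -> str:
    """Stand-in for an exact-match signature, e.g. a hash kept on a blocklist."""
    return hashlib.sha256(text.encode()).hexdigest()[:12]

for _ in range(3):
    variant = polymorphic_variant(BASE_SUBJECT)
    print(signature(variant), variant)
# Near-identical lures produce entirely different signatures, which is why
# detection increasingly has to rely on behavioral and semantic signals.
```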


Does AI Deserve Worker Rights?

Chalmers et al. declare that there are three things that AI-adopting institutions can do to prepare for the coming consciousness of AI: “They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern.” What would “an appropriate level of moral concern” actually look like? According to Kyle Fish, Anthropic’s AI welfare researcher, it could take the form of allowing an AI model to stop a conversation with a human if the conversation turned abusive. “If a user is persistently requesting harmful content despite the model’s refusals and attempts at redirection, could we allow the model simply to end that interaction?” Fish told the New York Times in an interview. What exactly would model welfare entail? The Times cites a comment made in a podcast last week by Dwarkesh Patel, who compared model welfare to animal welfare, stating it was important to make sure we don’t reach “the digital equivalent of factory farming” with AI. Considering Nvidia CEO Jensen Huang’s desire to create giant “AI factories” filled with millions of his company’s GPUs cranking through GenAI and agentic AI workflows, perhaps the factory analogy is apropos.


Cybercriminals switch up their top initial access vectors of choice

“Organizations must leverage a risk-based approach and prioritize vulnerability scanning and patching for internet-facing systems,” wrote Saeed Abbasi, threat research manager at cloud security firm Qualys, in a blog post. “The data clearly shows that attackers follow the path of least resistance, targeting vulnerable edge devices that provide direct access to internal networks.” Greg Linares, principal threat intelligence analyst at managed detection and response vendor Huntress, said, “We’re seeing a distinct shift in how modern attackers breach enterprise environments, and one of the most consistent trends right now is the exploitation of edge devices.” Edge devices, ranging from firewalls and VPN appliances to load balancers and IoT gateways, serve as the gateway between internal networks and the broader internet. “Because they operate at this critical boundary, they often hold elevated privileges and have broad visibility into internal systems,” Linares noted, adding that edge devices are often poorly maintained and not integrated into standard patching cycles. Linares explained: “Many edge devices come with default credentials, exposed management ports, secret superuser accounts, or weakly configured services that still rely on legacy protocols — these are all conditions that invite intrusion.”


5 tips for transforming company data into new revenue streams

Data monetization can be risky, particularly for organizations that aren’t accustomed to handling financial transactions. There’s an increased threat of security breaches as other parties become aware that you’re in possession of valuable information, ISG’s Rudy says. Another risk is unintentionally using data you don’t have a right to use or discovering that the data you want to monetize is of poor quality or doesn’t integrate across data sets. Ultimately, the biggest risk is that no one wants to buy what you’re selling. Strong security is essential, Agility Writer’s Yong says. “If you’re not careful, you could end up facing big fines for mishandling data or not getting the right consent from users,” he cautions. If a data breach occurs, it can deeply damage an enterprise’s reputation. “Keeping your data safe and being transparent with users about how you use their info can go a long way in avoiding these costly mistakes.” ... “Data-as-a-service, where companies compile and package valuable datasets, is the base model for monetizing data,” he notes. However, insights-as-a-service, where customers are provided with prescriptive/predictive modeling capabilities, can command a higher valuation. Another consideration is offering an insights platform-as-a-service, where subscribers can securely integrate their data into the provider’s insights platform.


Are AI Startups Faking It Till They Make It?

"A lot of VC funds are just kind of saying, 'Hey, this can only go up.' And that's usually a recipe for failure - when that starts to happen, you're becoming detached from reality," Nnamdi Okike, co-founder and managing partner at 645 Ventures, told Tradingview. Companies are branding themselves as AI-driven, even when their core technologies lack substantive AI components. A 2019 study by MMC Ventures found 40% of surveyed "AI startups" in Europe showed no evidence of AI integration in their products or services. And this was before OpenAI further raised the stakes with the launch of ChatGPT in 2022. It's a slippery slope. Even industry behemoths have had to clarify the extent of their AI involvement. Last year, tech giant and the fourth-most richest company in the world Amazon pushed back on allegations that its AI-powered "Just Walk Out" technology installed at its physical grocery stores for a cashierless checkout was largely being driven by around 1,000 workers in India who manually checked almost three quarters of the transactions. Amazon termed these reports "erroneous" and "untrue," adding that the staff in India were not reviewing live footage from the stores but simply reviewing the system. The incentive to brand as AI-native has only intensified. 


From deployment to optimisation: Why cloud management needs a smarter approach

As companies grow, so does their cloud footprint. Managing multiple cloud environments—across AWS, Azure, and GCP—often results in fragmented policies, security gaps, and operational inefficiencies. A Multi-Cloud Maturity Research Report by Vanson Bourne states that nearly 70% of organisations struggle with multi-cloud complexity, despite 95% agreeing that multi-cloud architectures are critical for success. Companies are shifting away from monolithic architecture to microservices, but managing distributed services at scale remains challenging. ... Regulatory requirements like SOC 2, HIPAA, and GDPR demand continuous monitoring and updates. The challenge is not just staying compliant but ensuring that security configurations remain airtight. IBM’s Cost of a Data Breach Report reveals that the average cost of a data breach in India reached ₹195 million in 2024, with cloud misconfiguration accounting for 12% of breaches. The risk is twofold: businesses either overprovision resources—wasting money—or leave environments under-secured, exposing them to breaches. Cyber threats are also evolving, with attackers increasingly targeting cloud environments. Phishing and credential theft accounted for 18% of incidents each, according to the IBM report. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter, the standard practice is to establish a beachhead and then move laterally to find the organisation’s crown jewels: its most valuable data. Within a financial or banking organisation, it is likely there is a database on a server that contains sensitive customer information. A database is essentially a complicated spreadsheet, from which a hacker can simply run a SELECT query and copy everything. In this instance data security is essential; however, many organisations confuse data security with cybersecurity. Organisations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. ... To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques like tokenisation or format-preserving encryption to minimise the impact of a breach. A database protected by Privacy Enhancing Technologies (PETs), such as tokenisation, becomes unreadable to hackers if the decryption key is stored offsite. Without breaching the organisation’s data protection vendor to access the key, an attacker cannot decrypt the data – making the process significantly more complicated. This can be a major deterrent to hackers.
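A minimal sketch of the tokenisation idea described above, assuming a hypothetical vault class: the database row keeps only an opaque token, while the token-to-value mapping is held by a separate, ideally off-site, service, so a stolen table is unreadable on its own.

```python
import secrets

class TokenVault:
    """Stand-in for an off-site token vault; in practice this mapping lives
    with a separate data protection provider, not next to the database."""
    def __init__(self) -> None:
        self._token_to_value: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_urlsafe(16)
        self._token_to_value[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()  # held off-site in a real deployment
customer_row = {"name": "Jane Doe", "card_number": vault.tokenize("4111111111111111")}

print(customer_row)                                   # a stolen copy shows only the opaque token
print(vault.detokenize(customer_row["card_number"]))  # authorized lookup goes through the vault
```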


Why Testing is a Long-Term Investment for Software Engineers

At its core, a test is a contract. It tells the system—and anyone reading the code—what should happen when given specific inputs. This contract helps ensure that as the software evolves, its expected behavior remains intact. A system without tests is like a building without smoke detectors. Sure, it might stand fine for now, but the moment something catches fire, there’s no safety mechanism to contain the damage. ... Over time, all code becomes legacy. Business requirements shift, architectures evolve, and what once worked becomes outdated. That’s why refactoring is not a luxury—it’s a necessity. But refactoring without tests? That’s walking blindfolded through a minefield. With a reliable test suite, engineers can reshape and improve their code with confidence. Tests confirm that behavior hasn’t changed—even as the internal structure is optimized. This is why tests are essential not just for correctness, but for sustainable growth. ... There’s a common myth: tests slow you down. But seasoned engineers know the opposite is true. Tests speed up development by reducing time spent debugging, catching regressions early, and removing the need for manual verification after every change. They also allow teams to work independently, since tests define and validate interfaces between components.
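As a small illustration of a test acting as a contract, here is a hypothetical business rule pinned down by unit tests; the function and the rule are invented for the example, but refactoring its internals stays safe as long as these assertions keep passing.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical rule under test: discounts are clamped to 0-100% and
    the result is rounded to cents."""
    percent = min(max(percent, 0.0), 100.0)
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountContract(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 15), 170.0)

    def test_discount_is_capped_at_100_percent(self):
        self.assertEqual(apply_discount(50.0, 150), 0.0)

    def test_negative_discount_is_ignored(self):
        self.assertEqual(apply_discount(50.0, -10), 50.0)

if __name__ == "__main__":
    unittest.main()
```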


Why the road from passwords to passkeys is long, bumpy, and worth it - probably

While the current plan rests on a solid technical foundation, many important details are barriers to short-term adoption. For example, setting up a passkey for a particular website should be a rather seamless process; however, fully deactivating that passkey still relies on a manual multistep process that has yet to be automated. Further complicating matters, some current user-facing implementations of passkeys are so different from one another that they're likely to confuse end-users looking for a common, recognizable, and easily repeated user experience. ... Passkey proponents talk about how passkeys will be the death of the password. However, the truth is that the password died long ago -- just in a different way. We've all used passwords without considering what is happening behind the scenes. A password is a special kind of secret -- a shared or symmetric secret. For most online services and applications, setting a password requires us to first share that password with the relying party, the website or app operator. While history has proven how shared secrets can work well in very secure and often temporary contexts, if the HaveIBeenPwned.com website teaches us anything, it's that site and app authentication isn't one of those contexts. Passwords are too easily compromised.
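To make the shared-secret distinction concrete, the sketch below contrasts the two models using the Python cryptography package: a password has to be handed to the relying party, while a passkey-style credential keeps its private key on the device and the server only verifies a signature over a random challenge. This is a simplification of the WebAuthn flow under stated assumptions, not an implementation of it.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Password model: at signup the secret itself is sent to the relying party,
# so every login and every server breach exposes shared material.

# Passkey-style model: the secret never leaves the device.
device_private_key = Ed25519PrivateKey.generate()    # stays on the user's device
server_public_key = device_private_key.public_key()  # all the server needs to store

# Login: the server issues a random challenge, the device signs it, and the
# server verifies the signature with the stored public key.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)
server_public_key.verify(signature, challenge)  # raises InvalidSignature on failure
print("challenge verified; no shared secret was ever transmitted")
```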