
Daily Tech Digest - August 14, 2025


Quote for the day:

"Act as if what you do makes a difference. It does." -- William James


What happens the day after superintelligence?

As context, artificial superintelligence (ASI) refers to systems that can outthink humans on most fronts, from planning and reasoning to problem-solving, strategic thinking and raw creativity. These systems will solve complex problems in a fraction of a second that might take the smartest human experts days, weeks or even years to work through. ... So ask yourself, honestly, how will humans act in this new reality? Will we reflexively seek advice from our AI assistants as we navigate every little challenge we encounter? Or worse, will we learn to trust our AI assistants more than our own thoughts and instincts? ... Imagine walking down the street in your town. You see a coworker heading towards you. You can’t remember his name, but your AI assistant does. It detects your hesitation and whispers the coworker’s name into your ears. The AI also recommends that you ask the coworker about his wife, who had surgery a few weeks ago. The coworker appreciates the sentiment, then asks you about your recent promotion, likely at the advice of his own AI. Is this human empowerment, or a loss of human agency? ... Many experts believe that body-worn AI assistants will make us feel more powerful and capable, but that’s not the only way this could go. These same technologies could make us feel less confident in ourselves and less impactful in our lives.


Confidential Computing: A Solution to the Uncertainty of Using the Public Cloud

Confidential computing is a way to ensure that no external party can look at your data and business logic while they are being executed: it secures data in use. Add to that the already established protections for data at rest and data in transit, and you can be reasonably sure that no external party can access secured data running in a confidential computing environment, wherever that may be. ... To execute services in the cloud, a company needs to be sure that its data and business logic cannot be accessed or changed by third parties, especially by the cloud provider's system administrators. The workload needs to be protected; better, it needs to be executed within the company's Trusted Compute Base (TCB), the environment where specific security standards restrict all possible access to data and business logic. ... Here, attestation is used to verify that a confidential environment (instance) is running securely in the public cloud and can be trusted to implement all the necessary security standards. Only after successful attestation is the TCB extended into the public cloud to incorporate the attested instances. One basic requirement of attestation is that the attestation service is located independently of the infrastructure where the instance is running.
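To make the attestation step concrete, here is a minimal sketch in Python of the verify-then-extend flow described above. It is illustrative only: the service name, the "golden" measurement, and the HMAC-based signature are stand-ins I have invented; real attestation schemes use asymmetric signatures rooted in the CPU vendor's hardware keys.

```python
import hashlib
import hmac

# Golden measurements the attestation service trusts (hypothetical values).
EXPECTED_MEASUREMENTS = {"payroll-service": hashlib.sha256(b"payroll-image-v1.4").hexdigest()}

# A shared key stands in for the hardware signing key in this sketch.
VENDOR_KEY = b"demo-vendor-key"

def sign_quote(measurement: str, nonce: str) -> str:
    """The enclave's 'quote': its measurement plus a nonce, signed by hardware."""
    return hmac.new(VENDOR_KEY, f"{measurement}:{nonce}".encode(), hashlib.sha256).hexdigest()

def attest(service: str, measurement: str, nonce: str, signature: str) -> bool:
    """Attestation service: check the signature, then check the measurement."""
    expected = hmac.new(VENDOR_KEY, f"{measurement}:{nonce}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # quote was not produced by trusted hardware
    return measurement == EXPECTED_MEASUREMENTS.get(service)  # code is what we expect

nonce = "7f3a"  # verifier-chosen nonce prevents replay of old quotes
quote = sign_quote(EXPECTED_MEASUREMENTS["payroll-service"], nonce)
print(attest("payroll-service", EXPECTED_MEASUREMENTS["payroll-service"], nonce, quote))  # True
# Only after this returns True would the instance be admitted into the TCB.
```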


Open Banking's Next Phase: AI, Inclusion and Collaboration

Think of open banking as the backbone for secure, event-driven automation: a bill gets paid, and a savings allocation triggers instantly across multiple platforms. The future lies in secure, permissioned coordination across data silos, and when applied to finance, it unlocks new, high-margin services grounded in trust, automation and personalisation. ... By building modular systems that handle hierarchy, fee setup, reconciliation and compliance – all in one cohesive platform – we can unlock new revenue opportunities. ... Regulators must ensure they are stepping up efforts to sustain progress and support fintech innovation whilst also meeting their aim to keep customers safe. Work must also be done to boost public awareness of the value of open banking. Many consumers are unaware of the financial opportunities open banking offers and some remain wary of sharing their data with unknown third parties. ... Rather than duplicating efforts or competing head-to-head, institutions and fintechs should focus on co-developing shared infrastructure. When core functions like fee management, operational controls and compliance processes are unified in a central platform, fintechs can innovate on customer experience, while banks provide the stability, trust and reach. 


Data centers are eating the economy — and we’re not even using them

Building new data centers is the easy solution, but it’s neither sustainable nor efficient. As I’ve witnessed firsthand in developing compute orchestration platforms, the real problem isn’t capacity. It’s allocation and optimization. There’s already an abundant supply sitting idle across thousands of data centers worldwide. The challenge lies in efficiently connecting this scattered, underutilized capacity with demand. ... The solution isn’t more centralized infrastructure. It’s smarter orchestration of existing resources. Modern software can aggregate idle compute from data centers, enterprise servers, and even consumer devices into unified, on-demand compute pools. ... The technology to orchestrate distributed compute already exists. Some network models already demonstrate how software can abstract away the complexity of managing resources across multiple providers and locations. Docker containers and modern orchestration tools make workload portability seamless. The missing piece is just the industry’s willingness to embrace a fundamentally different approach. Companies need to recognize that most servers are idle 70%-85% of the time. It’s not a hardware problem requiring more infrastructure. 
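As a rough illustration of the orchestration idea, the sketch below greedily places workloads on whichever node in a pooled, multi-provider fleet has the most idle capacity. The node names and numbers are hypothetical; a production scheduler would also weigh memory, data locality, and preemption.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float  # total CPU cores
    used: float = 0.0  # cores currently in use

    @property
    def idle(self) -> float:
        return self.capacity - self.used

def schedule(jobs, nodes):
    """Greedy placement: send each job to the node with the most idle capacity."""
    placements = []
    for job_name, cores in jobs:
        target = max(nodes, key=lambda n: n.idle)
        if target.idle < cores:
            placements.append((job_name, None))  # pool exhausted
            continue
        target.used += cores
        placements.append((job_name, target.name))
    return placements

# Three hypothetical providers, each mostly idle, per the 70%-85% figure above.
pool = [Node("dc-east", 64, used=12), Node("dc-west", 64, used=8), Node("office-rack", 32, used=4)]
print(schedule([("train-job", 40), ("batch-etl", 20)], pool))
```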


How an AI-Based 'Pen Tester' Became a Top Bug Hunter on HackerOne

While GenAI tools can be extremely effective at finding potential vulnerabilities, XBOW's team found they weren't very good at validating the findings. The trick to making a successful AI-driven pen tester, Dolan-Gavitt explained, was to use something other than an LLM to verify the vulnerabilities. In the case of XBOW, researchers used a deterministic validation approach. "Potentially, maybe in a couple years down the road, we'll be able to actually use large language models out of the box to verify vulnerabilities," he said. "But for today, and for the rest of this talk, I want to propose and argue for a different way, which is essentially non-AI, deterministic code to validate vulnerabilities." But AI still plays an integral role in XBOW's pen tester. Dolan-Gavitt said the technology uses a capture-the-flag (CTF) approach in which "canaries" are placed in the source code and XBOW sends AI agents after them to see if they can access them. For example, he said, if researchers want to find a remote code execution (RCE) flaw or an arbitrary file read vulnerability, they can plant canaries on the server's file system and set the agents loose. ... Dolan-Gavitt cautioned that AI-powered pen testers are not a panacea. XBOW still sees some false positives because some vulnerabilities, like business logic flaws, are difficult to validate automatically.
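A minimal sketch of the canary idea follows, under my own assumption (not XBOW's published design) that validation reduces to planting a unique token and deterministically checking whether the agent's output contains it. The `cat` stand-in for the attacking agent assumes a Unix-like system.

```python
import secrets
import subprocess
import tempfile
from pathlib import Path

def plant_canary(server_root: Path) -> str:
    """Drop a unique token on the target's file system, CTF-style."""
    token = f"CANARY-{secrets.token_hex(8)}"
    (server_root / "flag.txt").write_text(token)
    return token

def validate_file_read(agent_output: str, token: str) -> bool:
    """Deterministic check: the finding is real only if the agent exfiltrated the token."""
    return token in agent_output

with tempfile.TemporaryDirectory() as root:
    token = plant_canary(Path(root))
    # Stand-in for an AI agent attempting an arbitrary file read.
    agent_output = subprocess.run(["cat", f"{root}/flag.txt"],
                                  capture_output=True, text=True).stdout
    print(validate_file_read(agent_output, token))  # True -> confirmed, no LLM judgment needed
```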


Data Governance Maturity Models and Assessments: 2025 Guide

Data governance maturity frameworks help organizations assess their data governance capabilities and guide their evolution toward optimal data management. To implement a data governance or data management maturity framework (a “model”) it is important to learn what data governance maturity is, explore how and why it should be assessed, discover various maturity models and their features, and understand the common challenges associated with using maturity models. Data governance maturity refers to the level of sophistication and effectiveness with which an organization manages its data governance processes. It encompasses the extent to which an organization has implemented, institutionalized, and optimized its data governance practices. A mature data governance framework ensures that the organization can support its business objectives with accurate, trusted, and accessible data. Maturity in data governance is typically assessed through various models that measure different aspects of data management such as data quality and compliance and examine processes for managing data’s context (metadata) and its security. Maturity models provide a structured way to evaluate where an organization stands and how it can improve for a given function.


Open-source flow monitoring with SENSOR: Benefits and trade-offs

Most flow monitoring setups rely on embedded flow meters that are locked to a vendor and require powerful, expensive devices. SENSOR shows it’s possible to build a flexible and scalable alternative using only open tools and commodity hardware. It also allows operators to monitor internal traffic more comprehensively, not just what crosses the network border. ... For a large network, that can make troubleshooting and oversight more complex. “Something like this is fine for small networks,” David explains, “but it certainly complicates troubleshooting and oversight on larger networks.” David also sees potential for SENSOR to expand beyond historical analysis by adding real-time alerting. “The paper doesn’t describe whether the flow collectors can trigger alarms for anomalies like rapidly spiking UDP traffic, which could indicate a DDoS attack in progress. Adding real-time triggers like this would be a valuable enhancement that makes SENSOR more operationally useful for network teams.” ... “Finally, the approach is fragile. It relies on precise bridge and firewall configurations to push traffic through the RouterOS stack, which makes it sensitive to updates, misconfigurations, or hardware changes. 


Network Segmentation Strategies for Hybrid Environments

It's not a simple feat to implement network segmentation. Network managers must address network architectural issues, obtain tools and methodologies, review and enact security policies, practices and protocols, and -- in many cases -- overcome political obstacles. ... The goal of network segmentation is to place the most mission-critical and sensitive resources and systems under comprehensive security for a finite ecosystem of users. From a business standpoint, it's equally critical to understand the business value of each network asset and to gain support from users and management before segmenting. ... Divide the network segments logically into security segments based on workload, whether on premises, cloud-based or within an extranet. For example, if the Engineering department requires secure access to its product configuration system, only that team would have access to the network segment that contains the Engineering product configuration system. ... A third prong of segmented network security enforcement in hybrid environments is user identity management. Identity and access management (IAM) technology identifies and tracks users at a granular level based on their authorization credentials in on-premises networks but not on the cloud. 
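The access model the excerpt describes can be stated in a few lines. Below is a hedged sketch: the segment names and groups are invented, and a real deployment enforces this in firewalls, security groups, and IAM policy rather than application code.

```python
# Minimal policy model: each segment lists the groups allowed to reach it,
# whether the segment lives on premises, in the cloud, or in an extranet.
SEGMENT_POLICY = {
    "eng-product-config": {"engineering"},
    "finance-erp": {"finance", "executives"},
    "public-wiki": {"engineering", "finance", "executives", "contractors"},
}

def may_access(user_groups: set, segment: str) -> bool:
    """Allow only if the user holds a group the segment explicitly trusts."""
    return bool(user_groups & SEGMENT_POLICY.get(segment, set()))

print(may_access({"engineering"}, "eng-product-config"))  # True
print(may_access({"contractors"}, "finance-erp"))         # False: deny by default
```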


Convergence of AI and cybersecurity has truly transformed the CISO’s role

The most significant impact of AI in security at present is in automation and predictive analysis. Automation especially when enhanced with AI, such as integrating models like Copilot Security with tools like Microsoft Sentinel allows organisations to monitor thousands of indicators of compromise in milliseconds and receive instant assessments. ... The convergence of AI and cybersecurity has truly transformed the CISO’s role, especially post-pandemic when user locations and systems have become unpredictable. Traditionally, CISOs operated primarily as reactive defenders responding to alerts and attacks as they arose. Now, with AI-driven predictive analysis, we’re moving into a much more proactive space. CISOs are becoming strategic risk managers, able to anticipate threats and respond with advanced tools. ... Achieving real-time threat detection in the cloud through AI requires the integration of several foundational pillars that work in concert to address the complexity and speed of modern digital environments. At the heart of this approach is the adoption of a Zero Trust Architecture: rather than assuming implicit trust based on network perimeters, this model treats every access request whether to data, applications, or infrastructure as potentially hostile, enforcing strict verification and comprehensive compliance controls. 


Initial Access Brokers Selling Bundles, Privileges and More

"By the time a threat actor logs in using the access and privileged credentials bought from a broker, a lot of the heavy lifting has already been done for them. Therefore, it's not about if you're exposed, but whether you can respond before the intrusion escalates." More than one attacker may use any given initial access, either because the broker sells it to multiple customers, or because a customer uses the access for one purpose - say, to steal data - then sells it on to someone else, who perhaps monetizes their purchase by further ransacking data and unleashing ransomware. "Organizations that unwittingly have their network access posted for sale on initial access broker forums have already been victimized once, and they are on their way to being victimized once again when the buyer attacks," the report says. ... "Access brokers often create new local or domain accounts, sometimes with elevated privileges, to maintain persistence or allow easier access for buyers," says a recent report from cybersecurity firm Kela. For detecting such activity, "unexpected new user accounts are a major red flag." So too is "unusual login activity" to legitimate accounts that traces to never-before-seen IP addresses, or repeat attempts that only belatedly succeed, Kela said. "Watch for legitimate accounts doing unusual actions or accessing resources they normally don't - these can be signs of account takeover."

Daily Tech Digest - April 23, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


MLOps vs. DevOps: Key Differences — and Why They Work Better Together

Arguably, the greatest difference between DevOps and MLOps is that DevOps is, by most definitions, an abstract philosophy, whereas MLOps comes closer to prescribing a distinct set of practices. Ultimately, the point of DevOps is to encourage software developers to collaborate more closely with IT operations teams, based on the idea that software delivery processes are smoother when both groups work toward shared goals. In contrast, collaboration is not a major focus for MLOps. You could argue that MLOps implies that some types of collaboration between different stakeholders — such as data scientists, AI model developers, and model testers — need to be part of MLOps workflows. ... Another key difference is that DevOps centers solely on software development. MLOps is also partly about software development to the extent that model development entails writing software. However, MLOps also addresses other processes — like model design and post-deployment management — that don't overlap closely with DevOps as traditionally defined. ... Differing areas of focus lead to different skill requirements for DevOps versus MLOps. To thrive at DevOps, you must master DevOps tools and concepts like CI/CD and infrastructure-as-code (IaC).


Transforming quality engineering with AI

AI-enabled quality engineering promises to be a game changer, driving a level of precision and efficiency that is beyond the reach of traditional testing. AI algorithms can analyse historical data to identify patterns and predict quality issues, enabling organisations to take early action; machine learning tools detect anomalies with great accuracy, ensuring nothing is missed. Self-healing test scripts update automatically, without manual intervention. Machine Learning models automate test selection, picking the most relevant ones, while reducing both manual effort and errors. In addition, AI can prioritise test cases based on criticality, thus optimising resources and improving testing outcomes. Further, it can integrate with CI/CD pipelines, providing real-time feedback on code quality, and distributing updates automatically to ensure software applications are always ready for deployment. ... AI brings immense value to quality engineering, but also presents a few challenges. To function effectively, algorithms require high-quality datasets, which may not always be available. Organisations will likely need to invest significant resources in acquiring AI talent or building skills in-house. There needs to be a clear plan for integrating AI with existing testing tools and processes. Finally, there are concerns such as protecting data privacy and confidentiality, and implementing Responsible AI.
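As a sketch of the test-prioritization idea, the following uses a hand-tuned score in place of a trained model: criticality, coverage of changed files, and recent failure history all push a test up the queue. The weights and test names are illustrative assumptions.

```python
def prioritize(tests, changed_files):
    """Rank tests by criticality, coverage of changed code, and failure history,
    a heuristic stand-in for the ML-driven test selection described above."""
    def score(t):
        coverage_hit = len(set(t["covers"]) & set(changed_files))
        return t["criticality"] * 2 + coverage_hit * 3 + t["recent_failures"]
    return sorted(tests, key=score, reverse=True)

tests = [
    {"name": "test_checkout", "criticality": 3, "covers": ["cart.py"], "recent_failures": 1},
    {"name": "test_profile", "criticality": 1, "covers": ["user.py"], "recent_failures": 0},
    {"name": "test_payment", "criticality": 3, "covers": ["pay.py"], "recent_failures": 2},
]
for t in prioritize(tests, changed_files=["pay.py"]):
    print(t["name"])  # test_payment first: critical, recently failing, covers the change
```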


The Role of AI in Global Governance

Aurora drew parallels with transformative technologies such as electricity and the internet. "If AI reaches some communities late, it sets them far behind," he said. He pointed to Indian initiatives such as Bhashini for language inclusion, e-Sanjeevani for telehealth, Karya for employment through AI annotation and farmer.ai in Baramati, which boosted farmers' incomes by 30% to 40%. Schnorr offered a European perspective, stressing that AI's transformative impact on economies and societies demands trustworthiness. Reflecting on the EU's AI Act, he said its dual aim is fostering innovation while protecting rights. "We're reviewing the Act to ensure it doesn't hinder innovation," Schnorr said, advocating for global alignment through frameworks such as the G7's Hiroshima Code of Conduct and bilateral dialogues with India. He underscored the need for rules to make AI human-centric and accessible, particularly for small and medium enterprises, which form the backbone of both German and Indian economies. ... Singh elaborated on India's push for indigenous AI models. "Funding compute is critical, as training models is resource-intensive. We have the talent and datasets," he said, citing India's second-place ranking in GitHub AI projects per the Stanford AI Index. "Building a foundation model isn't rocket science - it's about providing the right ingredients."


Cisco ThousandEyes: resilient networks start with global insight

To tackle the challenges that arise from (common or uncommon) misconfigurations and other network problems, we need an end-to-end topology, Vaccaro reiterates. ThousandEyes (and Cisco as a whole) have recently put a lot of extra work into this. We saw a good example of this recently during Mobile World Congress. There, ThousandEyes announced Connected Devices. This is intended for service providers and extends their insight into the performance of their customers’ networks in their home environments. The goal, as Vaccaro describes it, is to help service providers see deeper so that they can catch an outage or other disruption quickly, before it impacts customers who might be streaming their favorite show or getting on a work call. ... The Digital Operational Resilience Act (DORA) will be no news to readers who are active in the financial world. You can see DORA as a kind of advanced NIS2, only directly enforced by the EU. It is a collection of best practices that many financial institutions must adhere to. Most of it is fairly obvious. In fact, we would call it basic hygiene when it comes to resilience. However, one component under DORA will have caused financial institutions some stress and will continue to do so: they must now adhere to new expectations when it comes to the services they provide and the resilience of their third-party ICT dependencies.


A Five-Step Operational Maturity Model for Benchmarking Your Team

An operational maturity model is your blueprint for building digital excellence. It gives you the power to benchmark where you are, spot the gaps holding you back and build a roadmap to where you need to be. ... Achieving operational maturity starts with knowing where you are and defining where you want to go. From there, organizations should focus on four core areas: Stop letting silos slow you down. Unify data across tools and teams to enable faster incident resolution and improve collaboration. Integrated platforms and a shared data view reduce context switching and support informed decision-making. Because in today’s fast-moving landscape, fragmented visibility isn’t just inefficient — it’s dangerous. ... Standardize what matters. Automate what repeats. Give your teams clear operational frameworks so they can focus on innovation instead of navigation. Eliminate alert noise and operational clutter that’s holding your teams back. Less noise, more impact. ... Deploy automation and AI across the incident lifecycle, from diagnostics to communication. Prioritize tools that integrate well and reduce manual tasks, freeing teams for higher-value work. ... Use data and automation to minimize disruptions and deliver seamless experiences. Communicate proactively during incidents and apply learnings to prevent future issues.


The Future is Coded: How AI is Rewriting the Rules of Decision Theaters

At the heart of this shift is the blending of generative AI with strategic foresight practices. In the past, planning for the future involved static models and expert intuition. Now, AI models (including advanced neural networks) can churn through reams of historical data and real-time information to project trends and outcomes with uncanny accuracy. Crucially, these AI-powered projections don’t operate in a vacuum – they’re designed to work with human experts. By integrating AI’s pattern recognition and speed with human intuition and domain expertise, organizations create a powerful feedback loop. ... The fusion of generative AI and foresight isn’t confined to tech companies or futurists’ labs – it’s already reshaping industries. For instance, in finance, banks and investment firms are deploying AI to synthesize market signals and predict economic trends with greater accuracy than traditional econometric models. These AI systems can simulate how different strategies might play out under various future market conditions, allowing policymakers in central banks or finance ministries to test interventions before committing to them. The result is a more data-driven, preemptive strategy – allowing decision-makers to adjust course before a forecasted risk materializes. 


More accurate coding: Researchers adapt Sequential Monte Carlo for AI-generated code

The researchers noted that AI-generated code can be powerful, but it can also often lead to code that disregards the semantic rules of programming languages. Other methods to prevent this can distort models or are too time-consuming. Their method makes the LLM adhere to programming language rules by discarding code outputs that may not work early in the process and allocating effort “towards outputs that are most likely to be valid and accurate.” ... The researchers developed an architecture that brings SMC to code generation “under diverse syntactic and semantic constraints.” “Unlike many previous frameworks for constrained decoding, our algorithm can integrate constraints that cannot be incrementally evaluated over the entire token vocabulary, as well as constraints that can only be evaluated at irregular intervals during generation,” the researchers said in the paper. Key features of adapting SMC sampling to model generation include a proposal distribution in which token-by-token sampling is guided by cheap constraints, importance weights that correct for biases, and resampling, which reallocates compute effort toward promising partial generations. ... AI models have made engineers and other coders work faster and more efficiently. It’s also given rise to a whole new kind of software engineer: the vibe coder.
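The three ingredients named above (proposal, importance weights, resampling) fit in a short generic loop. The sketch below is a toy illustration, not the paper's algorithm: "tokens" are parentheses, and the constraint simply rewards prefixes that can still close into a balanced string, standing in for a real incremental parser or semantic checker.

```python
import math
import random

def smc_generate(particles, propose, weight, steps, n):
    """Skeleton of SMC decoding: propose token-by-token, reweight under
    constraints, and resample so compute concentrates on promising prefixes."""
    for _ in range(steps):
        particles = [p + [propose(p)] for p in particles]  # proposal distribution
        weights = [weight(p) for p in particles]           # importance weights
        if sum(weights) == 0:
            weights = [1.0] * len(particles)               # avoid degenerate collapse
        particles = random.choices(particles, weights=weights, k=n)  # resampling
    return particles

propose = lambda p: random.choice("()")

def weight(p):
    depth = 0
    for ch in p:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return 0.0         # prune syntactically invalid prefixes early
    return math.exp(-depth)    # prefer prefixes that can still close cleanly

out = smc_generate([[] for _ in range(8)], propose, weight, steps=6, n=8)
print(["".join(p) for p in out[:3]])
```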


You Can't Be in Recovery Mode All the Time — Superna CEO

The proactive approach, he explains, shifts their position in the security lifecycle: "Now we're not responding with a very tiny blast radius and instantly recovering. We are officially left-of-the-boom; we are now ‘the incident never occurred.’" Next, Hesterberg reveals that the next wave of innovation focuses on leveraging the unique visibility his company has in terms of how critical data is accessed. “We have a keen understanding of where your critical data is and what users, what servers, and what services access that data.” From a scanning, patching, and upgrade standpoint, Hesterberg shares that large organizations often face the daunting task of addressing hundreds or even thousands of systems flagged for vulnerabilities daily. To help streamline this process, he says that his team is working on a new capability that integrates with the tools these enterprises already depend on. This upcoming feature will surface, in a prioritized way, the specific servers or services that interact with an organization's most critical data, highlighting the assets that matter most. By narrowing down the list, Hesterberg notes, teams can focus on the most potentially dangerous exposures first. Instead of trying to patch everything, he says, “If you know the 15, 20, or 50 that are most dangerous, potentially most dangerous, you're going to prioritize them.” 


When confusion becomes a weapon: How cybercriminals exploit economic turmoil

Defending against these threats doesn’t start with buying more tools. It starts with building a resilient mindset. In a crisis, security can’t be an afterthought – it must be a guiding principle. Organizations relying on informal workflows or inconsistent verification processes are unknowingly widening their attack surface. To stay ahead, protocols must be defined before uncertainty takes hold. Employees should be trained not just to spot technical anomalies, but to recognize emotional triggers embedded in legitimate-looking messages. Resilience, at its core, is about readiness: not just to respond, but also to anticipate. Organizations that view economic disruption as a dual threat, both financial and cyber, will position themselves to lead with control rather than react in chaos. This means establishing behavioral baselines, implementing layered authentication, and adopting systems that validate, not just facilitate. As we navigate continued economic uncertainty, we are reminded once again that cybersecurity is no longer just about technology. It’s about psychology, communication, and foresight. Defending effectively means thinking tactically, staying adaptive, and treating clarity as a strategic asset.
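One of the recommendations above, establishing behavioral baselines, can be illustrated in a few lines. The sketch below baselines a user's typical login hour and flags large deviations; the threshold and data are illustrative, and production systems baseline many more signals than time of day.

```python
from statistics import mean, pstdev

def build_baseline(login_hours):
    """Baseline of a user's typical login hour: mean and spread."""
    return mean(login_hours), pstdev(login_hours) or 1.0

def is_anomalous(hour, baseline, threshold=2.0):
    """Flag logins more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(hour - mu) / sigma > threshold

baseline = build_baseline([9, 9, 10, 8, 9, 10, 9])  # a 9-to-10am logger
print(is_anomalous(3, baseline))   # True: a 3am login deviates from the baseline
print(is_anomalous(10, baseline))  # False
```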


The productivity revolution – enhancing efficiency in the workplace

In difficult economic times, when businesses are tightening the purse strings, productivity improvements may often be overlooked in favour of cost reductions. However, cutting costs is merely a short-term solution. By focusing on sustainable productivity gains, businesses will reap dividends in the long term. To achieve this, organisations must turn their focus to technology. Some technology solutions, such as cloud computing, ERP systems, project management and collaboration tools, produce significant flexibility or performance advantages compared to legacy approaches and processes. Whilst an initial expense, the long-term benefits are often multiples of the investment – cost reductions, time savings, employee motivation, to name just a few. And all of those technology categories are being enhanced with artificial intelligence – for example adding virtual agents to help us do more, quickly. ... At a time when businesses and labour markets are struggling with employee retention and availability, it has become more critical than ever for organisations to focus on effective training and wellbeing initiatives. Minimising staff turnover and building up internal skill sets is vital for businesses looking to improve their key outputs. Getting this right will enable organisations to build smarter and more effective productivity strategies.


Daily Tech Digest - April 18, 2025


Quote for the day:

“Failures are finger posts on the road to achievement.” -- C.S. Lewis



How to Use Passive DNS To Trace Hackers Command And Control Infrastructure

This technology works through a network of sensors that monitor DNS query-response pairs, forwarding this information to central collection points for analysis without disrupting normal network operations. The resulting historical databases contain billions of unique records that security analysts can query to understand how domain names have resolved over time. ... When investigating potential threats, analysts can review months or even years of DNS resolution data without alerting adversaries to their investigation—a critical advantage when dealing with sophisticated threat actors. ... The true power of passive DNS in C2 investigation comes through various pivoting techniques that allow analysts to expand from a single indicator to map entire attack infrastructures. These techniques leverage the interconnected nature of DNS to reveal relationships between seemingly disparate domains and IP addresses. IP-based pivoting represents one of the most effective approaches. Starting with a known malicious IP address, analysts can query passive DNS to identify all domains that have historically resolved to that address. This technique often reveals additional malicious domains that share infrastructure but might otherwise appear unrelated.
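Here is a sketch of IP-based pivoting over an in-memory stand-in for a passive DNS database; the domains, addresses, and dates are fabricated examples. Real investigations query services holding billions of records, but the pivot logic is the same.

```python
# Hypothetical passive DNS records: (domain, ip, first_seen, last_seen).
PDNS = [
    ("update-check.example", "203.0.113.10", "2024-11-02", "2025-01-15"),
    ("cdn-sync.example",     "203.0.113.10", "2024-12-20", "2025-02-01"),
    ("mail.legit.example",   "198.51.100.7", "2023-03-01", "2025-03-01"),
]

def pivot_on_ip(ip: str):
    """IP-based pivot: all domains that have historically resolved to this address."""
    return [(d, first, last) for d, addr, first, last in PDNS if addr == ip]

def pivot_on_domain(domain: str):
    """Reverse direction: every address a domain has pointed at over time."""
    return [(addr, first, last) for d, addr, first, last in PDNS if d == domain]

# Start from one known-bad C2 address and expand the infrastructure map.
for domain, first, last in pivot_on_ip("203.0.113.10"):
    print(f"{domain} resolved here from {first} to {last}")
```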


Why digital identity is the cornerstone of trust in modern business

The foundation of digital trust is identity. It is no longer sufficient to treat identity management as a backend IT concern. Enterprises must now embed identity solutions into every digital touchpoint, ensuring that user interactions – whether by customers, employees, or partners – are both frictionless and secure. Modern enterprises must shift from fragmented, legacy systems to a unified identity platform. This evolution allows organisations to scale securely, eliminate redundancies and deliver the streamlined experiences users now expect. ... Digital identity is also a driver of customer experience. In today’s hyper-competitive digital landscape, the sign-up process can make or break a brand relationship. Clunky login screens or repeated verification prompts are quick ways to lose a customer.


Is your business ready for the IDP revolution?

AI-powered document processing offers significant advantages. Using advanced ML, IDP systems accurately interpret even complex and low-quality documents, including those with intricate tables and varying formats. This reduces manual work and the risk of human error. ... IDP also significantly improves data quality and accuracy by eliminating manual data entry, ensuring critical information is captured correctly and consistently. This leads to better decision-making, regulatory compliance and increased efficiency. IDP has wide-ranging applications. In healthcare, it speeds up claims processing and improves patient data management. In finance, it automates invoice processing and streamlines loan applications. In legal, it assists with contract analysis and due diligence. And in insurance, IDP automates information extraction from claims and reports, accelerating processing and boosting customer satisfaction. One specific example of this innovation in action is DocuWare’s own Intelligent Document Processing (DocuWare IDP). Our AI-powered solution streamlines how businesses handle even the most complex documents. Available as a standalone product, in the DocuWare Cloud or on-premises, DocuWare IDP automates text recognition, document classification and data extraction from various document types, including invoices, contracts and ID cards.


Practical Strategies to Overcome Cyber Security Compliance Standards Fatigue

The suitability of a cyber security framework must be determined based on applicable laws, industry standards, organizational risk profile, business goals, and resource constraints. It goes without saying that organizations providing critical services to the US federal government will pursue NIST compliance, while Small and Medium-sized Enterprises (SMEs) may want to focus on the CIS Top 20, given resource constraints. Once the cyber security team has selected the most suitable framework, they should seek endorsement from the executive team or cyber risk governance committee to ensure a shared sense of purpose. ... Mapping will enable organizations to identify overlapping controls and create a unified control set that addresses the requirements of multiple frameworks. This way, the organization can avoid redundant controls and processes, which in turn reduces cyber security team fatigue, accelerates innovation and lowers the cost of security. ... Cyber compliance standards play an integral role in ensuring organizations prioritize the protection of consumer confidential and sensitive information above profits. But to reduce pressure on cyber teams already battling stress, cyber leaders must take a pragmatic approach that carefully balances compliance with innovation, agility and efficiency.
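The mapping exercise described above amounts to maintaining one control set with pointers into each framework. A minimal sketch follows; the control names and requirement identifiers are illustrative assumptions, not an authoritative crosswalk.

```python
# Hypothetical mapping: each internal control satisfies requirements in more
# than one framework, so it is implemented and audited once. Identifiers are
# illustrative placeholders, not a vetted compliance crosswalk.
CONTROL_MAP = {
    "enforce-mfa":             {"NIST CSF": "PR.AC-7", "CIS": "6.5", "ISO 27001": "A.5.17"},
    "encrypt-at-rest":         {"NIST CSF": "PR.DS-1", "CIS": "3.11"},
    "quarterly-access-review": {"ISO 27001": "A.5.18", "CIS": "6.8"},
}

def coverage(framework: str) -> list:
    """Which unified controls already satisfy a given framework."""
    return [ctrl for ctrl, reqs in CONTROL_MAP.items() if framework in reqs]

print(coverage("CIS"))  # three frameworks, one control set: no duplicated work
```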


The Elaboration of a Modern TOGAF Architecture Maturity Model

This innovative TOGAF architecture maturity model provides a structured framework for assessing and enhancing an organization’s enterprise architecture capabilities in organizations that need to become more agile. By defining maturity levels across ten critical domains, the model enables organizations to transition from unstructured, reactive practices to well-governed, data-driven, and continuously optimized architectural processes. The five maturity levels—Initial, Under Development, Defined, Managed, and Measured—offer a clear roadmap for organizations to integrate EA into strategic decision-making, align business and IT investments, and establish governance frameworks that enhance operational efficiency. Through this approach, EA evolves from a support function into a key driver of innovation and business transformation. This model emphasizes continuous improvement and strategic alignment, ensuring that EA not only supports but actively contributes to an organization’s long-term success. By embedding EA into business strategy, security, governance, and solution delivery, enterprises can enhance agility, mitigate risks, and drive competitive advantage. Measuring EA’s impact through financial metrics and performance indicators further ensures that architecture initiatives provide tangible business value. 


Securing digital products under the Cyber Resilience Act

The CRA explicitly states that products should have an appropriate level of cybersecurity based on the risks; the risk-based approach is fundamental to the regulation. This has the advantage that we can set the bar wherever we want, as long as we make a good risk-based argument for that level. This implies that we must have a methodical categorization of risk, hence we need application risk profiles. To implement this we can follow the quality criteria of maturity levels 1, 2 and 3 of the application risk profiles practice. This includes having a clearly agreed upon, understood, accessible and updated risk classification system. ... Many companies already have SAMM assessments; if you do not have SAMM assessments but use another maturity framework such as OWASP DSOMM or NIST CSF, you can use the available mappings to accelerate the translation to SAMM. Otherwise we recommend doing SAMM assessments and identifying the gaps in the processes needed, then deciding on a roadmap to develop the processes and capabilities over time. ... Under the CRA we need to demonstrate that we have adequate security processes in place, and that we do not ship products with known vulnerabilities. So apart from having a good picture of the data flows, we need to have a good picture of the processes in place.


Insider Threats, AI and Social Engineering: The Triad of Modern Cybersecurity Threats

Insiders who are targeted or influenced by external adversaries to commit data theft may not be addressed by traditional security solutions, because attackers can combine manipulation techniques with tactics that grant access to an organization's confidential data. This can be seen in the insider threats carried out by Famous Chollima, a cybercriminal group that targeted organizations through employees who were working for the group. The group recruited individuals, falsified their identities, and helped them secure employment with target organizations. Once inside, the group gained access to sensitive information through the employees it had placed. ... Since AI can mimic user behavior, it is hard for security teams to tell the difference between normal activity and AI-generated activity. Insiders can also use AI to assist their plans: for example, an insider could use or train AI models to analyze user activity, pinpoint the window of least activity, deploy malware onto a critical system at the optimal time, and disguise the activity as a legitimate action to avoid detection by monitoring solutions.


How Successful Leaders Get More Done in Less Time

In order to be successful, leaders must make a conscious shift to move from reactive to intentional. They must guard their calendars, build in time for deep work, and set clear boundaries to focus on what truly drives progress. ... Time-blocking is one of the simplest, most powerful tools a leader can use. At its core, time-blocking is the practice of assigning specific blocks of time to different types of work: deep focus, meetings, admin, creative thinking or even rest. Why does it work? Because it eliminates context-switching, which is the silent killer of productivity. Instead of bouncing between tasks and losing momentum, time-blocking gives your day structure. It creates rhythm and ensures that what matters most actually gets done. ... Not everything on your to-do list matters. But without a clear system to prioritize, everything feels urgent. That's how leaders end up spending hours on reactive work while their most impactful tasks get pushed to "tomorrow." The fix? Use prioritization frameworks like the 80/20 rule (20% of tasks drive 80% of results) to stay focused on what actually moves the needle. ... If you're still doing everything yourself, there's a chance you're creating a bottleneck. The best leaders know that delegation buys back time and creates opportunities for others to grow. 


The tech backbone creating the future of infrastructure

Governments and administrators around the world are rapidly realizing the benefits of integrated infrastructure. A prime example is the growing trend for connecting utilities across borders to streamline operations and enhance efficiency. The Federal-State Modern Grid Deployment Initiative, involving 21 US states, is a major step towards modernizing the power grid, boosting reliability and enhancing resource management. Across the Atlantic, the EU is linking energy systems; by 2030, each member nation should be sharing at least 15% of its electricity production with its neighbors. On a smaller scale, the World Economic Forum is encouraging industrial clusters—including in China, Indonesia, Ohio and Australia—to share resources, infrastructure and risks to maximize economic and environmental value en route to net zero. ... Data is a nation’s most valuable asset. It is now being collected from multiple infrastructure points—traffic, energy grids, utilities. Infusing it with artificial intelligence (AI) in the cloud enables businesses to optimize their operations in real time. Centralizing this information, such as in an integrated command-and-control center, facilitates smoother collaboration and closer interaction among different sectors. 


No matter how advanced the technology is, it can all fall apart without strong security

One cybersecurity trend that truly excites me is the convergence of Artificial Intelligence (AI) with cybersecurity, especially in the areas of threat detection, incident response, and predictive risk management. This has motivated me to pursue a PhD in Cybersecurity using AI. Unlike traditional rule-based systems, AI is revolutionising cybersecurity by enabling proactive and adaptive defence strategies through contextual intelligence, shifting the focus from reactive to proactive measures. ... The real magic lies in combining AI with human judgement — what I often refer to as “human-in-the-loop cybersecurity.” This balance allows teams to scale faster, stay sharp, and focus on strategic defence instead of chasing every alert manually. What I have learnt from all this is that the fusion of AI and cybersecurity is not just an enhancement; it’s a paradigm shift. However, the key is achieving balance. Hence, AI should augment human intelligence, rather than supplant it. ... In the realm of financial cybersecurity, the most significant risk isn’t solely technical; it stems from the gap between security measures and business objectives. As the CISO, my responsibility extends beyond merely protecting against threats; I aim to integrate cybersecurity into the core of the organisation, transforming it into a strategic enabler rather than a reactive measure.

Daily Tech Digest - March 14, 2025


Quote for the day:

“Success does not consist in never making mistakes but in never making the same one a second time.” --George Bernard Shaw


The Maturing State of Infrastructure as Code in 2025

The progression from cloud-specific frameworks to declarative, multicloud solutions like Terraform represented the increasing sophistication of IaC capabilities. This shift enabled organizations to manage complex environments with never-before-seen efficiency. The emergence of programming language-based IaC tools like Pulumi then further blurred the lines between application development and infrastructure management, empowering developers to take a more active role in ops. ... For DevOps and platform engineering leaders, this evolution means preparing for a future where cloud infrastructure management becomes increasingly automated, intelligent and integrated with other aspects of the software development life cycle. It also highlights the importance of fostering a culture of continuous learning and adaptation, as the IaC landscape continues to evolve at a rapid pace. ... Firefly’s “State of Infrastructure as Code (IaC)” report is an annual pulse check on the rapidly evolving state of IaC adoption, maturity and impact. Over the course of the past few editions, this report has become an increasingly crucial resource for DevOps professionals, platform engineers and site reliability engineers (SREs) navigating the complexities of multicloud environments and a changing IaC tooling landscape.


Consent Managers under the Digital Personal Data Protection Act: A Game Changer or Compliance Burden?

The use of Consent Managers provides advantages for both Data Fiduciaries and Data Principals. For Data Fiduciaries, Consent Managers simplify compliance with consent-related legal requirements, making it easier to manage and document user consent in line with regulatory obligations. For Data Principals, Consent Managers offer a streamlined and efficient way to grant, modify, and revoke consent, empowering them with greater control over how their personal data is shared. This enhanced efficiency in managing consent also leads to faster, more secure, and smoother data flows, reducing the complexities and risks associated with data exchanges. Additionally, Consent Managers play a crucial role in helping Data Principals exercise their right to grievance redressal. ... Currently, Data Fiduciaries can manage user consent independently, making the role of Consent Managers optional. If this remains voluntary, many companies may avoid them, reducing their effectiveness. For Consent Managers to succeed, they need regulatory support, flexible compliance measures, and a business model that balances privacy protection with industry participation. ... Rooted in the fundamental right to privacy under Article 21 of the Constitution of India, the DPDPA aims to establish a structured approach to data processing while preserving individual control over personal information.


The future of AI isn’t the model—it’s the system

Enterprise leaders are thinking differently about AI in 2025. Several founders here told me that unlike in 2023 and 2024, buyers are now focused squarely on ROI. They want systems that move beyond pilot projects and start delivering real efficiencies. Mensch says enterprises have developed “high expectations” for AI, and many now understand that the hard part of deploying it isn’t always the model itself—it’s everything around it: governance, observability, security. Mistral, he says, has gotten good at connecting these layers, along with systems that orchestrate data flows between different models and subsystems. Once enterprises grapple with the complexity of building full AI systems—not just using AI models—they start to see those promised efficiencies, Mensch says. But more importantly, C-suite leaders are beginning to recognize the transformative potential. Done right, AI systems can radically change how information moves through a company. “You’re making information sharing easier,” he says. Mistral encourages its customers to break down silos so data can flow across departments. One connected AI system might interface with HR, R&D, CRM, and financial tools. “The AI can quickly query other departments for information,” Mensch explains. “You no longer need to query the team.”


Generative AI is finally finding its sweet spot, says Databricks chief AI scientist

Beyond the techniques, knowing what apps to build is itself a journey and something of a fishing expedition. "I think the hardest part in AI is having confidence that this will work," said Frankle. "If you came to me and said, 'Here's a problem in the healthcare space, here are the documents I have, do you think AI can do this?' my answer would be, 'Let's find out.'" ... "Suppose that AI could automate some of the most boring legal tasks that exist?" offered Frankle, whose parents are lawyers. "If you wanted an AI to help you do legal research, and help you ideate about how to solve a problem, or help you find relevant materials -- phenomenal!" "We're still in very early days" of generative AI, "and so, kind of, we're benefiting from the strengths, but we're still learning how to mitigate the weaknesses." ... In the midst of uncertainty, Frankle is impressed with how customers have quickly traversed the learning curve. "Two or three years ago, there was a lot of explaining to customers what generative AI was," he noted. "Now, when I talk to customers, they're using vector databases." "These folks have a great intuition for where these things are succeeding and where they aren't," he said of Databricks customers. Given that no company has an unlimited budget, Frankle advised starting with an initial prototype, so that investment only proceeds to the extent that it's clear an AI app will provide value.


Australia’s privacy watchdog publishes regulatory strategy prioritizing biometrics

The strategy plan includes a table of activities and estimated timelines, a detailed breakdown of actions in specific categories, and a list of projected long- and short-term outcomes. The goals are ambitious in scope: a desired short-term outcome is to “mature existing awareness about privacy across multiple domains of life” so that “individuals will develop a more nuanced understanding of privacy issues recognising their significance across various aspects of their lives, including personal, professional, and social domains.” Laws, skills training and better security tools are one thing, but changing how people understand their privacy is a major social undertaking. The OAIC’s long-term outcomes seem more rooted in practicality; they include the widespread implementation of enhanced privacy compliance practices for organizations, better public understanding of the OAIC’s role as regulator, and enhanced data handling industry standards. ... AI is a matter of going concern, and compliance for model training and development will be a major focus for the regulator. In late February, Kind delivered a speech on privacy and security in retail that references her decision on the Bunnings case, which led to the publication of guidance on the use of facial recognition technology, focused on four key privacy concepts: necessity/proportionality, consent/transparency, accuracy/bias, and governance.


Hiring privacy experts is tough — here’s why

“Some organizations think, ‘Well, we’re funding security, and privacy is basically the same thing, right?’ And I think that’s really one of my big concerns,” she says. This blending of responsibilities is reflected in training practices, according to Kazi, who notes how many organizations combine security and privacy training, which isn’t inherently problematic, but it carries risks. “One of the questions we ask in our survey is, ‘Do you combine security training and privacy training?’ Some organizations say they do not necessarily see it as a bad thing, but you can … be doing security, but you’re not doing privacy. And so that’s what’s highly concerning is that you can’t have privacy without security, but you could potentially do security well without considering privacy.” As Trovato emphasizes, “cybersecurity people tend to be from Mars and privacy people from Venus”, yet he also observes how privacy and cybersecurity professionals are often grouped together, adding to the confusion about what skills are truly needed. ... “Privacy includes how are we using data, how are you collecting it, who are you sharing it with, how are you storing it — all of these are more subtle component pieces, and are you meeting the requirements of the customer, of the regulator, so it’s a much more outward business focus activity day-to-day versus we’ve got to secure everything and make sure it’s all protected.”


Security Maturity Models: Leveraging Executive Risk Appetite for Your Secure Development Evolution

With developers under pressure to produce more code than ever before, development teams need to have a high level of security maturity to avoid rework. That necessitates having highly skilled personnel working within a strategic, prevention-focused framework. Developer and AppSec teams must work closely together, as opposed to the old model of operating as separate entities. Today, developers need to assume a significant role in ensuring security best practices. The most recent BSIMM report from Black Duck Software, for instance, found that there are only 3.87 AppSec professionals for every 100 developers, which doesn’t bode well for AppSec teams trying to secure an organization’s software all on their own. A critical part of learning initiatives is the ability to gauge the progress of developers in the program, both to ensure that developers are qualified to work on the organization’s most sensitive projects and to assess the effectiveness of the program. This upskilling should be ongoing, and you should always look for areas that can be improved. Making use of a tool like SCW’s Trust Score, which uses benchmarks to gauge progress both internally and against industry standards, can help ensure that progress is being made.


Why thinking like a tech company is essential for your business’s survival

The phrase “every company is a tech company” gets thrown around a lot, but what does that actually mean? To us, it’s not just about using technology — it’s about thinking like a tech company. The most successful tech companies don’t just refine what they already do; they reinvent themselves in anticipation of what’s next. They place bets. They ask: Where do we need to be in five or 10 years? And then, they start moving in that direction while staying flexible enough to adapt as the market evolves. ... Risk management is part of our DNA, but AI presents new types of risks that businesses haven’t dealt with before. ... No matter how good our technology is, our success ultimately comes down to people. And we’ve learned that mindset matters more than skill set. When we launched an AI proof-of-concept project for our interns, we didn’t recruit based on technical acumen. Instead, we looked for curious, self-starting individuals willing to experiment and learn. What we found was eye-opening—these interns thrived despite having little prior experience with AI. Why? Because they asked great questions, adapted quickly, and weren’t afraid to explore. ... Aligning your culture, processes and technology strategy ensures you can adapt to a rapidly changing landscape while staying true to your core purpose.


Realizing the Internet of Everything

The obvious answer to this problem is governance, a set of rules that constrain use and technology to enforce them. The problem, as it is so often with the “obvious,” is that setting the rules would be difficult and constraining use through technology would be difficult to do, and probably harder to get people to believe in. Think about Asimov’s Three Laws of Robotics and how many of his stories focused on how people worked to get around them. Two decades ago, a research lab did a video collaboration experiment that involved a small camera in offices so people could communicate remotely. Half the workforce covered their camera when they got in. I know people who routinely cover their webcams when they’re not on a scheduled video chat or meeting, and you probably do too. So what if the light isn’t on? Somebody has probably hacked in. Social concerns inevitably collide with attempts to integrate technology tightly with how we live. Have we reached a point where dealing with those concerns convincingly is essential in letting technology improve our work, our lives, further? We do have widespread, if not universal, video surveillance. On a walk this week, I found doorbell cameras or other cameras on about a quarter of the homes I passed, and I’d bet there are even more in commercial areas. 


Cloud Security Architecture: Your Guide to a Secure Infrastructure

Threat modeling can be a good starting point, but it shouldn't end with a stack-based security approach. Rather than focusing solely on the technologies, approach security by mapping parts of your infrastructure to equivalent security concepts. Here are some practical suggestions and areas to zoom in on for implementation. ... When protecting workloads in the cloud, consider using some variant of runtime security. Kubernetes users have no shortage of choice here with tools such as Falco, an open-source runtime security tool that monitors your applications and detects anomalous behaviors. However, chances are your cloud provider has some form of dynamic threat detection for your workloads. For example, AWS offers Amazon GuardDuty, which continuously monitors your workloads for malicious activity and unauthorized behavior. ... Implementing two-factor authentication adds an extra layer of protection by requiring a second form of verification, such as an authenticator app or a passkey, in addition to your password. While reaching for your authenticator app every time you log in might seem slightly inconvenient, it's a far better outcome than dealing with the aftermath of a breached account. The minor inconvenience is a small price to pay for the added security it provides.
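Two-factor codes from an authenticator app follow the TOTP standard (RFC 6238), which can be implemented with nothing but the Python standard library. The sketch below shows how the server-side check works; the secret is a demo value, and real deployments also accept a window of adjacent time steps to absorb clock drift.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, at=None):
    """RFC 6238 time-based one-time password: the code an authenticator app shows."""
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

secret = base64.b32encode(b"demo-shared-secret").decode()  # provisioned once, e.g. via QR code
print("server expects:", totp(secret))
# Login succeeds only when the password AND this short-lived code both match.
```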

Daily Tech Digest - January 10, 2022

The positive side of ransomware for data transformation

The new architecture should be more responsive and interface with enterprise data via a smart API that can automate the processes by which data is fetched, merged, transformed, secured, and delivered, all without users having to put in a request to the data team. It must work with application data generated outside the data warehouse and programmatically combine it with data from either side of the wall, as well as with sources external to the organization. It must also be fully programmable, eliminating the need to predetermine exactly what data is going to be combined with what data. The “smarts” of this smart API means users can make data requests that may seem simple to them but that set off complex automated processes that deliver data within seconds or minutes, not days or weeks. One of the important functions of this API must be to perform automated backups on an optimized and context-aware schedule, including rapid backups to the cloud as well as slower backups to physical media in-house or remote. Such a system will enable enterprises to say, “No, thanks” to cybercriminals demanding payment to undo their ransomware’s damage on a company’s data.
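One way to picture the "context-aware schedule" is a tiered plan: frequently changing data gets rapid cloud backups, archival data gets slower local media. The sketch below is an invented illustration of that policy, not any specific product's logic.

```python
import heapq
import time

def next_backups(datasets, now=None):
    """Context-aware scheduling sketch: hot data goes to the cloud hourly,
    cold data to slower media weekly."""
    now = now if now is not None else time.time()
    plan = []
    for ds in datasets:
        interval = 3600 if ds["tier"] == "hot" else 86400 * 7
        target = "cloud" if ds["tier"] == "hot" else "local-media"
        heapq.heappush(plan, (ds["last_backup"] + interval, ds["name"], target))
    return [heapq.heappop(plan) for _ in range(len(plan))]

datasets = [
    {"name": "orders-db", "tier": "hot", "last_backup": time.time() - 4000},
    {"name": "2019-archive", "tier": "cold", "last_backup": time.time() - 86400 * 8},
]
for due, name, target in next_backups(datasets):
    print(name, "->", target, "(overdue)" if due < time.time() else "(scheduled)")
```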


How COVID-19 made zero trust the right approach to modernise networks

Zero trust networks operate under a principle of least privilege, meaning that all programmes, processes, devices, or users are limited to the minimum privilege required to carry out their functions. The best analogy is the government or military’s “need-to-know” policy. Access rights need not be uniformly restrictive: they are balanced against conditions, with privileges ranging from ‘full access’ to ‘no rights at all’. Naturally, as remote working became the norm, these protocols made the most sense for dispersed networking environments. Many organisations, though, may have quickly found that traditional detection and prevention technologies were insufficient to mitigate the threats posed by opportunistic actors targeting remote environments. While information about how previous attacks were carried out provided some context, it could only go so far in this new, highly distributed and decentralised way of working. The key for many was to plan for the worst and assume a breach had occurred.
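
As a rough illustration of that least-privilege ladder, here is a small sketch; the privilege tiers and the conditions that cap them are invented, not from the article.

```python
# Illustrative sketch: tiers and conditions are assumptions, not a standard.
from enum import IntEnum

class Privilege(IntEnum):
    NONE = 0   # "no rights at all"
    READ = 1
    WRITE = 2
    FULL = 3   # "full access"

def grant(role_needs: Privilege, device_trusted: bool, on_corp_network: bool) -> Privilege:
    """Grant the minimum of what the role needs and what conditions allow."""
    if not device_trusted:
        return Privilege.NONE        # assume breach: unknown device gets nothing
    ceiling = Privilege.FULL if on_corp_network else Privilege.WRITE
    return min(role_needs, ceiling)  # never exceed the need-to-know cap

# A fully entitled user on an untrusted network is still capped at WRITE.
print(grant(Privilege.FULL, device_trusted=True, on_corp_network=False))
```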


Analyst predictions 2022: The future of data management

There’s a huge gap there in practice and knowledge. To a lesser extent, there’s also a technology gap, essentially in the self-service technologies that help teams govern data through the full life cycle: selecting the data, building the pipelines, determining access control, and checking quality and whether the data is fresh or trending off course. So my prediction is that it will receive its first harsh scrutiny this year. You are going to see some organizations and enterprises declare premature victory when they build some federated query implementations. You’re going to see vendors start to “data mesh-wash” their products, be it a pipelining tool, ELT [extract, load, transform], a catalog or federated query tool. Vendors will be promoting how they support data mesh. Hopefully nobody’s going to call themselves a data mesh tool, because data mesh is not a technology. We’re going to see one other thing come out of this, and this harks back to the metadata that Sanjeev was talking about and the data catalog.


How Artificial Intelligence is Changing the Payment Gateway Industry

Artificial intelligence and machine learning in the payment industry can significantly improve fraud detection. An AI-powered payment gateway looks at a range of factors and produces a risk score. For example, suppose a merchant has a good record: it will be assigned a low-risk score. But a transaction from a previously unknown time zone, location, or IP address will receive a higher risk score. Several other factors are taken into account before the transaction is judged fraudulent or legitimate. This makes it easier to analyze data: AI can work through a larger volume of transactions and detect even unexpected anomalies. A financial institution or payment provider using AI can score transactions in real time. The technology is continually evolving, finding new and more advanced ways to differentiate between fraudulent and genuine transactions. Thus, AI gives the payment gateway industry the ability to process a vast number of transactions with accuracy and lower error rates.
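
A production gateway would use a model trained on labelled transaction history over many more signals, but a toy rule-based sketch conveys the idea; the factors, weights and threshold below are invented for illustration.

```python
# Toy sketch of a risk score: weights and cut-off are assumptions.
def risk_score(merchant_good_record: bool, known_ip: bool,
               known_location: bool, usual_time_zone: bool) -> float:
    """Combine simple signals into a 0-1 score; higher means riskier."""
    score = 0.1 if merchant_good_record else 0.4   # good history lowers the baseline
    score += 0.0 if known_ip else 0.2              # unfamiliar IP raises risk
    score += 0.0 if known_location else 0.2        # unfamiliar location raises risk
    score += 0.0 if usual_time_zone else 0.1       # odd time zone raises risk
    return min(score, 1.0)

FRAUD_THRESHOLD = 0.6                              # assumed review cut-off
s = risk_score(True, known_ip=False, known_location=False, usual_time_zone=True)
print(s, "flag for review" if s >= FRAUD_THRESHOLD else "approve")
```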


Eight resolutions to help navigate the new hybrid office model

The cloud and “on demand” models have become hugely important for helping users access the applications they need to do work from anywhere. While this shift to the cloud has its productivity benefits, it has not come without its security challenges. It is important to remember that cloud environments are not automatically secure when they are first created. Securing them requires knowledge and time. To keep business safe, security controls must span all environments – providing 360-degree application protection for both the application surface and the cloud application infrastructure. 5. Know your suppliers. The SolarWinds vulnerability highlighted the need for companies to thoroughly evaluate the tools and services they integrate into their operations. This includes the careful installation and configuration of the product or service, tracking patches and new releases from the vendor, and monitoring for any suspicious behavior. In a highly sensitive environment, some companies may choose not to use third-party products or services.


Avoiding the two-tier workforce in the third workplace

Most crucial of all, though, Farque believes, is for leaders and managers to consciously set the tone and role-model the behaviour they expect to see among their staff. Because dealing with hybrid teams equitably is not part of the usual management training rulebook, however, she advises investing in additional learning and development here. Mildon, on the other hand, believes there are lots of “little, everyday bits of inclusion” that can make a big difference too. These include simple things, such as ensuring that when remote employees dial into a meeting where others are physically present, they are invited to speak first. Alternatively, everyone could attend the event virtually. Digital tools, such as whiteboards and instant messaging, are also useful in enabling team members to communicate, collaborate and keep each other in the loop. But as Mildon points out, while there are lots of tools on the market that can help, “it’s not necessarily about the technology.”


Why digital maturity is key to harnessing the Digital Twin

It’s a mistake to think that implementing a Digital Twin is a huge undertaking that will require the overhaul of technologies and processes. Fundamentally, so long as the business has data, and that data is captured, stored in an information management system, and understood by all, then you have the right foundation to get started. Ultimately, however, a Digital Twin can only be successful if it is fed by the right data. And that is not only down to information management systems: it requires a workforce able to read, understand, create, and communicate data as information. In other words, they need to be data literate. With the UK on the verge of a digital skills disaster, however, some business leaders may find that they’ll need to upskill their employees in this area. This will require the full backing of the C-level and a top-down leadership approach to encourage a data-driven culture that’s ingrained across the company. As the saying goes, you can have all the gear but no idea, and organisations that upskill their employees in data will be the ones capable of bringing advanced concepts such as Digital Twins to fruition and reaping the benefits.


How Database Virtualization Could Break Vendor Lock-in

While the concept of database virtualization has started attracting attention and several products have become available, there is still a lot of skepticism and confusion. To be fair, the notion of virtualization has been used loosely in the context of databases. From data virtualization to the abstraction of database files, the label virtualization is used liberally. The concept of data virtualization has been around for decades and may warrant a clean delineation. Data virtualization is the idea of using a universal query language across all applications. While this seems beneficial in the long run, it requires all applications to be rewritten to use this artificial language. This does not solve the original problem: in order to overcome the vendor lock-in, the enterprise has to, well, overcome the vendor lock-in in the first place. Put differently, data virtualization is certainly aspirational, but it does not solve the actual problem of moving existing applications between databases.
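
To see the trade-off concretely, consider an abstraction layer such as SQLAlchemy (my example, not one the article discusses). Swapping backends is a one-line connection-string change, but only for code already written against the layer; applications written directly against a vendor’s dialect still need rewriting first, which is exactly the author’s point.

```python
# Minimal sketch: SQLite keeps it runnable; in practice the URL might be
# "postgresql+psycopg2://..." or "mysql+pymysql://..." instead.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE customers (id INTEGER, name TEXT)"))
    conn.execute(text("INSERT INTO customers VALUES (1, 'Acme')"))
    print(conn.execute(text("SELECT name FROM customers")).all())
```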


2022 is shaping up to be an epic fight to protect data

COVID forever changed the way we work. Even in the highly unlikely event that the pandemic winds down this year, many people will keep working from home at least part of the time. Cybercriminals will be working, too. They'll be hunting for new ways to take advantage of the connections and devices that workers use to dial in remotely. NYU's Cappos says the cybersecurity industry will likely get a better handle on how to manage hybrid work situations, introducing new recommendations and products that boost security and make it easier for workers to connect. Consumers will also need to up their security game, Clay says. Good methods of two-factor authentication, such as biometrics and push notifications, are going to be a must. Simpler verification methods, like codes sent as SMS messages, just can't be trusted anymore. That goes for smartphones, too. Phishing, the practice of sending deceptive emails in order to get personal information, is going mobile. Similar attempts using SMS, known unimaginatively as smishing, and voice calls, which are called – you guessed it – vishing, will become more common this year as people move more of their online activity to mobile devices, Clay says.


Top tips to protect online networks in schools from cyber attacks

There’s a certain appeal that comes with free tools, especially if you have been looking at an alternative that is otherwise expensive to purchase. It’s easy to look at the free alternative and acknowledge that it isn’t exactly what you wanted, but it will do the trick. Avoid the temptation at all costs on a school set-up. An appealing free online tool could be malware or a virus in disguise, built to wreak havoc as soon as it has been downloaded. Many such tools come bundled with adware, which isn’t particularly dangerous but clogs up and slows your system and can push dangerous websites at users. Finally, there is the danger that a downloaded free tool will unleash spyware, which embeds itself in your system and steals valuable data. ... Updating a school’s Active Directory is quite possibly the most basic, yet laborious, task on this list. Regardless, its importance is undeniable. This article has largely been about preventing online hackers, scammers, and cyber attackers from getting into your system from the outside. However, in some cases, those attacks can come from within, namely from former students whose accounts were never removed.
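
As one way to tackle that clean-up, a script could flag accounts that have not logged on in months; in this sketch the server, base DN, credentials and 180-day window are all placeholders, and ldap3 is my choice of library rather than one the article names.

```python
# Hypothetical sketch: all connection details below are placeholders.
from datetime import datetime, timedelta
from ldap3 import ALL, Connection, Server

cutoff = datetime.utcnow() - timedelta(days=180)
# lastLogonTimestamp is a Windows FILETIME: 100 ns ticks since 1601-01-01.
cutoff_filetime = int((cutoff - datetime(1601, 1, 1)).total_seconds() * 10**7)

server = Server("ldaps://dc.school.example", get_info=ALL)
conn = Connection(server, user="SCHOOL\\admin", password="...", auto_bind=True)

conn.search(
    "OU=Students,DC=school,DC=example",
    f"(&(objectClass=user)(lastLogonTimestamp<={cutoff_filetime}))",
    attributes=["sAMAccountName", "lastLogonTimestamp"],
)
for entry in conn.entries:   # review each hit, then disable or remove it
    print(entry.sAMAccountName, entry.lastLogonTimestamp)
```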



Quote for the day:

"Great leaders are almost always great simplifiers who can cut through argument, debate and doubt to offer a solution everybody can understand." -- General Colin Powell